This year’s audio miniconference happened last month in Lyon, sponsored by Intel. Thanks to everyone who attended (this event is all about the conversations between the people there), and to Alexandre Belloni for organizing the conference dinner.
We started off with Curtis Malainey talking us through some UCM extensions that ChromeOS has been using. There was general enthusiasm for the proposed changes; discussion mainly revolved around deployment issues with handling new features, especially where topology files are involved. If a topology file update adds new controls then some system is needed to ensure that the UCM files which configure them are updated in sync, or keep working without the update. No clear answers were found; one option would be to combine the UCM and topology files, but it’s not clear that this is useful in general if the configuration is done separately from the DSP development.
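The skew problem above is essentially a consistency check between two artifacts. A minimal sketch of such a check, with entirely invented control names and helper names (nothing here is real UCM or topology tooling), might look like:

```python
# Hypothetical consistency check: every mixer control a UCM configuration
# references should exist in the matching topology file, otherwise the
# two have drifted out of sync.  All names below are invented examples.

def check_ucm_against_topology(ucm_controls, topology_controls):
    """Return the controls UCM references that the topology does not
    (or no longer) provides."""
    return sorted(set(ucm_controls) - set(topology_controls))

# Controls exported by a (newer) topology file
topology = {"Headphone Playback Volume", "Speaker Playback Switch",
            "DSP1 Capture Volume"}

# Controls an (older) UCM configuration tries to set
ucm = {"Headphone Playback Volume", "DSP2 Capture Volume"}

missing = check_ucm_against_topology(ucm, topology)
print(missing)  # ['DSP2 Capture Volume'] - the files are out of sync
```

A real tool would parse the actual file formats rather than take sets of names, but the core of the deployment worry discussed at the event is exactly this set difference being non-empty on a user's system.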
Daniel Baluta then started some discussion of topics related to Sound Open Firmware (slides). The first was the problem of loading firmware before the filesystems are ready; we agreed that this can be resolved through the use of the _nowait() APIs. More difficult was working out how to deal with card initialization. Currently the only complete in-tree users are x86 based, so they have to cope with the incomplete firmware descriptions provided by ACPI (there’s nothing standards based like we have for device tree systems), and assumptions about that have crept into how the code works. It’s going to take a bunch of work to implement, but we came to a reasonable understanding of how this should work, with the DSP represented as a device in the device tree and bound to the card like any other component.
Continuing on the DSP theme, Patrick Lai then led a discussion of gapless playback with format switches. We agreed that allowing set_params() to be called multiple times on a single stream, where the driver can support it, was the most sensible approach. The topic of associating controls with PCM streams was also discussed; there are some old APIs for this, but so little hardware has implemented them that we agreed a convention for control names based on the stream names would probably be easier to support with current userspace software.
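The attraction of a name-based convention is that userspace can find a stream's controls with nothing more than string matching. The exact pattern was not settled at the event, so the prefix scheme in this sketch is an assumption:

```python
# Sketch of a name-based convention tying mixer controls to PCM streams:
# controls for a stream carry the stream name as a prefix, so userspace
# needs no new kernel API to find them.  The "<stream> <direction>
# <function>" pattern used here is an assumption, not the agreed spec.

def controls_for_stream(all_controls, stream):
    """Filter a card's control list down to those matching a stream."""
    prefix = stream + " "
    return [c for c in all_controls if c.startswith(prefix)]

card_controls = ["Media0 Playback Volume", "Media0 Playback Switch",
                 "Voice Capture Volume", "Master Playback Volume"]

print(controls_for_stream(card_controls, "Media0"))
# ['Media0 Playback Volume', 'Media0 Playback Switch']
```

This is the property that makes the convention easier for current userspace than the old stream-association APIs: existing mixer interfaces already expose the names.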
Patrick also led a discussion of time synchronization for audio streams, both compressed and PCM. A number of systems, especially those dealing with broadcast media, want to bring the local audio clocks as closely into sync as possible with other system and external clocks, both for long-running streams and for A/V sync. As part of doing this there is a desire to embed timestamps into the audio stream for use by DSPs. There was an extensive discussion of how to do this, the two basic options being an extended audio format which includes timestamps (in the compressed API) or additional API calls to go along with the data. The in-band data is easier to align with the audio, but the format modifications will make it much harder to implement in a standard way, while the out-of-band approach is harder to align but easier to standardize. We came to a good understanding of the issues and agreed that it’s probably best to evaluate this in terms of concrete API proposals on the list.
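The alignment trade-off can be seen in a toy model of the two options. Everything here is invented for illustration (it is not a proposed ABI): in-band, each timestamp travels inside the stream next to the audio it describes; out-of-band, timestamps arrive separately and must be matched back to buffer positions.

```python
# Toy contrast of the two timestamping options discussed.  The
# structures are made up purely to illustrate the trade-off.

from collections import namedtuple

InBandFrame = namedtuple("InBandFrame", "timestamp_ns payload")

def send_in_band(chunks, clock_ns):
    # Extended "format" carrying a timestamp with every chunk of audio:
    # alignment is trivial, but every consumer must understand the framing.
    return [InBandFrame(clock_ns(i), chunk) for i, chunk in enumerate(chunks)]

def send_out_of_band(chunks, clock_ns):
    # Plain audio plus a side channel of (byte offset, timestamp) pairs:
    # the format stays standard, but the reader must realign the two.
    audio = b"".join(chunks)
    offsets, pos = [], 0
    for i, chunk in enumerate(chunks):
        offsets.append((pos, clock_ns(i)))
        pos += len(chunk)
    return audio, offsets

clock = lambda i: 1_000_000 * i  # fake clock: 1 ms per chunk
chunks = [b"\x00" * 4, b"\x01" * 4]

frames = send_in_band(chunks, clock)
audio, stamps = send_out_of_band(chunks, clock)
print(stamps)  # [(0, 0), (4, 1000000)]
```

The in-band consumer never has to compute those offsets, which is precisely why it is easier to align and harder to standardize.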
Liam Girdwood then took over and gave an overview of the status of SoundWire. This will be reaching products soon, with a gradual transition, so all current interfaces remain in active use for the time being. Immediate development plans include a lot of work on hardening the framework to deal with corner cases and gaps in the spec that are identified during implementation, support for applications where the host system is suspended while a DSP and CODEC implement features like hotword detection, and support for dynamic routing on the SoundWire bus. Those with SoundWire hardware agreed that this was a good set of priorities.
We then moved on to a discussion of ABI stability and consistency issues with control names, led by Takashi Iwai. The main focus was control and stream naming: we have a specification which userspace does rely on, but currently we only verify that it’s being followed through manual checks during kernel development, which isn’t great. We decided that a testing tool similar to v4l-compliance, which is used successfully by the video4linux community, would be the most viable option here, though there were no immediate volunteers to write the tool.
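To make the shape of such a tool concrete, here is a minimal sketch of a compliance-style check on control names. The real ALSA naming document defines far more structure than this; the word lists below are a small, illustrative subset chosen for the example, not the actual specification:

```python
# Minimal sketch of a v4l-compliance-style checker for mixer control
# names.  The accepted direction/function words here are a tiny,
# illustrative subset of the real naming rules.

VALID_DIRECTIONS = {"Playback", "Capture"}
VALID_FUNCTIONS = {"Volume", "Switch", "Route"}

def check_control_name(name):
    """Return a list of problems with a control name; empty if it
    follows the simplified '<source> <direction> <function>' pattern."""
    problems = []
    words = name.split()
    if len(words) < 3:
        problems.append("too few words for '<source> <direction> <function>'")
        return problems
    direction, function = words[-2], words[-1]
    if direction not in VALID_DIRECTIONS:
        problems.append(f"unknown direction {direction!r}")
    if function not in VALID_FUNCTIONS:
        problems.append(f"unknown function {function!r}")
    return problems

print(check_control_name("Headphone Playback Volume"))  # []
print(check_control_name("Headphone Playbck Volume"))   # reports the typo
```

A real tool would enumerate the controls of a live card and run checks like this against each one, much as v4l-compliance exercises a video device.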
The next topic was virtualization; this was mainly a heads-up that there is some discussion going on in OASIS around a VirtIO specification for audio.
Jerome Brunet then talked through his experiences as a new contributor implementing audio support for Amlogic SoCs. Their audio subsystems are relatively simple by modern SoC standards but still more complex than the simple DAIs that ASoC currently supports well, needing elements of DPCM and CODEC-to-CODEC link support to work with the current system, all of which presented an excessively steep learning curve for him. Sadly all the issues he faced were already familiar, and we even have some good ideas for improving the situation by moving to a component based model. Morimoto-san (who was sadly unable to attend) has been making great strides in converting all the drivers into component drivers, which makes the core changes tractable, but we still need someone to take up the core work. Charles Keepax started some work on this previously and says he hopes to find some time soon, with several other people indicating an interest in helping out, so hopefully we might see some progress on this in the next year.
The final topic on the agenda was spreading DSP load throughout the system, including onto the sound server running on the host CPU when the dedicated DSP hardware is overloaded. Ideally we’d be able to do this in a transparent fashion, sharing things like calibration coefficients between DSPs. The main sticking point with implementing this is Android, since Android doesn’t use the full alsa-lib and therefore doesn’t share any interfaces above the kernel layer with other systems. It’s likely that something can be implemented for most other systems, but it’ll almost certainly need separate Android integration even if plugins can be shared.
We did have some further discussions on a number of topics, including testing, after working through the agenda, but sadly minutes weren’t kept for those. Thanks again to the attendees, and to Intel for sponsoring.