Add support for setting channel information #61
Conversation
|
Separate PR because this is somewhat WIP and tries something out with the traits, and I don't want to block the other PR on this. Review and land #60 first.
| @@ -92,6 +136,9 @@ pub enum AudioNodeMessage { | |||
| AudioBufferSourceNode(AudioBufferSourceNodeMessage), | |||
| GainNode(GainNodeMessage), | |||
| OscillatorNode(OscillatorNodeMessage), | |||
| SetChannelCount(u8), | |||
| SetChannelMode(ChannelCountMode), | |||
| SetChannelInterpretation(ChannelInterpretation), | |||
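For readers outside the diff view, the new variants slot into the node message enum roughly like this. This is a hedged sketch, not the exact servo-media types: the other variants are omitted, and the `ChannelInfo` handler is a hypothetical stand-in for whatever state the rendering thread actually updates.

```rust
// Mirrors the names in the diff above; mode/interpretation variants
// follow the Web Audio API's channelCountMode/channelInterpretation.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum ChannelCountMode { Max, ClampedMax, Explicit }

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum ChannelInterpretation { Speakers, Discrete }

// Only the variants added in this PR are shown here.
pub enum AudioNodeMessage {
    SetChannelCount(u8),
    SetChannelMode(ChannelCountMode),
    SetChannelInterpretation(ChannelInterpretation),
}

// Hypothetical per-node channel state that such messages would update.
#[derive(Debug, PartialEq)]
pub struct ChannelInfo {
    pub count: u8,
    pub mode: ChannelCountMode,
    pub interpretation: ChannelInterpretation,
}

impl ChannelInfo {
    pub fn handle_message(&mut self, msg: AudioNodeMessage) {
        match msg {
            AudioNodeMessage::SetChannelCount(c) => self.count = c,
            AudioNodeMessage::SetChannelMode(m) => self.mode = m,
            AudioNodeMessage::SetChannelInterpretation(i) => self.interpretation = i,
        }
    }
}

fn main() {
    let mut info = ChannelInfo {
        count: 2,
        mode: ChannelCountMode::Max,
        interpretation: ChannelInterpretation::Speakers,
    };
    info.handle_message(AudioNodeMessage::SetChannelCount(6));
    println!("{:?}", info.count); // 6
}
```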
ferjm
Jun 27, 2018
Member
Should these setters get a Sender<()> so we can block from the consumer (DOM side in this case) until the property is actually set?
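For context, the blocking pattern being suggested would look roughly like this. A minimal `std::sync::mpsc` sketch, assuming a hypothetical `Message` type, not the servo-media API: the setter carries a `Sender<()>` and the consumer blocks on the matching `Receiver` until the rendering thread acknowledges.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical message carrying an ack channel: the rendering thread
// replies on `ack` once the property has actually been applied.
enum Message {
    SetChannelCount(u8, mpsc::Sender<()>),
}

fn main() {
    let (tx, rx) = mpsc::channel::<Message>();

    // Stand-in for the rendering thread.
    let render = thread::spawn(move || {
        for msg in rx {
            match msg {
                Message::SetChannelCount(count, ack) => {
                    // ... apply `count` to the node's channel state ...
                    let _ = count;
                    let _ = ack.send(()); // unblock the consumer
                }
            }
        }
    });

    // Consumer (DOM) side: send the setter and block until it took effect.
    let (ack_tx, ack_rx) = mpsc::channel();
    tx.send(Message::SetChannelCount(6, ack_tx)).unwrap();
    ack_rx.recv().unwrap(); // blocks until the rendering thread acks

    drop(tx);
    render.join().unwrap();
}
```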
Manishearth
Jun 27, 2018
Author
Member
We don't block for the other setters. I don't see a reason to block the DOM thread on the rendering thread.
|
@sdroege so it seems that if I change the channel count audio info on the appsrc, there's an audible delay. This isn't introduced by the code here (I tried changing the channel count of a different node and it's fine); it seems to be because the buffered-up frames get dropped (?). Is there something I should do here when the channel count changes? Wait for it to flush?
You mean changing the number of channels takes a while to take effect? Or that there is generally latency involved if a non-default number of channels is set from the beginning?
|
No, changing channels causes some latency, with an audible gap of silence.
|
So it not only takes a while to take effect, but there is actually silence for a while? How can I reproduce this to take a look myself?
|
from the examples folder;
|
Merging for now; the main approach was reviewed, and the extra commits are cleanups or hook it up to the sink. Feel free to review later, but I don't want this to cause lots of merge conflicts.
|
I'll take a look in the next few days; I'm travelling currently. I'll let you know here what I find, or send a PR directly.
|
@Manishearth the problem here is that you not only change the channel configuration, but you also let this change the configuration on the audio sink. That requires re-initializing the hardware, which takes a moment. A solution would be to configure the audio sink with one specific format (configurable? the first one that arrives?) and then do the conversion in software before the audio sink.

Unrelated to that, I saw the demo video and it seems there's quite some latency. You can reduce the latency with the
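A toy illustration of the "fixed sink format, convert in software" idea, in plain Rust with no GStreamer (in a real pipeline an element like `audioconvert` would do this): incoming interleaved samples are rewritten to a fixed sink channel count, so the sink's caps, and therefore the hardware configuration, never change. The conversion policy here is deliberately naive and is not how a real mixer would behave.

```rust
/// Convert interleaved samples from `src_channels` to a fixed
/// `sink_channels` layout so the sink caps never have to change.
/// Toy policy: copy matching channels, fill extras with silence,
/// drop surplus channels (real mixers use proper mixing matrices).
fn convert_channels(input: &[f32], src_channels: usize, sink_channels: usize) -> Vec<f32> {
    assert!(src_channels > 0 && input.len() % src_channels == 0);
    let frames = input.len() / src_channels;
    let mut out = vec![0.0f32; frames * sink_channels];
    for f in 0..frames {
        for c in 0..sink_channels.min(src_channels) {
            out[f * sink_channels + c] = input[f * src_channels + c];
        }
    }
    out
}

fn main() {
    // Two mono frames "upmixed" to a fixed stereo sink.
    let mono = [0.5f32, -0.25];
    let stereo = convert_channels(&mono, 1, 2);
    println!("{:?}", stereo); // [0.5, 0.0, -0.25, 0.0]
}
```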
|
Is there a way to get the max channels supported by the hardware? We can set the caps to that, and then do software conversion. |
Yes (set the sink to
|
Yeah I guess we need to match that up with our interpretations. |
Do you have a link? And how does it work exactly? I would've assumed the WebAudio app selects what channel layout it wants to produce, rather than the hardware deciding what is wanted.

@philn how are you handling this in WebKit? Short summary: switching the number of channels causes a short pause and click during hardware reconfiguration.
|
https://webaudio.github.io/web-audio-api/#ChannelLayouts — WebAudio specifies how the channel layouts should be interpreted for 1, 2, 4, and 6 channels in speaker mode.
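Those spec-defined speaker layouts map onto channel counts like this. A small Rust sketch for illustration; the short labels (L, R, C, LFE, SL, SR) are shorthand for the spec's mono, stereo, quad, and 5.1 orderings, and the function name is hypothetical.

```rust
/// Speaker-mode channel layouts as listed in the Web Audio spec's
/// ChannelLayouts section. Counts the spec doesn't define a speaker
/// layout for fall back to discrete handling, modeled here as None.
fn speaker_layout(channels: u8) -> Option<&'static [&'static str]> {
    let layout: &'static [&'static str] = match channels {
        1 => &["M"],                                // mono
        2 => &["L", "R"],                           // stereo
        4 => &["L", "R", "SL", "SR"],               // quad
        6 => &["L", "R", "C", "LFE", "SL", "SR"],   // 5.1
        _ => return None,
    };
    Some(layout)
}

fn main() {
    println!("{:?}", speaker_layout(6));
}
```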
|
I'm not sure dynamic reconfiguration is well handled in WebKit... The audio channels are configured once (when the src element is constructed) from the AudioBus channel layout. Then the internal task of the element pushes data to separate
|
That seems to be similar to what I'm planning here -- configure it based on the layout, and the destination node's channel count is just an intermediate mixing interface, like a unity-gain GainNode.
Manishearth commented Jun 27, 2018 (edited)
I'm trying a trait-based trick for keeping message handling common. If you like this,
we can use something similar for AudioScheduledSourceNode.
This is also used by the sink, and the example merges channels halfway through by forcing a mix on the sink.
based off #60
r? @ferjm