`AudioBuffer`s remarks, questions and some proposals #64
Hey, I'm opening this issue because, while reading the code and trying to understand it, there are still some aspects that I struggle with and that I think could be clarified and maybe improved.

Disclaimer: I'm just going through my own understanding to try to be clear; I don't pretend anything here is ground truth.
So, my impression is that there should be 3 different types of audio buffers while there are only 2 for now:
1. The buffers used to compute blocks of signal (i.e. `alloc::AudioBuffer`). Nothing special to say about this one, except that it could be renamed to `AudioBlock`, `AudioBus` or `AudioRenderQuantum` to avoid confusion with the `AudioBuffer` exposed by the API (which would also lead to renaming `BUFFER_SIZE` to `BLOCK_SIZE` or `RENDER_QUANTUM_SIZE` for consistency). See the first sketch below.
2. The `AudioBuffer` as defined by the WebAudio API, which is only consumed by the `AudioBufferSourceNode` and the `ConvolverNode`, or returned through a `Promise` by `OfflineContext.startRendering`. This one should be completely loaded in memory for very fast and accurate access by the nodes that consume it. `audioContext.decodeAudioData` should therefore work from an asset (e.g. some audio file) completely loaded in memory (retrieved from the network (XHR call in JS land) or from the file system) and perform the following steps: decode the whole asset, resample it to `audioContext.sampleRate`, and create an `AudioBuffer` that can be consumed without further processing by e.g. the `AudioBufferSourceNode`. See the second sketch below.
3. Some other audio buffer which is more related to a streaming paradigm (data is received on the fly, with questions of buffering, back pressure, etc.) and can be piped to some WebAudio nodes, but is defined by or related to other specs such as WebRTC (microphone, audio streams from the network and `MediaStreamAudioSourceNode`) or HTML (the `<audio>` tag and `MediaElementAudioSourceNode`). In that case decoding and resampling can only be done on the fly when the chunks are received, and the WebAudio nodes do not expose any `start`, `stop` or `seek` possibilities. See the third sketch below.

In the current implementation, my impression is that the `AudioBuffer` defined in `buffer.rs` tries to handle both the 2. and 3. paradigms in the same way and, in the end, is closer to 3. and not really optimized for 2. Does this make sense, or am I mistaken somewhere? If you think it's a good idea, I would be happy to try to propose something really dedicated to 2. To make all this a bit more concrete, I put some rough sketches below.
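First, for 1., a minimal sketch of the renaming I have in mind. The names and the fixed-size layout are just illustrative, not the actual types from the crate (the 128-frame block size is the one fixed by the spec):

```rust
/// Number of frames processed per block, as fixed by the WebAudio spec.
pub const RENDER_QUANTUM_SIZE: usize = 128;

/// One block of signal passed between nodes while rendering the graph
/// (what `alloc::AudioBuffer` currently is).
pub struct AudioRenderQuantum {
    /// One fixed-size buffer of samples per channel.
    channels: Vec<[f32; RENDER_QUANTUM_SIZE]>,
}

impl AudioRenderQuantum {
    pub fn number_of_channels(&self) -> usize {
        self.channels.len()
    }

    pub fn channel_data(&self, index: usize) -> &[f32; RENDER_QUANTUM_SIZE] {
        &self.channels[index]
    }
}
```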
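Second, for 2., a rough sketch of the fully in-memory `AudioBuffer` and of the `decodeAudioData` steps. `decode_all` is a placeholder for whatever decoder we end up using, and the naive linear resampler is only there to make the sketch self-contained:

```rust
/// The `AudioBuffer` of the WebAudio API (case 2.): fully decoded and
/// resampled once, then cheap to read repeatedly and accurately.
pub struct AudioBuffer {
    pub sample_rate: f32,
    /// Complete sample data, one `Vec<f32>` per channel.
    pub channels: Vec<Vec<f32>>,
}

/// Placeholder for a real decoder (wav, ogg, mp3...), out of scope here.
/// Returns the decoded channels and the sample rate of the asset.
fn decode_all(_asset: &[u8]) -> (Vec<Vec<f32>>, f32) {
    unimplemented!()
}

/// Naive linear resampler, just so the sketch compiles on its own.
fn resample(input: Vec<f32>, from_rate: f32, to_rate: f32) -> Vec<f32> {
    if (from_rate - to_rate).abs() < f32::EPSILON {
        return input;
    }
    let ratio = from_rate / to_rate;
    let out_len = (input.len() as f32 / ratio) as usize;
    (0..out_len)
        .map(|i| {
            let pos = i as f32 * ratio;
            let idx = pos as usize;
            let frac = pos - idx as f32;
            let a = input[idx];
            let b = *input.get(idx + 1).unwrap_or(&a);
            a + (b - a) * frac
        })
        .collect()
}

/// Sketch of `decodeAudioData`: the asset is already completely in memory.
pub fn decode_audio_data(asset: &[u8], context_sample_rate: f32) -> AudioBuffer {
    // 1. decode the whole asset at once
    let (channels, asset_sample_rate) = decode_all(asset);
    // 2. resample everything, once, to the context sample rate
    let channels = channels
        .into_iter()
        .map(|c| resample(c, asset_sample_rate, context_sample_rate))
        .collect();
    // 3. return a buffer consumable without any further processing
    AudioBuffer { sample_rate: context_sample_rate, channels }
}
```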
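Third, for 3., the streaming case would rather be pull-based: blocks are produced on the fly as chunks arrive, decoded and resampled incrementally, with no random access (hence no `start`, `stop` or `seek`). The trait name here is invented for the sake of the example:

```rust
/// Sketch of the streaming paradigm (case 3.): a source that yields blocks
/// as chunks arrive from WebRTC, an <audio> element, etc.
pub trait MediaStream {
    /// Returns the next block of samples, or `None` when the stream is
    /// starved (buffering, back pressure) or finished. Decoding and
    /// resampling happen incrementally, as this is called.
    fn next_block(&mut self) -> Option<AudioRenderQuantum>;
}
```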
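Finally, to show why a type really dedicated to 2. pays off: once everything is decoded and resampled up front, consumers like the `AudioBufferSourceNode` read samples with plain indexing, with no decoding, locking or buffering in the render thread. Again just a sketch, on top of the hypothetical `AudioBuffer` struct above:

```rust
impl AudioBuffer {
    /// O(1), allocation-free sample access: the data is already decoded
    /// and already at the context sample rate.
    pub fn sample(&self, channel: usize, index: usize) -> f32 {
        self.channels[channel].get(index).copied().unwrap_or(0.0)
    }

    /// Length in frames.
    pub fn length(&self) -> usize {
        self.channels.first().map_or(0, Vec::len)
    }
}
```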
Comments

Hey, thanks for thinking along, it's much appreciated. I think you have summarized the three cases very well. I fully agree with 1). I'm interested to hear your proposal for a distinction between 2 and 3. In any case, please let me know what you intend to refactor. We can surely work out something that is better than the current state.

Ok cool, thanks for your answer. What I can imagine doing to advance on that is:

Sounds like a plan?

Sounds good to me!