Worker support for BaseAudioContext #2423
Comments
This is something we can do indeed. This allows, for example, scheduling Web Audio API things more precisely regardless of the main thread load.
That's a good point. I hope those are the only things tied to the window.
Oh, AudioBuffers are not transferable according to the current spec?
I suppose that technically, you could
Just to make a note here:
Also, I didn't know. By the
WebRTC Data Channel in Workers: |
Acquiring the content is an operation in the Web Audio API spec. You can get a fresh |
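The "fresh copy" idea can be sketched in plain JavaScript. The helper below is hypothetical (not part of the Web Audio API or this issue's actual text): it copies each channel out of an AudioBuffer so the copies can be transferred to a worker without touching the buffer's own storage.

```javascript
// Hypothetical helper: copy each channel of an AudioBuffer-like object
// into fresh Float32Arrays. getChannelData() returns a view over the
// buffer's storage; slice() makes an independently-owned copy, so
// transferring these copies leaves the AudioBuffer itself intact.
function toTransferable(audioBuffer) {
  const channels = [];
  for (let c = 0; c < audioBuffer.numberOfChannels; c++) {
    channels.push(audioBuffer.getChannelData(c).slice());
  }
  return channels;
}

// Main-thread usage in a browser (assumes an existing `worker`):
//   const channels = toTransferable(buffer);
//   worker.postMessage({ channels }, channels.map((ch) => ch.buffer));
```

The second argument to `postMessage` lists the underlying ArrayBuffers to transfer rather than structured-clone, which is the cheap path the comments above are discussing.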
That might work for AudioBufferSourceNode. How about audio buffers in the AudioProcessingEvent (as in ScriptProcessorNode)? The AudioBuffer belongs to the main thread. However, if you transfer the result of getChannelData() to the worker thread, then we have a weird situation: the main thread owns the AudioBuffer, but the sample data array now belongs to the worker thread. I don't want to raise this as a formal issue in the spec (since the SPN and the event are deprecated), but I think it's worth investigating.
It's all defined. If the

This works for any
Will move this to v.next. |
As outside feedback: this would be tremendously useful in complex applications where both UI load and audio rendering control are pushed to the limits.
Yes. This is one of the things that we will look at when we're done with V1.
Would this also apply, to some degree, to ServiceWorkers, and in that way enable some kind of background playback of audio? That would give a way to have persistent audio playback for an app like SoundCloud without it having to keep the site open and without every navigation having to happen within an SPA. Perhaps it could even be possible to start audio from a notification shown by the ServiceWorker (announcing, e.g., a new podcast episode)?

Or would that be a follow-up to this issue? And if so: would it make sense to create such a follow-up already, or to wait for this issue to be solved first?

My use case: a Progressive Web App that includes podcasts, where I want to be able to play those podcasts across page navigations as well as in the background when the app is closed, and ideally also be able to start playing the podcasts from notifications announcing new episodes.
The way I see it, supporting AudioContext in a Worker is just a stopgap solution. Something like AudioWorker with sample-level audio manipulation (e.g. an audio device callback) could actually address many issues raised by the recent debates. (I couldn't find the issue; it seems the author deleted it.) With that said, it would definitely be a V2 item.
I disagree. Essentially, in any reasonably complex end-user application, the main thread will serve mostly as the UI thread and should be considered unreliable from a timing perspective. Layout, style computations, etc. will add unacceptably long blocks to the thread. To get reliable, accurate playback of scheduled events, you need at least a dedicated thread for event scheduling, which may or may not be the same as the audio rendering thread. Given the current architecture of the Web Audio API, a dedicated Worker thread would be ideal for this task. The current workaround is to schedule events at least 500–600 ms ahead, which is not the end of the world, but it feels as if one were working with an unreliable API.
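The lookahead workaround described above is usually implemented as a polling scheduler. A minimal sketch, with the audio clock injected as `now()` (in a real app, a bound `() => ctx.currentTime`) so nothing here assumes a browser; `playNote` and the parameter values are placeholders:

```javascript
// Lookahead scheduler sketch: on each tick, schedule every event that
// falls inside the lookahead window, using the audio clock for timing.
// `now()` returns the audio clock in seconds; `playNote(t)` schedules
// one event at absolute time t.
function makeScheduler(now, playNote, lookahead = 0.6, step = 0.25) {
  let nextNoteTime = now();
  return function tick() {
    while (nextNoteTime < now() + lookahead) {
      playNote(nextNoteTime);
      nextNoteTime += step;
    }
  };
}

// In a real app: setInterval(tick, 25) — ideally driven from a worker,
// so main-thread jank cannot delay the polling itself.
```

The 0.6 s lookahead mirrors the 500–600 ms figure above; events are timestamped against the audio clock, so jitter in when `tick` fires does not move the notes.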
Is this being superseded by AudioWorklet or are those targeting completely different use-cases? |
This one is different; it's about using AudioContext and OfflineAudioContext from within a worker.
Action items from TPAC 2019:
I haven't thought it completely through yet, but I'm wondering if future support for

My previous thinking was that most web audio apps will need to react to main thread UI / DOM interaction (pointer, keyboard, etc.) / MIDI events anyway. Normally, you'd want to use the listener callbacks to schedule events on the

But with

I guess this would necessitate another polling loop of some kind inside the worker thread to pick up those values at its own pace (an

With this setup, the worker thread becomes a dedicated

The DOM/UI thread could then react separately as it also has a shared view on the schedule, and do so in a more precise manner (get the currently relevant timestamp values from the
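One way to read the shared-schedule idea above (my assumption about the intended design, not anything stated in the thread): the UI thread writes event timestamps into a SharedArrayBuffer, and the worker polls it with Atomics at its own pace, with no postMessage round-trips involved. A minimal one-slot version:

```javascript
// One-slot shared schedule sketch. Assumption: a single writer (the UI
// thread) and a single reader (the worker). slot[0] holds an event time
// in ms; slot[1] is a dirty flag marking unconsumed data.
const sab = new SharedArrayBuffer(8);
const slot = new Int32Array(sab);

// UI-thread side: publish a new event timestamp.
function publish(timeMs) {
  Atomics.store(slot, 0, timeMs);
  Atomics.store(slot, 1, 1); // mark "new data available"
}

// Worker side: poll at its own pace. Returns the time, or null if
// nothing new. compareExchange atomically clears the dirty flag only
// if it was set, so an event is consumed exactly once.
function poll() {
  if (Atomics.compareExchange(slot, 1, 1, 0) === 1) {
    return Atomics.load(slot, 0);
  }
  return null;
}
```

In a real app the SharedArrayBuffer would be posted to the worker once at startup; each side then constructs its own Int32Array view, and the worker's polling loop calls `poll()` from its scheduling tick.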
Virtual F2F:
This would be very helpful for my music sequencing web app (https://onlinesequencer.net/). Right now, clicking the play button starts a web worker; that worker periodically posts messages back to the main thread to play the next step. But the main thread is also doing a lot of graphical work at the same time, so sometimes there is lag; it would perform much better to do all of the audio processing in the worker. I know about AudioWorklet, but it seems very tricky to use it for scheduling like this.
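The worker-timer pattern this comment describes can be sketched as a small factory. It is factored so the plumbing runs outside a browser: `post` stands in for the worker's `postMessage`, and the timer functions are injectable (these names are mine, not from the app above).

```javascript
// Worker-owned tick loop sketch: the worker holds the interval so
// main-thread jank cannot starve it; each "tick" message tells the main
// thread to schedule the next sequencer step via the Web Audio API.
function makeTickLoop(post, setTimer = setInterval, clearTimer = clearInterval) {
  let timer = null;
  return {
    start(intervalMs = 25) {
      if (timer === null) timer = setTimer(() => post("tick"), intervalMs);
    },
    stop() {
      if (timer !== null) {
        clearTimer(timer);
        timer = null;
      }
    },
  };
}

// Inside an actual worker script this would be wired up as:
//   const loop = makeTickLoop((m) => postMessage(m));
//   onmessage = (e) => (e.data === "start" ? loop.start() : loop.stop());
```

Even with the timer in a worker, the audio calls still happen on the main thread today, which is exactly the limitation this issue is about.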
TPAC 2021: w3c/mediacapture-extensions#16 has been fixed, so we're clear to work on this now, and we'll have

This is rather high priority, as large apps using the Web Audio API are increasingly common. It should "just" be a matter, in terms of spec, of adding

In terms of implementation it's a bit more involved.
2023 TPAC Audio WG Discussion: The WG still recognizes the importance of this project and will continue to work on the necessary changes to the specification. |
Wanted to comment that a current workaround (not the easiest to implement, but possible) is to run the web audio stuff in an iframe served from a different subdomain. I found this issue after working on an audio plugin system that runs in a sandboxed same-origin iframe, which couldn't use web workers anyway, and still manages to block the main app sometimes. After some investigation I realized that same-origin iframes share their thread with the parent (regardless of sandbox status, I guess), but different-origin iframes are on separate threads. So if I did end up serving the iframe from a different origin to allow web workers, I would probably already solve the thread-blocking problem without implementing a web worker. But for code that does not need to be isolated, I think OfflineAudioContext in a web worker would be very nice!
To update on this project: I ran into a couple of major issues using iframes as a workaround:
So while there are some ways to deal with this, having OfflineAudioContext available in web workers would make this architecture dramatically easier to manage. I think I'd still spawn the workers from inside a different-origin iframe for security reasons, but having user code run inside web workers would mean I don't have to deal with killing and reloading iframes all the time.
Our workaround for this limitation (and for the lack of a streaming OfflineAudioContext) is a clone of the Web Audio API in JavaScript: https://github.com/descriptinc/web-audio-js. In theory this could be used for realtime playback, but we haven't tried that.
@benwiley4000 Just wanted to say thank you! The iframe trick works indeed and no longer blocks my main thread. Before, even playing just a few sounds caused micro stutters in my game (on top notch hardware), that were definitely noticeable during gameplay. What's the state of this issue? It's been many years since this was proposed, but it's crucial for anything real-time performance sensitive to be able to detach audio processing from the main thread |
@maierfelix I'm curious what your use case is. If you're using a ScriptProcessorNode, yes this blocks the main thread, but the more modern approach (which is viable in most cases) is to use an AudioWorkletNode for custom processing. That and other AudioNodes should all be running on a separate thread normally. I specifically wanted to execute user plugin code that can have free access to the web audio API, so that's why I needed an Iframe, and also why I couldn't offload into an AudioWorkletNode. I also want to be able to kill the code if it runs too long (because I don't know who wrote it), which is why I needed a separate thread. If you're playing sound effects in your own game I would expect you have enough control over your audio pipeline to be able to implement it without script processor nodes or iframes. However now I'm suspecting you might be using a third party web audio processing library that relies on script processor node. It's usually pretty easy to tell if it's being used, at least in Chromium, because there will be a console.warn telling you it's no longer a good idea to use it. |
@benwiley4000 I'm using the AudioContext directly, mainly the ConvolverNode and the HRTF PannerNode, for a current prototype that ray traces sound in real time (here is a little demo video). I'm not aware of a way to use nodes like Convolver or Panner that Web Audio provides in a Worker or Worklet, or am I missing something here?

Convolver and Panner seem very expensive, and even though I'm reusing (or freeing up) nodes after use, Web Audio becomes quite loaded over time and interferes with the responsiveness of the game (stuttering, random lag spikes, etc.). Mainly the HRTF PannerNode is a bit quirky to use in Chrome and makes me feel that there might be a memory leak, either on my end or in Chrome. Once I connect the Panner I get these performance problems. In Firefox everything runs perfectly fine, even after hours of spamming complex sounds!

So far, transferring the workload into an iframe on a separate thread seems to fix most of these issues :)
All I mean is that these web audio nodes are supposed to all share an audio thread separate from the main rendering thread. They are set up from the main thread, but the bulk of their work is supposed to be offloaded in parallel.

If that's not the case, it could be a browser implementation issue, or I could be misunderstanding the thread model of web audio.

Anyway, interesting to know that using an iframe alleviates your problem.

Ben
ETA: perhaps Chrome implements the web audio thread as a separate thread in the same process? Whereas web workers and different-origin iframes seem to be consistently implemented by Chrome as separate processes.
This synchronous mode uses `emscripten_thread_sleep` instead of `emscripten_sleep` and is intended to run from a separate thread where blocking is allowed. In its current state, this seems to depend on WebAudio/web-audio-api#2423. The alternative would be to proxy audio-context-related calls to the main thread.
Currently the exposure of the context constructors is not defined in the spec, but we can clarify it by doing this:
Furthermore, we can expose this to the WorkerGlobalScope (not Worklet):
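The elided snippets presumably showed WebIDL exposure annotations. As an illustration only (my reconstruction, not the issue's original text), the change would be along these lines:

```webidl
// Illustration — reconstructed, not the original snippet. Exposing the
// constructors to workers is a matter of widening the Exposed attribute:
[Exposed=(Window, Worker)]
interface OfflineAudioContext : BaseAudioContext {
  // ... existing members unchanged ...
};
```

The `Worker` token covers `DedicatedWorkerGlobalScope` and friends but not worklet scopes, matching the "WorkerGlobalScope (not Worklet)" distinction above.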
From this short investigation, I feel like we do not have anything tied to the Window or DOM in WebAudio. This is a relatively small spec change (a non-breaking architectural change), but the advantage is tremendous in my opinion.
WDYT?