
How to avoid generating garbage in hot paths? #373

Closed
juj opened this issue Oct 18, 2014 · 34 comments

Comments

@juj

juj commented Oct 18, 2014

In the specification, AudioBufferSourceNodes are one-off, fire-and-forget objects. Profiling Emscripten-compiled pages shows that these continuously generate garbage, since new objects must be allocated to play back data. The amount of garbage is not huge, but it seems redundant. Has it been considered that the Web Audio API could offer a mechanism to reuse AudioBufferSourceNodes, or some other alternative way to use the API without generating garbage in potentially hot paths?
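[For illustration, a minimal sketch of the pattern in question; `playSound` is a hypothetical helper, with `ctx` an existing `AudioContext` and `buffer` a decoded `AudioBuffer`:]

```javascript
// Typical fire-and-forget playback. Every shot allocates a fresh
// AudioBufferSourceNode: start() may only be called once per node,
// so the node becomes garbage as soon as it finishes playing.
function playSound(ctx, buffer, when = 0) {
  const src = ctx.createBufferSource(); // new allocation on every call
  src.buffer = buffer;                  // the AudioBuffer itself is shareable
  src.connect(ctx.destination);
  src.start(when);
  return src;                           // single-use by design
}
```

Calling this from a game loop means one short-lived node per sound effect, which is the garbage being profiled here.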

@juj
Author

juj commented Oct 18, 2014

For background info, see https://bugzilla.mozilla.org/show_bug.cgi?id=1083418

@cwilso
Contributor

cwilso commented Oct 23, 2014

Note this is specific to an app that's implementing streaming themselves (with their own Ogg decoder).

I think you'd be best served, in v1, by doing this in a worker.

@juj
Author

juj commented Oct 23, 2014

It's not possible to move Emscripten-based code to run in a worker, because most of the DOM APIs are not accessible there. Is Web Audio available in a worker?

@cwilso
Contributor

cwilso commented Oct 23, 2014

Let me rephrase - you need a streaming API that is going to work at non-native sample rates, and also without creating a lot of extraneous objects (BufferSourceNodes aren't your only problem here; ideally you'd like to cut down your number of AudioBuffers too, I imagine). You could fairly simply create that streaming API inside a worker, then access it from the DOM.

I wasn't suggesting moving the entire project of Emscripten-compiled pages into a Web Worker. (I was referring to an Audio Worker that could implement streaming and buffering.)

@juj
Author

juj commented Oct 24, 2014

Ah, I see. However, moving to a worker would not avoid the garbage being generated; it would just be generated in the worker instead. Ultimately it would be ideal not to generate garbage at all, even when playing a single audio clip and not streaming.

@joeberkovitz
Contributor

joeberkovitz commented Oct 24, 2014

@juj actually, I think moving to a worker could avoid garbage being generated, provided that game sounds are initially at (or resampled to) the AudioContext sample rate and then dynamically mixed into the recycled output buffers provided to the AudioWorker.

Let's say that doesn't work for you, though, for sake of argument. I'm not seeing what we would do then beyond changing the spec to allow reuse of AudioBufferSourceNodes, which is a pretty big breaking change to the architecture. Those nodes are extremely lightweight. Before treating this as a V1 issue I'd love to understand what actual fraction of the undesirable GC activity being reported is actually due to audio, and the extent to which the number of nodes being GCed has a significant impact on game performance. Looking at the Mozilla bug report I did not see any quantitative information, just a statement that there are "quite a few" AudioBufferSourceNodes being generated.

@cwilso
Contributor

cwilso commented Oct 24, 2014

Actually, they don't even need to be pre-resampled. The point of using an Audio Worker is that you own copying data to the output. The AW can recycle its input and output buffers, so there's no GC pressure from that; you (the developer) can recycle your incoming audio buffers if you like (since it's your code that's copying that data into the output buffer anyway), and you don't even need an AudioBufferSourceNode analog (you can use whatever mechanism you like to schedule buffers to be played back, you may not need a dynamically-allocated general-purpose scheduler).

In short - your GC pressure is coming from two places, I imagine -

  1. the AudioBuffers themselves. Unfortunately, AudioBuffers are not really "rewritable" in the Web Audio API; this is largely due to the web's (lack of a) memory sharing model. You can be more flexible with this in your own implementation inside an Audio Worker: for example, in a streaming situation, where you might need to be creating a lot of AudioBuffer objects (as chunks come in) in Web Audio, you can instead create a large ring buffer or memory pool and copy to already-played areas, since you can maintain this memory and transfer buffers back and forth. You should be able to implement this without generating garbage as you go.

  2. the BufferSourceNode objects. These are a byproduct of Web Audio needing a lightweight general-purpose scheduling object; again, though, you don't really need this in a more confined scenario where you're effectively writing your own scheduler.

It's possible we could write a reusable scheduling object (a BufferSourceNode that doesn't become a largely useless object after playback), but it has a lot of subtleties tied up with cross-thread memory sharing; for now, you should be able to avoid GC pressure using Audio Workers and your own scheduling.
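[The ring-buffer idea in point 1 can be sketched independently of any particular worker API; the Audio Worker proposal discussed here never shipped, but the same structure works inside the `process()` callback of today's `AudioWorkletProcessor`. Class and method names below are illustrative. All memory is allocated up front, so steady-state operation produces no garbage:]

```javascript
// Preallocated ring buffer: incoming chunks are copied in as they
// arrive and copied out into the engine-recycled output buffer.
// Nothing is allocated after construction, so no GC pressure.
class RingBuffer {
  constructor(capacity) {
    this.data = new Float32Array(capacity);
    this.readPos = 0;
    this.writePos = 0;
  }
  write(chunk) {            // e.g. a decoded Float32Array from an Ogg decoder
    for (let i = 0; i < chunk.length; i++) {
      this.data[this.writePos] = chunk[i];
      this.writePos = (this.writePos + 1) % this.data.length;
    }
  }
  read(out) {               // e.g. outputs[0][0] inside process()
    for (let i = 0; i < out.length; i++) {
      out[i] = this.data[this.readPos];
      this.readPos = (this.readPos + 1) % this.data.length;
    }
  }
}
```

Underflow/overflow handling is omitted for brevity; a real implementation would track fill level and output silence on underflow.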

@karlt
Contributor

karlt commented Oct 28, 2014

I would expect similar garbage issues in applications like http://webaudiodemos.appspot.com/MIDIDrums/index.html

I don't know whether we should expect garbage to be handled better, or design the api around not creating so much garbage.

@cwilso
Contributor

cwilso commented Oct 28, 2014

The amount of garbage collected in that app is actually fairly minimal. If GC is causing that much disruption in that app, we've got a serious problem.

That said, I can see a few ways around this problem.

  1. "Some of the games (FTL at least) stream in background music themselves by decoding Ogg Vorbis files, so they continuously create new AudioBufferSourceNodes even if there are no audio sfx clips playing." is a painful statement. In the future (after it's been implemented), they REALLY should be just using an Audio Worker to implement their streaming.

  2. We could add another node, a ReplayableBufferSourceNode. I strongly feel this should NOT be the normal node, because it would have drawbacks. It would: 1) only be able to play back one instance at a time, and 2) be able to restart itself. The drawback is that overlapping sounds couldn't be done with this node; the positive is that you wouldn't need to allocate new ones if you maintained a list of them.


@cwilso cwilso modified the milestone: Needs WG decision Oct 29, 2014
@joeberkovitz joeberkovitz modified the milestones: Web Audio v.next, Needs WG decision Nov 13, 2014
@Prinzhorn

Prinzhorn commented Apr 24, 2015

Unfortunately, AudioBuffers are not really "rewritable" in the Web Audio API; this is largely due to the web's (lack of a) memory sharing model.

I'm currently working with the Web Audio API and find that they are rewritable, albeit with differences between Chrome and Firefox (see http://stackoverflow.com/questions/29842079/firefox-web-audio-api-on-the-fly-update-audiobuffer-audiobuffersourcenode). So according to what you said, is it correct that both Chrome and Firefox behave "wrong"? Is it just luck that things work as I expected them to? Safari on the iPad also works the way Chrome does.

@notthetup
Contributor

notthetup commented Apr 24, 2015

I believe that changed very recently in Chrome 42.

http://blog.chromium.org/2015/03/chrome-42-beta-push-notifications_12.html

@cwilso
Contributor

cwilso commented Apr 24, 2015

Chrome still has a bug where it lets you mess with the contents of the AudioBuffer; that will be protected against shortly (and then we'll be compatible with FF). Firefox has a bug where they allow you to set the .buffer more than once; presumably that will get fixed as well.

@Prinzhorn

Prinzhorn commented Apr 24, 2015

Chrome still has a bug where it lets you mess with the contents of the AudioBuffer; that will be protected against shortly (and then we'll be compatible with FF). Firefox has a bug where they allow you to set the .buffer more than once; presumably that will get fixed as well.

Ok, so basically what I'm currently doing shouldn't work at all in any browser? I guess I have to use ScriptProcessorNode then, which is deprecated. I guess that's what living on the edge is like.

@cwilso
Contributor

cwilso commented Apr 24, 2015

A fully compliant browser, no.

I think you probably really want an AudioWorker, yes - but there's no AudioWorker implementation yet.

@reduz

reduz commented Sep 8, 2015

As someone porting a game engine to emscripten and writing the audio backend directly in Javascript (due to lack of thread processing with shared memory), the current situation of WebAudio is really inefficient for games. Imagine the following common case scenario:

You have footstep sfx (4 or 5 wavs) and you want to round robin 3 voices playing any of the wavs (randomly) to make it sound more natural.

The only way to do this is to create the AudioBufferSourceNodes on the fly and then free them, which seems pretty inefficient and creates garbage (especially if I have a post-process chain with filters, etc.). If you think that's the intended way, then fine, but it would be nicer if AudioBufferSourceNodes were more reusable.
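[As the API stands, the round-robin pattern described here does have to allocate a node per shot; only the decoded `AudioBuffer`s are shared. A sketch under those constraints - the `FootstepPlayer` name and voice-stealing logic are illustrative, not part of any API:]

```javascript
// Round-robin footstep player: up to `maxVoices` overlapping shots,
// each picking a random buffer. The AudioBuffers are reused across
// shots; the AudioBufferSourceNodes, per spec, cannot be.
class FootstepPlayer {
  constructor(ctx, buffers, maxVoices = 3) {
    this.ctx = ctx;
    this.buffers = buffers;   // e.g. 4-5 decoded footstep wavs
    this.voices = new Array(maxVoices).fill(null);
    this.next = 0;            // round-robin cursor
  }
  play() {
    const old = this.voices[this.next];
    if (old) old.stop();      // steal oldest voice (harmless if finished)
    const src = this.ctx.createBufferSource();
    src.buffer = this.buffers[(Math.random() * this.buffers.length) | 0];
    src.connect(this.ctx.destination);
    src.start();
    this.voices[this.next] = src;
    this.next = (this.next + 1) % this.voices.length;
    return src;
  }
}
```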

@notthetup
Contributor

notthetup commented Sep 9, 2015

While the AudioBufferSourceNode might feel 'heavy' to use (I think the long name does affect people's perception), it should be pretty light on memory. So creating and destroying an ABSN (as opposed to an AudioBuffer itself) shouldn't create much garbage.

Not exactly for gaming, but we did a very unscientific study of how much garbage generation Chrome could handle when creating the MultiTrigger SoundModel (https://github.com/sonoport/soundmodels). This SoundModel is usually used to create 'rapid gunfire' type of sounds. I remember being able to create something along the lines of 60+ Nodes (ABSN + Gain + Filter, etc) per second and still not have GC hits causing any scheduling issues.

Are you seeing the GC impact on performance in a specific case? I would love to hear about it.

@magcius

magcius commented Sep 14, 2017

I'm seeing a lot of garbage being generated from my very simple test app. I already keep a pool of AudioBuffers and I'm not doing any other allocation in the hot path, yet I'm seeing quite annoying GC pauses at unfortunate moments, and Chrome's memory profiler is showing me a stairstep profile. AudioBufferSourceNodes do not appear to be freed quickly.

It would be nice to have a way to prevent garbage in the hot path.
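[For reference, the kind of AudioBuffer pool being described might look like the sketch below; the `BufferPool` class is illustrative. Note it only helps with the buffers - the wrapping AudioBufferSourceNodes still have to be created per playback:]

```javascript
// Simple AudioBuffer pool: buffers of one fixed shape are recycled
// instead of reallocated, avoiding repeated large allocations.
class BufferPool {
  constructor(createBuffer) {
    // createBuffer: factory, e.g. () => ctx.createBuffer(2, 4096, 48000)
    this.createBuffer = createBuffer;
    this.free = [];
  }
  acquire() {
    return this.free.pop() || this.createBuffer();
  }
  release(buffer) {
    this.free.push(buffer); // caller guarantees playback has finished
  }
}
```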

@cwilso
Contributor

cwilso commented Sep 14, 2017

@magcius

magcius commented Sep 14, 2017

Alright; apologies for the spam. I was only pointed to this issue tracker after I wrote that article.

@JackCA

JackCA commented Sep 15, 2017

I just stumbled upon this thread after a web search for examples of "web audio buffer pools" -- I'm also running into GC issues from rapidly assigning source nodes, and this is a top result.

Separately, I had read the blog post you're referring to the other day, and am now making the connection that @magcius was the author of that post.

@cwilso, I can empathize with the clear frustration you have -- you've put a lot of work into this API and it doesn't feel great when something like that is published.

However, do you have a constructive response to the question he's asking?

@cwilso
Contributor

cwilso commented Sep 15, 2017

@magcius That sounded snarky on my part, I apologize. See comments on your blog post.

@sgentle

sgentle commented Sep 18, 2017

Just for kicks, I threw together this demo of using an ABSN as a ring buffer by modifying a looped buffer while it's playing: https://demos.samgentle.com/audiobuffer-shenanigans/ - it only works because of gaps in spec compliance that are closing quickly, so get in while you can! Tested on Chrome, Safari and Firefox, though timer drift leads to some fun effects on Firefox after a little while. PS you can git clone that url if you want to.
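[The trick in that demo boils down to: start a looping ABSN over an N-sample buffer, then keep writing new audio a safe distance ahead of where the loop is currently reading. The read position has to be inferred from the context clock, which is where the drift mentioned above comes from. A sketch of the index arithmetic - function names are mine, and the whole approach relies on mutating a playing buffer, behavior that spec-compliant browsers are closing off:]

```javascript
// Estimate where a looping AudioBufferSourceNode, started at
// `startTime`, is currently reading within its `length`-sample buffer,
// based only on the context clock (hence the drift).
function loopPosition(startTime, now, sampleRate, length) {
  const elapsed = Math.max(0, now - startTime);
  return Math.floor(elapsed * sampleRate) % length;
}

// The next safe region to overwrite: `margin` samples past the
// estimated read head, wrapping around the loop.
function writeIndex(startTime, now, sampleRate, length, margin) {
  return (loopPosition(startTime, now, sampleRate, length) + margin) % length;
}
```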

I saw @magcius's blog post too and, though I've found the Web Audio API useful for high-level things, I agree it is difficult to go low-level with it. That unfortunately exacerbates any holes in the high-level API: "I want to do X" -> "sorry, X isn't supported yet" -> "I'll write code to do it myself" -> "oh no don't do that the performance is bad and you're relying on deprecated features". Audio Worklets are coming, yes, but they're not exactly lightweight from the perspective of just wanting to make numbers come out of a speaker.

The demo I've posted is doomed to obsolescence, but, especially given that SharedArrayBuffer just landed, it'd be nice if the spec let you do something similar that would actually work. Obviously, performance is tricky because it's all on the main thread, but so is everything else and we're used to dealing with that. Right now the only real problem with it is that you can't easily keep track of where you are in the buffer to avoid getting out of sync, but a playbackPosition property (cough #296 cough) would help...

TLDR; SharedArrayBuffer + looped AudioBufferSourceNode + playbackPosition = problem solved?

@padenot
Member

padenot commented Sep 18, 2017

Audio Worklets are coming, yes, but they're not exactly lightweight from the perspective of just wanting to make numbers come out of a speaker.

What makes you think AudioWorklets are heavyweight?

@sgentle

sgentle commented Sep 18, 2017

I mean in a conceptual sense. I assume Worklets would be at least as fast as memory sharing, but consider the amount of code needed to get one set up: putting it in an extra file, defining a class, figuring out module boundaries, WorkletNode vs. WorkletProcessor, where to put your postMessages, etc.

I'm not saying I don't like the API, actually I'm very excited to use it, but I wouldn't describe it as lightweight compared to writing some data to a buffer.
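[For concreteness, the ceremony being described: an AudioWorklet needs a processor module (its own file, or a Blob URL as below to keep everything in one place), a `registerProcessor` call, and an `AudioWorkletNode` on the main-thread side. A minimal "just make numbers come out" sketch; the `'ramp-processor'` name and the ramp waveform are arbitrary choices for illustration:]

```javascript
// Processor module, inlined as a string and loaded via a Blob URL.
// It just emits a ramp each render quantum (no bandlimiting).
const processorSource = `
  class RampProcessor extends AudioWorkletProcessor {
    process(inputs, outputs) {
      const out = outputs[0][0];
      for (let i = 0; i < out.length; i++) out[i] = (i / out.length) * 2 - 1;
      return true; // keep the processor alive
    }
  }
  registerProcessor('ramp-processor', RampProcessor);
`;

// Main-thread side: load the module, then instantiate and connect a node.
async function startRamp(ctx) {
  const url = URL.createObjectURL(
    new Blob([processorSource], { type: "application/javascript" })
  );
  await ctx.audioWorklet.addModule(url);
  const node = new AudioWorkletNode(ctx, "ramp-processor");
  node.connect(ctx.destination);
  return node;
}
```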

@joeberkovitz
Contributor

joeberkovitz commented Sep 18, 2017

@sgentle Compared to trying to write to a shared buffer in the main thread and attempting to sync that with the audio thread's position, AudioWorklet's pretty simple and lightweight. It's only as complicated as is necessary to allow JS code to be packaged as a bona fide graph node. And if we provided only a bare-bones low level interface independent of graph nodes, we'd deprive developers of the ability to create efficient custom nodes.

@sgentle

sgentle commented Sep 19, 2017

I'm not trying to deprive anyone of anything, just suggesting a solution that might fit certain low-level use cases. I feel like this is getting bogged down in the definition of simple or lightweight when, really, the concepts themselves aren't likely to be a point of disagreement, just the words we're using for them.

Imagine we're talking about file output; I say "the simplest way is to call write(), because it's just a syscall", and you say "the simplest way is to instantiate a ThreadSafeBufferedWriter, because you don't have to worry about concurrency or optimal buffer sizes". Neither of us is wrong; it's just a question of what goal you define simple in terms of. Yes, creating fully fledged graph nodes requires a carefully designed API, which you've already done and which looks great. But what if you want to just dump sample data into the output?

To be clear, I agree that most people will want Worklets, and I'm not even really trying to argue on behalf of the low-level use case; #1322, this issue and elsewhere make it clear that there's some call for it. My point is that, if you want to do it, rather than adding a new API, you could let ABSN take a SharedArrayBuffer-based AudioBuffer. It's a comparatively minimal API change, keeps everything within AudioNode-land and, bonus, it accidentally works already.

@cwilso
Contributor

cwilso commented Sep 19, 2017

Sorry for delay, family commitments and travel.

@JackCA, I do have a constructive response - it's been littered throughout the issues of this repo, and in meetings with the WG, over the past several years. In short: the Web Audio API is not the low-level, "I just wanna dump some samples out to the hardware" API. That API would be simultaneously much simpler and much easier to screw up in usage. (For example, the sample @magcius put in the blog post: it's all very well to use Math.sin() to generate samples for a sine wave, but then developers start outputting other waveforms that way, which naively skips bandlimiting.) That said, the Web Audio API has been designed as if it were on top of that API - e.g., as @hoch has built out the Audio Worklet spec, it's clear that could be used as the "create one of these, hook it to an output stream, nothing else" foundation.

I still think the WG needs to define that API, and engines need to implement it. I personally think that's probably more important, in the short term, than getting WAA 1.0 stamped as a standard. But it's still a different beast, and I would still advocate most developers shouldn't use it.

What frustrates me is the attitude of "this isn't the right API to do what I want to do [and sometimes just 'in the way I want to do it'], so I don't understand who would ever use this API." Additionally, "I wouldn't describe [WAA] as lightweight compared to writing some data to a buffer" is goofy to me, because for many of the use cases [playing sample-buffer-based sounds, algorithmic waveforms, and filtering], I see it as very lightweight for developers and efficient for users. It's not perfect, and the worst part has been the missing exposure of the bedrock underneath - I've been saying that for quite some time. This shouldn't be a divisive statement.
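[The bandlimiting point deserves a concrete illustration. Generating a sine with Math.sin is harmless, but the same naive approach applied to, say, a sawtooth produces harmonics above Nyquist that fold back as audible aliasing; one classic fix is additive synthesis truncated at Nyquist. A sketch for illustration - this is not how any browser's OscillatorNode is specified to be implemented:]

```javascript
// Naive sawtooth: a pure ramp. It contains every harmonic, so anything
// above Nyquist aliases when sampled.
function naiveSaw(phase) {            // phase in [0, 1)
  return 2 * phase - 1;
}

// Bandlimited sawtooth via a truncated Fourier series:
//   saw(x) = -(2/pi) * sum_{k=1..K} sin(2*pi*k*x) / k,
// with K chosen so that k * f0 stays below Nyquist.
function bandlimitedSaw(phase, f0, sampleRate) {
  const maxHarmonic = Math.floor(sampleRate / 2 / f0);
  let sum = 0;
  for (let k = 1; k <= maxHarmonic; k++) {
    sum += Math.sin(2 * Math.PI * k * phase) / k;
  }
  return (-2 / Math.PI) * sum;
}
```

At low fundamentals the two agree closely; the difference only matters near Nyquist, which is exactly the case a naive implementation gets wrong.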

@magcius

magcius commented Sep 19, 2017

To be clear, my goal here was not to antagonize the Web Audio developers and specification. I was lamenting the apparent lack of progress, documenting my experiences building a (sample-accurate) playback tool, and hoping my article would bring a bit of awareness to the problems I encountered.

Yes, building a stream of samples requires domain knowledge of bandlimiting and aliasing, and a basic understanding of sampling theory. My example was picked for simplicity, to generate something quickly. Doing it the way I was doing it in a real app would break in the extreme cases, but I figured that was out of scope. That said, these aren't problems so hard or complex that only browser implementers can possibly understand the concepts of bandlimiting and aliasing -- I do believe that with the proper bedrock exposed, library authors would implement the Oscillators and other high-level pieces on top.

That's roughly the lens I saw the existing API from. Throw in there some real concerns about cross-browser timing where I don't expect the high-level nodes to behave 100% consistently and I felt like it "missed the point". The world I envisioned was one where the browser provides the bare minimum, with functionality like Oscillator and the graph system being provided by JavaScript. I understand this isn't the world we have.

This is getting roughly off-topic now; apologies for the distraction. I'm motivated to help out in the standards process and get new solutions built.

@sgentle

sgentle commented Sep 19, 2017

"I wouldn't describe [WAA] as lightweight compared to writing some data to a buffer" is goofy to me

It's less goofy if you read it in context, where [WAA] is actually [AudioWorklets], or more accurately [Using AudioWorklets for sample-based playback].

I can't help but observe that my last 3 comments have been trying to explain this one relatively inconsequential sentence. That's fine, but it makes me concerned that perhaps nobody read my proposed solution. I don't mind if it's a bad solution and you say so, but right now all I have by way of feedback is that I shouldn't ever use the word lightweight again. Like, message received, but any thoughts on shared memory in ABSN?

@cwilso
Contributor

cwilso commented Sep 20, 2017

@magcius I agree absolutely that "these aren't problems so hard or complex that only browser implementers can possibly understand the concepts of bandlimiting and aliasing" - in fact, most browser implementers probably had to learn about them. However, I do think that the vast majority of web developers don't understand them - and would rather not have to. I also agree with you that "with proper exposed bedrock, library authors would implement the Oscillators and high-level stuff on top." But that's pretty much what a lot of the Web Audio API is, with the additional caveat that it cuts through a lot of missing system bedrock and security policy (e.g., since we know what code is running in it, we could elevate audio thread privileges to get better response, even though that's not currently possible to envision for arbitrary JS code). Should all that missing bedrock get exposed? Absolutely, ASAP, particularly the stream-processing and device-connection features. The thread privileges issue, though - that's going to be hard for generic cases.

I disagree that functionality like Oscillator only belongs in individual JS libraries. I do agree it should be possible to efficiently ignore the existence of the WAA Oscillator and build your own - in much the same way that I can ignore and build my own.

I did want to address one thing you said - anywhere that the nodes don't behave consistently across implementations, that's a bug - in implementation, and possibly in spec. In places where you feel a node "misses the point," I'd encourage you to explore that with the WA WG - because you may be right, it may be missing the point, or you may be misunderstanding the target scenario (in which case we may need to better define that use case in the spec).

@sgentle I think you're missing my point - I don't think AudioWorklets are the likely solution to your specific use case (GC-free playback of audio buffers); I think some form of reusable ABSN would be the solution to that (see earlier in this issue) - which ends up as a solution not a million miles away from what you're suggesting. AudioWorklets would be a solution, and so would the WAA plus a reusable buffer playback node, but just using shared array buffers might be a non-starter (because I think that would still enable cross-thread timing issues, which are a no-no in web APIs). I don't know for sure, because that design work simply has to be done, and it's been postponed from v1. (At any rate, you should read this as "I did, actually, read your proposed solution." :) )

@sgentle

sgentle commented Sep 20, 2017

Appreciate it. :)

Okay, I was under the impression that the Shared Memory and Atomics spec had already done the work of figuring out the cross-thread timing issues and you could just update (or make an alternate version of) AudioBuffer to use SharedArrayBuffer/CreateSharedByteDataBlock.

After doing some digging it seems that shared memory in JS is pretty well handled, but there hasn't really been any work done on the JS-browser boundary. The only current Web API code that accepts SharedArrayBuffers is WebGL, and even then that's just to copy them; it doesn't handle any kind of memory sharing between WebGL and JS-land.

It was discussed a bit in this thread: https://groups.google.com/a/chromium.org/forum/#!msg/blink-dev/d-0ibJwCS24/sv0A0aQP7nAJ
And this one: tc39/proposal-ecmascript-sharedmem#38

As far as I know, that means the current situation on using shared memory in Web APIs is: ¯\_(ツ)_/¯, making it probably not as simple (there's that word again) a solution to this use case as I imagined.
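[On the pure-JS side, the shared-memory half is indeed workable today; it's handing such memory to a Web Audio object that remains unspecified. A sketch of single-producer/single-consumer cursors done with Atomics over SharedArrayBuffers - this is what an ABSN or worklet would need to consume for the proposal to work, and the function names are mine:]

```javascript
// SPSC ring over shared memory: sample data and read/write cursors all
// live in SharedArrayBuffers, so a main-thread producer and an
// audio-side consumer could share them without structured-clone copies.
function createSharedRing(capacity) {
  return {
    samples: new Float32Array(new SharedArrayBuffer(capacity * 4)),
    cursors: new Int32Array(new SharedArrayBuffer(8)), // [read, write]
  };
}

function push(ring, value) {
  const w = Atomics.load(ring.cursors, 1);
  ring.samples[w % ring.samples.length] = value;
  Atomics.store(ring.cursors, 1, w + 1); // publish only after the write
}

function pop(ring) {
  const r = Atomics.load(ring.cursors, 0);
  if (r === Atomics.load(ring.cursors, 1)) return 0; // underflow: silence
  const value = ring.samples[r % ring.samples.length];
  Atomics.store(ring.cursors, 0, r + 1);
  return value;
}
```

(Cursor wraparound at 2^31 and overflow detection are ignored for brevity.)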

@magcius

magcius commented May 5, 2018

I did want to address one thing you said - anywhere that the nodes don't behave consistently across implementations, that's a bug - in implementation, and possibly in spec. In places where you feel a node "misses the point," I'd encourage you to explore that with the WA WG

This should probably go in a separate discussion, but I was recently pointed by Victor Lazzarini to a preprint he wrote in 2015 about this very topic: http://eprints.maynoothuniversity.ie/9421/1/web_audio_criticism.pdf

The OscillatorNode case appears to be fixed in Firefox since https://bugzilla.mozilla.org/show_bug.cgi?id=1106649 was resolved, but the spec still does not appear to declare any expectations on either the method used to generate the waveform or the expected output signal and harmonics.

@rtoy
Member

rtoy commented May 8, 2018

@magcius: Please file a new issue instead of piling onto this one with an unrelated topic. The method is somewhat implied, because the Oscillator has to be implemented in terms of a PeriodicWave, and it has to be bandlimited.

@mdjp
Member

mdjp commented Sep 17, 2019

Closing this issue; it has lost focus, and it may no longer be a problem now that AudioWorklet implementations are becoming available. If there are clear areas that still need to be addressed, please file new issues on the V2 repo.

@mdjp mdjp closed this as completed Sep 17, 2019