Music prototype: adjust sound events management #48538
Merged
Until now, there was an interesting issue with sound event management: if you had code that generated many sounds in the future, and then edited that code in any way while the song was playing, those future sounds would be quite loud once playback reached them. This was because the code re-executed as it was changed, and each re-execution scheduled additional Web Audio plays of the same sounds, stacking on top of the ones already scheduled.
First, some background on the model used for sound events thus far. It is tempting to maintain a central list of events and to generate Web Audio plays just before it's time for them to play. That is indeed the idea in the popular article "A Tale of Two Clocks", in which a regular timer is used to generate Web Audio plays shortly before they are needed. We might end up using a similar model in the future.
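For reference, the look-ahead model from that article works roughly like this sketch (names here are illustrative, not our actual code; the `play` callback stands in for starting a real Web Audio source node):

```typescript
// Look-ahead scheduling: a regular JS timer ticks, and each tick hands to
// Web Audio only the events that fall inside a short window ahead of the
// audio clock, rather than scheduling everything up front.

const LOOKAHEAD = 0.1; // seconds of audio to schedule per timer tick

function tick(
  now: number,                      // current audio-clock time
  events: Array<{ time: number }>,  // central list of sound events
  play: (when: number) => void,     // schedules one Web Audio play
  scheduledUpTo: number             // how far ahead we scheduled last tick
): number {
  const horizon = now + LOOKAHEAD;
  for (const e of events) {
    // Only schedule events newly entering the look-ahead window.
    if (e.time > scheduledUpTo && e.time <= horizon) play(e.time);
  }
  return horizon; // the next tick starts scheduling from here
}
```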
But right now, I took a slightly different approach, which is to maintain a central list of events, but to also schedule Web Audio plays as soon as I know they are in the future, even if they aren't yet near. This allows us to use the same code for events we know are in the distant future, as well as for events that should begin immediately, such as user-triggered code that wants to play instantly.
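In other words, there is one code path: clamp the event's requested time to the current audio clock and hand it straight to the audio layer. A minimal sketch of that idea, with hypothetical names (`scheduleEvent`, `AudioBackend`) and a stub backend in place of real Web Audio nodes, which would call something like `AudioBufferSourceNode.start(when)`:

```typescript
// One scheduling path for both distant-future events and "play right now"
// events: clamp the requested time to the current clock so nothing is asked
// to start in the past, then schedule it immediately.

type SoundEvent = { time: number; sound: string };

interface AudioBackend {
  now(): number;                              // current audio-clock time
  play(sound: string, when: number): void;    // schedule a Web Audio play
}

function scheduleEvent(backend: AudioBackend, event: SoundEvent): void {
  const when = Math.max(event.time, backend.now());
  backend.play(event.sound, when);
}

// Stub backend for illustration: records what would have been scheduled.
const started: Array<[string, number]> = [];
const stub: AudioBackend = {
  now: () => 1.0,
  play: (sound, when) => started.push([sound, when]),
};

scheduleEvent(stub, { time: 5.0, sound: "kick" });  // known future event
scheduleEvent(stub, { time: 0.5, sound: "snare" }); // user-triggered, plays now
```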
This does mean that we are maintaining two sets of information at the same time: the sound events list, which is used to render the visual timeline; and the implicit set of Web Audio plays, each with its own play time. These should always match the visual timeline unless we have a bug.
Making things work properly took some new plumbing. First, the audio system now returns a unique ID for each scheduled play, which we can use to cancel that play. (We need to make sure it's possible to stop a Web Audio play that has not yet started, across all browsers and devices.) Second, when user code generates a play, we track whether the code was under the `when_run` block or is otherwise presumed to be a user-triggered play.

Then, our higher-level code can do a few new things. While the song is playing, the code can be edited, and the visual timeline and scheduled audio can change. When the code is re-executed, we clear the `when_run`-generated sound events and cancel the corresponding Web Audio plays that are yet to begin (though any that are in progress should continue playing); the re-execution then regenerates these as appropriate. User-triggered sounds are untouched, whether they have played already or not. And when the song is stopped, we leave the `when_run`-generated sounds intact on the visual timeline, though user-triggered sound events are cleared.
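The bookkeeping above can be sketched like this. Everything here is illustrative (the class and method names are not our actual API, and the audio calls themselves are omitted): each scheduled play gets a unique ID and an origin tag, and re-execution cancels only the pending `when_run` plays, where real code would call something like `AudioBufferSourceNode.stop()`:

```typescript
// Track scheduled plays by unique ID and origin, so that re-executing the
// code can cancel pending when_run plays while leaving in-progress plays
// and user-triggered plays alone.

type Origin = "when_run" | "user";

interface ScheduledPlay {
  id: number;
  origin: Origin;
  startTime: number;
}

class SoundScheduler {
  private nextId = 1;
  private plays = new Map<number, ScheduledPlay>();

  schedule(origin: Origin, startTime: number): number {
    const id = this.nextId++;
    this.plays.set(id, { id, origin, startTime });
    return id; // unique ID, used later to cancel this play
  }

  // On re-execution: drop when_run plays that have not yet started.
  // Plays already in progress (startTime <= now) continue to completion.
  clearWhenRun(now: number): void {
    for (const play of [...this.plays.values()]) {
      if (play.origin === "when_run" && play.startTime > now) {
        this.plays.delete(play.id); // real code would also stop the node
      }
    }
  }

  has(id: number): boolean {
    return this.plays.has(id);
  }
}
```

For example, with playback at t = 0.5: a `when_run` play at t = 2.0 gets cancelled by `clearWhenRun(0.5)`, a `when_run` play that started at t = 0.2 keeps going, and a user-triggered play at t = 3.0 is untouched.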