Proposed API updates to audio functions #29
Hello Chris,

What I had in mind was something similar to your proposed solution, though yours is more thoroughly thought out than mine, and you are far more familiar with the Web Audio API than I am.

```js
let pausedTime = 0;

const createSource = () => {
  // recreate the source node (source nodes are one-shot and
  // cannot be restarted once stopped)
};

const play = async () => {
  await render;
  if (pausedTime > 0) {
    createSource();
  }
  source.start(context.currentTime, pausedTime);
  timeout = setTimeout(() => { stop(); }, (totalTime - pausedTime) * 1000);
};

const pause = () => {
  // assumes playback started at context time 0
  pausedTime = context.currentTime;
  stop(false);
};

const stop = (dispose = true) => {
  clearTimeout(timeout);
  timeout = 0;
  source.stop(0);
  if (dispose) {
    pausedTime = 0;
  }
};
```

Other than that, I completely agree with renaming onended. We could also rename the events to onsuspended, onrunning, and onclosed to match the AudioContext states, and add onready. We could implement getCurrentTime() and getTotalTime() as well.
Thanks for the input @ozdemirburak - I actually wasn't familiar with suspend/resume, so I'll do some experimenting 😄

I had one other question, though. I noticed we have a line to trigger the stop; I think this was done because originally stop was called with a scheduled time. In my experimentation, neither is required: the oscillator simply stops on its own, and the stop method provided in the morse-decoder API can be used if stopping sooner is required.

I'm also seeing a warning in the browser, although it doesn't seem to cause any issues.
Yes, you're right; we don't need them. I don't really know how to fix the warning if there's no user action, so it's probably okay to ignore. And as you said, it still works even without an interaction.
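For context, the warning discussed here is most likely the browser's autoplay policy flagging an AudioContext created outside a user gesture. A common mitigation, sketched below with an explicit event target so it can be exercised outside a browser, is to resume the context on the first interaction (in a page you would pass `document`; `resumeOnGesture` is not part of morse-decoder):

```js
// Sketch: resume a suspended AudioContext on the first user gesture.
// This is a generic autoplay-policy workaround, not library code.
const resumeOnGesture = (context, target) => {
  const resume = () => {
    if (context.state === 'suspended') {
      context.resume();
    }
    // One-shot: detach after the first gesture
    target.removeEventListener('click', resume);
  };
  target.addEventListener('click', resume);
};
```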
WIP: https://github.com/chris--jones/morse-decoder/tree/feature/audio-pause

I've sort of fixed the audio context problem by initialising it, if needed, in the play function. This won't fix it if play is triggered outside of a user interaction, but it's less likely to occur in this configuration. I had to amend my original design a little - I forgot totalTime is pre-calculated (no function needed; it's ready on initialisation). You can suspend and resume the AudioContext to pause and play the audio; however, currentTime keeps progressing even after the audio is stopped. I've worked around this for now by tracking an offset when play is triggered.

Working:

To Do:
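The offset workaround described above boils down to a small calculation. Here is a sketch with illustrative names (`getPlaybackTime` and `playStartedAt` are not the branch's actual identifiers):

```js
// Sketch of the offset-tracking workaround: AudioContext.currentTime keeps
// advancing even while no audio is playing, so we record its value when play
// is triggered and subtract it to get the position within the morse audio.
const getPlaybackTime = (contextCurrentTime, playStartedAt, totalTime) => {
  const elapsed = contextCurrentTime - playStartedAt;
  // Clamp so a still-running context clock can't report a position
  // before the start or past the end of the rendered buffer.
  return Math.min(Math.max(elapsed, 0), totalTime);
};
```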
Looks really perfect - that's some neat progress, @chris--jones. The only problem I noticed is that if someone doesn't pass audio options, the function doesn't work:

```js
// Doesn't work
a = morse.audio('SOS');

// Works
a = morse.audio('SOS', {
  audio: {
    wpm: 20,
    frequency: 550, // value in hertz
    onstopped: function () { // event that fires when the tone stops playing
      console.log('ended');
    },
  },
});
```

I believe that if the input isn't too lengthy, disposal isn't a problem. While the idea of a timeout makes sense, I'm unsure how to determine its duration.

For seek, according to this comment, we need to create a new buffer node. Although this looks easy, I think it is a bit tough? I tried something like the below, but it isn't working:

```js
const play = async (offset = 0) => {
  if (!sourceStarted) {
    sourceStarted = true;
    currentTimeOffset = context?.currentTime || 0;
    await initAudio();
    if (audioBuffer && offset > audioBuffer.duration) {
      return;
    }
    source.start(0, offset);
    options.audio.onstarted?.bind(source)();
  } else if (context?.state === 'suspended') {
    context.resume();
  }
};

const seek = (seekSeconds) => {
  stop();
  play(seekSeconds);
};
```

Note: I mistakenly closed the issue with GitHub keyboard shortcuts.
Ah yes, sorry - I moved the audio config into a separate section, as we now have some new audio options and events that aren't oscillator-related. I realise this comes after I suggested we shouldn't introduce breaking changes without good reason, but I think this grouping of options makes more sense 😅 I'd like your opinion on the options structure though; we could always put the oscillator-specific config back in the oscillator section and leave the other audio options outside it, for example.

I'm tempted to either not implement automatic disposal (though I think we should still offer the dispose method to clean up resource usage) or just dispose after every playback.

I'll have a look at seek next; it shouldn't be that tricky!
Sure, this sounds reasonable, and I think your example would be a great solution - clean enough. Looking ahead, maybe we should also drop ES5 support. With 4.0.0, we might even get rid of some extra stuff and remove the dist folder. What do you think?

The source you provided is perfect. And I agree we shouldn't make things harder than they need to be; offering an option for automatic disposal would be excellent. Additionally, a solution that avoids setTimeout would be awesome in the end.

By the way, it's interesting that the Web Audio API hasn't changed much since 2014. Thanks a lot.
I think at the very least we could offer an ES6-only build, which would cut down on the bloat. I'm not sure how easy this is to do conditionally, but I'd hope we could set up GitHub Actions to automate it.

I think there have been a few fixes and minor changes, and a decent amount has been proposed, but some of those proposals seem to stay in that state for years :) https://webaudio.github.io/web-audio-api/
I just realised our target in tsconfig is already ES2017, so we're already past ES6 (2015) support.

Here's an example of a GitHub Actions setup that produces the release build and pushes the asset if the release is tagged: https://github.com/chris--jones/morse-decoder/actions/runs/6681331395/job/18155374660
Perfect, these GitHub Actions are really useful; I need to delve deeper into them. Feel free to send a PR any time.
Sorry, I haven't forgotten about this - I just started a new job and have been flat out.
Not a problem - congratulations on the new job, @chris--jones.
Although there's a decent workaround for seeking, pausing, and stopping the audio by binding the wave to an audio control (see #25 (comment)), the underlying API is a little confusing and/or lacking in functionality. These limitations make it difficult to build a complete UI on top of the library.
Proposed solution
I see no reason why we couldn't just re-render the audio, if necessary, when someone hits play again. Their intent is obvious, and the fact that the audio becomes unplayable after stop is not. The use of the offline audio context makes it possible to keep the audio buffer in memory and re-use it! The downside is potentially keeping a large audio buffer around, so this could be an optimisation option - e.g. stop could take a boolean parameter indicating whether the buffer should be disposed (stop(true)), and otherwise you could call audio.dispose() to remove the buffer.

Add seek (or setCurrentTime()) to allow playback from any point in the audio. pause would act the same as stop, except that it would not reset the currentTime to 0, so a subsequent play would resume from that point.
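The proposed stop/dispose contract could look like the following sketch. Everything here is illustrative, not an implementation: `createAudioState` is a hypothetical name, and the assumption that stop keeps the buffer by default (disposing only on stop(true) or dispose()) follows the wording above:

```js
// Sketch of the proposed contract: stop always halts playback, but the
// rendered buffer is only released when dispose is requested.
const createAudioState = () => {
  const state = { buffer: 'rendered-buffer', playing: false };
  return {
    play: () => { state.playing = true; },
    stop: (dispose = false) => {
      state.playing = false;
      if (dispose) {
        state.buffer = null; // release the (potentially large) buffer
      }
    },
    dispose: () => { state.buffer = null; },
    isPlaying: () => state.playing,
    hasBuffer: () => state.buffer !== null,
  };
};
```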
Expose getCurrentTime() and getTotalTime() on the audio object. These would be functions because it might be expensive/unnecessary to poll and update a current-time property; getTotalTime would need to wait until rendering had completed (although you could get the total duration before even playing the audio).
One gotcha with re-playable/pausable audio is the onended event, as it would now be triggered every time playback stops (even on pause). We could rename this event to onstopped and include the current time in the event object (so you could tell whether it was paused or completed), or make onended only trigger at the end of the audio or on disposal. It might also be useful to know when the audio has finished rendering (onready), and we could re-work the API to allow synchronous usage via this callback if that seems useful.
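A sketch of what the proposed onstopped payload might contain. The shape, the names, and the completed flag are assumptions drawn from the proposal, not an existing API:

```js
// Sketch: include the current time in the onstopped event so a listener
// can distinguish a pause from a completed playback.
const makeStoppedEvent = (currentTime, totalTime) => ({
  currentTime,
  totalTime,
  // true when playback reached the end of the audio, false when paused early
  completed: currentTime >= totalTime,
});
```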