Smart buffering #26
I noticed that the event-scheduling function [...] EDIT: Switched to [...]. Next step: test more thoroughly, maybe with a score that has a ludicrous number of note events. If Alda can handle it, we may as well eliminate the buffer options altogether and hard-code 0 and 1000 ms as the pre- and post-buffer.
We have gotten a couple of issues related to playback oddities, at least one of which was confirmed to be a buffering issue -- adding a pre-buffer solved it. I think needing more buffer time is still an issue for machines with less juice. We should keep this idea on the table for sure.
Wondering whether using go blocks with parking timeouts might be more efficient than jamming up all the agent threads -- the issue here could be the number of CPUs.
@crisptrutski My knowledge of concurrent programming is a little lacking... would you be interested in taking the lead on this one?
Adding this to the 1.0.0 milestone because running Alda from an uberjar makes buffering issues extra apparent. I'll take a stab at using core.async.
So, at the moment, we are scheduling all of the events at once, then saying "OK, go!" and letting the overtone.at-at scheduling pool do its thing. It might be more efficient to set up a scheduling queue (perhaps a core.async channel), put all the Alda events on the queue in order by offset, and start playback once we've scheduled up to a certain offset. This should keep the number of events in the scheduling pool low (I assume they disappear once they are executed), which ought to allow it to perform better.

As a bonus, we could expose the scheduling queue to the REPL (and to the server, in the very near future when we have the server/client thing going), so that you could type in a bunch of events on one line, then, while that's playing, type in a bunch of events on the next line. Instead of playing them immediately, Alda can add them onto the queue, so that playback continues smoothly from one line to the next.
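To make the queue idea concrete, here is a minimal Java sketch (Alda runs on the JVM; in Clojure a core.async channel would play the role of the bounded queue). The `Event` record, `offsetMs`, and the capacity of 4 are illustrative assumptions, not Alda's actual API: a producer feeds offset-sorted events into a small bounded queue, blocking when it is full, while a consumer drains it -- so the "scheduling pool" never holds more than a handful of pending events.

```java
import java.util.*;
import java.util.concurrent.*;

// Hypothetical sketch of a bounded scheduling queue (names are illustrative).
public class BufferedScheduler {
    record Event(int offsetMs, String name) {}

    public static List<String> playAll(List<Event> events) {
        // Sort by offset so playback can begin before everything is queued.
        List<Event> sorted = new ArrayList<>(events);
        sorted.sort(Comparator.comparingInt(Event::offsetMs));

        // A small bounded queue keeps the pool from filling up with
        // thousands of pending events at once.
        BlockingQueue<Event> queue = new ArrayBlockingQueue<>(4);
        List<String> played = Collections.synchronizedList(new ArrayList<>());

        Thread producer = new Thread(() -> {
            for (Event e : sorted) {
                try { queue.put(e); }            // blocks while the buffer is full
                catch (InterruptedException ex) { return; }
            }
        });
        Thread consumer = new Thread(() -> {
            for (int i = 0; i < sorted.size(); i++) {
                try { played.add(queue.take().name()); } // "schedule" = consume
                catch (InterruptedException ex) { return; }
            }
        });
        producer.start();
        consumer.start();
        try { producer.join(); consumer.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return played;
    }
}
```

The key design point is the bounded capacity: backpressure from `put` is what keeps memory usage flat no matter how large the score is.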
The one question I have is how we can determine how much "buffer time" there should be for one user vs. another. I assume something like a 2-second buffer should be OK (i.e., start playing once 2 seconds' worth of notes have been scheduled), but it would be ideal if we had some programmatic way to determine whether the current user needs more buffer time.
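The "start once 2 seconds' worth of notes are scheduled" rule above could be sketched like this; `eventsToPreschedule` is a hypothetical helper, not anything in Alda, and it assumes the offsets are already sorted in milliseconds:

```java
import java.util.List;

// Hypothetical helper: given sorted note offsets (ms), count how many events
// must be pre-scheduled before playback can start with the given buffer.
public class BufferPoint {
    public static int eventsToPreschedule(List<Integer> offsetsMs, int bufferMs) {
        int n = 0;
        for (int off : offsetsMs) {
            if (off >= bufferMs) break; // everything past the buffer window waits
            n++;
        }
        return n;
    }
}
```

A per-machine tuning pass could then grow `bufferMs` until scheduling keeps ahead of playback, rather than hard-coding one value for all users.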
@micha We were talking about this like a year ago... wondering if you have any thoughts on this?
Thinking about this now, it occurs to me that:
In the time since we switched to JSyn (7-8 months ago), there has been no pre-buffer, and I haven't noticed any of the previous issues with delayed notes. I think JSyn is just much more accurate thanks to the fact that it handles the scheduled events in realtime on a dedicated thread. As events are scheduled, they start playing immediately, while the main (non-audio) thread continues to schedule events as resources are available. Because the audio thread only needs to worry about playing the scheduled events in its queue, it is not affected by the whirlwind of scheduling happening on the main thread.

I am about to make one small optimization that could help with larger scores -- sorting the events by offset before starting to schedule them. But in hindsight, I think JSyn is handling things well enough that we don't need to worry about pre-buffering. I'm preparing a release that:
Due to timing issues with queuing up the notes, we have to pass in a "lead-time" argument (1000 ms is the default, 3000 ms seems to work alright for longer scores). What would be ideal is if Alda determined dynamically how long to wait before starting the score, so all the user has to worry about is supplying Alda with a score to play.
With MIDI at least, there is also some guesswork in figuring out how long to wait before returning -- this is necessary because otherwise the MIDI synthesizer closes immediately after queuing up the notes. So I've written some code that figures out how long the score is and then waits for that amount of time plus 5000 ms. This seems to work OK, although it's slightly annoying if all you want to hear is one note, for example.

What would be ideal is if the player exits once the score is done, plus ~1 second. (That's actually what I did initially, but it ended up needing a little extra buffer time to cover load time plus the "lead-time" buffer to queue up the notes.) I wonder if there is some way we can have the player queue up an "OK, I'm done" event at the end to trigger the player to return?
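For the "OK, I'm done" idea, the standard Java MIDI API already provides such an event: a `Sequencer` fires its `MetaEventListener` with meta message type `0x2F` (end of track) when the sequence finishes. A sketch, assuming the player uses `javax.sound.midi` (the class name and the 1-second tail are illustrative, not Alda's actual code):

```java
import javax.sound.midi.*;
import java.util.concurrent.CountDownLatch;

// Sketch: block until the sequencer reports end-of-track instead of
// guessing the score length and sleeping.
public class PlayAndExit {
    // 0x2F is the standard MIDI "end of track" meta event type.
    static boolean isEndOfTrack(MetaMessage meta) {
        return meta.getType() == 0x2F;
    }

    public static void play(Sequence seq) throws Exception {
        Sequencer sequencer = MidiSystem.getSequencer();
        sequencer.open();
        sequencer.setSequence(seq);

        CountDownLatch done = new CountDownLatch(1);
        sequencer.addMetaEventListener(meta -> {
            if (isEndOfTrack(meta)) done.countDown();
        });

        sequencer.start();
        done.await();        // block until playback actually finishes
        Thread.sleep(1000);  // ~1 s of tail for note release/reverb (assumed)
        sequencer.close();   // now safe to close and let the process exit
    }
}
```

This would replace the "score length + 5000 ms" guesswork entirely: a one-note score returns roughly one second after the note ends, and a long score never gets cut off.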