
Smart buffering #26

Closed

daveyarwood opened this issue Apr 15, 2015 · 9 comments

Comments

@daveyarwood
Member

daveyarwood commented Apr 15, 2015

Due to timing issues with queuing up the notes, we have to pass in a "lead-time" argument (1000 ms is the default; 3000 ms seems to work alright for longer scores). Ideally, Alda would determine dynamically how long to wait before starting the score, so that all the user has to worry about is supplying a score to play.

With MIDI at least, there is also some guesswork in figuring out how long to wait before returning -- this is necessary because otherwise the MIDI synthesizer closes immediately after queuing up the notes. So I've written some code that figures out how long the score is and then waits for that amount of time plus 5000 ms. This works OK, although it's slightly annoying if all you want to hear is one note, for example. Ideally, the player would exit once the score is done, plus ~1 second. (That's actually what I did initially, but it ended up needing a little extra buffer time to cover load time plus the "lead-time" buffer to queue up the notes.) I wonder if there is some way we can have the player queue up an "OK, I'm done" event at the end to trigger the player to return.

@daveyarwood
Member Author

I noticed that alda.sound/play! schedules events sequentially. We should be able to parallelize this, making the event scheduling happen much faster, which might even make this issue a non-issue. We should try using (doall (pmap ...)) instead of doseq.

EDIT: Switched to (doall (pmap ...)), and it does seem to help. The pre-buffer does not seem to be necessary, so I made that default to 0 ms, and made 1000 ms the default post-buffer.
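
For reference, here's a minimal sketch of the before/after (not the actual alda.sound/play! code -- schedule-event! is a hypothetical stand-in for whatever schedules a single note event):

```clojure
;; Sketch only: schedule-event! is a hypothetical stand-in for the real
;; per-event scheduling call inside alda.sound/play!.

;; Before: events are scheduled one at a time, on a single thread.
(defn play-sequentially! [events]
  (doseq [event events]
    (schedule-event! event)))

;; After: pmap schedules events across multiple threads; doall forces the
;; lazy sequence so all of the scheduling actually happens before we move on.
(defn play-in-parallel! [events]
  (doall (pmap schedule-event! events)))
```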

Next step: test more thoroughly, maybe with a score that has a ludicrous number of note events. If Alda can handle it, we may as well eliminate the buffer options altogether and hard-code 0 ms and 1000 ms as the pre- and post-buffers.

@daveyarwood
Member Author

We have gotten a couple of issues related to playback oddities, at least one of which was confirmed to be a buffering issue -- adding a pre-buffer solved it. I think needing more buffer time is still an issue on less powerful machines. We should keep this idea on the table for sure.

@crisptrutski
Member

Wondering whether using go blocks with parking timeouts might be more efficient than tying up all the agent threads -- the issue here could be the number of CPUs.
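
Something roughly like this, as a hedged sketch (play-event! is a hypothetical stand-in, and the real code would need to compute each timeout relative to the playback start time):

```clojure
(require '[clojure.core.async :refer [go <! timeout]])

;; Sketch only: each event gets a lightweight go block that parks until the
;; event's offset, instead of tying up a real thread per pending event.
(defn schedule-with-go-blocks! [events]
  (doseq [{:keys [offset] :as event} events]
    (go
      (<! (timeout offset))   ; parking take -- no thread is blocked while waiting
      (play-event! event))))  ; play-event! is a hypothetical stand-in
```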

@daveyarwood
Member Author

@crisptrutski My knowledge of concurrent programming is a little lacking... would you be interested in taking the lead on this one?

@daveyarwood
Member Author

Adding this to the 1.0.0 milestone because running Alda from an uberjar makes buffering issues extra apparent. I'll take a stab at using core.async in alda.sound/play! when I have a minute.

@daveyarwood
Member Author

So, at the moment, we are scheduling all of the events, all at once, and then saying "OK, go!" and letting the overtone.at-at scheduling pool do its thing.

It might be more efficient to set up a scheduling queue (perhaps a core.async channel), put all the Alda events on the queue in order by offset, and start playback once we've scheduled up to a certain offset. This should keep the number of events in the scheduling pool low (I assume they disappear once they are executed), which ought to allow it to perform better.

As a bonus, we could expose the scheduling queue to the REPL (and, in the very near future when we have the server/client setup going, to the server). You could type in a bunch of events on one line, then type more events on the next line while the first line is still playing; instead of playing them immediately, Alda would add them to the queue so that playback continues smoothly from one line to the next.
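
A minimal sketch of the queue idea, assuming hypothetical schedule-event! and start-playback! functions in place of the real scheduling calls:

```clojure
(require '[clojure.core.async :refer [chan go-loop <! >!! close!]])

(def buffer-ahead-ms 2000)  ; start playback once this much of the score is scheduled

;; Sketch only: schedule-event! and start-playback! are hypothetical stand-ins.
(defn queue-and-play! [events]
  (let [queue (chan 1024)]
    ;; Consumer: schedule events as they arrive; once we've scheduled past the
    ;; buffer threshold, start playback while scheduling continues.
    (go-loop [started? false]
      (when-let [event (<! queue)]
        (schedule-event! event)
        (if (and (not started?) (>= (:offset event) buffer-ahead-ms))
          (do (start-playback!)
              (recur true))
          (recur started?))))
    ;; Producer: put the events on the queue in order by offset.
    (doseq [event (sort-by :offset events)]
      (>!! queue event))
    (close! queue)))
```

For the REPL/server idea, the channel wouldn't be closed after a single score; subsequent lines of input would just keep putting their events onto the same queue.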

@daveyarwood
Member Author

The one question I have is how we can determine how much "buffer time" there should be for one user vs. another. I assume something like a 2-second buffer should be OK (i.e., start playing once 2 seconds' worth of notes have been scheduled), but it would be ideal if we had some programmatic way to determine whether the current user needs more buffer time.

@daveyarwood
Member Author

@micha We were talking about this like a year ago... wondering if you have any thoughts on this?

@daveyarwood
Member Author

daveyarwood commented Aug 12, 2016

So, at the moment, we are scheduling all of the events, all at once, and then saying "OK, go!" and letting the overtone.at-at scheduling pool do its thing.

It might be more efficient to set up a scheduling queue (perhaps a core.async channel), put all the Alda events on the queue in order by offset, and start playback once we've scheduled up to a certain offset. This should keep the number of events in the scheduling pool low (I assume they disappear once they are executed), which ought to allow it to perform better.

Thinking about this now, it occurs to me that:

  • we're not using overtone.at-at anymore, but rather JSyn's event-scheduling system, and
  • the event queue I was describing is basically what JSyn does.

In the time since we switched to JSyn (7-8 months ago), there has been no pre-buffer, and I haven't noticed any of the previous issues with delayed notes. I think JSyn is just much more accurate thanks to the fact that it handles the scheduled events in realtime on a dedicated thread. As events are scheduled, they start playing immediately, while the main (non-audio) thread continues to schedule further events as resources are available. Because the audio thread only needs to worry about playing the scheduled events in its queue, it is not affected by the whirlwind of scheduling happening on the main thread.

I am about to make one small optimization that could help with larger scores -- sorting the events by offset before starting to schedule them. But in hindsight, I think JSyn is handling things well enough that we don't need to worry about pre-buffering.

I'm preparing a release that:

  • includes the optimization I just mentioned -- schedule the earlier events sooner, to help prevent the scenario where there are so many events to schedule that the earlier ones don't get scheduled soon enough to be played.
  • uses the scheduling system to schedule the tear-down at the end of playing a score (see the sketch after this list). Currently we're using Thread/sleep and a safe default "post-buffer" of 1000 ms, which is not ideal; if we use the scheduling system, we shouldn't need any post-buffer because we'll know exactly when the score is finished playing.
  • removes the pre- and post-buffer options from the server and client.
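
A hedged sketch of the last two bullets, assuming hypothetical schedule-at!, play-event!, and tear-down! functions in place of the real JSyn-backed scheduling and cleanup calls:

```clojure
;; Sketch only: schedule-at!, play-event!, and tear-down! are hypothetical
;; stand-ins for the real scheduling and cleanup functions.
(defn play-score! [events]
  (let [score-end-ms (apply max (map #(+ (:offset %) (:duration %)) events))]
    ;; Schedule in offset order so the earliest notes are queued first.
    (doseq [event (sort-by :offset events)]
      (schedule-at! (:offset event) #(play-event! event)))
    ;; Schedule the tear-down itself as the final "event" -- no Thread/sleep or
    ;; post-buffer needed, because we know exactly when the score ends.
    (schedule-at! score-end-ms tear-down!)))
```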
