audio time features #575

Merged 18 commits into master from audio_time_features on Nov 16, 2014



richardeakin commented Nov 11, 2014

This includes work I've done over the last couple months to improve time based functionality in the audio namespace.

Scheduling audio::Node::enable() / disable() at a specified time

The biggest addition here is that there are overloads for Node::enable(), disable(), and setEnabled() that take a `double when` parameter, which allows you to turn Nodes on or off at a precise time. Typical usage looks like the following:

mMyNode->enable( audio::master()->getNumProcessedSeconds() + 1 );

By precise, I mean sample accurate if needed. This comes into play mainly with audio generators, where you'd want them to start producing samples somewhere in the middle of an audio block rather than at the beginning. All of the GenNodes take advantage of the new getProcessFramesRange(), which provides information on when to start or stop producing samples. Node implementors can ignore these values where they don't make sense (none of the effects use them, as I don't yet see a need there); the Node itself will still be enabled or disabled at the corresponding block according to the specified when times. This scheduling is done by audio::Context in its preProcess() and postProcess() steps.

Scheduling audio::SamplePlayerNode::start() / stop() at a specified time

Related, SamplePlayerNode subclasses also get nice trigger overloads. BufferPlayerNode adheres to getProcessFramesRange(), so it is sample accurate, although FilePlayerNode is not. I don't believe a file-based sample player needs to be sample accurate: it is a tool for playing large files streamed from disk, and because it has built-in async functionality, sample accuracy would be difficult to add. I could re-address this if the need ever came up.

audio::Param specified begin times and support for sequencers with latency

I addressed two use cases here. One is how to make an audio::Param into a sample accurate ADSR, and the other is how to tightly sync audio (ci::audio::Param) and visual (ci::Timeline) animation events. Both are achieved by enabling events to be scheduled in the future, by setting audio::Param::Options::beginTime().

The problem this addresses is that we want to keep the audio thread non-blocking, yet we need to schedule events on it and ensure that they are processed. The only way to do this is by introducing latency: you cannot schedule an Event at 'audio time now' (on the main thread) and be 100% sure that it will be processed, because that time may have already passed by the time we get back to the audio thread.

However, if a real-time ADSR is needed, you can (with these changes) schedule an attack + decay with good confidence that it will fire sample accurately (or at most a block or two off), with something like the following:

auto gainParam = mGain->getParam();

const float attackVal = 0.9f;
const float attackTime = 0.001f;
const float decayVal = 0.5f;
const float decayTime = 0.25f;

auto opts = audio::Param::Options().beginTime( audio::master()->getNumProcessedSeconds() );
gainParam->applyRamp( attackVal, attackTime, opts );
opts.beginTime( opts.getBeginTime() + attackTime );
gainParam->appendRamp( decayVal, decayTime, opts );

I'm considering improving this in the future to ensure that both the applyRamp() and appendRamp() calls above happen before audio processes, but one solution would require a recursive lock on the Context's mutex, and I'd like to wait until I have larger projects to test against before making that move.

I also needed to make adjustments to the audio::Param::apply() method to account for delayed events, so that the apply 'wipe out' doesn't affect any events that are within the latency period. For similar adjustments on the ci::Timeline side of things, see PR #574.


  • timeToFrame utility function: converts seconds to frames.
  • added Context::isAudioThread(), which is useful for Node implementations that need to know whether they should lock against the Context's mutex or not
  • BREAKING: removed the default = true value from Node::setEnabled() and Context::setEnabled(). Instead use Node::enable() or Context::enable() (default was added when those methods were called start() / stop() and it made more sense).

richardeakin added some commits Sep 26, 2014

Fixed bug in SamplePlayerNode when it was default constructed and its ChannelMode wasn’t correctly set. Clarify ChannelModes for InputNode and SamplePlayerNode.

zero inPlaceBuffer for all Node’s without inputs, making InputNode::process() functions much simpler when they support processing a partial buffer. Profiling indicated the performance difference was negligible.

Added sample accurate event scheduling for Node enable / disable and SamplePlayerNode start / stop, using Context as a scheduler.

added tests for delayed enable, disable, start and stop

general: added DelayNode::clearBuffers(), and being a little more vigilant about zeroing the delay line after resize

apply respects value begin time. In these cases, Event’s begin value is lazily set.

use getCopyValueOnBegin(), instead of hasValueBegin(), though still set a value begin when making the Event, similar to ci::Timeline

@richardeakin richardeakin added audio and removed audio labels Nov 12, 2014

@richardeakin richardeakin merged commit 4288d9e into master Nov 16, 2014

@richardeakin richardeakin deleted the audio_time_features branch Nov 16, 2014

@richardeakin richardeakin referenced this pull request Mar 13, 2015


audio updates #742
