Audiorate ASL support #301
In 3.0 I plan to turn ASL into a C-based library with light Lua hooks to the functionality.

Con:

Pro:
This tradeoff is being made as an admission that ASL can't be great as a general-purpose scheduling system while also maintaining tight timing for synthesis duties. The decision to focus on timing-accuracy comes from the prospect of having the full norns […].

To enable ASL to create varying waves, a number of methods will be enabled for users to interact with the data of the ASL.
Generators could include:
Step behaviours could be:
Mod behaviours could be:
These behaviours will likely need to be nested (eg increment & wrap); a sketch of such nesting follows below. // Basically ASL becomes much more of a 'tiny programming language for describing modulation & waveforms', rather than just an alternate syntax for coroutines with custom timing.
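As a loose illustration of nesting step behaviours, here is a minimal C sketch of 'increment & wrap' as two composed primitives. All names are invented for this example, not a proposed API:

```c
#include <stdio.h>

/* hypothetical step-behaviour primitives (invented for illustration) */
static float step_increment(float v, float amount){ return v + amount; }

static float step_wrap(float v, float lo, float hi){
    float range = hi - lo;
    while(v >= hi) v -= range;
    while(v <  lo) v += range;
    return v;
}

int main(void){
    float v = 0.0f;
    /* 'increment & wrap': raise by 0.4 each step, wrapped into [0,1) */
    for(int i = 0; i < 6; i++){
        v = step_wrap(step_increment(v, 0.4f), 0.0f, 1.0f);
        printf("%.1f\n", v); /* prints 0.4 0.8 0.2 0.6 0.0 0.4 */
    }
    return 0;
}
```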
This is ASL2.0
fixed in #399
issue
The fundamental issue here is that after each breakpoint in an ASL we must call back into Lua. Because this happens inside the audio callback, we can't call directly into Lua, as it may already be active from the event loop. Instead we queue an event and wait for Lua to process it.
The result is that there is always some delay in getting the next destination value, so the current audio vector just sits at the limit and waits until the next cycle (or potentially n cycles later) for the callback to be serviced. We already compensate for this delay by jumping ahead by the appropriate part of the waveform.
The effective result is a minimum cycle time of 2 audio-vectors (one up, one down), ie a hard ceiling on cycle rate, which makes high-quality waveforms impossible with the current system.
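To make the timing constraint concrete, here is a minimal, self-contained sketch of the pattern described above; all names are invented for illustration and this is not crow's actual code:

```c
#include <math.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* The DSP callback may not call into Lua directly, so at a breakpoint
 * it posts an event and holds the last value until Lua answers, at
 * best one full audio vector later. */
typedef struct {
    float now;     /* current output value              */
    float dest;    /* destination of current segment    */
    float step;    /* per-sample increment              */
    bool  waiting; /* true while a Lua event is pending */
} Slope;

/* stand-in for queueing a 'breakpoint reached' event to the event loop */
static void post_breakpoint_event(Slope* s){ s->waiting = true; }

static void audio_callback(Slope* s, float* out, size_t frames){
    for(size_t i = 0; i < frames; i++){
        if(!s->waiting && fabsf(s->dest - s->now) <= fabsf(s->step)){
            post_breakpoint_event(s);   /* can't call Lua from here */
        }
        if(s->waiting){
            out[i] = s->dest;           /* sit at the limit & wait  */
        } else {
            s->now += s->step;          /* normal slope traversal   */
            out[i] = s->now;
        }
    }
}

int main(void){
    float buf[4];
    Slope s = { .now = 0.0f, .dest = 0.3f, .step = 0.1f, .waiting = false };
    audio_callback(&s, buf, 4);                /* one 4-sample vector */
    for(int i = 0; i < 4; i++) printf("%.1f ", buf[i]); /* 0.1 0.2 0.3 0.3 */
    printf("\n");
    return 0;
}
```

Only when Lua later handles the event can it write a new `dest`/`step` and clear `waiting`, and that can only land between vectors, which is why each breakpoint costs at least one vector and a full up/down cycle costs at least two.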
possible approach
ASL could be refactored such that the ASL program is compiled into a C data structure, rather than maintained as a Lua table. This makes audio-rate no problem, but adds some limitations:

- the structure of `to` calls would be fixed at compile time ([…] function is allowed)
- `to` params would need to be literals (not functions), so values could not be updated without recompiling the ASL. Currently the function handling just allows params to be functions, which are resolved only when applying that segment to the slope library. Thus, looping ASLs will continuously resolve these calculations, allowing new values to be applied at every breakpoint.
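As a rough sketch of what 'compiled into a C data structure' might look like; type and field names here are hypothetical, not crow's actual sources:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical compiled form of an ASL: every segment carries literal
 * params only, so the DSP side can walk the array at each breakpoint
 * without ever calling back into Lua. */
typedef enum { SHAPE_LINEAR, SHAPE_SINE, SHAPE_EXPO } Shape;

typedef struct {
    float dest;   /* literal destination (a Lua function is no longer allowed) */
    float time;   /* literal duration in seconds                               */
    Shape shape;
} Segment;

typedef struct {
    Segment segs[16]; /* structure fixed at compile time                 */
    int     count;
    int     ix;       /* index of the currently-active segment           */
    bool    loop;     /* static loop flag instead of regenerating in Lua */
} CompiledAsl;

/* advance to the next segment entirely in C (no Lua callback needed) */
static const Segment* asl_next(CompiledAsl* a){
    a->ix++;
    if(a->ix >= a->count){
        if(!a->loop) return NULL; /* finished */
        a->ix = 0;                /* wrap for looping ASLs */
    }
    return &a->segs[a->ix];
}
```

Updating any `dest` or `time` would then mean recompiling this structure from the Lua table, which is exactly the limitation described above.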
This could be ameliorated:

- […] `to` parameters, and a userdata table where a fixed number of variables can be stored. These would be directly accessible from C, and could be updated in realtime by the script writer, eg: `listen.frequency`. There could be a set of fixed names with special behaviour (eg 'frequency' converts Hz->S). Care needs to be taken that each channel has separate access. These params would be compared every audio-vector, though perhaps the 'at next breakpoint' behaviour of the current system would be interesting to keep? (A sketch of this variable table follows after the list.)
- […], forwards the input to the output. Otherwise 'replacement' types (eg 'noise') could be allowed.
- […] (`reset_all` would restart, ie sync, all ASL channels)
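To illustrate the variable-table idea: below is a minimal sketch, assuming a fixed struct of named per-channel variables that the script writes and C reads every audio vector. The struct, the field names, and the `listen` accessor are taken from the example above or invented here; none of this is crow's actual API:

```c
/* Hypothetical per-channel variable table: Lua writes these fields in
 * realtime (eg via a `listen.frequency = 2` style accessor), and the
 * slope code reads them every audio vector. */
typedef struct {
    float frequency;  /* written by the script in Hz      */
    float level;      /* ordinary variable, used verbatim */
} AslVars;

/* one table per channel, so each channel has separate access */
static AslVars vars[4];

/* 'frequency' is a fixed name with special behaviour: convert Hz to a
 * segment time in seconds (the Hz->S conversion mentioned above). */
static float resolve_time(const AslVars* v, float literal_time){
    if(v->frequency > 0.0f) return 1.0f / v->frequency;
    return literal_time;  /* fall back to the compiled literal */
}
```

Reading the table every vector gives near-immediate response; deferring the read until the next breakpoint would instead reproduce the current system's update behaviour.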