
Audiorate ASL support #301

Closed
trentgill opened this issue Apr 11, 2020 · 4 comments
Labels
design Issue needs design consideration
Comments

@trentgill
Collaborator

issue

The fundamental issue here is that after each breakpoint in an ASL we must call back into Lua. Because this happens inside the audio callback, we can't call directly into Lua as it may already be active from the event loop. Instead we queue an event that waits for Lua to process it.

The result is that there is always some delay before the next destination value arrives, so the current audio vector simply sits at its endpoint and waits until the next cycle (or potentially n cycles later) when the callback finally runs. We already compensate for this delay by jumping ahead to the appropriate part of the waveform.

The effective result is a minimum cycle time of 2 audio vectors (one up, one down), which makes high-quality audio-rate waveforms impossible with the current system.

possible approach

ASL could be refactored such that the ASL program is compiled into a C data structure, rather than maintained as a Lua table. This makes audio-rate operation unproblematic, but adds some limitations:

  • ASL could no longer call other Lua functions (i.e. only the to function is allowed)
  • Arguments to to would need to be literals (not functions), so values could not be updated without recompiling the ASL.

Currently, parameters to to may themselves be functions, which are resolved only when that segment is applied to the slope library. Thus a looping ASL continuously re-resolves these calculations, allowing new values to be applied at every breakpoint.

This could be ameliorated:

  • Add a 'listener' type for to parameters, plus a userdata table where a fixed number of variables can be stored. These would be directly accessible from C, and could be updated in realtime by the script writer, e.g. listen.frequency. There could be a set of fixed names with special behaviour (e.g. 'frequency' converts Hz->S). Care must be taken that each channel has separate access. These params would be compared every audio vector, though perhaps the 'at next breakpoint' behaviour of the current system would be interesting to keep?
  • As an extension of the 'listener' type, there could be an 'input' type which, for the duration of the to, forwards the input to the output. Otherwise 'replacement' types (e.g. 'noise') could be allowed.
  • General function calls could be saved into a table in the Lua ASL, then called from the audio callback (via the event system). This could easily blow up the system, so should be discouraged.
  • Could add a set of common interactive behaviours (e.g. reset_all would restart, i.e. sync, all ASL channels).
@trentgill trentgill added the design Issue needs design consideration label Apr 11, 2020
@trentgill trentgill added this to the 3.0 milestone Jul 15, 2020
@trentgill
Collaborator Author

In 3.0 I plan to turn ASL into a C-based library with light Lua hooks to the functionality.

Con:

Pro:

  • Allows sample-accurate timing, and thus operation up to audiorates for oscillation capabilities.

This tradeoff is being made as an admission that ASL can't be a great general-purpose scheduling system while also maintaining tight timing for synthesis duties. The decision to focus on timing accuracy comes from the prospect of having the full norns clock system running in crow. Having 'Lua things' happen at a scheduled time in the future is much more in line with that concept, and indeed the clock system can set & call ASL actions in time.

To enable ASL to create varying waves, a number of methods will be enabled for users to interact with the data of the ASL.

  • listener._ table, where lua variables are shared to the C environment for dynamic updates.
  • generator functions, where a to variable can be generated from a data-set & a pre-defined iterator behaviour

Generators could include:

  • table manipulation. Takes a table & a step-behaviour
  • value manipulation. Takes a value & a mod-behaviour

Step behaviours could be:

  • next
  • prev
  • random
  • first
  • last

Mod behaviours could be:

  • increment by n (can be negative)
  • multiply by n
  • limit n to m
  • wrap n to m

These behaviours will likely need to be nested (eg increment & wrap).

//

Basically ASL becomes much more of a 'tiny programming language for describing modulation & waveforms', rather than just an alternate syntax for coroutines with custom timing.

@trentgill
Collaborator Author

This is ASL2.0

  • 'count' construct should have access to the iterator value
  • make sure 'lock', 'loop', 'held', and 'count' work.
  • consider adding 'weave' again

@trentgill
Collaborator Author

trentgill commented Apr 17, 2021

  • support held{}
  • support times(n,{})
  • support dyn.instant.key = val to update val now, rather than at the next breakpoint
  • support lock{}
  • support 'Shape' type in dynamics

@trentgill
Collaborator Author

fixed in #399
