Audiorate input processing #302

Closed
trentgill opened this issue Apr 11, 2020 · 1 comment
Labels
design Issue needs design consideration

Comments

@trentgill (Collaborator)
At present, the input library is specifically designed for control-rate inputs, because per-sample processing can't be done in Lua unless you artificially decrease the sample rate (i.e. with input.mode = 'stream').
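For context, this is what the existing control-rate approach looks like in a crow script. A minimal sketch, assuming the documented 'stream' input mode and its volts callback; the processing rate here is 100Hz, far below audio rate but fine for CV:

```lua
-- current approach: process the input in Lua at a reduced rate.
-- 'stream' calls the handler with a fresh reading every 0.01s (100Hz).
input[1].mode('stream', 0.01)
input[1].stream = function(volts)
  -- per-'sample' processing happens here, in Lua
  output[1].volts = volts * 2 + 1  -- e.g. a simple scale & offset
end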

Two different mechanisms are obvious to me:

  • Option 1: Add the ability (to the input lib, or an io lib?) to add input[n] to the output[n] signal. This happens in parallel with ASL.
  • Option 2: Add an 'inputN' shape option (i.e. in addition to 'linear', 'expo', etc.) for the to function. This would replace the slope output with the input signal for the duration of the slope.

In both cases, these signals would be passed through the output quantizer (see: #292), so an audio-rate quantizer would be available.

Option 1: input->output route & mix

The first option is mainly a patch-simplifier: it reduces the need for an external CV mixer module (or a stack cable). Of course, if the input were forwarded to all 4 outputs while a different ASL ran on each, it could produce interesting variations of the input signal. This 'input + ASL => quantizer' combination is not currently possible.

It could be extended beyond this basic 'forwarding' by providing some mechanism for basic arithmetic on the signal (a simple multiply-and-offset (-and-slew?) would be sufficient), allowing more extensive kinds of CV modulation.
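A hypothetical sketch of what Option 1 might look like from a script. None of these names exist in crow today; they are invented purely to illustrate the route-and-mix idea:

```lua
-- HYPOTHETICAL API: route input[1] into output[1]'s signal path,
-- in parallel with whatever ASL action is running.
output[1].input        = 1      -- (invented) mix input[1] into output[1]
output[1].input_mul    = 0.5    -- (invented) multiply...
output[1].input_offset = -2.0   -- (invented) ...and offset before mixing
-- the summed signal would then pass through the output quantizer (#292),
-- while the ASL keeps generating in parallel:
output[1].action = lfo(1, 2)
```

The key property is that input forwarding and the ASL sum together before quantization, which is exactly the 'input + ASL => quantizer' combination described above.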

Option 2: input stage replacement in ASL

This option is weirder, but inspired by the Rossum Control Forge, where envelope segments can be replaced with external signals.

This could lead to some strange & interesting envelope options.

It also enables the input to be forwarded directly to the quantizer with the ASL: to(0,'hold',input[1])

This option seems much stranger, but also probably less interesting. It would make more sense for the input to control a parameter of the ASL, rather than be spliced in literally. And while this option is easier to fit into the existing ASL model, it doesn't allow input & ASL to work together, only as a switch.
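A hypothetical sketch of Option 2. Neither the 'inputN' shape string nor passing input[1] as a shape exists today; this only illustrates how an input-spliced segment might sit inside an otherwise normal ASL:

```lua
-- HYPOTHETICAL API: a 3-stage envelope whose middle segment is
-- replaced by the external input signal.
output[1].action = {
  to(5, 0.1, 'linear'),  -- normal attack segment
  to(0, 2.0, 'input1'),  -- (invented shape) follow input[1] for 2 seconds
  to(0, 0.5, 'expo')     -- normal release segment
}

-- or, as in the example above: forward the input straight to the
-- quantizer for as long as the stage is held.
output[1].action = to(0, 'hold', input[1])
```

Note the switch-like behavior: during the 'input1' segment the ASL contributes nothing, which is the limitation discussed above.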

Thoughts?

Perhaps this is a bad idea, and we should just force the user to process the input in Lua with the existing input modes.

The CV input is currently only sampled once per audio vector (~1500Hz), but we could likely increase this by a factor of 2. Even then, the Nyquist frequency would be only 1.5kHz, which is clearly not going to work for true audio signals.
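The arithmetic, assuming a 48kHz audio engine with 32-sample vectors (an assumption consistent with the ~1500Hz figure above):

```lua
-- one CV reading per audio vector:
local audio_rate = 48000          -- assumed engine sample rate
local vector     = 32             -- assumed samples per vector
local cv_rate    = audio_rate / vector   -- 1500 Hz
local doubled    = cv_rate * 2           -- 3000 Hz sampling rate
local nyquist    = doubled / 2           -- 1500 Hz usable bandwidth
```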

Aside: some smart person could likely rebuild the ADC driver to make it work at higher rates.

Much of this would be made unnecessary by #67, though that presents a number of different challenges of its own.

Perhaps this whole issue should be deferred until after further discussion of #13, and of whether having audio synthesis on board is within this project's desired goals.

@trentgill trentgill added the design Issue needs design consideration label Apr 11, 2020
@trentgill (Collaborator, Author)

Closing, as crow's inputs are (for now) staying focused on smart event-driven functionality. Short of a smart person making the ADC run at full audio sampling rate, crow will be disappointing as an audio processor.
