Synthesis #13
Closed
-- audio processors
osc{ freq      = _
   , wave      = _  -- a string ('sine', 'pulse' etc.) or a number in (-1,1) to crossfade
   , symmetry  = _
   , fm_source = _
   }
osc( freq [, wave, symmetry, fm_source] )  -- all args after freq are optional
-- backend is chosen based on args (re: CPU)
filter( input, freq, resonance, mode, model )
volume( input, level )
gate( input, level, mode )
input( channel )
noise()

-- controls
vactrol( level, time, symmetry )
slew( input, speed, shape )  -- shape: 'linear' or 'expo'

-- arithmetic
add()  -- or mix()
mul()  -- or gain()
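As a concrete reading of what slew() might do per sample, here is a small Python sketch. The names, the two shapes, and the per-sample semantics are assumptions for illustration; the real DSP would be fixed C code on the module.

```python
import math

def make_slew(rate, time_s, shape='linear'):
    """Hypothetical slew limiter: `rate` is the sample rate,
    `time_s` the time to traverse the full range."""
    state = [0.0]
    if shape == 'linear':
        step = 1.0 / (rate * time_s)   # max change per sample
        def process(target):
            d = target - state[0]
            d = max(-step, min(step, d))  # clamp the per-sample change
            state[0] += d
            return state[0]
    else:  # 'expo': one-pole lowpass toward the target
        coeff = 1.0 - math.exp(-1.0 / (rate * time_s))
        def process(target):
            state[0] += coeff * (target - state[0])
            return state[0]
    return process

slew = make_slew(48000, 0.01, 'expo')
out = [slew(1.0) for _ in range(480)]  # step response over 10 ms
```

The same one-pole form would also cover the control-rate smoothing mentioned in the issue body.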
Make a synthdef and apply it like an ASL:

output[1].action =
  volume( env()
        , filter( freq
                , mix( osc( freq )
                     , osc( freq ))))
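The nested-constructor idea above can be modelled as plain function composition: each unit returns a per-sample callable, and nesting the constructors wires the graph. A minimal Python sketch with hypothetical sine `osc`, `mix`, and `volume` units (env and filter omitted for brevity; this is not the proposed C implementation):

```python
import math

def osc(freq, rate=48000):
    """Sine oscillator: returns a zero-arg per-sample function."""
    phase = [0.0]
    def tick():
        phase[0] = (phase[0] + freq / rate) % 1.0
        return math.sin(2 * math.pi * phase[0])
    return tick

def mix(a, b):
    # equal-weight two-input mixer
    return lambda: 0.5 * (a() + b())

def volume(level, unit):
    return lambda: level * unit()

# analogous to: output[1].action = volume( env(), mix( osc(freq), osc(freq) ))
action = volume(0.5, mix(osc(220), osc(330)))
block = [action() for _ in range(64)]  # render a block of samples
```

Each call to `action()` pulls one sample through the whole graph, which is the evaluation model a fixed C backend could also expose.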
Feels like I'm reinventing the wheel here... perhaps we could just borrow another language's syntax? Perhaps it should be fixed-architecture synthesis with a way to set the parameters from Lua such that they are applied in realtime. This would be particularly interesting if the modulation sources (LFOs/envelopes) could be ASL structures.
Coming back to the initial comment up top. Fixed architecture is the way to go.
Close in favour of algorithmic waveforms in ASL2
Fixed-architecture DSP functionality.
Likely needs a global setting to switch the module into acting as a full polysynth.
Signal chain: triangle (ramp/waveshape) -> filter | noise -> amplifier. 4-voice goal. A 2-op FM option would be the maximum suggested spec.
Smoothing on the control-rate inputs (from ADCs, MIDI, or USB).
LFOs and envelopes should be provided, potentially by the ASL language if that's possible. Might require something far weirder under the hood, so just start with the basics here.
Use as much from JF as will fit in flash / RAM / CPU time.
Last feature on the sales pitch hah.
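For reference, one voice of the chain described above (triangle from a ramp, crossfaded with noise, into a lowpass, into an amplifier) could look roughly like this. All class and field names here are invented for illustration, and the filter is a crude one-pole stand-in for whatever model the C code would actually use:

```python
import math, random

class Voice:
    """One voice of a hypothetical fixed chain:
    triangle osc (+ noise) -> one-pole lowpass -> amplifier."""
    def __init__(self, rate=48000):
        self.rate = rate
        self.phase = 0.0
        self.lp = 0.0          # lowpass state
        self.freq = 220.0
        self.cutoff = 2000.0
        self.noise_mix = 0.0   # 0 = pure triangle, 1 = pure noise
        self.level = 1.0

    def tick(self):
        # triangle derived from a ramp (no antialiasing)
        self.phase = (self.phase + self.freq / self.rate) % 1.0
        tri = 4.0 * abs(self.phase - 0.5) - 1.0
        sig = ((1 - self.noise_mix) * tri
               + self.noise_mix * (2 * random.random() - 1))
        # one-pole lowpass
        c = 1.0 - math.exp(-2 * math.pi * self.cutoff / self.rate)
        self.lp += c * (sig - self.lp)
        return self.level * self.lp

voices = [Voice() for _ in range(4)]   # 4-voice goal
mixed = [sum(v.tick() for v in voices) / 4 for _ in range(64)]
```

A 2-op FM variant would swap the triangle stage for a modulator/carrier pair while keeping the filter and amplifier stages fixed.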