time parameters are confusing and limited #204
that is totally true, and i agree. things get complicated. http://monome.org/docs/aleph:parameters

time parameters are parameter type "fixed." this means the control values, which are 16-bit signed [0x8000, 0x7fff], are converted by shifting to an arbitrary power-of-two range. (the arbitrariness is defined by the "radix" field in the param descriptor.) for aleph-lines, the output value is interpreted as 16.16 fixed-point with a maximum value of 64.0 (0x400000).

the TIMER op outputs "ticks," which are 1ms. so let's say you tap it once per second, and it outputs 1000. the fixed-point scaler uses the declared radix (7) to shift this to 0x1f400, which is "proportionally" correct, but not what is really desired (which is 0x10000.)

anyways, the best way is probably just to make a "time" parameter type, which uses an interpolated lookup table to convert control values directly to sample counts. the problem is that the table generation must either assume a given time range (and i want to extend the time range in lines; there is plenty of SDRAM left to do so), or have some mechanism for scaling the output arbitrarily (which doesn't exist.) ideally, scalers don't do anything more expensive than array lookup and shifts. but this might be where an exception is called for (lookup twice and interpolate.)

we could also, of course, fudge the output of the TIMER op. it's also occurred to me to simply run the application heartbeat a little faster. but then people would have a really unintuitive unit for setting the period of METRO and so on. yuck.

anyways, let me know if you have any suggestions or if i can explain better. the param scaling stuff is necessarily messy; it's been quite a headache getting perceptually-linear control over all parameters with 16b inputs. at one point bees values were actually 32b without any scaling, but in many ways that was much worse. (and more expensive.)
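the shift-by-radix behavior described above can be sketched in a few lines of C (function name hypothetical; the real scaler lives in bees' param_scaler code):

```c
#include <stdint.h>

/* sketch of the "fixed" param scaling described above (name hypothetical):
   a 16-bit signed control value is shifted left by the param's declared
   radix to produce a 16.16 fixed-point value for the blackfin. */
static int32_t scale_fix(int16_t in, uint8_t radix) {
    return (int32_t)in << radix;
}

/* tap once per second: TIMER outputs 1000 (1ms ticks). with radix 7 this
   lands at 0x1f400 -- "proportionally" correct, but not the 0x10000
   (1.0 in 16.16) that a one-second time really wants. */
```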
|
btw, using a debug build is helpful for this stuff. it will show you the literal 32-bit value being sent to the blackfin, among many other things...
|
woops hit the wrong button there |
|
ok, I had an idea, but not sure whether this is technically possible. Blackfin sample tics are currently either 1/44.1 ms, 1/48 ms or 1/96 ms, depending on which of the 3 standard audio sample rates is in action. It seems to me that control latency starts to become perceptible around the 5ms mark, so 1ms ticks are perfect for control values. So the base issue is that neither 48, 96 nor 44.1 is an integer power of 2. So... could you clock the blackfin audio dac/adc at 64 kHz? 32kHz is probably adequate for many things. I know these things are usually hard-wired for the 3 standard clock speeds, but wouldn't it be great if the blackfin could also think in milliseconds! In fact, if it's possible to clock the blackfin at a variable sample rate, this could even lead to some easy chorus-type effects without interpolation. So is it feasible!?
|
alas, only 48k / 96k / 192k are supported by the codec in standalone mode (and the mode is hardwired; the codec produces frame interrupts on the blackfin.) but multiplying ticks by 48 to get samples is not a big deal. how bout this: make a "time" parameter type, wherein the control value input is assumed to be integer ms. multiplied by 48 and sent to bfin; display stays ms. the param change function in the module doesn't multiply (which is good anyways.)

btw, i just pushed some intermediate work to the dev branch on new basic param types (16b int, 32b fract, 16b integrator coefficients.) they still need to be added to param_scaler.c ...but they show the basic idea..
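the proposed "time" type is simple enough to sketch (names hypothetical; this assumes the fixed 48kHz codec rate mentioned above):

```c
#include <stdint.h>

/* sketch of the proposed "time" param type: the control value is
   integer milliseconds; multiply by 48 to get a sample count at the
   codec's 48kHz rate before sending to the bfin. the display keeps
   showing ms, and the module's param change function doesn't multiply. */
#define SAMPLES_PER_MS 48

static int32_t time_ms_to_samples(int16_t ms) {
    return (int32_t)ms * SAMPLES_PER_MS;
}
```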
|
Shit, I remembered this problem: we want to address a greater range than 32k milliseconds for time params. Hence the fixed-point scaling in the first place. I was gonna add a "fine tune" delay param to next iteration of lines (+/- sample offset) ... |
|
... Sorry, phone.... Maybe the only answer is to add multiple params. Different interfaces to each time param depending on whether you want to set ms directly, arbitrarily scale to the full time range, or have greatest resolution (samples in this case, b/c lines is not interpolated.) |
|
So, the times we need to deal with in order to work with anything from single samples (worst case 192 kHz) to a wagnerian digital opera (worst case four hours): 192000 * 60 * 60 * 4 == 2.7648e+09. Therefore, in order to work with a single parameter for any type of time, 32 bit resolution would be adequate. 16 bit signed is insufficient to index even entire tracks to millisecond resolution.

Seems to me that, for example, 'timer' ops should be able to measure the duration of any event that a human being is remotely likely to produce during a performance. 5.2 us (a single sample at 192kHz) sounds like more than enough resolution to work with any audio signal. I think the only thing we do in audio that might concern times under 5.2us is interpolation, and the blackfin should be able to take care of higher inter-sample resolutions transparently.

My gut feeling is that bees eventually needs to be able to pass 32 bit signed 'time' parameters around a network, but that 16 bit should be adequate for everything else. Maybe 5 us is a non-intuitive integer to work with from a control point-of-view. I'm from a physics background, so weird units like microns and nanoseconds seem second nature!

As a parting shot, consider the following: for sure I can see the argument for splitting time into 2 16 bit params, fine & coarse. However, it's very difficult to see what would be a sensible resolution for 'coarse'. If we were only ever going to write 3 minute pop songs: 3*60/(2^15) ≈ 5.5ms. Doesn't really give much wiggle room!
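checking that worst case in C is instructive -- one caveat worth noting is that 2.7648e+09 fits an *unsigned* 32-bit word but is just past INT32_MAX, so a signed 32-bit time would fall slightly short of this extreme:

```c
#include <stdint.h>

/* worst-case sample count from the argument above: four hours of
   single-sample indexing at 192kHz. the result (2,764,800,000) fits
   a uint32_t but overflows int32_t (max 2,147,483,647), so a signed
   32-bit time param covers "only" about 3.1 hours at 192kHz. */
static uint64_t worst_case_samples(void) {
    return (uint64_t)192000 * 60 * 60 * 4;
}
```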
|
hmm, actually that wobble would be 5us per period - so I'm way off - if you were implementing a 0.1 semitone chorus effect with an LFO period of 1s, that would be: 1/2 * 1000 * 5 us == 2.5 ms. Having said that, whilst chorus effects don't seem to require single sample resolution, I seem to remember from past experience that comb filtering can be audible with delays well under a millisecond.

Now I'm coming round to your idea of 2 scaled 16 bit time params: 32 bit integers are not very useful to work with in bees - just gonna look like a telephone number! The timer operator would need a multiplier output to measure over 65 seconds. I guess for most sane applications the multiplier inputs/outputs could go unused (the obvious exception being laying down a long backing in one pass using a loop pedal).

Should compile a list of the canonical use cases which would have extreme demands in terms of timings. Can the following networks be routed, and would they be unnecessarily complex to set up? I can think of 4 limiting cases:
|
i think this is all totally on track. serious food for thought. a couple notes:
this will work great for variable pitch-shifting/granular stuff. but for chorus/phaser/flanger/comb/reverb effects, this might not really be sufficient. for making tuned resonators / karplus-strong, you want higher-order interpolation, or maybe sub-sample tuning by means of allpass filters (depending on the desired effects.) that is not hard, but it's a more specialized application.
|
|
anyways, to cut to the chase on this, i will add/change some params to upcoming lines-0.3.0 :
the last two would affect all time parameters: pos_read, pos_write, loop, delay, delay_fine. i think they would have to, in order to use the thing as a looper in longer buffers. the question is: would scale or offset be better, or both? a msec offset gives you only up to 64k msec, which is sufficient for the current lines but not for the theoretical limit of sdram (even at 48k with, say, 4 channels.) so, maybe both. this would make for exceptionally clean interfaces for certain loopy/scrubby behaviors. i've been basically following the philosophy that "there is no such thing as too many params..."
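for concreteness, a global scale/offset pair applied to every time param might look like this (param names and units are hypothetical -- the thread doesn't pin them down):

```c
#include <stdint.h>

/* sketch of global time_scale / time_offset params applied to every
   time param (pos_read, pos_write, loop, delay, delay_fine). here the
   incoming time and the offset are both in ms and scale is a plain
   integer multiplier; the real thing might well use fixed-point. */
static int32_t apply_time_params(int32_t t_ms, int32_t scale, int32_t offset_ms) {
    return t_ms * scale + offset_ms;
}

/* note: a 16-bit ms offset tops out at ~64k msec, as observed above. */
```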
|
Thought more about this and can now see a stronger case for coarse/fine. My philosophy is 'think about things really hard until your head hurts'.

10ms (+/- 5ms quantisation error) is less than a demisemiquaver at 300bpm, or the speed-of-sound delay between a drummer and bass player on a medium-sized stage! I have serious doubts that anyone's looper foot presses have this degree of accuracy (also switch jitter!?), though millisecond timings may be perceptible when playing a percussive instrument. 10ms / 16 bit unsigned gives just over 10 minutes of loop time - if you're live looping a single pass in front of a very patient audience, that is still an awful lot!

A sensible lower bound for controlling a digital instrument could be 0.1ms (midi ~ 0.5ms, right?). The only reasonable application I can now conceive which could not be implemented using a coarse (10ms) & fine (0.1ms) input/output for the timer op / lines params is the tuned resonators you mentioned. Ultra-fine (1us) could be added if and when higher-order interpolation becomes available. Scaling params seems to me inherently fiddly, requiring that you connect two inputs together to do one job.
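the coarse/fine combination suggested here reduces to one multiply-add (names and units hypothetical, taken from the 10ms / 0.1ms proposal above):

```c
#include <stdint.h>

/* sketch of the coarse/fine time split: coarse counts 10ms steps,
   fine counts 0.1ms steps; combine into a single time in 0.1ms units.
   a 16-bit unsigned coarse value gives just over 10 minutes of range. */
static uint32_t time_tenth_ms(uint16_t coarse_10ms, uint16_t fine_tenth_ms) {
    return (uint32_t)coarse_10ms * 100 + fine_tenth_ms;
}
```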
|
OK after all my hot air here's an actual idea:
So serial transmission for time params! n.b |
|
hm, that's an interesting thought. "modal" parameter response is a little scary to me, though. want to avoid two things: if there are not strong objections, i'm going to go with the plan above:
that way the state machine you suggest can still be implemented in BEES. having TIMER output a special value on overflow is a very good idea; but i would just make it a separate output. so on a long interval, the (new) OVER output is an integer representing time / 32k; the extant TIME output is then the remainder. but i'm trying to think of flexibility above all; i don't mind adding a couple of operators to deal with tapping >30s intervals, for example. it's totally worth trying out the scheme you suggest as a variation; maybe it will work better!
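the OVER/TIME split is just a shift and a mask if "32k" means 32768 (output names from the comment above, function shape hypothetical):

```c
#include <stdint.h>

/* sketch of the proposed extra TIMER output: on a long interval the
   (new) OVER output carries time / 32768 and the extant TIME output
   carries the remainder, so both fit comfortably in 16-bit values. */
static void timer_split(uint32_t ticks_ms, int16_t* over, int16_t* rem) {
    *over = (int16_t)(ticks_ms >> 15);     /* time / 32k          */
    *rem  = (int16_t)(ticks_ms & 0x7fff);  /* remainder, 0..32767 */
}
```

reconstruction on the receiving side is then `over * 32768 + rem`, which a couple of bees operators (MUL, ADD) can do without any modal behavior.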
|
If you were going to go the route I suggested, 0xFFFF would have to become a globally special parameter - the idea is the same as utf-8, and it could no doubt generate the same type of brain-numbing stupid bugs out in the wild as character encoding! If people are trying to hack bees and have missed the subtle point that 0xFFFF is special, there would be much frustration and gnashing of teeth.

The other issue with it is: what does any module in the network do when presented with the overflow signal? ADD would have to store it until the 32 bit word is fully loaded, then serially transmit the result. You might want ROUTE to transmit the overflow signal instantaneously (in order to avoid congestion later on). If it hit some type of trigger input, you'd want it to behave the same as ADD (i.e. wait until the event actually happens).

I bet that transmission of a long time parameter in the above scheme through a complex bees network would cause significant congestion if not done optimally. With network congestion comes timing errors, so at that stage what's the point in sending a super-accurate time value!?
|
so yeah in short I believe your scheme is the more pragmatic choice. My suggestion (if possible) for this type of scheme would be as follows: in order to send long time values from a TIMER to lines, make it possible to simply connect:
|
|
by some ironic twist of fate I've so far spent the whole working day getting mangled by a particularly senseless utf8 bug. I repent - keep bees beautifully 16 bit! |
|
Hi Rick, I've finally managed to work out how to set up a toolchain and have been trying to compile the pitch_shift module, but i get a lot of errors .. what am i doing wrong? (i'm totally new to all this, not even sure if this is the right place to comment on issues? i can't see a way to comment on the page the code is on?) anyways, these are the errors i'm getting (i've tried the master and dev branch): vagrant@aleph-dev:~/aleph/modules/pitch_shift$ make
|
wooh! sorry I've been totally dormant on aleph. Started learning common ... So yeah, as far as I remember I may have committed some broken stuff last ... Looking at that error message - should it be dacs.h instead of dac.h?
|
|
ah - so after a stupid amount of upheaval in my life for the past year or so, I'm finally finding time again to hack on aleph. So the time question comes up again as I'm making this first really serious attempt to write a 'grains' module.

Internally it uses 24x8 fractional sample indexing for all time params. It will use the concept of echoTap & scrubTap, where scrubTap indexes 'on top' of echoTap to implement pitch shift both as a realtime effect and when the echoTap is playing back captured audio.

In terms of exposed controls I'm currently leaning toward three integer time parameters - 1/256 samples (aka t_subsample), whole samples (aka t_sample) & 64 samples (aka t_ms). this gives resolution/range of: t_subsample: 100ns/6ms. Can always play around with the scale of t_ms when the module is finished.
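24x8 fractional indexing packs whole samples and a 1/256-sample fraction into one word; a minimal sketch (function names hypothetical, not from the grains source):

```c
#include <stdint.h>

/* sketch of 24.8 fractional sample indexing as described for the
   grains module: the low 8 bits are the sub-sample fraction, so one
   step is 1/256 of a sample (~81ns at 48kHz, the ~100ns figure above)
   and the upper 24 bits are the whole-sample index. */
#define FRACT_BITS 8

static int32_t grain_index(int32_t whole_samples, uint8_t subsample) {
    return (whole_samples << FRACT_BITS) | subsample;
}

static int32_t index_whole(int32_t idx)   { return idx >> FRACT_BITS; }
static int32_t index_fract(int32_t idx)   { return idx & 0xff; }
```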
|
|
This issue was originally about fixing aleph's timebase in avr32 vs bfin, so you can measure taptempo using a metro, then set, for example, the loop length of lines to match that tempo. I still want this feature! Just came up with an ultimately unsatisfying but practical solution to the conundrum, illustrated in these changes: https://github.com/boqs/aleph/tree/lines_timebase

The thinking on this is that lines' time params should have a resolution of 2ms, or 96 samples. This enables a 60 second 'line' with 2ms resolution. Without changes to BEES, this enables the desired functionality by sending the output of TIMER to DIV(2). Obviously the displayed number on the INS page for time params is now wrong on the lines_timebase branch - I will try to mess with the radix to get 2ms resolution & a display in ms before sending a pull... Any objection to this @catfact?
With this diagnostic scene (rickvenn.com/ticktest.scn) under bees 0.5.2 and a binary built from the dev branch, I see that the units of time are not compatible between the module params (e.g. loopX) and the output of metro / timer ops. Is there anything we could do about this? Seems like taptempo for delay should be a case of feeding the output of a timer straight into delayX, without any fudge factors?
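the lines_timebase scheme described above is easy to check numerically (assuming the 48kHz codec rate; the bees-side division is literally what DIV(2) does in the network):

```c
#include <stdint.h>

/* sketch of the lines_timebase conversion: lines time params count
   2ms units (96 samples at 48kHz), so 1ms TIMER ticks are halved
   by DIV(2) in the bees network before reaching a time param. */
static int32_t timer_to_lines_units(int32_t ticks_ms) {
    return ticks_ms / 2;   /* DIV(2): 1ms ticks -> 2ms units */
}

static int32_t lines_units_to_samples(int32_t units) {
    return units * 96;     /* one 2ms unit = 96 samples at 48kHz */
}
```

so a one-second tap (1000 ticks) becomes 500 lines units, i.e. exactly 48000 samples -- a 60 second 'line' then needs only 30000 units, comfortably inside a 16-bit param.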