(setValueCurveAtTime): AudioParam.setValueCurveAtTime #131
Much more detailed text added in:
I think the level of detail in the new text is good. One thing that seems to be missing though is what the value is when t < time and when t >= time + duration (most likely values[0] and values[N-1], respectively). Also, the expression "v(t) = values[N * (t - time) / duration]" is effectively nearest interpolation. Is that intended? Linear interpolation seems more logical.
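To illustrate the point being made, here is a minimal sketch of the lookup described by that expression, with the boundary behavior the comment guesses at (holding values[0] before the interval and values[N-1] after it). The function name and the clamping are my own assumptions, not spec text:

```javascript
// Hypothetical sketch of v(t) = values[N * (t - time) / duration],
// i.e. nearest (truncating) index lookup with assumed hold behavior
// outside the [time, time + duration) interval.
function curveValueNearest(values, time, duration, t) {
  const N = values.length;
  if (t < time) return values[0];                 // assumed: hold first value
  if (t >= time + duration) return values[N - 1]; // assumed: hold last value
  // Truncate the fractional index to the nearest lower integer.
  const k = Math.min(N - 1, Math.floor(N * (t - time) / duration));
  return values[k];
}
```

Note how the value jumps in steps as t sweeps the interval, which is why the comment characterizes this as nearest interpolation.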
(In reply to comment #2)
The idea is that the number of points in the Float32Array can be large so that the curve data is effectively over-sampled and linear interpolation is not necessary.
(In reply to comment #3)
Looks good for t >= time + duration. As for t < time, I guess the curve is not active, so it need not be defined (?). Why don't we want linear interpolation? Linear interpolation would make the interface much easier to use, and could save a lot of memory. E.g. a plain ramp would occupy 256 KB for 16-bit precision without linear interpolation (and require a fair amount of JavaScript processing to generate the ramp). With linear interpolation the same ramp could be accomplished by a 2-entry Float32Array and minimal JavaScript processing. I don't think that linear interpolation would cost much more in terms of performance, especially not compared to e.g. the exponential ramp.
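A sketch of the linear alternative being argued for here (again with assumed function name and hold behavior at the boundaries, not spec text), showing that a plain ramp then needs only a 2-entry array:

```javascript
// Hypothetical sketch: linear interpolation over the same curve data.
// The N points are mapped onto N-1 equal-length segments.
function curveValueLinear(values, time, duration, t) {
  const N = values.length;
  if (t <= time) return values[0];
  if (t >= time + duration) return values[N - 1];
  const x = (N - 1) * (t - time) / duration; // fractional segment position
  const i = Math.floor(x);
  const frac = x - i;
  return values[i] + frac * (values[i + 1] - values[i]);
}

// A full 0-to-1 ramp with just two entries, instead of a densely
// pre-sampled array:
const ramp = Float32Array.from([0, 1]);
```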
(In reply to comment #3)
That idea assumes the user creates a 'curve' that is itself sufficiently oversampled (has far more data than needed).
(In reply to comment #5)
True, but in this case I think that the real use case is to use quite low-frequency signals (like various forms of ramps that run for at least 20 ms or so). For those scenarios, band-limiting should not be necessary. As long as the spec mandates a certain method of interpolation (e.g. nearest, linear or cubic spline), the user knows what to expect and will not try to do other things with it (like modulating a signal with a high-frequency waveform). Also, I think it's important that all implementations behave equally here, because different interpolation & filtering methods can lead to quite different results. E.g. a 5 second fade-out would sound quite different if it used nearest interpolation instead of cubic spline interpolation. In that respect, a simpler and more performance-friendly solution (like nearest or linear interpolation) is better, because it's easier to mandate for all implementations.
(In reply to comment #6)
I can tell you from years of synthesis experience that the resolution/quality of envelopes is crucial. This is especially true for creating percussive sounds. So undersampling will work well only if people use pre-filtered sample data as curves, and even then there is a chance not all energy will come through, as the user must make sure the curve data is never played back too fast. With naive undersampling the results will become increasingly unpredictable the more of the curve's features (its rough parts) fall outside the audio band frequency-wise. About undersampling, after some thought I'd say that both nearest-neighbour and linear interpolation could be handy. Usually such an algorithm has a balance point at 0.5: a comparison is made to see if the value at a time is closer to the previous or the next sample, and the output switches halfway between the samples. But then sometimes you don't want to hear these steps at all. More fancy interpolation is probably not very useful in this case.
redman, are you suggesting that other curves (setValueAtTime, linearRampToValueAtTime, exponentialRampToValueAtTime and setTargetAtTime) should be subject to filtering too? Not sure if you can construct a case where you get out-of-band frequencies using those, but I guess you can (e.g. an exponential ramp with a very short duration).
(In reply to comment #8)
Certainly not! :) As for the other parameters, it would be handy to have a 'better than linear/ramp' interpolator that can be switched on or off.
(In reply to comment #9)
Well, I'm pretty sure that a mathematical exponential ramp exhibits an infinite frequency spectrum (i.e. requires an infinite number of Fourier terms to reconstruct properly), and that just sampling it without any filtering will indeed result in aliasing. This is also true for a linear ramp, or even a simple setValueAtTime. That's analogous to what would happen if you implemented the Oscillator node with just trivial mathematical functions (such as using the modulo operator to implement a saw wave). I guess that my point is: Do we really have to care about aliasing filters for AudioParams at all? It would make things much more complicated. If you really want to do things like Nyquist-correct sampling of a custom curve, you can use an AudioBufferSourceNode as the input to an AudioParam instead.
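For concreteness, here is the kind of "trivial mathematical function" oscillator the comment warns about, written as a hypothetical illustration (the function name is my own). Sampling this waveform directly, with no band-limiting, is exactly what produces aliasing above the Nyquist frequency:

```javascript
// Hypothetical illustration: a naive sawtooth generated with the
// modulo operator. Each sample is taken from the ideal (infinite-
// bandwidth) waveform, so the result aliases for non-trivial
// frequencies.
function naiveSaw(freq, sampleRate, n) {
  const out = new Float32Array(n);
  for (let i = 0; i < n; i++) {
    const phase = (i * freq / sampleRate) % 1; // phase in [0, 1)
    out[i] = 2 * phase - 1;                    // saw in [-1, 1)
  }
  return out;
}
```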
(In reply to comment #10)
Well, the problem is kind of that there will be very different requirements depending on what the AudioParam is controlling. I'm not sure a filter would be that much more complicated. There is already a filter active on the setValue method of AudioParams.
Here's my take: AudioParam already has several ways of being controlled:
Especially with (3) we have a pretty rich possibility of controlling the parameters, including ways which are concerned about band-limited signals. This does bring up other areas of the API which need to be concerned with aliasing:
- AudioBufferSourceNode: currently its interpolation method is unspecified. WebKit uses linear interpolation, but cubic and higher-order methods could be specified using an attribute.
- OscillatorNode: once again the quality could be controlled via an attribute. WebKit currently implements fairly high-quality interpolation here.
- WaveShaperNode: there are two aspects of interest here:
(In reply to comment #12)
I'd agree with this except that it may not be clear for the user that the data should be sufficiently smooth for it to be rendered at higher speeds without artefacts.
You forgot case 4): directly setting the value without any interpolation.
Usually an envelope consists of several segments of functions that are controlled independently.
For samples I'd suggest an FIR filter with a sinc kernel if you implement anything more fancy than linear.
Do you mean the .frequency parameter?
I agree that short curves will lead to extra degradation.
It would be super if the algorithm does oversample.
Is there anything against the idea of having separate interpolator objects?
(In reply to comment #14)
I don't think that there's any simple way to generically abstract an interpolator object to work at the level of the modules in the Web Audio API. Marcus has suggested an approach which is very much lower-level with his math library, but that's assuming a processing model which is very much different than the "fire and forget" model we have here. Even if there were a way to simply create an interpolator object and somehow attach it to nodes (which I don't think there is), I think that for the 99.99% case developers don't want to have to worry about such low-level details for such things as "play sound now". I've tried to design the AudioNodes such that they all have reasonable default behavior, trading off quality versus performance. An attribute for interpolation quality seems like a simple way to extend the default behavior, without requiring developers to deal with interpolator objects all the time.
It's unclear what the state of play is for the original problem of under/mis-specification of the interpolation. It looks as though the language has been somewhat cleaned up, but the definition of "scaled to fit the desired duration" still seems fuzzy.
TPAC RESOLUTION: Spec to clarify linear interpolation. If other interpolation is wanted, a feature request is required.
Doing an interpolation seems more useful. You can use an array with step values if you want steps.
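A hypothetical sketch of that suggestion (the helper name and sampling density are my own): if you densely sample a step function into the curve array, linear interpolation leaves only short ramps at each transition, effectively recovering steps.

```javascript
// Hypothetical helper: build a stepped curve by repeating each level
// samplesPerStep times, so linear interpolation between curve points
// is flat within each step and ramps only briefly at the boundaries.
function makeStepCurve(levels, samplesPerStep) {
  const curve = new Float32Array(levels.length * samplesPerStep);
  for (let i = 0; i < levels.length; i++) {
    curve.fill(levels[i], i * samplesPerStep, (i + 1) * samplesPerStep);
  }
  return curve;
}

// Usage sketch (parameter names assumed):
// param.setValueCurveAtTime(makeStepCurve([0.2, 0.8, 0.5], 64), t0, 0.3);
```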
Fix #131: specify linear interpolation for setValueCurveAtTime
Audio-ISSUE-39 (setValueCurveAtTime): AudioParam.setValueCurveAtTime [Web Audio API]
http://www.w3.org/2011/audio/track/issues/39
Raised by: Philip Jägenstedt
On product: Web Audio API
The interpolation of values is undefined, the spec only says "will be scaled to fit into the desired duration." The duration parameter is also completely wrong, apparently copy-pasted from setTargetValueAtTime: "time-constant value of first-order filter (exponential) approach to the target value."