[Privacy] Fingerprinting Based on outputLatency #1498
Comments
Rounding up to the nearest enum would greatly increase the latency. Mostly, the feedback is that people want to minimise the latency or (for live work) know the exact latency.
It's probably ok to reduce the accuracy of values to a few milliseconds, say no more than 5. That's probably good enough to synchronize things for musical applications, but I think if you're playing sine tones, you will hear some beating effect. But I'm not an audio/studio engineer.
I thought the purpose of
If you're playing a video that has a sine tone, and you want webaudio to use an oscillator to produce the same tone, the difference in time stamps could cause the tones to be out of phase. I guess it wouldn't beat, but you could get constructive or destructive interference.
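The phase mismatch described above can be quantified: a latency error of Δt seconds shifts a tone of frequency f by 2π·f·Δt radians. A small illustrative sketch (the helper and the numbers are not part of the Web Audio API, just an example of the arithmetic):

```javascript
// Phase offset (radians) introduced by a latency error deltaTSeconds
// for a sine tone at freqHz. Illustrative helper only.
function phaseOffset(freqHz, deltaTSeconds) {
  return 2 * Math.PI * freqHz * deltaTSeconds;
}

// A 5 ms timing error shifts a 100 Hz tone by a half cycle (pi radians),
// so the video's tone and the oscillator would destructively interfere.
const offset = phaseOffset(100, 0.005); // equals Math.PI
```

At higher frequencies even a sub-millisecond error wraps through multiple cycles, which is why per-device exact latency matters for this use case.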
The spec now says:
@jasonanovak Does that wording satisfy your concern?
Yes, thanks.
As this value "depends on the platform and the connected hardware audio output device", it can be used to determine which device is being used to render the webpage, and thus provides fingerprinting capability. One way to mitigate this would be to describe the latency using either a defined set of enums or a set of defined outputLatency values.
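The proposed mitigation amounts to quantizing the reported latency into coarse buckets before exposing it, so that many distinct hardware latencies collapse into the same reported value. A hypothetical sketch of that idea (the function name and the 5 ms bucket size are assumptions, not spec text):

```javascript
// Quantize a latency value (seconds) to the nearest multiple of
// stepSeconds before exposing it, reducing fingerprinting entropy.
// Hypothetical mitigation sketch; not part of the Web Audio spec.
function coarsenLatency(latencySeconds, stepSeconds = 0.005) {
  return Math.round(latencySeconds / stepSeconds) * stepSeconds;
}

// Two nearby hardware latencies collapse into the same reported bucket:
const a = coarsenLatency(0.0121); // ~0.010
const b = coarsenLatency(0.0104); // ~0.010
```

The trade-off debated in this thread is visible here: the coarser the step, the smaller the fingerprinting surface, but the larger the worst-case error for latency-sensitive (live) applications.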