299 changes: 299 additions & 0 deletions index.bs
@@ -663,6 +663,9 @@ The interfaces defined are:
{{AudioNode}} which applies a non-linear waveshaping
effect for distortion and other more subtle warming effects.

* An {{AudioPlaybackStats}} interface, which provides statistics about the audio
played from the {{AudioContext}}.

There are also several features that have been deprecated from the
Web Audio API but not yet removed, pending implementation experience
of their replacements:
@@ -1488,6 +1491,7 @@ interface AudioContext : BaseAudioContext {
[SecureContext] readonly attribute (DOMString or AudioSinkInfo) sinkId;
attribute EventHandler onsinkchange;
attribute EventHandler onerror;
[SameObject] readonly attribute AudioPlaybackStats playbackStats;
AudioTimestamp getOutputTimestamp ();
Promise<undefined> resume ();
Promise<undefined> suspend ();
@@ -1533,6 +1537,11 @@ and to allow it only when the {{AudioContext}}'s [=relevant global object=] has
::
An ordered list to store pending {{Promise}}s created by
{{AudioContext/resume()}}. It is initially empty.

: <dfn>[[playback stats]]</dfn>
::
A slot where an instance of {{AudioPlaybackStats}} can be stored. It is
initially null.
</dl>

<h4 id="AudioContext-constructors">
@@ -1769,6 +1778,22 @@ Attributes</h4>
the context is {{AudioContextState/running}}.
* When the operating system reports an audio device malfunction.

: <dfn>playbackStats</dfn>
::
An instance of {{AudioPlaybackStats}} for this {{AudioContext}}.

<div algorithm="access playbackStats">
<span class="synchronous">When accessing this attribute, run the
following steps:</span>

1. If the {{[[playback stats]]}} slot is null, construct a new
{{AudioPlaybackStats}} object with [=this=] as the argument, and
store it in {{[[playback stats]]}}.

1. Return the value of the {{[[playback stats]]}} internal slot.
</div>
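The lazy-construction steps above, combined with the `[SameObject]` extended attribute, guarantee that repeated accesses return one and the same instance. A minimal sketch of the equivalent getter logic (hypothetical helper names; `makeStats` stands in for constructing the {{AudioPlaybackStats}} object):

```javascript
// Sketch of the lazy accessor above: construct on first access, then
// return the cached instance (mirroring [SameObject] semantics).
function makePlaybackStatsGetter(makeStats) {
  let cached = null;
  return function playbackStats() {
    if (cached === null) {
      cached = makeStats();
    }
    return cached;
  };
}
```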


</dl>

<h4 id="AudioContext-methods">
@@ -11536,6 +11561,280 @@ context.audioWorklet.addModule('vumeter-processor.js').then(() => {
});
</xmp>

<h3 interface lt="AudioPlaybackStats" id="AudioPlaybackStats">
The {{AudioPlaybackStats}} Interface</h3>

Provides audio underrun and latency statistics for audio played through the
{{AudioContext}}.

When audio is not delivered to the playback device on time, an audio
underrun occurs. The resulting discontinuity in the played signal produces an
audible "click", commonly called a "glitch". Glitches degrade the user
experience, so it is useful for the application to be able to detect them and
possibly take some action to improve playback.

{{AudioPlaybackStats}} is a dedicated object for audio statistics reporting;
it reports audio underrun and playback latency statistics for the
{{AudioContext}}'s playback path via
{{AudioDestinationNode}} and the associated output device. This allows
applications to measure underruns, which can occur for the
following reasons:
- The audio graph is too complex for the system to generate audio on time,
causing underruns.
- A problem external to the audio graph is disrupting playback. Examples of such problems are:
- Another program playing audio to the same playback device is malfunctioning.
- There is a global system CPU overload.
- The system is overloaded due to thermal throttling.
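A minimal sketch (assuming the proposed `playbackStats` attribute is available) of how an application might poll for new underruns; the detection logic is factored out so it works on any pair of snapshot objects:

```javascript
// Sketch: compute how many new underrun events occurred between two
// playbackStats snapshots (plain objects with an underrunEvents field).
function newUnderrunEvents(previous, current) {
  return current.underrunEvents - previous.underrunEvents;
}

// Hypothetical usage in a page, polling once per second (the stats
// update at most once per second; `context` is an AudioContext):
// let last = context.playbackStats.toJSON();
// setInterval(() => {
//   const now = context.playbackStats.toJSON();
//   if (newUnderrunEvents(last, now) > 0) {
//     // e.g. reduce the audio graph's complexity
//   }
//   last = now;
// }, 1000);
```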

Underruns are defined in terms of [=underrun frames=] and [=underrun events=]:
- An <dfn>underrun frame</dfn> is an audio frame played by the output device
that was not provided by the AudioContext.
This happens when the playback path fails to provide audio frames
to the output device on time, in which case the device must still play something.

NOTE: Underrun frames are typically silence.

This covers underruns caused by an underperforming rendering graph as well as
underrun situations that arise for reasons unrelated to
Web Audio or {{AudioWorklet}}s.
- When an [=underrun frame=] is played immediately after a non-underrun frame,
an <dfn>underrun event</dfn> begins.
That is, a run of multiple consecutive [=underrun frames=] counts as a single
[=underrun event=].
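Given these two definitions, an application can derive summary metrics from a stats snapshot. A hedged sketch (the helper name and snapshot shape are illustrative):

```javascript
// Sketch: derive summary metrics from the underrun counters. Works on
// any object exposing underrunDuration (s), underrunEvents and
// totalDuration (s), such as a playbackStats snapshot.
function underrunSummary(stats) {
  return {
    // Fraction of played-out audio that consisted of underrun frames.
    glitchFraction:
      stats.totalDuration > 0 ? stats.underrunDuration / stats.totalDuration : 0,
    // Mean duration of a single underrun event, in seconds.
    meanEventDuration:
      stats.underrunEvents > 0 ? stats.underrunDuration / stats.underrunEvents : 0,
  };
}
```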
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

What is "this"? Is it a point in time, a duration? Both sentences seem to disagree.


<pre class="idl">
[Exposed=Window, SecureContext]
interface AudioPlaybackStats {
constructor (AudioContext context);
readonly attribute double underrunDuration;
readonly attribute unsigned long underrunEvents;
readonly attribute double totalDuration;
readonly attribute double averageLatency;
readonly attribute double minimumLatency;
readonly attribute double maximumLatency;
undefined resetLatency();
[Default] object toJSON();
};
</pre>

{{AudioPlaybackStats}} has the following internal slots:

<dl dfn-type=attribute dfn-for="AudioPlaybackStats">
: <dfn>[[audio context]]</dfn>
::
The {{AudioContext}} that this instance of {{AudioPlaybackStats}} is
associated with.

: <dfn>[[underrun duration]]</dfn>
::
The total duration in seconds of [=underrun frames=] that
{{[[audio context]]}} has played as of the last stat update, a double.
Initialized to 0.
: <dfn>[[underrun events]]</dfn>
::
The total number of [=underrun events=] that have occurred in playback by
{{[[audio context]]}} as of the last stat update, an unsigned integer.
Initialized to 0.

: <dfn>[[total duration]]</dfn>
::
The total duration in seconds of all frames
(including [=underrun frames=]) that {{[[audio context]]}} has played
as of the last stat update, a double. Initialized to 0.
: <dfn>[[average latency]]</dfn>
::
The average playback latency in seconds of frames played by
{{[[audio context]]}} over the currently tracked interval, a double.
Initialized to 0.

: <dfn>[[minimum latency]]</dfn>
::
The minimum playback latency in seconds of frames played by
{{[[audio context]]}} over the currently tracked interval, a double.
Initialized to 0.

: <dfn>[[maximum latency]]</dfn>
::
The maximum playback latency in seconds of frames played by
{{[[audio context]]}} over the currently tracked interval, a double.
Initialized to 0.

: <dfn>[[latency reset time]]</dfn>
::
The time when the latency statistics were last reset, a
double. This is in the clock domain of {{BaseAudioContext/currentTime}}.
</dl>

<h4 id="AudioPlaybackStats-constructors">
Constructors</h4>

<dl dfn-type="constructor" dfn-for="AudioPlaybackStats" id="dom-audioplaybackstats-constructor-audioplaybackstats">
: <dfn>AudioPlaybackStats(context)</dfn>
::
Run the following steps:
1. Set {{[[audio context]]}} to <code>context</code>.
1. Set {{[[latency reset time]]}} to 0.

<pre class=argumentdef for="AudioPlaybackStats/constructor()">
context: The {{AudioContext}} this new {{AudioPlaybackStats}} will
be associated with.
</pre>
</dl>

<h4 id="AudioPlaybackStats-attributes">
Attributes</h4>

Note: These attributes update only once per second and under specific
conditions. See the <a href="#update-audio-stats">update audio stats</a>
algorithm and <a href="#AudioPlaybackStats-mitigations">privacy mitigations</a>
for details.

<dl dfn-type=attribute dfn-for="AudioPlaybackStats">
: <dfn>underrunDuration</dfn>
::
Returns the duration of [=underrun frames=] played by the
{{AudioContext}}, in seconds.
NOTE: This metric can be used together with {{totalDuration}} to
calculate the percentage of played out media that was not provided by
the {{AudioContext}}.

Returns the value of the {{[[underrun duration]]}} internal slot.

: <dfn>underrunEvents</dfn>
::
Measures the number of [=underrun events=] that have occurred during
playback by the {{AudioContext}}.

Returns the value of the {{[[underrun events]]}} internal slot.

: <dfn>totalDuration</dfn>
::
Measures the total duration of all audio played by the {{AudioContext}},
in seconds.

Returns the value of the {{[[total duration]]}} internal slot.

: <dfn>averageLatency</dfn>
::
The average playback latency, in seconds, for audio played since the
last call to {{resetLatency()}}, or since the creation of the
{{AudioContext}} if
{{resetLatency()}} has not been called.

Returns the value of the {{[[average latency]]}} internal slot.

: <dfn>minimumLatency</dfn>
::
The minimum playback latency, in seconds, for audio played since the
last call to {{resetLatency()}}, or since the creation of the
{{AudioContext}} if
{{resetLatency()}} has not been called.

Returns the value of the {{[[minimum latency]]}} internal slot.

: <dfn>maximumLatency</dfn>
::
The maximum playback latency, in seconds, for audio played since the
last call to {{resetLatency()}}, or since the creation of the
{{AudioContext}} if
{{resetLatency()}} has not been called.

Returns the value of the {{[[maximum latency]]}} internal slot.
</dl>

<h4 id="AudioPlaybackStats-methods">
Methods</h4>

<dl dfn-type=method dfn-for="AudioPlaybackStats">
: <dfn>resetLatency()</dfn>
::
Sets the start of the interval over which latency stats are tracked to
the current time.
When {{resetLatency()}} is called, run the following steps:

1. Set {{[[latency reset time]]}} to {{BaseAudioContext/currentTime}}.
1. Let <var>currentLatency</var> be the playback latency of the last
frame played by {{[[audio context]]}}, or 0 if no frames have been
played out yet.
1. Set {{[[average latency]]}} to <var>currentLatency</var>.
1. Set {{[[minimum latency]]}} to <var>currentLatency</var>.
1. Set {{[[maximum latency]]}} to <var>currentLatency</var>.
</dl>
<h4>Updating the stats</h4>
<div id="update-audio-stats" algorithm="update audio stats">
Once per second, execute the
<a href="#update-audio-stats">update audio stats</a> algorithm:
1. If {{[[audio context]]}}'s state is not {{AudioContextState/running}}, abort these steps.
1. Let <var>canUpdate</var> be false.
1. Let <var>document</var> be the {{[[audio context]]}}'s
[=relevant global object=]'s [=associated Document=].
If <var>document</var> is [=Document/fully active=] and <var>document</var>'s
[=Document/visibility state=] is `"visible"`, set <var>canUpdate</var> to
true.
1. Let <var>permission</var> be the [=permission state=] for the permission
associated with [="microphone"=] access.
If <var>permission</var> is "granted", set <var>canUpdate</var> to true.
1. If <var>canUpdate</var> is false, abort these steps.
1. Set {{[[underrun duration]]}} to the total duration of all
[=underrun frames=] (in seconds) that
{{[[audio context]]}} has played since its construction.
1. Set {{[[underrun events]]}} to the number of times that {{[[audio context]]}}
has played an [=underrun frame=] after a non-underrun frame since its
construction.
1. Set {{[[total duration]]}} to the total duration of all frames (in seconds)
that {{[[audio context]]}} has played since its construction.
1. Set {{[[average latency]]}} to the average playback latency (in seconds) of
frames that {{[[audio context]]}} has played since
{{[[latency reset time]]}}.
1. Set {{[[minimum latency]]}} to the minimum playback latency (in seconds) of
frames that {{[[audio context]]}} has played since
{{[[latency reset time]]}}.
1. Set {{[[maximum latency]]}} to the maximum playback latency (in seconds) of
frames that {{[[audio context]]}} has played since
{{[[latency reset time]]}}.
</div>
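Since the minimum and maximum latency slots track the interval since the last {{resetLatency()}} call, their spread gives a rough jitter figure. A hedged sketch (helper name illustrative; the commented usage assumes an AudioContext `context`):

```javascript
// Sketch: latency jitter over the currently tracked interval, i.e. the
// spread between the maximum and minimum observed latency since the
// last resetLatency() call.
function latencyJitter(stats) {
  return stats.maximumLatency - stats.minimumLatency;
}

// Hypothetical per-second monitoring, resetting the interval each time:
// setInterval(() => {
//   const stats = context.playbackStats;
//   report(stats.averageLatency, latencyJitter(stats));
//   stats.resetLatency();
// }, 1000);
```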

<h4>Privacy considerations of {{AudioPlaybackStats}}</h4>

<h5>Risk</h5>
Audio underrun information could be used to form a cross-site
covert channel between two cooperating sites.
One site could transmit information by intentionally causing audio glitches
(by causing very high CPU usage, for example) while the other site
could detect these glitches.
<h5 id="AudioPlaybackStats-mitigations">Mitigations</h5>
To inhibit the use of such a covert channel, the API implements the following
mitigations.
- The values returned by the API MUST NOT be updated more than once per
second.
- The API MUST be restricted to sites that fulfill at least one of the following
criteria:
1. The site has obtained
<a href="https://w3c.github.io/mediacapture-main/#dom-mediadevices-getusermedia">getUserMedia</a>
permission.

Note: The reasoning is that if a site has obtained
<a href="https://w3c.github.io/mediacapture-main/#dom-mediadevices-getusermedia">getUserMedia</a>
permission, it can receive glitch information or communicate
efficiently through use of the microphone, making access to the
information provided by {{AudioPlaybackStats}} redundant. These options
include detecting glitches through gaps in the microphone signal, or
communicating using human-inaudible sine waves. If microphone access is
ever made safer in this regard, this condition should be reconsidered.
1. The document is [=Document/fully active=] and its
[=Document/visibility state=] is `"visible"`.

Note: Assuming that neither cooperating site has microphone permission,
this criterion ensures that the site that receives the covert signal
must be visible, restricting the conditions under which the covert
channel can be used. It makes it impossible for sites to communicate
with each other using the covert channel while not visible.

<h2 id="processing-model">
Processing model</h2>
