
How does generator.mute change track states? #81

Closed
alvestrand opened this issue Feb 28, 2022 · 14 comments

@alvestrand
Contributor

At the moment, the setting of "mute" has two levels of "queue a task":

Unless one has been queued already this run of the event loop, queue a task to run the following steps:
- Let settledValue be this.[[isMuted]].
- For each live track sourced by this, queue a task to set a track’s muted state to settledValue.

("set a track's muted state" is a defined procedure in mediacapture-main that does not queue a task, but fires an event).

The existence of "queue a task" means that the following test snippet:

   generator.muted = true;
   assert_true(generator.track.muted)

will fail. I suppose that's the price we pay for having an event fire when we change the state.
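
For illustration, a minimal sketch of what a passing test has to do instead (inside an async test body, assuming WPT's assert_true): wait for the queued task to fire the track's "mute" event before asserting.

  generator.muted = true;
  await new Promise(resolve =>
    generator.track.addEventListener('mute', resolve, {once: true}));
  assert_true(generator.track.muted);  // holds once the queued task has run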

The double dispatch means that the following test snippet:

  generator.track.onmute = <observer code goes here>
  generator.track.onunmute = <observer code goes here>
  generator.muted = true;
  generator.muted = false;

will not observe any event (because at the time the first task starts executing, "settledValue" will be false, which is "no change").

Would it be simpler to remove the outer "queue a task", so that an event is always fired when you change generator.mute, and the track's muted state changes exactly the same number of times as the generator's muted state?

I don't immediately see how to implement the "Unless one has been queued already" part either, or why its existence makes for more intuitive behavior of the API; I'd suggest dropping it.

@jan-ivar
Member

jan-ivar commented Feb 28, 2022

I suppose that's the price we pay for having an event fire when we change the state.

Yes. We can either:

  1. change state synchronously and fire an event later (but when you get the event(s) later, state might differ), or
  2. always change the state from inside the queued task firing the event (so the event handler sees expected state)

We cannot have both.
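
A minimal sketch of what option 2 (the current text) looks like to an observer; the handler and the state always agree, while a synchronous read right after the assignment does not:

  generator.track.onmute = () => {
    console.log(generator.track.muted);  // true: state matches the event
  };
  generator.muted = true;
  console.log(generator.track.muted);    // still false on this task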

The existence of "queue a task" means that the following test snippet:

generator.muted = true;
assert_true(generator.track.muted)

will fail.

Seems desirable. Any assumption of the opposite would likely fail to consider that this affects all track clones from this generator (which can be an unbounded number), even ones transferred to workers (on different threads).

While we could make an exception for same-thread tracks, I don't see what that would serve other than feed this assumption.

The double dispatch means that the following test snippet:

  generator.track.onmute = <observer code goes here>
  generator.track.onunmute = <observer code goes here>
  generator.muted = true;
  generator.muted = false;

will not observe any event (because at the time the first task starts executing, "settledValue" will be false, which is "no change").

This also seems desirable to me. The prose is there to avoid the following firing 1000 events:

for (let i = 0; i < 1000; i++) {
  generator.muted = true;
  generator.muted = false;
}

...because that seems very different from how hardware devices (which tend to run in the background) typically work. As a JS-based plug-in for what's normally a hardware device running in the background, being able to synchronously observe results on MST seems of low value to me (outside of tests). It seems better to me to separate timing of one side from the other.
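
A sketch of the observable consequence of that coalescing (read from an async test body; the single setTimeout is just a stand-in for "after the queued tasks have had a chance to run"):

  let events = 0;
  generator.track.onmute = () => events++;
  generator.track.onunmute = () => events++;
  for (let i = 0; i < 1000; i++) {
    generator.muted = true;
    generator.muted = false;
  }
  await new Promise(r => setTimeout(r));  // give queued tasks a chance to run
  console.log(events);  // 0: [[isMuted]] settled back to false, so no change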

@alvestrand
Contributor Author

I don't understand the last argument. If I mute and unmute a track a thousand times, I expect to see a thousand mute events - not 999, not 1, not 0, but 1000. Anything else violates the principle of least astonishment.

Also, this code:

  generator.muted = true;
  generator.muted = false;
  generator.muted = true;
  generator.muted = false;

will then fire a number of events that depends both on the content of the queued tasks and on their relative timing. Again, this violates the principle of least astonishment.

If we change the prose to be:

run the following steps:

  • For each live track sourced by this, queue a task to set a track’s muted state to [[IsMuted]].

we will get a totally predictable number of events. That seems desirable to me.

(I agree with not changing the state of the track synchronously.)

@jan-ivar
Member

jan-ivar commented Mar 1, 2022

If I mute and unmute a track a thousand times, I expect to see a thousand mute events ...
If we change the prose to be:

run the following steps:

  • For each live track sourced by this, queue a task to set a track’s muted state to [[IsMuted]].

Do you mean:

That'll fire 1 event, not 1000, because of "2. If track.[[Muted]] is already newState, then abort these steps."

That's assuming generator.[[isMuted]] isn't modified by JS during the other 999 tasks, which would then fire ghost events. Action at a distance.

Or did you mean:

This would produce the following sequence of tasks (time flowing downward):

Task 1:
  generator.muted = true;
  generator.muted = false;
  generator.muted = true;
  ...
  generator.muted = false;

Task 2:
  onmuted
  track.muted == true
  generator.muted == false

Task 3:
  onunmuted
  track.muted == false
  generator.muted == false

Task 4:
  onmuted
  track.muted == true
  generator.muted == false

...

Let's hypothetically say each task takes 0.02 ms. That's 20 ms during which JS is led to believe the underlying device (generator) is flipping its muted state back and forth, when this isn't happening. That all happened on the same task 19.98 milliseconds ago, which wasn't enough time for any real background device to have had any back-and-forth reaction.

Letting synchronous actions on a single task generate a (potentially unbounded) series of timed events like this seems like a bad idea. It also sets up a dissonance between what the JS observer sees and what is actually happening with the generator.

@jan-ivar
Member

jan-ivar commented Mar 1, 2022

I don't immediately see how to implement the "Unless one has been queued already" part either

We could flesh this out with a boolean [[HaveQueuedAlready]] internal slot if need be.
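
For illustration only (not spec text), a hypothetical sketch of that coalescing as a JS model; queueMicrotask stands in for "queue a task", and setMuted stands in for mediacapture-main's "set a track's muted state":

  class MutedModel {
    isMutedSlot = false;        // [[isMuted]]
    haveQueuedAlready = false;  // [[HaveQueuedAlready]]
    liveTracks = [];
    set muted(value) {
      this.isMutedSlot = value;
      if (this.haveQueuedAlready) return;       // coalesce repeated sets
      this.haveQueuedAlready = true;
      queueMicrotask(() => {                    // stand-in for "queue a task"
        this.haveQueuedAlready = false;
        const settledValue = this.isMutedSlot;  // read when the task runs
        for (const track of this.liveTracks) {
          queueMicrotask(() => track.setMuted(settledValue));  // hypothetical helper
        }
      });
    }
  }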

@alvestrand
Contributor Author

I think there is no difference between the two alternatives, since all the "queue a task" operations are in the same execution cycle, so [[isMuted]] does not change value within the loop that queues the tasks. I was assuming that "queue a task" passes its arguments by value, not by reference. The second description makes it clear that it is pass-by-value, so that may be the better formulation of what I intended to say.

@jan-ivar
Member

jan-ivar commented Mar 1, 2022

I was assuming that "queue a task" passes its arguments by value, not by reference

Queue a task doesn't take custom arguments, just steps. So I read "queue a task to do X" as:

  1. Queue a task to run the following steps:
    1. Do X.

If X is "Use this.[[isMuted]] for something", then "this" is dereferenced when the queued task executes.

This is similar to the forEach closure problem in JS.
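
For illustration, the analogous JS pitfall: the queued callback reads the captured object when it runs, not when it was queued.

  const state = {muted: false};
  state.muted = true;
  setTimeout(() => console.log(state.muted));  // logs false...
  state.muted = false;                         // ...because this line runs first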

The second description makes it clear that it is pass-by-value, so that may be the better formulation of what I intended to say.

Got it. But I still think that's undesirable. The track.muted API was designed to be called by the user agent, and to be observed by JS as well as downstream sinks implemented by the user agent.

With VideoTrackGenerator, we're opening up a way for JS to act as an underlying device, giving JS a new control surface with new ways to influence downstream sinks and already-written JS. I think being conservative here means keeping some control over what JS can do in this new role, which to me means limiting things like setting muted many times in a row in the same task: that is something downstream code hasn't seen before, it expands what we need to test for, and it doesn't seem to serve any purpose.

@alvestrand
Contributor Author

I think being conservative here means keeping some control over what JS can do in this new role

I think we're at a fundamental difference of opinion here.

You are arguing that, in the name of "conservatism", we should insert behavior that is hard for the JS programmer to reason about and therefore hard to program against.

I am arguing that given that the mechanism exists, and we're giving access to it at all, we should give JS programmers the simplest model to reason about, so that they can do what they think is the Right Thing without worrying about the UA inserting arbitrary restrictions to limit what they can do with it.

This API does not prevent a lot of behaviors that could be considered unexpected, including:

  • inserting frames of varying sizes
  • inserting frames with timestamp changes that don't correspond to real time passing
  • inserting multiple frames in a single execution cycle

And so on and so forth.

Messing up people's mental model of what "mute a source" means ("mute all tracks; when a track's muted state changes, it fires an event") because of a completely theoretical fear of what the JS programmer might possibly do does not seem reasonable to me.

@guidou
Contributor

guidou commented Mar 2, 2022

Do we even need to specify behavior of track muted state in this spec?
My initial impression is that here we should just specify that a muted generator does not produce any frames.
Track muted state and generator muted state are two different things.
The language in mediacapture-main should take care of the track muted state.

Some text from mediacapture-main for reference:

The muted/unmuted state of a track reflects whether the source provides any media at this moment.
A MediaStreamTrack is muted when the source is temporarily unable to provide the track with data.
A track can be muted by a user. Often this action is outside the control of the application.
A track can also be muted by the User Agent.

I don't see it as a necessity that muting a generator should synchronously mute all connected tracks. The UA can detect this and mute the tracks using any logic it wants (asynchronously if it so prefers).
I do see it as a problem if unmuting a generator force-unmutes all connected tracks, because that could conflict with the UA's judgment that the track should remain muted (e.g., due to user action via UI or some other UA-specific mechanism).

@alvestrand
Contributor Author

I don't think we can leave it as implementation defined. It's a specified API, and specified APIs should have specified consequences, unless we have strong reasons not to.

I think we have consensus that the muting of tracks shouldn't be synchronous (because event firing is a synchronous operation, and that would put user JS inside the mute-a-generator processing steps - bad idea).

I also think that we haven't surfaced any reason that the "mute a track" operation defined in the mediacapture-main spec needs to change. The discussion is all about when and how to call it.

@jan-ivar
Copy link
Member

jan-ivar commented Mar 9, 2022

The UA can detect this and mute the tracks using any logic it wants (asynchronously if it so prefers).

I agree with @guidou that muting as a model is something mediacapture-main defines — and that's as something happening asynchronously in the background — and that we need to respect that model here.

I agree with @alvestrand we can't leave it unspecified, because it's JS observable (but I disagree about what's POLA).

Web developers expecting synchronous actions to influence asynchronous operations a certain way should take this twitter poll, which highlights that any assumptions they make about synchronous and asynchronous interactions are bad ones.

Applying that to our case: I don't see why it matters what the net muted state is "halfway through a JS call" (my new favorite phrasing of run-to-completion semantics).

@alvestrand
Contributor Author

The POLA thing that worries me is this:

generator.muted = true;

"where did my mute go? There was no muted event fired!"
People don't expect actions to have no effect. The only way to debug this would be to detect that generator.muted is now false when you expected it to be true. Is that enough?
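
For illustration, a sketch of that situation, assuming some other code in the same task flips muted back so the change is coalesced away:

  generator.track.onmute = () => console.log('mute');  // never fires
  generator.muted = true;
  // ...later in the same task, e.g. in some other module:
  generator.muted = false;
  // the only trace left is the attribute itself:
  console.log(generator.muted);  // false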

@alvestrand
Contributor Author

This discussion died down, but reviewing it, I think I'm OK with the current language, where generator.mute and track.mute are not synchronous, and where the number of mute events fired by a series of changes to generator.muted depends on JS timing, rather than being totally predictable.

@youennf
Contributor

youennf commented Nov 27, 2023

The current spec approach makes sense to me. Maybe we could tidy up the language like we have done for MediaStreamTrack stats with task/task id...

@alvestrand
Contributor Author

Discussed in WG meeting on Jan 16, 2024. Decided to go with the approach currently in the spec; no change needed.
