When to fire events triggered by setRemoteDescription. #1691

Closed
henbos opened this issue Dec 8, 2017 · 6 comments
henbos commented Dec 8, 2017

In the spec, we have explored two approaches to firing events as a result of processing the result of an SRD call:

  1. Schedule to fire an event.
  2. Fire the event immediately midst-processing.

Both are problematic. 1) is problematic because by the time the event fires, the state of the objects it carries may no longer represent the state that caused it to fire, and 2) is problematic because the event handler can influence the SRD algorithm mid-run.

This issue proposes handling all events triggered by SRD - including pc.ontrack, stream.onaddtrack/onremovetrack, track.onmute/onunmute - the way we handle ontrack. That is: the SRD algorithm creates and adds each event to a list, then when the algorithm is done but before the promise is resolved, we do "for each event, fire event".
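The gather-then-fire pattern can be sketched with plain objects (a minimal sketch, not spec text; `applyRemoteDescription`, the step functions, and the `pc` shape are all illustrative, and no real WebRTC interfaces are involved):

```javascript
// Sketch of the proposed pattern: processing steps only *queue*
// events; they are all fired after processing completes, but before
// the returned promise resolves. All names here are illustrative.
function applyRemoteDescription(pc, steps) {
  return new Promise(resolve => {
    const events = [];
    // 1. Run every processing step; each queues events instead of
    //    firing them immediately.
    for (const step of steps) {
      step(pc, e => events.push(e));
    }
    // 2. Processing is done, so every object is in its final state.
    //    Fire the gathered events in order.
    for (const e of events) {
      const handler = pc['on' + e.type];
      if (handler) handler(e);
    }
    // 3. Only then resolve the promise.
    resolve();
  });
}
```

With this shape, a handler observes the end state of the whole SRD call, no matter which queued event it is responding to.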

For more details, here's essentially what I wrote on #1667 (comment):

Generally speaking, events are typically scheduled to fire later, so by the time they do, the effects of multiple events have already occurred. I think this is bad because the event handler gets fired with objects whose state is not even guaranteed to represent the state that caused them to fire.

Firing the events before the last step is important not only because we want the events to fire before the promise resolves, but because if we don't, calling SRD twice or doing other things before the events fire will yield track information in the event handlers that doesn't correlate with the event that caused them to fire.

I also argue that we cannot fire events midst-processing, or else the event handler can both inspect and influence the state of the processing algorithm. Something as simple as "for each transceiver, fire an event on the transceiver" would be problematic because the event handler could modify the set of transceivers (or any other state of the PC that the processing algorithm might use) while we are iterating over them.

What if you close the pc, or stop or add a track, in one of the event handlers? The SRD algorithm would have to handle substeps of the algorithm failing. Even if we know we are safe for now, this is not resilient to future changes, and implementations are error-prone if they don't think about this: for example, they must copy any container, or otherwise make sure it stays iterable even if it's modified during iteration, or else we might crash.
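One concrete hazard here is a handler mutating the very container being iterated. A snapshot copy avoids it (plain-JS sketch; the names are illustrative, not spec text):

```javascript
// In JS, deleting an entry from a Set while iterating it live makes
// the not-yet-visited entry disappear from the iteration. Walking a
// snapshot copy makes the walk immune to handler mutations.
const transceivers = new Set(['a', 'b', 'c']);
const fired = [];

function fireEventOn(t) {
  fired.push(t);
  transceivers.delete('c'); // a handler mutating pc state mid-walk
}

// Iterate a copy, not the live set:
for (const t of [...transceivers]) {
  fireEventOn(t);
}
```

All three entries get their event even though 'c' was removed from the live set during the walk; iterating `transceivers` directly would have skipped it.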

I argue we must:

  1. Fire all events after any processing steps.
  2. Have fired all events before the last step (before promise is resolved).

So, if the two tracks of a two-track stream are both removed, stream.onremovetrack fires twice, and in both events stream.getTracks().length == 0, even in the first track's removal event.
The guarantee is: the state of any object in an event is up to date with the result of the SRD call. In this case, that means the stream is empty.

This is consistent with ontrack. If you add two tracks to a stream, inspecting the stream in the first ontrack shows that it contains two tracks. If this wasn't the case, you would also have a confusing problem:

// Imagine streams only contain the tracks whose ontrack has fired.
pc.ontrack = (e) => {
  // Shared streams are processed multiple times for each ontrack,
  // we don't know which event represents the final state of the
  // stream, and this is sensitive to the machinery. It's better
  // that each object in any event fired represent the end-result
  // of the SRD.
  processStreams(e.streams);
};

Guaranteeing that the state of the objects represents the end of the SRD is easy to understand, consistent with other event mechanisms, and safer and easier to implement.

People should write code handling the end-result, not half-way results. They shouldn't have to care about whether adding or removing tracks from streams is implemented as an iteration of smaller operations or as an all-or-nothing operation.

@jan-ivar

So, the remaining issue here is the addtrack and removetrack events on the streams. Synchronizing them completely (detaching them from their state change) would, I think, violate POLA if the listener looks at stream.getTracks().length, as I argue in #1667 (comment). cc @annevk

I think we can get most of the way there, without violating POLA, by deferring both the addition/removal of the tracks as well as their related event firing until the end of the algorithm, right before we fire the track events we've gathered similarly.

Basically, push the tracks on two lists (removeList and addList), then later for each track in removeList remove the track and fire the removetrack event, and then ditto for the addList.

Since these lists are copies, they should be safe to walk. Listeners of these events will find everything in place, except for the tracks which are being removed/added one by one.

Listeners of the track events will see everything in place, as before (unaffected by this).


annevk commented Dec 20, 2017

I think if you could design this again, you'd have a single event potentially representing multiple operations. But given the state things are in now, it would indeed make the most sense to make each event represent the current state if possible (and it seems the reentrancy issues can be dealt with, given @jan-ivar's comment). Note that if you ever decide to have multiple "synchronous" events for a single state change, you cannot really guard against it anymore for the subsequent events; XMLHttpRequest has this problem. DOM has the other problem, though we're still somewhat hoping for mutation events to go away...


henbos commented Dec 28, 2017

#1691 (comment) sounds pretty safe to me. Though you could do something weird, like removing a track that is about to be removed: while the list being iterated is a copy and is safe to walk, our algorithm's attempt to remove the track would abort. Similarly, an attempt to mute a track could fail if an event handler already muted it, or something like that. It's pretty weird and probably doesn't matter, but any assumption that the steps succeed would have to be removed if we defer the operations.


henbos commented Dec 28, 2017

But what about this?

let processedStreams = new Set();
// pc.setRemoteDescription() is called with a description
// containing "stream: { track1, track2 }".
pc.ontrack = e => {
  e.streams.forEach(stream => {
    if (!processedStreams.has(stream)) {
      // New remote stream!
      processStream(stream);
      processedStreams.add(stream);
    }
  });
};

This will fire ontrack twice, once for "track1" and once for "track2"; both are added to "stream".
If we defer all operations, not just the firing of their events, then when track1's event fires, stream.getTracks() will only contain track1. Only in the second event does stream.getTracks() represent the complete picture. And we have no way of knowing when the last track has been added to the stream.

processStream() would not be able to look at stream.getTracks() without getting confusing results. Any application wanting to process remote streams and be able to use getTracks() would have to:

  1. Add all new streams to a list.
  2. After SRD has resolved, then handle the processing of all new streams.
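That forced workaround would look roughly like this (an illustrative sketch: `mockSRD`, `processStream` via the `processed` list, and the object shapes are stand-ins, since the real flow involves live WebRTC objects):

```javascript
// Mimic SRD firing one ontrack per track, then resolving.
function mockSRD(pc, stream) {
  for (const track of stream.tracks) {
    pc.ontrack({ track, streams: [stream] });
  }
  return Promise.resolve();
}

const pc = { ontrack: null };
const stream = { tracks: ['track1', 'track2'] };
const newStreams = new Set();
const processed = [];

// Step 1: the handler only collects new streams into a list.
pc.ontrack = e => {
  for (const s of e.streams) newStreams.add(s);
};

// Step 2: process the streams only after SRD has resolved, when
// getTracks() is finally guaranteed to be complete.
mockSRD(pc, stream).then(() => {
  for (const s of newStreams) processed.push(s.tracks.length);
});
```

The handler itself never inspects the stream; all real work is pushed past the promise resolution.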

What about POLA-violation "That's not what the remote stream looks like!" at processStream()? There was never a remote stream that only contained track1; the event describes details of the machinery, not the final result.


henbos commented Dec 28, 2017

Or consider stream.onaddtrack, when an existing stream gets two tracks added to it. Though that would be less surprising than the ontrack example, since the stream is already known.


jan-ivar commented Jan 3, 2018

@henbos The track events are already fired last, when everything is in place, so this can't happen.

What I suggest in #1691 (comment) is basically this order:

  1. For each track in muteTracks, set the muted state to the value true.
  2. For each stream+track pair in removeList, remove track from stream and fire a "removetrack" event on stream.
  3. For each stream+track pair in addList, add track to stream and fire an "addtrack" event on stream.
  4. For each trackEvent in trackEvents, fire a track event named "track" with trackEvent at the connection object.

1 and 4 are already in the spec. With the media capture algorithms already idempotent, there's no chance of failure.
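The four steps above can be sketched with plain objects (illustrative names, not spec text; `fire` stands in for DOM event dispatch):

```javascript
// Final ordering at the end of SRD: mute, removals with their
// events, additions with their events, then the track events.
function finishSRD(pc, muteTracks, removeList, addList, trackEvents, fire) {
  for (const track of muteTracks) {           // step 1
    track.muted = true;
  }
  for (const [stream, track] of removeList) { // step 2
    stream.tracks = stream.tracks.filter(t => t !== track);
    fire(stream, 'removetrack', track);
  }
  for (const [stream, track] of addList) {    // step 3
    stream.tracks.push(track);
    fire(stream, 'addtrack', track);
  }
  for (const ev of trackEvents) {             // step 4
    fire(pc, 'track', ev);
  }
}
```

Note that each removetrack/addtrack event fires right after its own state change, so a listener can still observe intermediate stream contents within steps 2 and 3; only the track events in step 4 see everything in place.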
