
editorial: media api introduction #1240

Closed
fippo opened this issue May 24, 2017 · 13 comments · Fixed by #2427

Comments

@fippo
Contributor

fippo commented May 24, 2017

http://w3c.github.io/webrtc-pc/#rtp-media-api

The RTP media API lets a web application send and receive MediaStreamTracks over a peer-to-peer connection.

This is odd. Tracks and streams are abstractions used by this API to describe how media is sent over a p2p connection.

Tracks, when added to a RTCPeerConnection, result in signaling; when this signaling is forwarded to a remote peer, it causes corresponding tracks to be created on the remote side.

This is even weirder. It is correct but boiled down a bit too much.
If an application adds a track to a peerconnection, it signals its intent to transmit media over that connection. This triggers the negotiation process described in 4.7, because we need to negotiate how this media is going to be transmitted over the network in a way that the other peer understands. As part of the negotiation, signaling messages are exchanged between the two applications, which makes the stream pop up at the remote side. No data is sent until the p2p connection is up, though.
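Roughly, as a sketch (not proposed spec text; `signaling` is a hypothetical message channel, `remotePc` the peer connection on the other side, and `track`/`stream` are assumed to come from getUserMedia):

```js
const pc = new RTCPeerConnection();

// Adding a track only signals the intent to transmit media over the connection...
pc.addTrack(track, stream);

// ...which triggers the negotiation process described in 4.7.
pc.onnegotiationneeded = async () => {
  await pc.setLocalDescription(await pc.createOffer());
  signaling.send({ description: pc.localDescription });
};

// On the remote side, applying the forwarded offer makes the stream pop up...
remotePc.ontrack = ({ track, streams }) => {
  // ...but no media actually flows until the p2p connection is up.
};
```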

Shall I try to come up with something along those lines?

@taylor-b
Contributor

I'd hold off on doing anything until we settle how tracks are signaled. See issue #1161.

@aboba
Contributor

aboba commented Jan 8, 2018

@taylor-b Now that Issue #1161 has been closed, can we move forward?

@aboba
Contributor

aboba commented Jan 8, 2018

@fippo Can you submit a PR?

@taylor-b
Contributor

Now that Issue #1161 has been closed, can we move forward?

There are still discussions ongoing about track ID signaling (rtcweb-wg/jsep#842), but I guess the introduction could ignore that aspect.

@fippo
Contributor Author

fippo commented Jan 10, 2018

Actually with the transceiver model

when this signaling is forwarded to a remote peer, it causes corresponding tracks to be created on the remote side.

is not even true anymore, as addTransceiver('audio') will generate an SDP that creates a track on the remote side. At least that is my understanding of transceivers and JSEP now (and I still don't think this is a good idea).
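Something like this, as a sketch of my reading:

```js
const pc = new RTCPeerConnection();
pc.addTransceiver('audio');   // default direction "sendrecv"; no local track attached
pc.createOffer().then((offer) => {
  // Forwarding this offer to the remote peer will fire ontrack there,
  // even though addTrack() was never called on this side.
});
```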

@taylor-b
Contributor

Is it just the default direction of "sendrecv" you have a problem with? Or more generally, do you not like that it's possible to have a remote track pop up on the receive side before one is set on the send side?

Anyway, I agree the "corresponding tracks" sort of language is misleading, since tracks don't map 1:1 between the send/receive side. It would be good if this were explained somewhere, though it's not exactly straightforward so I don't know if the intro is the best place.

@fippo
Contributor Author

fippo commented Jan 10, 2018

I think (vaguely; still trying to narrow it down) my concern is on the receiving side: it breaks the currently valid assumption that you can show the video element after onaddstream + ICE connected. While you can do stupid things in the old model (and/or current implementations), such as stopping a track without signaling or adding a stopped track, that is not done very often.

Granted, the spec currently does not give developers any indication that in ontrack the video stream is not ready either, so showing it after iceconnectionstatechange (or readyStateChange for video) is just the lore. Is the introduction the right place for that?
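For reference, the lore I mean looks roughly like this (sketch using the legacy onaddstream; `video` is a placeholder element):

```js
pc.onaddstream = ({ stream }) => {
  video.srcObject = stream;          // attach, but don't show yet
};
pc.oniceconnectionstatechange = () => {
  if (pc.iceConnectionState === 'connected') {
    video.style.display = 'block';   // only reveal once ICE is connected
  }
};
```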

@fippo
Contributor Author

fippo commented Jan 10, 2018

Figured it out, and all of a sudden transceivers make sense. Thanks @taylor-b and also @jan-ivar for listening to hours of rambling on IRC.

With transceivers, the lore for when to show a video changes. It moves from the pc's iceconnectionstate (which is only valid for the initial connection anyway) to the track's onunmute. This makes sense once one understands the model, but the spec does not explain it at all.

http://w3c.github.io/webrtc-pc/#rtcrtpreceiver-interface step 7 says the initial state of a receiver's track is muted. This makes sense because initially the track has not received data.
It also says "See the MediaStreamTrack section about how the muted attribute reflects if a MediaStreamTrack is receiving media data or not." This links to the terminology section, which refers to mediacapture-main. But mediacapture-main doesn't really explain things in a way that is applicable to remote tracks received over a network connection, which makes this not very useful.

So it seems we need

  1. a quick summary of what I said above for the intro section of the spec
  2. an example showing the onunmute (see the sketch below)
  3. more spec text in step 7 saying why the muted state is true initially and when that changes. Possibly this belongs in the description of the track attribute.
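For (2), a sketch of what the example could look like (`video` is a placeholder element):

```js
pc.ontrack = ({ track, streams: [stream] }) => {
  // The receiver's track starts out muted; wait for media before showing it.
  track.onunmute = () => {
    if (video.srcObject) {
      return;                  // already showing something
    }
    video.srcObject = stream;
  };
};
```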

https://jsfiddle.net/jkzn0y5x/1/ shows this in Firefox 59 with transceivers (and it adds a muted attribute).
Unfortunately, running the same fiddle in Chrome shows that its initial muted state is false, which means onunmute will never fire. That currently makes it impossible to show people "this is how you are supposed to write that code", because they will say "but it does not work in Chrome".

@aboba
Contributor

aboba commented Mar 15, 2018

@fippo Can you submit a PR?

@fippo
Contributor Author

fippo commented Mar 15, 2018

@alvestrand
Contributor

The PR in #1832 doesn't seem to do everything that's suggested in the discussion. @fippo are you still planning to submit a PR here?

@fippo fippo removed their assignment Jul 8, 2019
@fippo
Contributor Author

fippo commented Jul 8, 2019

I'm still abstaining from PRs, so I unassigned myself. This is still an issue, but it's editorial only, so feel free to close.

@alvestrand alvestrand added this to To do in WebRTC 1.0 to PR Aug 22, 2019
@henbos henbos removed this from To do in WebRTC 1.0 to PR Nov 7, 2019
@henbos
Contributor

henbos commented Dec 19, 2019

Let's update the existing example to include the onunmute. @jan-ivar can you make a PR?
