RTX APT capability? #444
Comments
But again: These things happen because we are considering RTX a codec rather than a "real-codec feature".
@ibc we could reverse this and make each codec announce its corresponding RTX payload type, instead of having apt inside the RTX codec and announcing RTX as a codec. So CodecCapability / CodecParameters would have an (optional) rtxPayloadType value, and we would remove the RTX codec entirely.
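A sketch of the two shapes (the `rtxPayloadType` name follows the proposal above; everything else is illustrative, not from any published spec):

```javascript
// Current style: RTX is a separate codec entry that cross-references its
// media codec through the "apt" (associated payload type) parameter.
const currentCodecs = [
  { name: "VP8", payloadType: 100, clockRate: 90000 },
  { name: "rtx", payloadType: 96, clockRate: 90000, parameters: { apt: 100 } },
  { name: "H264", payloadType: 102, clockRate: 90000 },
  // Note: no RTX entry for H264 -- RTX support per codec stays implicit.
];

// Proposed style: each media codec optionally announces its own RTX payload
// type, and the standalone "rtx" codec entry disappears.
function toRtxAsFeature(codecs) {
  const rtxByApt = new Map();
  for (const c of codecs) {
    if (c.name.toLowerCase() === "rtx") rtxByApt.set(c.parameters.apt, c.payloadType);
  }
  return codecs
    .filter((c) => c.name.toLowerCase() !== "rtx")
    .map((c) =>
      rtxByApt.has(c.payloadType)
        ? { ...c, rtxPayloadType: rtxByApt.get(c.payloadType) }
        : { ...c }
    );
}
```

Codecs without an RTX counterpart simply lack the optional field, which is how "RTX not offered for this codec" would be expressed.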
That's exactly what I meant by "real-codec feature" :) And we can do the same with FEC.
Not sure if I understand the problem. RTX is codec-unaware, so either it is supported on RTPSender/RTPReceiver or not. So in terms of capabilities, if RTX is supported, it is supported for all codecs. Another thing is whether you want to use it for some specific codecs or not (associating an SSRC).
Yep. The point is that, in case the browser supports RTX, every real codec retrieved via the capabilities could announce its associated RTX payload type.
Indeed
In theory RTX is universal for all codecs, but in practice it's not always offered for all codecs. So there needs to be a way to make it an optional property if we do it this way.
Just don't set the corresponding rtxPayloadType value.
@ibc that's all true, but I'm being specific about that because, if we are going to issue any "fixes", we need to be 100% clear about everything, down to the dotted i's and crossed t's.
@pthatcher do you have an opinion on this issue? I think this is applicable to FEC too.
Related WebRTC 1.0 Issue: w3c/webrtc-pc#548
Just to be clear, I'm in favour of this change to make RTX and FEC options under the "main" real codecs, but I think we need to get buy-in for such a change. I find the implicit cross references between codecs and their RTX/FEC counterparts in the codec list very problematic and cumbersome (e.g. making sure the clock rate matches, settings match, ambiguity where multiple matches are possible, RED listed with or without payload types, multiple REDs with ambiguous choices, etc.). I further think this is applicable to both RTX and FEC. Plus, it's not just an RTX preferred payload type; RTX also has other parameters (e.g. rtx-time). For example:
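The kind of cross referencing being described can be sketched as follows (field names and shapes are illustrative, not taken from the ORTC spec): to use RTX for a media codec, an application has to scan the codec list for an "rtx" entry whose apt points back at it, then verify the clock rates actually match, and cope with zero or multiple matches.

```javascript
// Resolve the RTX counterpart of a media codec from a flat codec list.
// Returns null if the engine offers no RTX for that codec, and throws on
// the ambiguous/mismatched cases the comment above complains about.
function findRtxFor(codecs, mediaPayloadType) {
  const media = codecs.find((c) => c.payloadType === mediaPayloadType);
  const matches = codecs.filter(
    (c) => c.name.toLowerCase() === "rtx" && c.parameters.apt === mediaPayloadType
  );
  if (matches.length === 0) return null; // no RTX offered for this codec
  if (matches.length > 1) throw new Error("ambiguous RTX mapping");
  if (matches[0].clockRate !== media.clockRate) throw new Error("clock rate mismatch");
  return matches[0];
}

const codecs = [
  { name: "VP8", payloadType: 100, clockRate: 90000 },
  { name: "rtx", payloadType: 96, clockRate: 90000, parameters: { apt: 100 } },
  { name: "H264", payloadType: 102, clockRate: 90000 },
  { name: "opus", payloadType: 111, clockRate: 48000 },
  { name: "rtx", payloadType: 97, clockRate: 48000, parameters: { apt: 111 } },
];
```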
@robin-raymond IMHO that's exactly the way to go. Then we can also refactor the current …
You need to support simulcast too.. :)
What I don't understand is why we don't set the codec inside the encoding:
Then we don't need to reference the payload type anymore, as it would be inside the codec parameters, right?
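The two layouts can be sketched as follows (hypothetical shapes, not spec dictionaries): today an encoding references a codec by payload type; the suggestion is to nest the codec parameters directly inside the encoding so there is no cross reference to resolve.

```javascript
// Current layout: encodings reference codecs by payload type.
const parametersWithCrossRef = {
  codecs: [{ name: "VP8", payloadType: 100, clockRate: 90000 }],
  encodings: [{ ssrc: 1111, codecPayloadType: 100 }],
};

// Suggested layout: each encoding carries its codec parameters directly.
const parametersNested = {
  encodings: [
    { ssrc: 1111, codec: { name: "VP8", payloadType: 100, clockRate: 90000 } },
  ],
};

// Resolving the codec for an encoding under either layout: the nested
// form is a direct lookup, the cross-referenced form needs a search.
function codecForEncoding(params, i) {
  const enc = params.encodings[i];
  if (enc.codec) return enc.codec;
  return params.codecs.find((c) => c.payloadType === enc.codecPayloadType);
}
```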
@murillo128 true. And that would lead to a better layered structure ("encodings" is the transport of "medias"). No more cross references.
@murillo128 @ibc if you put the codecs inside the encoding you lose a feature. Right now you can define the codec list and have a "latch all" where a stream can match by payload type alone. This also allows a change of codec to occur in simulcast without problem if you switch from one codec to another.
None of them seems a feature to me, but a bad side effect. "Catch all" means that you need to find the codec, find the payload type, … Also, if I am not mistaken, changing codec in simulcast requires changing …
@murillo128 no, the encoding list for a catch all is empty. This allows a receiver to just name the codecs and start receiving without specifying anything about the encoding, and any stream coming in will automatically latch by payload type. Plus a changed codec in a simulcast stream will be handled by the same mechanism. It's basically a way to auto-fill new encodings, and it's not a side effect or point of failure: it was a design consideration and one of the reasons why this choice was made originally.
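The latching behaviour can be sketched like this (illustrative shapes, not the actual receiver implementation): with an empty encodings list, the receiver creates an encoding on the fly for each new SSRC, using the packet's payload type to pick the codec.

```javascript
// A receiver configured with only a codec list (no encodings) that
// auto-fills an encoding per SSRC, latching streams by payload type.
function makeLatchingReceiver(codecs) {
  const encodings = new Map(); // ssrc -> { ssrc, codec }
  return {
    encodings,
    receive(packet) {
      let enc = encodings.get(packet.ssrc);
      if (!enc) {
        const codec = codecs.find((c) => c.payloadType === packet.payloadType);
        if (!codec) return null; // unknown payload type: drop the packet
        enc = { ssrc: packet.ssrc, codec };
        encodings.set(packet.ssrc, enc); // the auto-filled encoding
      }
      return enc;
    },
  };
}
```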
@robin-raymond I'm afraid you are usually talking about the syntax of … May we please talk about the syntax of …? We need an API that lets the developer check the browser's capabilities, and another API that provides the developer with a full …
I was referring to their usage in rtpsender (and I think @ibc was too). That is what happens when having the same objects for encoding and decoding.
Yes. I always mean the stuff that the developer must pass to the …
For example: if ORTC supported H264 SVC (which AFAIK means multi-stream with different SSRCs and probably different PTs), how is the user supposed to provide …? Just letting the browser "fill SSRC values here and there" at the time RTP is sent is not the solution.
Aren't we trying to address too many different topics in a single issue? ;)
Yep... one thing leads to the other...
This thread isn't just about the sender... The receiver is also involved. Routing via payload type alone was a desired feature in the design.
I'm not worried about how the receiving ORTC browser will route incoming RTP (that is its business). I'm worried about how the sender browser, the sender user, or an SFU will generate those RTP parameters. BTW: routing just via PT is not valid if the browser receives multiple streams from multiple peers over the same transport (SFU scenario).
@ibc the encoding params in combination with the codec work just fine in that scenario. I'm not saying the cross referencing is "the best way possible", but it does allow to describe things accurately and it does work. The original bug filed was regarding the cross referencing between "media codecs" and "feature" codecs, and between "feature" codecs and encodings; not "encodings" cross referencing standard "media codecs", which I view as a different concern. As for routing via PT and an SFU: if you expect packets to come in on the same port and you are multiplexing multiple ICE usernameFrag/passwords, then you have to have special demux rules to handle multiple parties. ORTC does not expect multiple streams by different parties on the same physical socket address. If you are already demuxing multiple parties prior to coming into any receiver object (which you'd have to do due to DTLS anyway), then PT is a valid routing rule even for an SFU.
Shouldn't we move the encoding parameter change to a different issue? The …
It already supports this... I'm referring to your statement: "Routing just via PT is not valid if the browser receives multiple streams from multiple peers over the same transport (SFU scenario)."
That statement implies something that is not correct given the context. Defining just the list of codecs with payload types is perfectly legal and doable for an SFU as well as a client. In SDP the codecs are defined by the m line, so all streams within the same m line have the same meaning for their payload type. Thus routing by payload type is perfectly legal, and the meaning of each stream can be understood by the payload type. Obviously the SSRC is used to demux multiple streams. My point is that simply filling in a list of codecs and having the streams separated is entirely doable in both a client and an SFU because of the shared meaning for the payload type. If you do something funky where there are multiple meanings for the same payload type on the same transport then things go funny, and my point is that this should not be possible anyway the way things are defined in the API. So if you have a receiver with RTP parameters carrying just a list of codecs, the receiver is perfectly capable of creating dynamic encodings on the fly based upon the codec payload type. Obviously the SSRC plays a role in demuxing here between streams.
And to be further clear, it's possible for multiple receivers to be attached to the same transport with different PT meanings. But you have to have routing definitions in those cases to remove ambiguity, or you could cause mismatches. If you had VP8 as PT 101 and H264 as PT 101 on the same transport, with nothing other than the payload type defined for the routing, then things go a bit funky. If you have an SSRC or a muxid or something else that clears up the usage then the routing is fine. The context here is a single receiver having a list of codecs, where the meaning of the codecs is thus shared across all encodings, or possibly no encodings listed and thus auto-interpretation based on payload type alone. Again, I think the focus of this bug needs to be the cross referencing between "feature" codecs and media codecs, and not the cross referencing between the codec list and encoding parameters. If this doesn't clear things up then please file a more appropriate bug about that topic.
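The disambiguation requirement can be sketched as follows (illustrative shapes): two receivers on one transport both use PT 101 for different codecs, so routing must be pinned by SSRC (or a muxid); payload type alone only works when it is unambiguous.

```javascript
// Route a packet to one of several receivers sharing a transport.
// SSRC routing wins; payload type is only a fallback when exactly one
// receiver claims that PT, otherwise the packet is ambiguous.
function routePacket(receivers, packet) {
  const bySsrc = receivers.filter((r) => r.ssrcs.includes(packet.ssrc));
  if (bySsrc.length === 1) return bySsrc[0];
  const byPt = receivers.filter((r) =>
    r.codecs.some((c) => c.payloadType === packet.payloadType)
  );
  return byPt.length === 1 ? byPt[0] : null; // ambiguous or unknown
}

const vp8Receiver = { ssrcs: [1111], codecs: [{ name: "VP8", payloadType: 101 }] };
const h264Receiver = { ssrcs: [2222], codecs: [{ name: "H264", payloadType: 101 }] };
```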
Thanks for the clarification, I was a bit afraid :)
(Let's assume the "receiver" is the SFU)
Imagine Alice is an ORTC browser that does not signal SSRCs, and Bob a WebRTC 1.0 browser that does not support …. A "workaround" would be: the SFU receives the list of codecs of Alice and, based on them, it assumes the exact number of SSRCs Alice will send. So the SFU can create an SDP offer for Bob with those SSRCs, and should rewrite the SSRC field of the RTP packets received from Alice. That would work... it is just that I don't think it is possible to know the exact number of SSRCs we are going to receive by just having the list of remote codecs.
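The rewrite step of that workaround can be sketched as follows: the SSRC occupies bytes 8-11 of the fixed RTP header (RFC 3550), big-endian, so the SFU can overwrite it in place before forwarding to Bob.

```javascript
// Rewrite the SSRC of a raw RTP packet (a Node Buffer) and return a
// copy, leaving the original packet untouched.
function rewriteSsrc(rtpPacket, newSsrc) {
  const out = Buffer.from(rtpPacket); // copy; don't mutate the input
  out.writeUInt32BE(newSsrc >>> 0, 8); // SSRC lives at byte offset 8
  return out;
}
```

(If SRTP is in use, rewriting the SSRC also invalidates the authentication tag, so a real SFU has to re-protect the packet; that part is omitted here.)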
Sure. The problem is that, clearly, ORTC invites you not to signal the sending SSRC.
Agreed.
In order to close the original issue, I would like to propose adding a new dictionary for the RTX parameters that would be added to each RTCRtpCodecParameters that supports RTX:
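A sketch of what such a per-codec RTX dictionary could look like (field names are guesses based on the parameters mentioned in this thread, namely a payload type, an SSRC, and rtxtime; they are not the actual proposal):

```javascript
// Hypothetical shape: a media codec entry carrying its own RTX
// sub-dictionary instead of cross-referencing a separate "rtx" codec.
const exampleCodecWithRtx = {
  name: "VP8",
  payloadType: 100,
  clockRate: 90000,
  rtx: {
    payloadType: 96, // PT used for the retransmission stream
    ssrc: 2222,      // SSRC of the retransmission stream (sender side)
    rtxTime: 3000,   // ms the sender keeps packets available for RTX
  },
};
```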
Sorry for being late to the conversation. I have a few thoughts, and I'm not sure how redundant or different they are, since it's such a long conversation and it's hard to tell what the current state is.
Right. The problem is that, somehow, we need a proper way to tell the browser whether we want to send RTX or not and, if so, we need a proper way to retrieve the browser-chosen parameters regarding RTX (basically the RTX PT associated with each sending media codec), and also a proper API to override them.
I disagree. We should not design an SDP-based API. If so, I don't understand the purpose of ORTC.
Same as above: the user should be able to retrieve the browser-chosen PT for FEC, and should be able to override it if desired (this is optional as long as they can retrieve it before media is sent).
@pthatcher - Thanks for the feedback on this one.
Yup.
I think we are in agreement, and yes, being out of sync needlessly is bad. Perhaps we should instead take this to the 1.0 group and describe the annoying cross referencing of "feature" codecs (which is actually not needed, and causes issues when combined with clock rates, channels, etc., as described at the top of this bug report). If they agree and fix it, then we can all agree and fix it.
LOL, "not dumb like RTX". True, but the value could be a PT within the context of the codec just like RTX, except that the same PT value could be set across multiple codecs. Just a nice little clean-up to avoid cross referencing codecs needlessly... I've already done the cross referencing logic in my implementation, but I'd happily rip it out :)
Take into account that RTX not only has a payload type but also an rtxtime. Also, if we remove it as a codec, we need a way to set the RTX SSRC.
Long thread above, so I'm including where it was described above how this could be done, which @ibc then redefined a bit more explicitly:
To summarize: RTX (and maybe FEC too) cause ambiguities as codecs and may be better served as aspects of existing codecs rather than fully separate and independent codecs. Since this would break 1.0 conceptually, I'm going to label it as such. I think an appropriate explanation of the issue should be brought to the 1.0 group to see if they have interest in addressing the problem; then we can have a combined effort to resolve it in both groups.
This issue requires a change to the WebRTC 1.0 object model. An Issue has been filed against WebRTC 1.0: w3c/webrtc-pc#548 |
Closing; this needs to be taken up in the WebRTC Working Group, as per my recent message to the mailing list:
Since we don't have a capability for RTX APT, a single RTX codec gets listed, allocating a single preferredPayloadType. The trouble is that a single payload type is not meaningful, because an RTX codec is needed per media codec that supports RTX. Likewise, there's no guarantee the engine supports RTX for each media codec.
Currently a developer would need to disregard the listed RTX codec and allocate their own payload type per RTX codec, per media codec they wish to use with RTX. This is certainly doable but problematic: there is currently no way for the developer to know which media codecs support RTX and which do not.
I suspect this issue may exist for RED + ULPFEC, and possibly FlexFEC too. Although there's an advantage that each of those only needs to allocate a single codec per clock rate (as the payload contains the original media codec), whereas that's not true of RTX payloads.
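The hand-allocation workaround described above can be sketched like this (payload-type numbering and shapes are illustrative; whether the engine actually supports RTX for each codec remains unknowable from the capabilities):

```javascript
// Ignore the single advertised RTX codec and hand-allocate one RTX
// payload type per media codec the application wants to protect,
// pairing each RTX entry with its media codec via "apt".
function allocateRtxCodecs(mediaCodecs, firstDynamicPt = 96) {
  let nextPt = firstDynamicPt;
  const used = new Set(mediaCodecs.map((c) => c.payloadType));
  const rtxCodecs = [];
  for (const media of mediaCodecs) {
    while (used.has(nextPt)) nextPt++; // skip PTs already taken
    used.add(nextPt);
    rtxCodecs.push({
      name: "rtx",
      payloadType: nextPt,
      clockRate: media.clockRate, // must match the media codec
      parameters: { apt: media.payloadType },
    });
  }
  return rtxCodecs;
}
```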