
Confusion over duplicates in setCodecPreferences? Why allow them? #2955

Closed
jan-ivar opened this issue Apr 16, 2024 · 10 comments · Fixed by #2958

Comments

@jan-ivar
Member

I spotted a dubious claim in the setCodecPreferences algorithm:

  • "4. Remove any duplicate values in codecs. Start at the back of the list such that the order of the codecs is maintained; the index of the first occurrence of a codec within the list is the same before and after this step."

The claim about indexes remaining the same seems wrong. E.g. with codecs [A, A₁, B], removing A₁ results in [A, B], necessarily altering the index of B.*

We should also link "duplicate" to the codec match algorithm.

But why allow duplicates in the first place? This seems to only invite trouble.


*) This is implementation language, e.g. C++, operating on Infra or WebIDL types, so JS concepts like sparse arrays shouldn't apply.

@aboba
Contributor

aboba commented Apr 16, 2024

I think it is trying to say that duplicates are removed from the end of the list, not the beginning. So removing duplicates from [A, B, C, A₁, B₁] results in [A, B, C], not [C, A₁, B₁]. But the index of first occurrence doesn't always remain the same. Removing duplicates from [A, B, B₁, A₁, C] results in [A, B, C], which changes the index of the first occurrence of C.
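
For illustration, here's a rough JavaScript sketch of that reading of step 4 (plain strings stand in for codec dictionaries, and simple equality stands in for the codec match check; this is not spec text):

// Walk from the back of the list, dropping any entry that duplicates an
// earlier one, so the first occurrence of each codec is kept.
function removeDuplicates(codecs) {
  for (let i = codecs.length - 1; i > 0; i--) {
    if (codecs.slice(0, i).includes(codecs[i])) {
      codecs.splice(i, 1);
    }
  }
  return codecs;
}

removeDuplicates(["A", "B", "C", "A", "B"]); // ["A", "B", "C"]
removeDuplicates(["A", "B", "B", "A", "C"]); // ["A", "B", "C"], and C moves from index 4 to 2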

Removing the duplicates seems preferable to throwing. I'd prefer not to link "duplicate" to the codec match algorithm because the match algorithm has issues:

  1. The advertised level-id/profile may not reflect what can actually be decoded (e.g. by FFmpeg). So sending a higher level-id/profile than the receiver indicated it could handle might not result in a decode failure.
  2. Because of problem 1, mismatches detected by the "match algorithm" may be imaginary.
  3. level-asymmetry-allowed is currently not taken into account in determining negotiation failure. Although differing values of level-asymmetry-allowed are treated as distinct profiles, there are some practical problems with declaring negotiation failure if the values don't match. What is sent is not coupled to what is received, due to congestion control. Also, limiting the sender based on a pessimistic receiver advertisement can be too restrictive. So in practice, differing settings of level-asymmetry-allowed don't currently lead to negotiation failure. Presumably this means that two profiles differing only in the value of level-asymmetry-allowed can interoperate...

@jan-ivar
Member Author

Agree we only need to remove "the index of the first occurrence of a codec within the list is the same before and after this step."

Removing the duplicates seems preferable to throwing.

Why? Where would duplicates come from?

@jan-ivar
Member Author

jan-ivar commented Apr 16, 2024

I'd prefer not to link "duplicate" to the codec match algorithm because the match algorithm has issues:

If we don't link to it then what does "duplicate" mean (these are dictionaries after all)? We need to fix those issues in the match algorithm then I think (or disallow duplicates).

In my mind, the starting point for this API seems to be:

const {codecs} = RTCRtpReceiver.getCapabilities("video");
// sort codecs by e.g. mimeTypes like "video/H264", "video/VP9"
videoTransceiver.setCodecPreferences(codecs);

And in this model, any duplicate would appear to be a programming error. When would it not be?

@aboba
Contributor

aboba commented Apr 17, 2024

@jan-ivar Does "duplicate" imply a match on codec, level-id/profile and level-asymmetry-allowed? Since distinct values result in treatment as a separate format, there is some logic to that approach. But requiring an exact match on each parameter in a universal "match algorithm" is problematic if level-asymmetry-allowed isn't taken into account in other matching applications, such as determining negotiation failures.

@jan-ivar
Member Author

In my mind, the following needs to hold regardless of domain details:

  1. the match algorithm needs to distinguish all entries returned from RTCRtpReceiver.getCapabilities()
  2. IOW it by definition contains no "duplicates"
  3. The following must be valid/succeed
videoTransceiver.setCodecPreferences(RTCRtpReceiver.getCapabilities("video").codecs);
audioTransceiver.setCodecPreferences(RTCRtpReceiver.getCapabilities("audio").codecs);

Those seem to be the APIs the match algorithm is in service of.

A quick dump of getCapabilities() reveals:

  {
    "clockRate": 90000,
    "mimeType": "video/H264",
    "sdpFmtpLine": "level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=42001f"
  },
  {
    "clockRate": 90000,
    "mimeType": "video/H264",
    "sdpFmtpLine": "level-asymmetry-allowed=1;packetization-mode=0;profile-level-id=42001f"
  },

This suggests they are distinct codecs to me. Does that match your understanding?
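
For what it's worth, even a naive field-by-field comparison (a sketch of my own, not the spec's codec dictionary match algorithm) treats them as distinct, since their sdpFmtpLine strings differ in packetization-mode:

// Naive comparison of two RTCRtpCodec-style dictionaries, just to show
// the intuition; the normative rules live in the spec's match algorithm.
function looksLikeSameCodec(a, b) {
  return a.mimeType.toLowerCase() === b.mimeType.toLowerCase() &&
         a.clockRate === b.clockRate &&
         a.channels === b.channels &&
         a.sdpFmtpLine === b.sdpFmtpLine;
}
// Applied to the two H264 entries above, this returns false because only
// packetization-mode differs inside sdpFmtpLine.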

@jan-ivar
Member Author

Oh I see, I appear to have left out another possible input source for setCodecPreferences:

transceiver.setCodecPreferences(receiver.getParameters().codecs);

Is that right? If so, please see my questions in #2956.

@jan-ivar
Member Author

Since there are multiple input sources, I also see where duplicates may come from, so I concede that point.
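
For example (purely hypothetical code, just to show how a duplicate could slip in):

// An app that prefers the currently negotiated codecs but appends the
// full capability list as a fallback would list some codecs twice.
const negotiated = videoTransceiver.receiver.getParameters().codecs;
const {codecs: supported} = RTCRtpReceiver.getCapabilities("video");
videoTransceiver.setCodecPreferences([...negotiated, ...supported]);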

@aboba
Contributor

aboba commented Apr 17, 2024

I agree that

  {
    "clockRate": 90000,
    "mimeType": "video/H264",
    "sdpFmtpLine": "level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=42001f"
  },
  {
    "clockRate": 90000,
    "mimeType": "video/H264",
    "sdpFmtpLine": "level-asymmetry-allowed=1;packetization-mode=0;profile-level-id=42001f"
  },

are distinct codecs, because packetization-mode differs between them. What I am not sure about is when these profiles are considered to "match":

  {
    "clockRate": 90000,
    "mimeType": "video/H264",
    "sdpFmtpLine": "level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=42001f"
  },
  {
    "clockRate": 90000,
    "mimeType": "video/H264",
    "sdpFmtpLine": "level-asymmetry-allowed=0;packetization-mode=1;profile-level-id=42001f"
  },

@jan-ivar
Member Author

But requiring an exact match on each parameter in a universal "match algorithm" is problematic if level-asymmetry-allowed isn't taken into account in other matching applications, such as determining negotiation failures.

This seems like a naming problem, since algorithms need to be explicitly invoked and aren't universal. Let's just call it the "codec dictionary match" algorithm for now.

The codec dictionary match algorithm was added in #2847 because codecs aren't exposed as platform objects, and thus A == previousReadOfA fails in both JS and C++ (the algorithm only solves the C++ side, btw; folks in #2845 didn't want to go the interface route. E.g. the interface route would have allowed us to modify properties like A.sdpFmtpLine over time without affecting the identity of A, if that is interesting. Just mentioning it. It was dismissed for legit compat concerns).
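
(A tiny illustration of that identity problem, using the receiver from the snippet above:)

// Codec dictionaries are plain objects, so two reads of the "same" codec
// never compare equal by reference, which is what the match algorithm works around.
const [a] = receiver.getParameters().codecs;
const [b] = receiver.getParameters().codecs;
console.log(a === b);                   // false, even though both describe one codec
console.log(a.mimeType === b.mimeType); // true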

What I am not sure about is when these profiles are considered to "match":

The spec says the two dictionaries are distinct codec inputs to setCodecPreferences(), which makes sense to me, since their also being distinct in either getCapabilities or getParameters is a precondition.

I don't read anything in the spec that says the "codec dictionary match" algorithm affects negotiation, as it does not appear to be called from there AFAICT. If there is implicit language somewhere that suggests it, we should identify and clarify it.

jan-ivar self-assigned this Apr 17, 2024
@jan-ivar
Member Author

Happy to discuss this at the next meeting, but since I've conceded that we probably need to tolerate duplicates given the two possible input sources, I'll also try to put up a PR to see if clarifying things is all we need.
