
Transparently handle incoming distinct-SSRC RTX packets #2592

Merged (3 commits, Sep 29, 2023)

Conversation

adriancable
Contributor

Support incoming RTX packets on a separate RTP stream when a video/rtx track is used. They are handled transparently by TrackRemote.Read(), with some additional attributes (rtx_ssrc, rtx_payload_type, rtx_sequence_number) returned when the packet is an RTX packet, so the caller can distinguish RTX packets if needed.

Replaces PR #2519.
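For background, RFC 4588 places the original sequence number (OSN) of the retransmitted packet in the first two bytes of the RTX payload, which is why a distinct-SSRC RTX packet cannot simply be passed through to the caller unchanged. A minimal sketch of the unwrapping (the helper name is illustrative, not pion API):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// unwrapRTXPayload is a hypothetical helper (not part of the PR) illustrating
// the RFC 4588 RTX payload format: the first two bytes carry the original
// sequence number (OSN) of the retransmitted packet, followed by the
// original payload.
func unwrapRTXPayload(rtxPayload []byte) (osn uint16, origPayload []byte, err error) {
	if len(rtxPayload) < 2 {
		return 0, nil, fmt.Errorf("RTX payload too short: %d bytes", len(rtxPayload))
	}
	return binary.BigEndian.Uint16(rtxPayload[:2]), rtxPayload[2:], nil
}

func main() {
	// OSN 0x1234 wrapped around a two-byte original payload.
	osn, payload, err := unwrapRTXPayload([]byte{0x12, 0x34, 0xde, 0xad})
	fmt.Println(osn, payload, err) // 4660 [222 173] <nil>
}
```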

@adriancable
Contributor Author

Per the discussion in #2519, this implements the 'transparent' approach to handling RTX packets on a separate RTP stream, as preferred by @jech and @kcaffrey. Please take a look, feedback welcome.

@codecov

codecov bot commented Sep 28, 2023

Codecov Report

Attention: 51 lines in your changes are missing coverage. Please review.

Files Coverage Δ
constants.go 100.00% <ø> (ø)
track_remote.go 80.95% <60.86%> (-5.42%) ⬇️
rtpreceiver.go 59.20% <10.63%> (-7.71%) ⬇️

... and 2 files with indirect coverage changes


@jech
Contributor

jech commented Sep 28, 2023

So this is rewriting the packet transparently before it reaches the application? Perhaps it would be better to provide a utility function that does the rewriting, and let the application do the rewriting if desired?

@adriancable
Contributor Author

adriancable commented Sep 28, 2023

@jech - the argument is that the reception of RTX packets should 'by default' be transparent to the caller. This is already the case for same-SSRC RTX, so this PR extends that principle to distinct-SSRC RTX. (Otherwise we could just implement a separate ReadRTX() function like the original PR.) The way this PR is implemented, everything 'just works' as far as the caller is concerned if there's an RTX track, with no code changes needed. I think that is nice.

The rewriting doesn't lose any information, since the 'overwritten' info is made available to the caller in the attributes if desired. So I believe that the rewriting approach has no downsides (except a little overhead, which should be small).

@kcaffrey left a comment

It seems good to me, but I only have passing familiarity with this part of the code base, so it's probably better to wait for someone else to review if you want more confidence!

(Review comments on track_remote.go and rtpreceiver.go were marked resolved.)
@jech
Contributor

jech commented Sep 29, 2023

the argument is that the reception of RTX packets should 'by default' be transparent to the caller.

I understand the purpose. I'd like more information on the cost.

  1. Are we discarding any data when rewriting the packet? Is any of the data we discard impossible to recover?
  2. How many allocations are being done while rewriting the packet? If you give me the raw packet, I can copy the data into my data structures without incurring any extra allocations.

As a general rule, I tend to be cautious about library code doing too much magic behind my back. So if you're arguing in favour of doing magic, you need to justify that the magic is not prohibitively expensive.

@adriancable
Contributor Author

adriancable commented Sep 29, 2023

@jech - the approach taken by this PR follows your own recommendation from the original PR, where you said e.g.

What about the following API: the existing Read/ReadRTP functions return either an original or an RTX packet, indifferently;

we need to retrofit the existing Read functions to return RTX packets when available, as this is the only way to avoid modifying all existing clients.

Achieving this requires some 'background magic', since as you know RTX packets have a different payload structure. The costs of the magic (which is only needed for RTX packets; there is no cost for non-RTX packets) were analysed by @kcaffrey and myself, and there are comments with the numbers in @kcaffrey's review notes here, so you can take a look there. It is almost certainly possible to optimize further, e.g. to reduce allocations by doing all the work within a single buffer. I am very supportive of this, but we don't need to achieve the optimum in the first shot. Right now we don't handle RTX on a separate RTP stream at all - this is a big functional gap, and we gain a lot by closing it in a reasonable (and transparent) way, which this PR achieves.

To your first question, we're not discarding any data when rewriting the packet. All the data that's moved out is made available via the attributes returned by the Read() call.

@adriancable
Contributor Author

@jech - I've rewritten the logic to handle RTX packets allocationless, i.e. there's no extra unmarshal/marshal and everything is done inline in the returned slice. Hope this addresses your concern.
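The allocationless approach can be pictured as below. This is an illustrative sketch under simplifying assumptions (fixed 12-byte RTP header, no CSRCs or header extensions), not the code from the PR: the sequence number, payload type, and SSRC are restored in place, and the payload is shifted left over the OSN, all within the one slice.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// rewriteRTXInPlace is an illustrative sketch (not pion's actual code) of an
// allocationless rewrite: it turns a raw RTX packet into the original RTP
// packet within the same buffer. For simplicity it assumes a fixed 12-byte
// header with no CSRCs or extensions.
func rewriteRTXInPlace(buf []byte, origSSRC uint32, origPT uint8) ([]byte, error) {
	const hdrLen = 12
	if len(buf) < hdrLen+2 {
		return nil, fmt.Errorf("packet too short: %d bytes", len(buf))
	}
	// Restore the original payload type (keeping the marker bit untouched).
	buf[1] = buf[1]&0x80 | origPT&0x7f
	// The first two payload bytes are the original sequence number (OSN);
	// move it into the header's sequence-number field.
	copy(buf[2:4], buf[hdrLen:hdrLen+2])
	// Restore the original SSRC.
	binary.BigEndian.PutUint32(buf[8:12], origSSRC)
	// Shift the payload left over the OSN; everything stays in one slice.
	copy(buf[hdrLen:], buf[hdrLen+2:])
	return buf[:len(buf)-2], nil
}

func main() {
	pkt := []byte{
		0x80, 97, 0x00, 0x01, // V=2, PT=97 (RTX), RTX sequence number 1
		0x00, 0x00, 0x00, 0x00, // timestamp
		0x00, 0x00, 0x00, 0x02, // RTX SSRC
		0x12, 0x34, // OSN of the retransmitted packet
		0xaa, 0xbb, // original payload
	}
	out, err := rewriteRTXInPlace(pkt, 0x00000001, 96)
	fmt.Println(out, err) // [128 96 18 52 0 0 0 0 0 0 0 1 170 187] <nil>
}
```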

@Sean-Der
Member

This is amazing @adriancable thank you so much for doing this!

I am in support of merging this into master when you get tests/lints to pass. This weekend I will cover it with a bunch of tests.

@adriancable adriancable merged commit e570531 into pion:master Sep 29, 2023
17 of 18 checks passed
@cnderrauber
Member

There might be a data race around the allocationless buffer: when TrackRemote.Read reads an RTX packet from the channel and copies the data out of the buffer, the buffer can be written at the same time by the receiveForRTX goroutine.
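One conventional way to avoid this kind of race (a sketch only, not necessarily the fix that was merged) is to copy the packet into a freshly allocated slice before handing it across the goroutine boundary, so the receive loop can safely reuse its read buffer:

```go
package main

import "fmt"

// handoff copies a packet into a freshly allocated slice before sending it
// over a channel, so the goroutine that owns readBuf may overwrite it on the
// next read without racing against the consumer. Function name and shape are
// illustrative, not pion's actual code.
func handoff(ch chan<- []byte, readBuf []byte, n int) {
	pkt := make([]byte, n)
	copy(pkt, readBuf[:n])
	ch <- pkt // the receiver now owns pkt; readBuf may be reused freely
}

func main() {
	ch := make(chan []byte, 1)
	readBuf := []byte{1, 2, 3, 4}
	handoff(ch, readBuf, 4)
	readBuf[0] = 99 // simulate the next read overwriting the buffer
	fmt.Println(<-ch) // [1 2 3 4]
}
```

The copy costs one allocation per RTX packet, which trades a little of the allocationless design back for safety; an alternative is a pool of buffers returned after Read completes.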

@adriancable
Contributor Author

@cnderrauber - thanks for your fix!
