
Conversation

@ggarber (Contributor) commented Dec 1, 2022

When there is packet loss on the client -> SFU path (the producer) and that SAME packet loss is then also detected on the SFU -> client path (the consumer), the bandwidth estimation should not be affected, because that loss is not caused by congestion on the downlink path.

With the current TransportCongestionControlClient configuration used in mediasoup, the packet loss reported in the RTCP Receiver Reports (RRs) is used to detect congestion, producing big bandwidth estimation drops even when the networks of the rest of the users are perfectly fine.

This is configurable in the libwebrtc GCC implementation with the feedback_only flag. Having "feedback_only = false" makes sense in a browser, but I don't think it is reasonable for an SFU. An SFU should detect packet loss only from the transport-wide congestion control RTCP feedback messages and use RR messages just to estimate the RTT.
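For reference, a minimal sketch of what this amounts to, assuming libwebrtc's goog_cc factory API (GoogCcFactoryConfig and GoogCcNetworkControllerFactory from api/transport/goog_cc_factory.h); the actual construction site inside mediasoup's TransportCongestionControlClient may look different:

```cpp
// Sketch only: a simplified view of how a GCC network controller factory is
// built with libwebrtc's goog_cc API, not the literal mediasoup worker code.
#include <api/transport/goog_cc_factory.h>

#include <memory>
#include <utility>

std::unique_ptr<webrtc::NetworkControllerFactoryInterface> CreateControllerFactory()
{
  webrtc::GoogCcFactoryConfig config;

  // feedback_only = true: congestion (delay and loss) is inferred only from
  // transport-wide-cc feedback packets, i.e. from the SFU -> client path.
  // The loss fraction reported in RTCP Receiver Reports no longer feeds the
  // loss-based estimator; RRs remain useful for RTT estimation.
  config.feedback_only = true;

  return std::make_unique<webrtc::GoogCcNetworkControllerFactory>(std::move(config));
}
```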

I'm attaching graphs from a real user session where you can see how the bandwidth estimation for almost EVERYBODY went down quickly just because ONE sender was having some packet loss on its uplink. I hope this helps to understand the issue.

[Screenshot: bandwidth estimation graph, 2022-12-01 23:02:53]

[Screenshot: bandwidth estimation graph, 2022-12-01 23:02:59]

@ibc (Member) commented Dec 1, 2022

There is an ongoing BWE refactor in a separate branch. Please comment or target that PR instead.

@ggarber (Contributor, Author) commented Dec 1, 2022

> There is an ongoing BWE refactor in a separate branch. Please comment or target that PR instead.

Is that going to be merged soon, and is it stable enough?

This looks like a huge issue, to be honest, so you may want to land this fix independently of the other one. In any case we will use a fork with this patch for now; we are providing it so that other people can also benefit from a quick fix and dramatically improve the video quality on common user networks.

Thank you!

@ggarber (Contributor, Author) commented Dec 1, 2022

Also, looking at the other PR, it appears completely orthogonal to this change. That PR doesn't touch this setting or TransportCongestionControlClient at all: https://github.com/versatica/mediasoup/pull/922/files#diff-503258e0a413224c755f234372fcaac2bdced8b81c5547936f4783758855da94L42

@ibc (Member) commented Dec 1, 2022

> Also, looking at the other PR, it appears completely orthogonal to this change. That PR doesn't touch this setting or TransportCongestionControlClient at all

That's exactly why I asked you to comment about this in that PR: so it can be taken into consideration there.

@ggarber (Contributor, Author) commented Dec 2, 2022

FYI, some more info after further testing.

Test scenario: a session with 10 participants, where 1 of them has an uplink with limited bandwidth and packet loss (a typical Wi-Fi network).

This is the bandwidth estimation with the current version of mediasoup:

[Screenshot: bandwidth estimation with the current version of mediasoup, 2022-12-02 21:33:27]

This is the bandwidth estimation with this PR applied:

[Screenshot: bandwidth estimation with this PR applied, 2022-12-02 21:34:59]

@ibc (Member) commented Jan 16, 2023

@sarumjanuch this PR is being merged into current v3. Please take it into account in your branch.

ibc merged commit 064d0a6 into versatica:v3 on Jan 16, 2023
@ibc (Member) commented Jan 16, 2023

Released in 3.11.5. Thanks.

@ibc (Member) commented Jan 16, 2023

> @sarumjanuch this PR is being merged into current v3. Please take it into account in your branch.

Please ignore; I see it's already done in your branch: https://github.com/versatica/mediasoup/pull/922/files#diff-7970ea9cf2c8d0acf7c47c4f86ea6cb8cb753f91bc2b661c467f3fed32a418c7

