
More fine-grained filtering of NACKs after a key frame #2159

Merged
merged 2 commits into master from raja_no_filter_rtx on Oct 19, 2023

Conversation

boks1971 (Contributor)

Some applications send periodic key frames. With NACKs filtered after a key frame, a packet lost just before the key frame is never retransmitted, yet the decoder may still wait for it (jitter buffer, play-out time) and stutter.

The idea behind dropping NACKs after a key frame was to have another knob for throttling retransmission bit rate. But retransmissions are already throttled by spacing them out and by capping retransmissions per sequence number. The key frame filter adds yet more throttling, but it hurts some applications, so this change disables filtering NACKs after a key frame.
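
For readers unfamiliar with the existing throttles, this is roughly what "spacing out retransmissions" and "max retransmissions per sequence number" amount to; a minimal sketch with hypothetical type and field names, not the actual SFU code:

```go
package rtxsketch

import "time"

// rtxState tracks retransmissions of a single sequence number.
type rtxState struct {
	lastSent  time.Time
	sendCount int
}

// rtxThrottle is a hypothetical illustration of the two throttles mentioned
// above: a minimum spacing between retransmissions of the same packet and a
// cap on how many times any sequence number is retransmitted.
type rtxThrottle struct {
	minInterval time.Duration        // spacing out retransmissions
	maxPerSeq   int                  // max retransmissions per sequence number
	perSeq      map[uint16]*rtxState // state keyed by RTP sequence number
}

// allow reports whether a NACKed sequence number may be retransmitted now.
func (t *rtxThrottle) allow(sn uint16, now time.Time) bool {
	st, ok := t.perSeq[sn]
	if !ok {
		st = &rtxState{}
		t.perSeq[sn] = st
	}
	if st.sendCount >= t.maxPerSeq {
		return false // this packet has been retransmitted enough already
	}
	if !st.lastSent.IsZero() && now.Sub(st.lastSent) < t.minInterval {
		return false // too soon since the last retransmission of this packet
	}
	st.lastSent = now
	st.sendCount++
	return true
}
```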

This also introduces another flag to disallow retransmission of higher layers. That is still quite useful: under congestion, the stream allocator moves the target layer lower, but the higher layer will have lost a bunch of packets because of that same congestion. The client will NACK those packets, and retransmitting them would only congest the channel further. The new flag (enabled by default) disallows retransmission of higher layers. This behavior existed before this change as well; it is just split out into its own flag for more control.
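
For clarity, the check the new flag controls is essentially the following; a sketch with hypothetical config and function names, not the actual implementation:

```go
package rtxsketch

// RTXFilterConfig is a hypothetical stand-in for the new flag described
// above; the real config key name may differ.
type RTXFilterConfig struct {
	DisallowHigherLayerRetransmission bool // default: true
}

// shouldRetransmit drops NACKed packets from spatial layers above the
// currently forwarded target when the flag is enabled.
func shouldRetransmit(cfg RTXFilterConfig, packetLayer, targetLayer int32) bool {
	if cfg.DisallowHigherLayerRetransmission && packetLayer > targetLayer {
		// Under congestion the stream allocator lowered the target layer;
		// resending higher-layer packets would only add to the congestion.
		return false
	}
	return true
}
```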

```diff
@@ -315,7 +315,9 @@ func (r *RTPStatsReceiver) SetRtcpSenderReportData(srData *RTCPSenderReportData)
 	timeSinceFirst := srData.NTPTimestamp.Time().Sub(r.srFirst.NTPTimestamp.Time()).Seconds()
 	rtpDiffSinceFirst := srDataCopy.RTPTimestampExt - r.srFirst.RTPTimestampExt
 	calculatedClockRateFromFirst := float64(rtpDiffSinceFirst) / timeSinceFirst
-	if math.Abs(float64(r.params.ClockRate)-calculatedClockRateFromLast) > 0.2*float64(r.params.ClockRate) || math.Abs(float64(r.params.ClockRate)-calculatedClockRateFromFirst) > 0.2*float64(r.params.ClockRate) {
+	if (timeSinceLast > 0.2 && math.Abs(float64(r.params.ClockRate)-calculatedClockRateFromLast) > 0.2*float64(r.params.ClockRate)) ||
```
boks1971 (Contributor, Author):

This was logging on very small time differences, like 4 ms, where the calculated clock rate is too noisy to be meaningful. Setting a minimum of 200 ms before running the check.
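
For intuition (illustrative numbers, not taken from this PR): with a 90 kHz video clock, even 1 ms of timestamp noise over a 4 ms window skews the calculated clock rate well past the 20% tolerance, while over 200 ms it barely registers:

```go
package main

import "fmt"

func main() {
	const clockRate = 90000.0 // Hz, typical video RTP clock
	const jitterTicks = 90.0  // 1 ms of RTP timestamp error at 90 kHz

	for _, window := range []float64{0.004, 0.2} { // seconds between reports
		idealTicks := clockRate * window
		estimated := (idealTicks + jitterTicks) / window
		fmt.Printf("window=%5.3fs estimated=%8.0f Hz (off by %.1f%%)\n",
			window, estimated, 100*(estimated-clockRate)/clockRate)
	}
	// window=0.004s -> ~112500 Hz (off by 25.0%), would trigger the log
	// window=0.200s -> ~90450 Hz  (off by 0.5%), within tolerance
}
```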

```diff
@@ -235,6 +232,8 @@ func NewWebRTCReceiver(
 	})
 	w.connectionStats.Start(w.trackInfo)
 
+	w.streamTrackerManager = NewStreamTrackerManager(logger, trackInfo, w.isSVC, w.codec.ClockRate, trackersConfig)
```
boks1971 (Contributor, Author):

Drive-by: related code was split across the function; just putting it together.

boks1971 merged commit 0d74771 into master on Oct 19, 2023
2 checks passed
boks1971 deleted the raja_no_filter_rtx branch on October 19, 2023 19:14