
High priority bypass CC #1685

Merged
huitema merged 7 commits into master from priority_bypass_cc on May 3, 2024
Conversation

@huitema (Collaborator) commented May 1, 2024

Experimental API to allow "high priority" streams or datagrams to bypass congestion control.

This has almost no effect on the current tests. We may need more testing to understand why.

The lack of effect may be due to the conjunction of rate control and classic window-based congestion control. As implemented, this bypasses the "in flight < cwin" test, but not the pacing or "rate control".
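As a rough illustration of that first version, here is a minimal sketch, assuming a simple per-context `bypass_cc` flag; none of these names come from the actual picoquic source.

```c
#include <stddef.h>

/* Hypothetical sketch of the first version of the bypass. All names
 * (send_ctx_t, bypass_cc, bytes_in_transit, cwin) are illustrative,
 * not picoquic's actual API. */
typedef struct {
    int bypass_cc;           /* set for "high priority" streams/datagrams */
    size_t bytes_in_transit; /* bytes sent but not yet acknowledged */
    size_t cwin;             /* congestion window */
} send_ctx_t;

/* First version: skip only the "in flight < cwin" test. Pacing and
 * rate control still apply, which may explain the small effect. */
static int cwin_allows_send(const send_ctx_t* ctx, size_t packet_size)
{
    if (ctx->bypass_cc) {
        return 1;
    }
    return ctx->bytes_in_transit + packet_size <= ctx->cwin;
}
```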

@huitema requested a review from suhasHere on May 1, 2024 18:42
@huitema (Collaborator, Author) commented May 1, 2024

Fixing the implementation, including making sure to also bypass pacing / rate control.

Testing using the "bad wifi" scenario, in which the transmission undergoes a series of frequent suspensions. The test simulates three media flows, in order of priority: audio, LD Video, HD Video. The bypass is only applied to audio and LD video.
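For illustration, a sketch of the fixed gate under the same hypothetical names as above: bypass traffic now skips both the congestion window test and the pacing/rate-control test.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of the fixed send gate. Names are illustrative,
 * not picoquic's actual API. */
typedef struct {
    int bypass_cc;
    size_t bytes_in_transit;
    size_t cwin;
    uint64_t next_pacing_time_us; /* earliest send time allowed by pacing */
} send_ctx2_t;

static int ready_to_send(const send_ctx2_t* ctx, size_t packet_size,
    uint64_t now_us)
{
    if (ctx->bypass_cc) {
        return 1; /* neither cwin nor pacing applies to bypass traffic */
    }
    return ctx->bytes_in_transit + packet_size <= ctx->cwin
        && now_us >= ctx->next_pacing_time_us;
}
```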

Comparison with previous results is as follows:

| version | base | with loss control | with loss control and bypass |
| --- | --- | --- | --- |
| Audio latency average | 42781 | 42713 | 42646 |
| Audio latency max | 240622 | 237887 | 237887 |
| SD video latency average | 45490 | 44903 | 44149 |
| SD video latency max | 237967 | 237967 | 237967 |

There is almost no change in audio latency, but there is an improvement in the average delay of LD video. However, we have to be cautious with such results -- it may just be that small changes in scheduling have cascading effects, which in our case happen to be positive. Or, if we are more optimistic about this change, it may be that congestion control slows transmission a bit too much, and bypassing it improves performance.

The "max" latency is the largest delivery delay for a video frame through the test. Notice that it does not change, whatever we do. It is dominated by the duration of the longest suspension event.

@huitema (Collaborator, Author) commented May 1, 2024

The current implementation shows that there is some value, but I think the main value of this API is to provide an assurance that high priority data will continue flowing, even if the congestion control algorithm is in a weird state. In short, as a safety check against mistakes in congestion control. But there are two issues: it is very easy to misuse, and it prevents early detection of path disconnect.

If we leave this API in the code, someone is going to use it to run file transfers at full speed, blowing everybody else out of the network. That would create a bad reputation for picoquic. We probably need a built-in rate limiter of some kind, so that no more than a limited amount of traffic is sent in this mode. We need to decide what the limit should be. Maybe 1 Mbps? Then we would have to add that test to the implementation.

The disconnection detection is currently based on timers. These timers follow the RACK logic: they only fire if the last packet sent has not been acknowledged for a long time. But with CC bypass, we could have an audio source adding a slow drip of short packets. The path disconnection would then only happen after the end of the idle timeout. We need to add a test of what happens there.
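To make the concern concrete, here is a hypothetical illustration of that timer shape (the actual picoquic logic differs in detail): the deadline is computed from the most recent send, so each bypass packet sent on a dead path rearms it.

```c
#include <stdint.h>

/* Hypothetical illustration of a RACK-style disconnect check. The timer
 * is armed relative to the most recent unacknowledged send, so a slow
 * drip of bypass packets keeps pushing the deadline back and defers
 * disconnection detection until the idle timeout. */
static int disconnect_timer_fires(uint64_t last_send_us, uint64_t last_ack_us,
    uint64_t now_us, uint64_t disconnect_delay_us)
{
    if (last_ack_us >= last_send_us) {
        return 0; /* last packet sent was acknowledged: path looks alive */
    }
    /* Each bypass packet sent on a dead path resets last_send_us,
     * pushing this deadline further into the future. */
    return now_us - last_send_us > disconnect_delay_us;
}
```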

@huitema (Collaborator, Author) commented May 3, 2024

Applying a max pacing rate and a max quantum (bucket size) to packets sent at priorities that bypass congestion control. The pacing rate is set to 1 Mbps (125000 bytes/sec), and the bucket size to 2560 bytes. This is mostly a code safety feature: even if an application mistakenly tries to send everything at high priority, it will not actually send too much data and will still play nice with other nodes sharing the bottleneck.
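A minimal sketch of such a limiter, assuming a plain token bucket with the parameters quoted above (125000 bytes/sec refill, 2560-byte cap); names and structure are illustrative, not the actual pacing.c code.

```c
#include <stddef.h>
#include <stdint.h>

/* Token-bucket sketch of the bypass limiter: refill at 1 Mbps
 * (125000 bytes/sec), cap the bucket at 2560 bytes. Illustrative only. */
#define BYPASS_RATE_BYTES_PER_SEC 125000
#define BYPASS_BUCKET_BYTES 2560

typedef struct {
    uint64_t credit_bytes;   /* current send credit */
    uint64_t last_refill_us; /* time of last refill, microseconds */
} bypass_bucket_t;

static int bypass_bucket_consume(bypass_bucket_t* b, size_t packet_bytes,
    uint64_t now_us)
{
    /* Add credit for the elapsed time, capped at the bucket size. */
    uint64_t credit = b->credit_bytes +
        ((now_us - b->last_refill_us) * BYPASS_RATE_BYTES_PER_SEC) / 1000000;
    if (credit > BYPASS_BUCKET_BYTES) {
        credit = BYPASS_BUCKET_BYTES;
    }
    b->last_refill_us = now_us;
    if (credit < packet_bytes) {
        b->credit_bytes = credit;
        return 0; /* over the 1 Mbps budget: apply normal pacing and CC */
    }
    b->credit_bytes = credit - packet_bytes;
    return 1; /* within budget: the packet may bypass congestion control */
}
```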

There is a small effect on performance. We might consider increasing the bucket size if that really is a problem.

| version | base | with loss control | with loss control and bypass | with 1 Mbps pacing |
| --- | --- | --- | --- | --- |
| Audio latency average | 42781 | 42713 | 42646 | 42206 |
| Audio latency max | 240622 | 237887 | 237887 | 237887 |
| SD video latency average | 45490 | 44903 | 44149 | 44290 |
| SD video latency max | 237967 | 237967 | 237967 | 237967 |

Otherwise, this is ready.

picoquic/pacing.c: review comment (outdated, resolved)
@huitema merged commit 01dd305 into master on May 3, 2024. 11 checks passed.
@huitema deleted the priority_bypass_cc branch on June 5, 2024 20:35.