High priority bypass CC #1685
Conversation
Fixing the implementation, including making sure to also bypass pacing / rate control. Testing uses the "bad wifi" scenario, in which the transmission undergoes a series of frequent suspensions. The test simulates three media flows, in order of priority: audio, LD video, HD video. The bypass is applied only to audio and LD video. Comparison with previous results is as follows:
There is almost no change in audio latency, but there is an improvement in the average delay of LD video. However, we have to be cautious with such results -- it may just be that small changes in scheduling have cascading effects, which in our case happen to be positive. Or, if we are more optimistic about this change, it may be that congestion control slows transmission a bit too much, and bypassing it improves performance. The "max" latency is the largest delivery delay for a video frame during the test. Notice that it does not change, whatever we do; it is dominated by the duration of the longest suspension event.
The current implementation shows that there is some value, but I think the main value of this API is to provide an assurance that high priority data will continue flowing even if the congestion control algorithm is in a weird state -- in short, as a safety check against mistakes in congestion control. But there are two issues: it is very easy to misuse, and it prevents early detection of path disconnection.

If we leave this API in the code, someone is going to use it to run a file transfer at full speed, blowing everybody else out of the network. That would create a bad reputation for picoquic. We probably need a built-in rate limiter of some kind, so that no more than some amount of data is sent in this mode. We need to decide what the limit should be -- maybe 1 Mbps? Then we would have to add such a test to the implementation.

The disconnection detection is currently based on timers. These timers follow the RACK logic and only fire if the last packet sent has not been acknowledged for a long time. But with the CC bypass, we could have an audio source adding a slow drip of short packets, and the path disconnection would only be detected after the end of the idle timeout. We need to add a test of what happens there.
Applying a max pacing rate and a max quantum (bucket size) to packets sent at priorities that bypass congestion control. The pacing rate is set to 1 Mbps (125000 bytes/sec), and the bucket size to 2560 bytes. This is mostly a safety feature: even if an application mistakenly tries to send everything at high priority, it will not actually send too much data and will still play nice with other nodes sharing the bottleneck. There is a small effect on performance; we might consider increasing the bucket size if that turns out to be a problem.
Otherwise, this is ready.
Experimental API to allow "high priority" streams or datagrams to bypass congestion control.
This has almost no effect on the current tests. We may need more testing to understand why.
The lack of effect may be due to the conjunction of rate control and classic "window-based" congestion control. As implemented, this bypasses the test "in flight < cwin", but not the pacing / "rate control".