[reno] better handling of high-BDP connections #469
Conversation
Below are the loss observations when sending at full speed from Tokyo to the US West Coast for ~15 seconds. The first column is the current time in milliseconds. Ordinary Reno:
High-BDP Reno:
As can be seen, ordinary Reno does not get a chance to recover from loss. Compared to that, high-BDP Reno is probing the peak every ~1 second.
```c
return;
uint32_t count = cc->state.reno.stash / cc->cwnd;
cc->state.reno.stash -= count * cc->cwnd;
increase = count * max_udp_payload_size;
```
What if there's a large ACK that would have resulted in count > 1, but calc_highbdp_increase returns something like max_udp_payload_size? The current code adopts the latter, while it should adopt the former. (Credit to @janaiyengar.)
With the fix, this looks great, thanks @kazuho !
Closing in favor of #470.
Adds a flag called `highbdp_mode` that changes the increase ratio. The increase ratio of ordinary Reno is 1 MTU per round trip (i.e., per CWND being acked).

When in high-BDP mode, the CC uses exponential increase during the congestion avoidance phase, with the increase rate calculated so that it reaches the send rate at which the loss was observed in ~1 second. This is roughly equal to what Cubic does.