
Slow-Start Overshoot w/ loss-based congestion control #86

Closed
goelvidhi opened this issue Aug 31, 2021 · 1 comment · Fixed by #111
@goelvidhi goelvidhi commented Aug 31, 2021

Markku Kojo said,

The larger decrease factor of 0.7 also seems unadvisable if
used in the initial slow start with loss-based congestion
control (w/ Not-ECT traffic); packets start getting dropped
when a TCP sender has increased cwnd in slow start such that
the available network bandwidth and buffering capacity at the
bottleneck are filled, but the TCP sender continues sending
more packets for one RTT, doubling cwnd and hence also the number
of packets in flight before the congestion signal reaches the sender.
Now, even if the sender uses the standard decrease factor of 0.5,
cwnd gets reduced only to a value that equals the cwnd just
before (or around) the congestion point. That is, the network is
still full when the sender enters fast recovery, but we do not
expect more drops during fast recovery in a deterministic model.
Only in congestion avoidance after the recovery does the sender
increase cwnd again and get a packet drop that takes it to a
normal sawtooth cycle in the ideal case. So the convergence
time from slow start is expected to be fast, though in reality
loss recovery does not always work ideally with so many drops
in a window of data.

However, if the sender applies a decrease factor of 0.7, it
continues in fast recovery with a cwnd 40% higher than the
available network capacity. This is very likely to result in a
significant number of packet losses during fast recovery, and
very likely to result in loss of retransmissions. So it is no
wonder that so many people have been very concerned about
slow-start overshoot and the problems it creates.
It is very obvious that applying a decrease factor of 0.7 in
the initial slow start is likely to extend the convergence
time from slow-start overshoot significantly. Or do we
have data showing that such concern is unnecessary?
Also, a number of new loss-recovery mechanisms have been
introduced, perhaps mainly because of this?
I would hesitate to recommend a decrease factor of 0.7 when
a congestion event occurs during the initial slow start.
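The arithmetic behind the 40% figure above can be sketched with a minimal deterministic model. The numbers and variable names below are illustrative assumptions (an idealised 2× overshoot, a hypothetical path capacity of 100 packets), not measurements from the thread:

```python
# Simplified deterministic model of slow-start overshoot.
# Assume the path (bandwidth-delay product plus bottleneck buffer)
# holds `capacity` packets. In slow start, cwnd doubles each RTT,
# so by the time the loss signal reaches the sender, cwnd has grown
# to roughly twice the cwnd at the congestion point.
capacity = 100               # packets the network can hold (hypothetical)
cwnd_at_loss = 2 * capacity  # one extra RTT of doubling past the congestion point

for beta in (0.5, 0.7):      # standard vs. larger decrease factor
    cwnd_after = beta * cwnd_at_loss
    overshoot = cwnd_after / capacity - 1
    print(f"beta={beta}: cwnd after reduction = {cwnd_after:.0f} "
          f"packets ({overshoot:+.0%} relative to capacity)")
# beta=0.5 brings cwnd back to capacity (+0%); beta=0.7 leaves it
# 40% above capacity, matching the concern described above.
```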

@goelvidhi goelvidhi self-assigned this Sep 1, 2021

@bbriscoe bbriscoe commented Sep 15, 2021

Same response as for Issue #85.

I suggest #85 and #86 are folded into one issue.
