
Replace modelled TCP Reno window approach with AIMD emulation #20

Closed
larseggert opened this issue Nov 18, 2020 · 9 comments · Fixed by #24
Assignees
Labels
design Normative change relative to RFC8312 or earlier bis versions

Comments

@larseggert
Contributor

Yuchung Cheng wrote:

I'd recommend replacing the modelled TCP Reno window approach in
section 4.2 with an AIMD emulation (Linux's approach).

In our experience, the TCP-friendly region is the predominant mode of
(Linux) CUBIC for any regular Internet connection. IOW, CUBIC is often
"Reno" unless the loss rate is abysmal. The modelled approach assumes
a simple bulk transfer, whereas modern network applications mostly
generate structured traffic (burst, idle, repeat). Under such traffic
structures the model has two issues:

The model assumes that cwnd overshoot causes losses that are repaired
in one round of fast recovery. In reality, the losses are often due to
bursts of short messages, requiring more rounds and even timeouts to
repair. So the overall loss rate "p" tends to be higher than the ideal
model predicts, causing the model to underestimate the window (hence
running a more conservative Reno). Instead, Linux's approach is to
simply emulate Reno AIMD based on the number of packets acknowledged
per ACK. This also avoids the square-root operation.
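
A minimal sketch in C of the per-ACK Reno AIMD emulation described
above, assuming hypothetical names (reno_emul, w_est) and windows
measured in segments -- not the actual Linux implementation:

    /* Hypothetical sketch of per-ACK Reno AIMD emulation (not the
     * Linux kernel source). Windows are measured in segments. */
    #include <stdint.h>

    struct reno_emul {
        double w_est;  /* emulated Reno window, in segments */
    };

    /* Additive increase: acknowledging one full window of data grows
     * w_est by exactly one segment, matching Reno's 1 MSS per RTT,
     * with no square-root or model evaluation per ACK. */
    static void reno_ai_on_ack(struct reno_emul *r,
                               uint32_t segments_acked, double cwnd)
    {
        r->w_est += (double)segments_acked / cwnd;
    }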

@lisongxu
Contributor

I agree; this is reasonable, since CUBIC is based on time t but AIMD is not. Thanks

@goelvidhi
Contributor

I am not sure what the AI is here - do we need to modify Eq. 3?

@lisongxu
Contributor

lisongxu commented Nov 19, 2020

Eq. 3 is fine. But we need to change Eq. 4 to update W_est on each ACK, instead of using t/RTT.

Thanks

@goelvidhi
Contributor

goelvidhi commented Nov 19, 2020

In my earlier comment, AI = Action Item.

In Apple's implementation, I use bytes_acked/cwnd per ACK received instead of t/RTT. That ensures that once the entire congestion window is acknowledged, the increase is 1 MSS. I didn't file an issue for this, as I thought it was an implementation choice. Does the below look right?

On every ACK,

    W_est = W_max*beta_cubic +
            [3*(1-beta_cubic)/(1+beta_cubic)] * (bytes_acked/cwnd)

@lisongxu
Contributor

How about the following?

At the beginning of a congestion avoidance stage,

    W_est = cwnd

On every ACK,

    W_est += [3*(1-beta_cubic)/(1+beta_cubic)] * (segments_acked/cwnd)
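
As a concrete sketch in C of the two rules above (hypothetical names,
windows in segments; not normative text), note that with beta_cubic =
0.7 the factor 3*(1-0.7)/(1+0.7) = 0.9/1.7 is roughly 0.53:

    #include <stdint.h>

    #define BETA_CUBIC 0.7  /* CUBIC multiplicative decrease factor */

    struct cubic_state {
        double w_est;  /* Reno-friendly window estimate, in segments */
    };

    /* At the beginning of a congestion avoidance stage: W_est = cwnd. */
    static void w_est_reset(struct cubic_state *c, double cwnd)
    {
        c->w_est = cwnd;
    }

    /* On every ACK:
     *   W_est += [3*(1-beta_cubic)/(1+beta_cubic)] * (segments_acked/cwnd)
     * With beta_cubic = 0.7 the factor is ~0.53, so W_est gains about
     * half a segment per RTT -- slower than Reno's 1 MSS, which
     * balances CUBIC's gentler multiplicative decrease (0.7 vs 0.5). */
    static void w_est_on_ack(struct cubic_state *c,
                             uint32_t segments_acked, double cwnd)
    {
        const double alpha = 3.0 * (1.0 - BETA_CUBIC) / (1.0 + BETA_CUBIC);
        c->w_est += alpha * (double)segments_acked / cwnd;
    }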

@larseggert larseggert added the design Normative change relative to RFC8312 or earlier bis versions label Nov 19, 2020
@goelvidhi
Contributor

Yes, that's how an implementation would do it.

With this proposal, I think #2 would be good to address as well. When we use segments_acked/cwnd instead of t/RTT, the W_est growth after it has reached W_max should use alpha_aimd = 1 (see the sketch below).
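
Extending the cubic_state sketch above, the suggested alpha switch
might look like the following (an assumption based on this comment,
not agreed spec text: alpha_aimd simply becomes 1 once W_est has
reached W_max):

    /* Reuses struct cubic_state and BETA_CUBIC from the sketch above.
     * Once W_est has reached w_max, fall back to plain Reno additive
     * increase (alpha_aimd = 1) instead of the reduced CUBIC factor. */
    static void w_est_on_ack_switched(struct cubic_state *c,
                                      uint32_t segments_acked,
                                      double cwnd, double w_max)
    {
        double alpha = 3.0 * (1.0 - BETA_CUBIC) / (1.0 + BETA_CUBIC);
        if (c->w_est >= w_max)
            alpha = 1.0;  /* assumed alpha_aimd after reaching W_max */
        c->w_est += alpha * (double)segments_acked / cwnd;
    }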

@goelvidhi goelvidhi self-assigned this Nov 19, 2020
@rscheff

rscheff commented Nov 19, 2020

FYI: FreeBSD follows the RTT-based TCP-friendly approach. However, we are not particularly fond of it, due to the interaction with app-limited/discontinuous data availability. Changing this to a bytes_acked/cwnd approach, which removes the RTT dependence in that region, sounds good.

@yuchungcheng

I like Lisong's proposal to use segments_acked, or Rscheff's bytes_acked. We changed Linux CUBIC several years ago to perform well in the prevalent ACK-thinning/compression world (notably on cable and wireless networks). Sometimes we get one ACK for more than one hundred segments.

https://www.spinics.net/lists/netdev/msg314082.html

@lisongxu
Contributor

Thanks, @yuchungcheng . Could you please take a look at issue 14 for a bug that Google fixed?
