
Sec 5.4 #94

Closed
larseggert opened this issue Aug 31, 2021 · 11 comments · Fixed by #123

@larseggert (Contributor)

Markku Kojo said:

Sec 5.4

The text correctly states that CUBIC fills queues faster than AIMD TCP and increases the risk of standing queues. It then proposes queue sizing and AQM as a solution, which is odd. Applying AQM to keep queues shorter of course decreases the RTT (delay) seen, but it does not help with standing queues (they remain standing, just shorter).

@lisongxu self-assigned this Sep 1, 2021
@bbriscoe (Contributor)

The discussion under issue #89 is relevant here.

AQMs help to reduce, and in some cases remove, the standing queue. An AQM target has to allow for sawteeth of different amplitudes at different RTTs. In true cubic mode, the brief downward excursions mean that utilization is less sensitive to an AQM target that is lower than the amplitude of the whole sawtooth; such a target only causes minor underutilization. So the combination of AQMs and true cubic mode really does help remove standing queues, not just reduce them.

Anyway, the draft doesn't say that AQMs remove standing queues.

So, I think the draft is correct to point to AQM as a solution to the case of large buffers under 'difficult environments', and the authors might want to add something about the lower sensitivity to under-configured AQM targets, as above.
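To put a rough number on "only causes minor underutilization", here is a small illustrative sketch (mine, not taken from the draft) of a single true-cubic sawtooth using the RFC 8312 window formula W_cubic(t) = C*(t - K)^3 + W_max. It assumes the worst case for utilization: the bottleneck queue drains completely after every multiplicative decrease, i.e. an AQM target far below the sawtooth amplitude. C = 0.4 and beta_cubic = 0.7 are the RFC 8312 defaults; the W_max values are arbitrary examples.

```python
# Illustrative sketch (not from the draft): mean window over one "true cubic"
# sawtooth, assuming the bottleneck queue drains completely after each
# multiplicative decrease (an AQM target far below the sawtooth amplitude).
# Window model from RFC 8312:
#   W_cubic(t) = C*(t - K)^3 + W_max,  K = cbrt(W_max*(1 - beta_cubic)/C)

import numpy as np

C = 0.4            # CUBIC scaling constant (RFC 8312 default)
BETA_CUBIC = 0.7   # CUBIC multiplicative decrease factor (RFC 8312 default)

def sawtooth_stats(w_max, samples=100_000):
    """Epoch length and mean window (as a fraction of W_max, a crude proxy
    for utilization) for one epoch from beta_cubic*W_max back up to W_max."""
    k = (w_max * (1 - BETA_CUBIC) / C) ** (1.0 / 3.0)   # seconds, RTT-independent
    t = np.linspace(0.0, k, samples)
    w = C * (t - k) ** 3 + w_max                         # window in segments
    return k, w.mean() / w_max

for w_max in (100, 1_000, 10_000):   # W_max in segments, i.e. different BDPs
    k, util = sawtooth_stats(w_max)
    print(f"W_max={w_max:>6} segments: epoch ~{k:5.1f} s, "
          f"mean window ~{util:.1%} of W_max")
```

Even in this fully-draining case, the closed form gives a mean window of W_max*(1 - (1 - beta_cubic)/4), i.e. 92.5% of W_max regardless of RTT or W_max, versus 75% for a Reno sawtooth (beta = 0.5) under the same assumption. That is the "minor underutilization" referred to above.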

Nonetheless, I would criticize this section for not really addressing the 'difficult environments' that RFC5033 was talking about:

  • Radio: Cubic is pretty poor at fast adaptation, but still better than Reno, so I don't think we're going to find a reference that praises Cubic in this respect. Therefore, a sentence admitting Cubic isn't much better than Reno over radio links would be appropriate here. Slides 2 & 3 of an iccrg presentation we gave summarize the wider problem of Cubic's slow adaptation in radio environments, but again, other CCs are little better - the next slide moves on to Compound TCP, for instance. A reference to [Liu16] would be worthwhile - it's the best paper I've read on the problems TCP CCs have over mobile networks (and Cubic specifically), and it identifies one major contributor to those problems (receive-window related, so not specific to Cubic).
  • Multipath: No issue here relative to Reno AFAICT. No need to mention.
  • Tunnels, L2 AQMs, etc: No issue here for a loss-based CC. And any issues with ECN would be no different for Cubic relative to Reno. No need to mention.
  • High BDP: Surely this would be the place to call out Cubic's main strength.
  • Significantly slow links: No issue here relative to Reno AFAICT (assuming issue #83, "Lower bound for congestion window (drops or classic ECN)", is resolved).

[Liu16] K. Liu and J. Y. B. Lee, "On Improving TCP Performance over Mobile Data Networks," IEEE Transactions on Mobile Computing, 2016.

@larseggert (Contributor, Author)

@bbriscoe: would you want to propose a PR that captures your considerations?

@bbriscoe (Contributor)

No, not really. I've given some pointers above that should help. But I need to draw a line between helping and turning rfc8312bis into my day job (and now my evening job too).

@larseggert (Contributor, Author)

Understood. @lisongxu self-assigned this a while ago, so I'll let him do a PR based on your input.

@larseggert (Contributor, Author)

@lisongxu would you prepare a resolution?

@lisongxu (Contributor)

Yes. Thanks, @larseggert

@lisongxu (Contributor) commented Oct 12, 2021

Thank you all for the discussion. @markkukojo @bbriscoe @vidhigoel-apple @larseggert How about the following revised Section 5.4? The revised parts are indicated in bold.

There is decade-long deployment experience with CUBIC on the Internet. CUBIC has also been extensively studied using both NS-2 simulations and testbed experiments, covering a wide range of network environments. More information can be found in [HKLRX06].

Like Reno, CUBIC is a loss-based congestion control algorithm. Because CUBIC is designed to be more aggressive than Reno in fast and long-distance networks (due to a faster window increase function and a larger multiplicative decrease factor), it can fill large drop-tail buffers more quickly than Reno, increasing the risk of a standing queue [RFC8511]. In this case, proper queue sizing and management [RFC7567] can be used to mitigate the risk to some extent and reduce the packet queuing delay.

Similar to Reno, the performance of CUBIC as a loss-based congestion control algorithm suffers in networks where a packet loss is not a good indication of bandwidth utilization, such as wireless or mobile networks [Liu16].

[Liu16] K. Liu and J. Y. B. Lee, "On Improving TCP Performance over Mobile Data Networks," IEEE Transactions on Mobile Computing, 2016.
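As a rough illustration of the "faster window increase function" point above (illustrative only, not part of the proposed section text), the sketch below compares how long Reno and CUBIC take to climb back to W_max after a single congestion event, using the standard increase rules from RFC 5681 and RFC 8312. The W_max and RTT values are arbitrary examples, and CUBIC's TCP-friendly region is ignored for simplicity.

```python
# Illustrative comparison (not part of the proposed section text): time for
# Reno and CUBIC to grow back to W_max after one congestion event.
#   Reno:  +1 segment per RTT from 0.5*W_max (RFC 5681)
#   CUBIC: W_cubic(t) = C*(t - K)^3 + W_max, C = 0.4, beta_cubic = 0.7 (RFC 8312)
# CUBIC's TCP-friendly region is ignored here for simplicity.

C = 0.4
BETA_CUBIC = 0.7
BETA_RENO = 0.5

def reno_recovery_time(w_max_segments, rtt_s):
    # Reno must add (1 - BETA_RENO)*W_max segments at ~1 segment per RTT.
    return (1 - BETA_RENO) * w_max_segments * rtt_s

def cubic_recovery_time(w_max_segments):
    # K from RFC 8312: time in seconds (independent of RTT) for the cubic
    # curve to return to W_max after reducing to BETA_CUBIC * W_max.
    return (w_max_segments * (1 - BETA_CUBIC) / C) ** (1.0 / 3.0)

# Roughly 10 Mbps, 100 Mbps and 1 Gbps paths at 100 ms RTT, 1500-byte segments.
for w_max, rtt in ((83, 0.1), (830, 0.1), (8300, 0.1)):
    print(f"W_max={w_max:>5} seg, RTT={rtt * 1000:.0f} ms: "
          f"Reno ~{reno_recovery_time(w_max, rtt):6.1f} s, "
          f"CUBIC ~{cubic_recovery_time(w_max):5.1f} s to regain W_max")
```

The gap widens with the bandwidth-delay product, which is the sense in which CUBIC is more aggressive on fast long-distance paths and can fill (and refill) a large drop-tail buffer much more quickly than Reno.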

@larseggert (Contributor, Author)

@lisongxu, thanks for the proposal - I made PR #123 for it.

@larseggert (Contributor, Author)

@markkukojo, please review #123. I would like to close this issue.

@markkukojo

I'll try to review and answer this and the other pending issues ASAP. Unfortunately, I have very limited cycles for this right now, so my apologies for the delay. After this week I should hopefully be able to allocate more cycles.

@bbriscoe (Contributor)

WFM

larseggert added a commit that referenced this issue Oct 22, 2021

* PR of @lisongxu's suggestion for #94

  Fixes #94.

* Add changelog entry

* Update draft-ietf-tcpm-rfc8312bis.md

* Suggestions from @goelvidhi

* Update draft-ietf-tcpm-rfc8312bis.md

Co-authored-by: Vidhi Goel <goel.vidhi07@gmail.com>