junhochoi left a comment:
I think we need to add a new hook (e.g. cwnd_undo()) to CongestionControlOps, because when we need to undo cwnd there can be CC-dependent state changes as well: e.g. in CUBIC I'd like to reset w_max when that happens. Linux TCP has undo_cwnd as a CC hook, for example.
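As a rough sketch of what such a hook could look like (all names, fields, and constants here are illustrative, not quiche's actual API):

```rust
// Hypothetical sketch of a cwnd_undo() hook for a CUBIC-style
// congestion controller: the controller saves its state on a
// congestion event and restores it when the loss turns out to be
// spurious, including CC-specific state like w_max.

struct Cubic {
    cwnd: usize,
    w_max: f64,
    prior_cwnd: usize,
    prior_w_max: f64,
}

impl Cubic {
    // Save state when a congestion event fires, so it can be undone.
    fn on_congestion_event(&mut self) {
        self.prior_cwnd = self.cwnd;
        self.prior_w_max = self.w_max;
        self.w_max = self.cwnd as f64;
        // Illustrative CUBIC beta of 0.7.
        self.cwnd = (self.cwnd as f64 * 0.7) as usize;
    }

    // The proposed hook: restore CC-specific state on spurious loss.
    fn cwnd_undo(&mut self) {
        self.cwnd = self.prior_cwnd;
        self.w_max = self.prior_w_max;
    }
}

fn main() {
    let mut cc = Cubic {
        cwnd: 40_000,
        w_max: 40_000.0,
        prior_cwnd: 0,
        prior_w_max: 0.0,
    };

    cc.on_congestion_event();
    assert_eq!(cc.cwnd, 28_000);

    cc.cwnd_undo();
    assert_eq!(cc.cwnd, 40_000);

    println!("cwnd after undo: {}", cc.cwnd);
}
```

The point of making this a hook rather than a generic rollback is that each controller can restore its own internal state (w_max for CUBIC), which generic code can't know about.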
I think we can draw on some existing experience -- I was working on similar work (better to coordinate):
- Packet threshold: TCP-NCR (RFC 4653) defines a new Limited Transmit (mentioned in quicwg/base-drafts#3572). While I don't think TCP-NCR itself is very useful for QUIC, the idea of raising dupthresh up to flightsize/MSS would let us remove the hard-coded MAX_PACKET_THRESHOLD, allowing pkt_thresh up to flight_size/MAX_DATAGRAM_SIZE.
- Delay threshold: RACK (https://tools.ietf.org/html/draft-ietf-tcpm-rack-08) describes how to detect reordering and define a delay threshold for it. I'd prefer to implement RACK's RACK_update_reo_wnd(), which gradually increases the reordering threshold delay instead of using a fixed 5/4 x RTT. The QUIC recovery draft already includes many of RACK's ideas, but not this part.
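A rough sketch of the two adaptive thresholds described above (constants, names, and the growth policy are illustrative, not quiche's actual API, and RACK's srtt clamping is omitted):

```rust
// Sketch of adaptive loss-detection thresholds, under the assumptions
// stated above.

const MAX_DATAGRAM_SIZE: usize = 1350;
const INITIAL_PACKET_THRESHOLD: u64 = 3;

// Packet threshold: grow by one on each spurious loss, capped by the
// number of packets in flight (flight_size / MAX_DATAGRAM_SIZE) instead
// of a hard-coded maximum.
fn next_packet_threshold(current: u64, flight_size: usize) -> u64 {
    let cap = (flight_size / MAX_DATAGRAM_SIZE) as u64;
    (current + 1).min(cap.max(INITIAL_PACKET_THRESHOLD))
}

// Time threshold: RACK-style reo_wnd that grows by min_rtt/4 each time
// reordering (a spurious loss) is observed, instead of a fixed
// 5/4 x RTT.
fn next_reo_wnd(min_rtt_ms: f64, reo_wnd_mult: u32) -> f64 {
    min_rtt_ms / 4.0 * reo_wnd_mult as f64
}

fn main() {
    // 20 packets in flight: packet threshold can grow from 3 to 4.
    assert_eq!(next_packet_threshold(3, 20 * MAX_DATAGRAM_SIZE), 4);

    // With only 5 packets in flight, growth is capped at 5.
    assert_eq!(next_packet_threshold(10, 5 * MAX_DATAGRAM_SIZE), 5);

    // reo_wnd after two reordering events with min_rtt = 40ms.
    assert_eq!(next_reo_wnd(40.0, 2), 20.0);

    println!("ok");
}
```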
Have you tested with
Rebased and updated based on #893.
The idea is to increase packet and time reordering thresholds when spurious losses are detected (i.e. when a packet previously declared lost is acked). In addition, the congestion control state is rolled back to its state before the last congestion event when a spurious loss is detected. This is similar to what Chrome currently implements.
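A minimal sketch of that mechanism, assuming hypothetical types and field names:

```rust
// Sketch: when an ACK arrives for a packet we already declared lost,
// treat it as a spurious loss, widen the reordering thresholds, and
// signal that congestion state should be rolled back. Names and the
// threshold representation are illustrative only.

use std::collections::HashSet;

struct Recovery {
    lost: HashSet<u64>,   // packet numbers declared lost
    pkt_thresh: u64,      // packet reordering threshold
    time_thresh_num: u64, // time threshold numerator (num/8 x RTT)
}

impl Recovery {
    // Returns true if the ACK revealed a spurious loss.
    fn on_ack(&mut self, pkt_num: u64) -> bool {
        if self.lost.remove(&pkt_num) {
            // Spurious loss: the "lost" packet was actually delivered.
            self.pkt_thresh += 1;
            self.time_thresh_num += 1; // e.g. 9/8 -> 10/8 of RTT
            // A real implementation would also call into the
            // congestion controller here to undo the last cwnd
            // reduction.
            return true;
        }
        false
    }
}

fn main() {
    let mut r = Recovery {
        lost: [5u64].into_iter().collect(),
        pkt_thresh: 3,
        time_thresh_num: 9,
    };

    assert!(r.on_ack(5));
    assert_eq!(r.pkt_thresh, 4);
    assert_eq!(r.time_thresh_num, 10);

    // A second ACK for the same packet is not counted again.
    assert!(!r.on_ack(5));

    println!("pkt_thresh = {}", r.pkt_thresh);
}
```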
junhochoi left a comment:
I think we still need to tune the algorithm (or the thresholds; ideally we would increase the time/packet threshold based on the previous state) for some cases (e.g. at high bandwidth the benefit is lower), but it's still better when there is reordering happening. 👍
Dynamic time threshold logic from #470 was accidentally removed. The goal is to make time-based detection less sensitive when a spurious loss is detected.
* fix: adjust time-based loss detection threshold on packet reordering

  Dynamic time threshold logic from #470 was accidentally removed. The goal is to make time-based detection less sensitive when a spurious loss is detected.

* refactor: cleanup test

* fix: fix spurious count logic in gcongestion and add test

  Recovery logic was returning the wrong spurious loss count if no new packets were acked. This fixes the bug. I also added a new test for time-based loss detection, specifically testing that the value increases after a spurious loss event.
This implements adaptive reordering threshold as suggested in:
https://quicwg.org/base-drafts/draft-ietf-quic-recovery.html#section-5.1.1.
The algorithm is similar to what Chrome uses as well.
I did a quick test on my laptop with netem delay 10ms 1ms 10% and this seems to improve things a fair amount:

Before:
After:
Though we will need more rigorous tests in the lab. In particular, I'm not super confident that the code to undo the cwnd update is correct. We will also need a time-reordering unit test.
This depends on #468 and #469.