
Better algorithm for stream flow control #733

Open
agrover opened this issue Jun 16, 2020 · 2 comments · May be fixed by #1868
Labels: enhancement (New feature or request), p3 (Backlog)

Comments

agrover (Contributor) commented Jun 16, 2020

The current flow-control algorithm is the one that was fastest to implement, not a fully thought-out design. We should improve it when possible.

From #726 (comment):

> We have an estimate of BDP we might use to tune this. We should aim for BDP*2 at a minimum if we have an algorithm like this, otherwise we will need more frequent sending of updates.

mxinden (Collaborator) commented Apr 19, 2024

Status quo neqo

Static connection flow-control limit:

```rust
const LOCAL_MAX_DATA: u64 = 0x3FFF_FFFF_FFFF_FFFF; // 2^62 - 1
```

Static stream receive window limit:

```rust
const RX_STREAM_DATA_WINDOW: u64 = 0x10_0000; // 1 MiB
```

Static stream send buffer limit:

```rust
pub const SEND_BUFFER_SIZE: usize = 0x10_0000; // 1 MiB
```

Other implementations

Potential solution

Implement auto-tuning based on Google's QUIC design document (a sketch in Rust follows below):

Algorithm

  • As above, a flow-control window update is triggered when:
    available window < max window size / 2,
    where available window = max receive window offset - bytes consumed.
  • To realize auto-tuning, add the following logic just before issuing a window update:
    • Keep track of the time interval between subsequent window updates; call this since_last_update.
    • If since_last_update < RTT * trigger factor, then max window size = MIN(max window size * increase factor, upper bound), where:
      • trigger factor is 2
      • increase factor is 2
  • As above, a window update sets:
    max receive window offset += (max window size - available window)

https://docs.google.com/document/d/1F2YfdDXKpy20WVKJueEf4abn_LVZHhMUMS5gX6Pgjl4/edit#heading=h.j3nq4bn2eaud

This is used in quic-go. I also implemented it in the past for a multiplexer on top of TCP; see libp2p/rust-yamux#176.
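A minimal Rust sketch of the auto-tuning logic described above. All names (`ReceiveWindow`, `on_data_consumed`, the fields) are hypothetical illustrations, not neqo's actual API:

```rust
use std::time::{Duration, Instant};

const TRIGGER_FACTOR: u32 = 2;
const INCREASE_FACTOR: u64 = 2;

/// Hypothetical receive-window state; not neqo's real types.
struct ReceiveWindow {
    max_window_size: u64,        // current flow-control window size
    upper_bound: u64,            // hard cap on window growth
    max_recv_window_offset: u64, // highest offset the peer may send to
    bytes_consumed: u64,         // bytes read by the application
    last_update: Instant,        // time of the previous window update
}

impl ReceiveWindow {
    /// Called when the application consumes data. Returns the new max
    /// receive window offset if a window update should be sent.
    fn on_data_consumed(&mut self, consumed: u64, rtt: Duration) -> Option<u64> {
        self.bytes_consumed += consumed;
        let available = self.max_recv_window_offset - self.bytes_consumed;

        // Only send an update once less than half the window remains.
        if available >= self.max_window_size / 2 {
            return None;
        }

        // Auto-tune: updates arriving faster than TRIGGER_FACTOR * RTT
        // suggest the window is too small for the bandwidth-delay product.
        let now = Instant::now();
        if now.duration_since(self.last_update) < rtt * TRIGGER_FACTOR {
            self.max_window_size = self
                .max_window_size
                .saturating_mul(INCREASE_FACTOR)
                .min(self.upper_bound);
        }
        self.last_update = now;

        // Extend the window so a full max_window_size is available again.
        self.max_recv_window_offset += self.max_window_size - available;
        Some(self.max_recv_window_offset)
    }
}
```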

Alternatives

More research needed. Pointers very much appreciated.

mxinden added a commit to mxinden/neqo that referenced this issue Apr 25, 2024
This commit adds a basic smoke test using the `test-fixture` simulator,
asserting that on a connection with unlimited bandwidth and a 50 ms
round-trip time Neqo can eventually achieve > 1 Gbit/s throughput.

It showcases the potential of a future stream flow-control auto-tuning
algorithm.

See mozilla#733.
martinthomson (Member) commented

I believe that a better approach would be not to double the window, but to increase it by the amount consumed since the last increase whenever the time taken to consume the window is less than one RTT (plus a small fudge factor to allow slack for reporting delays). This is similar to the increase in Reno congestion control. The window would then increase less rapidly, and perhaps more often, but the resulting commitment would be closer to 2x the BDP rather than 4x or more.
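For comparison, a minimal sketch of this Reno-like additive variant. The names are again hypothetical, and the 10% fudge factor is an assumed value; the comment only says "a small fudge factor":

```rust
use std::time::{Duration, Instant};

/// Slack for reporting delays; 10% of an RTT is an assumption.
const FUDGE_FACTOR: f64 = 1.1;

/// Hypothetical state for the additive-increase variant.
struct AdditiveWindow {
    max_window_size: u64,
    upper_bound: u64,
    bytes_consumed_since_increase: u64,
    last_increase: Instant,
}

impl AdditiveWindow {
    /// Called just before issuing a window update.
    fn maybe_increase(&mut self, rtt: Duration) {
        // If the peer consumed a full window's worth of data in less than
        // one (slightly padded) RTT, grow the window by the amount consumed
        // rather than doubling it, analogous to Reno's additive increase.
        if self.last_increase.elapsed() < rtt.mul_f64(FUDGE_FACTOR) {
            self.max_window_size = self
                .max_window_size
                .saturating_add(self.bytes_consumed_since_increase)
                .min(self.upper_bound);
        }
        self.bytes_consumed_since_increase = 0;
        self.last_increase = Instant::now();
    }
}
```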

mxinden linked a pull request May 2, 2024 that will close this issue.