x/net/http2: investigate Server default stream & conn flow control values #16512
What do other servers pick as their default conn- and stream-level flow control values?

Expose user tunables?
FWIW, h2o uses a 16MB initial window and Apache uses a configurable initial window that defaults to the protocol's standard default of 2^16-1. I believe the GFE uses 1MB. Abstractly, the "right" value is probably min(MaxBufferPerRequest, LinkBDP), where MaxBufferPerRequest is user-configurable and LinkBDP is the connection's bandwidth-delay product (perhaps estimated from the current TCP cwnd). For now, it's probably sufficient to use 1MB per-stream and a larger number per-connection (100MB? 1GB?), then close this bug and wait for someone to ask for a knob.
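To make that sizing rule concrete, here is a minimal Go sketch. The function name and both parameters are hypothetical, not an existing API; in practice the BDP estimate would have to come from somewhere like the kernel's TCP cwnd and RTT:

	// initialStreamWindow illustrates the rule suggested above:
	// advertise a per-stream receive window no larger than what we
	// are willing to buffer per request, and no larger than the
	// link's estimated bandwidth-delay product (a bigger window
	// than the BDP buys no extra throughput).
	func initialStreamWindow(maxBufferPerRequest, linkBDP int32) int32 {
		if maxBufferPerRequest < linkBDP {
			return maxBufferPerRequest
		}
		return linkBDP
	}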
The above numbers are per-stream. The GFE uses 1MB per-connection as well. Say we pick a constant 1MB per-stream. If we continue to use a fixedBuffer for each request body, then we might as well set the connection's send window to NumOpenRequestBodies * 1MB, since we'd be allocating a 1MB buffer per request anyway. If we want more control over the connection's send window, we should use something smarter than a fixedBuffer.
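Spelled out as code (a sketch; the function name is hypothetical):

	// effectiveConnWindow shows why, with one fixed-size buffer per
	// open request body, the connection window might as well scale
	// with the number of open bodies: the server has already
	// committed that much memory regardless. perStreamWindow would
	// be the constant 1MB suggested above.
	func effectiveConnWindow(numOpenRequestBodies, perStreamWindow int32) int32 {
		return numOpenRequestBodies * perStreamWindow
	}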
CL https://golang.org/cl/35118 mentions this issue.

CL https://golang.org/cl/37226 mentions this issue.

CL https://golang.org/cl/37400 mentions this issue.
fixedBuffer was a bad idea for two reasons:

1. It was fixed at a constant 64KB (the current default flow-control
   window), which wastes memory on the server when clients upload many
   small request bodies.

2. A follow-up CL will allow configuring the server's connection and
   stream receive windows. We want to allow individual streams to use
   varying amounts of the available connection window. This is not
   possible when each stream uses a fixedBuffer.

dataBuffer grows and shrinks based on current usage. The worst-case
fragmentation of dataBuffer is 32KB wasted memory per stream, but I
expect that worst case will be rare. In particular, if the declared
size of a stream's request body is under 1KB, then the server will not
allocate more than 1KB to process that stream's request body.

Updates golang/go#16512
Fixes golang/go#18509

Change-Id: Ibcb18007037e82518a65848ef3baf4937955ac9d
Reviewed-on: https://go-review.googlesource.com/37400
Run-TryBot: Tom Bergan <tombergan@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
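As a rough illustration of the grow-and-shrink design, here is a minimal sketch. It is not the real dataBuffer from CL 37400: the size classes are assumed values, and the real implementation pools chunks and uses a dedicated read-empty error rather than io.EOF.

	package databuf

	import "io"

	// Chunk size classes, smallest to largest. These exact values
	// are illustrative, not the real implementation's.
	var sizeClasses = []int{1 << 10, 2 << 10, 4 << 10, 8 << 10, 16 << 10}

	// pickChunkSize returns the smallest class that holds n bytes,
	// capped at the largest class.
	func pickChunkSize(n int) int {
		for _, c := range sizeClasses {
			if n <= c {
				return c
			}
		}
		return sizeClasses[len(sizeClasses)-1]
	}

	// dataBuffer buffers a request body as a queue of chunks. Writes
	// grow the queue one chunk at a time; reads consume and drop
	// chunks from the front, so memory shrinks as the body is
	// drained. A small declared body (say, under 1KB) never
	// allocates more than the smallest chunk.
	type dataBuffer struct {
		chunks [][]byte
		r      int // read offset within chunks[0]
		size   int // total unread bytes
	}

	func (b *dataBuffer) Write(p []byte) (int, error) {
		n := len(p)
		for len(p) > 0 {
			last := len(b.chunks) - 1
			if last < 0 || len(b.chunks[last]) == cap(b.chunks[last]) {
				// Tail chunk full (or no chunks yet): grow by one
				// chunk sized to the remaining data.
				b.chunks = append(b.chunks, make([]byte, 0, pickChunkSize(len(p))))
				last++
			}
			free := cap(b.chunks[last]) - len(b.chunks[last])
			if free > len(p) {
				free = len(p)
			}
			b.chunks[last] = append(b.chunks[last], p[:free]...)
			p = p[free:]
			b.size += free
		}
		return n, nil
	}

	func (b *dataBuffer) Read(p []byte) (int, error) {
		if b.size == 0 {
			return 0, io.EOF
		}
		n := copy(p, b.chunks[0][b.r:])
		b.r += n
		b.size -= n
		if b.r == len(b.chunks[0]) {
			// Front chunk fully consumed: drop it so the garbage
			// collector can reclaim it (shrinking on read).
			b.chunks = b.chunks[1:]
			b.r = 0
		}
		return n, nil
	}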
CL https://golang.org/cl/37500 mentions this issue.
Updates http2 to x/net/http2 git rev 906cda9 for:

    http2: add configurable knobs for the server's receive window
    https://golang.org/cl/37226

    http2/hpack: speedup Encoder.searchTable
    https://golang.org/cl/37406

    http2: Add opt-in option to Framer to allow DataFrame struct reuse
    https://golang.org/cl/34812

    http2: replace fixedBuffer with dataBuffer
    https://golang.org/cl/37400

    http2/hpack: remove hpack's constant time string comparison
    https://golang.org/cl/37394

Updates #16512
Updates #18404

Change-Id: I1ad7c95c404ead4ced7f85af061cf811b299a288
Reviewed-on: https://go-review.googlesource.com/37500
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Upload performance is poor when BDP is higher than the flow-control
window. Previously, the server's receive window was fixed at 64KB,
which resulted in very poor performance for high-BDP links. The
receive window now defaults to 1MB and is configurable. The
per-connection and per-stream windows are configurable separately
(both default to 1MB as suggested in golang/go#16512).

Previously, the server created a "fixedBuffer" for each request body.
This is no longer a good idea because a fixedBuffer has fixed size,
which means individual streams cannot use varying amounts of the
available connection window. To overcome this limitation, I replaced
fixedBuffer with "dataBuffer", which grows and shrinks based on
current usage. The worst-case fragmentation of dataBuffer is 32KB
wasted memory per stream, but I expect that worst case will be rare.

A slightly modified version of adg@'s grpcbench program shows a
dramatic improvement when increasing from a 64KB window to a 1MB
window, especially at higher latencies (i.e., higher BDPs). Network
latency was simulated with netem, e.g.,
`tc qdisc add dev lo root netem delay 16ms`.

    Duration        Latency  Proto     H2 Window

    11ms±4.05ms     0s       HTTP/1.1  -
    17ms±1.95ms     0s       HTTP/2.0  65535
    8ms±1.75ms      0s       HTTP/2.0  1048576

    10ms±1.49ms     1ms      HTTP/1.1  -
    47ms±2.91ms     1ms      HTTP/2.0  65535
    10ms±1.77ms     1ms      HTTP/2.0  1048576

    15ms±1.69ms     2ms      HTTP/1.1  -
    88ms±11.29ms    2ms      HTTP/2.0  65535
    15ms±1.18ms     2ms      HTTP/2.0  1048576

    23ms±1.42ms     4ms      HTTP/1.1  -
    152ms±0.77ms    4ms      HTTP/2.0  65535
    23ms±0.94ms     4ms      HTTP/2.0  1048576

    40ms±1.54ms     8ms      HTTP/1.1  -
    288ms±1.67ms    8ms      HTTP/2.0  65535
    39ms±1.29ms     8ms      HTTP/2.0  1048576

    72ms±1.13ms     16ms     HTTP/1.1  -
    559ms±0.68ms    16ms     HTTP/2.0  65535
    71ms±1.12ms     16ms     HTTP/2.0  1048576

    136ms±1.15ms    32ms     HTTP/1.1  -
    1104ms±1.62ms   32ms     HTTP/2.0  65535
    135ms±0.96ms    32ms     HTTP/2.0  1048576

    264ms±0.95ms    64ms     HTTP/1.1  -
    2191ms±2.08ms   64ms     HTTP/2.0  65535
    263ms±1.57ms    64ms     HTTP/2.0  1048576

Fixes golang/go#16512
Updates golang/go#17985
Updates golang/go#18404

Change-Id: Ied385aa94588337e98dad9475cf2ece2f39ba346
Reviewed-on: https://go-review.googlesource.com/37226
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
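For anyone looking for the resulting knobs, a minimal usage sketch using the MaxUploadBufferPerConnection and MaxUploadBufferPerStream fields on http2.Server (the listen address and certificate file names here are placeholders):

	package main

	import (
		"log"
		"net/http"

		"golang.org/x/net/http2"
	)

	func main() {
		srv := &http.Server{Addr: ":8443", Handler: http.DefaultServeMux}

		// Raise the server's advertised receive windows from the
		// HTTP/2 default of 64KB to 1MB, matching the new defaults.
		if err := http2.ConfigureServer(srv, &http2.Server{
			MaxUploadBufferPerConnection: 1 << 20, // per-connection window
			MaxUploadBufferPerStream:     1 << 20, // per-stream window
		}); err != nil {
			log.Fatal(err)
		}
		log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
	}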