x/net/http2: max connections overflow #39389
/cc @bradfitz @tombergan
Prevent the client from trying to establish more streams than the server is willing to accept during the initial lifetime of a connection by limiting `maxConcurrentStreams` to 100, the HTTP/2 specification's recommended minimum, until we've received the initial `SETTINGS` frame from the server. Once a `SETTINGS` frame has been received, use the server's `MAX_CONCURRENT_STREAMS` if present, otherwise use 1000 as a reasonable value. For normal consumers this will have very little impact, allowing a decent level of concurrency from the start; for highly concurrent consumers or large bursts it will prevent a significant number of rejected streams being attempted, hence actually increasing performance. Fixes golang/go#39389
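The upgrade logic described above can be sketched as follows. This is a hypothetical illustration, not the actual x/net/http2 patch; the type and function names (`streamLimiter`, `onSettings`, `limit`) are invented for the sketch, but the numbers (100 pre-SETTINGS, the server's advertised value, 1000 fallback) come from the description above.

```go
package main

import (
	"fmt"
	"sync"
)

// streamLimiter is a hypothetical stand-in for the transport's concurrent
// stream accounting. It caps concurrency at 100 until the server's SETTINGS
// frame arrives, then switches to the advertised MAX_CONCURRENT_STREAMS,
// or 1000 if the server advertised no limit.
type streamLimiter struct {
	mu           sync.Mutex
	max          uint32
	seenSettings bool
}

func newStreamLimiter() *streamLimiter {
	// RFC 7540 recommends servers allow at least 100 concurrent streams,
	// so 100 is a safe cap before the server has told us its real limit.
	return &streamLimiter{max: 100}
}

// onSettings applies the server's advertised limit once SETTINGS arrives.
func (l *streamLimiter) onSettings(maxStreams uint32, present bool) {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.seenSettings = true
	if present {
		l.max = maxStreams
	} else {
		l.max = 1000 // no limit advertised; pick a reasonable default
	}
}

// limit reports the current cap on concurrent streams.
func (l *streamLimiter) limit() uint32 {
	l.mu.Lock()
	defer l.mu.Unlock()
	return l.max
}

func main() {
	l := newStreamLimiter()
	fmt.Println(l.limit()) // 100 before SETTINGS
	l.onSettings(256, true)
	fmt.Println(l.limit()) // 256, the server's MAX_CONCURRENT_STREAMS
	l.onSettings(0, false)
	fmt.Println(l.limit()) // 1000 when the server advertises no limit
}
```

With this shape, a burst of requests on a brand-new connection opens at most 100 streams before the first SETTINGS round trip, instead of racing ahead to the old 1000 default.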
Change https://golang.org/cl/236497 mentions this issue:
My one concern is the number of connections opened prior to upgrading the `MAX_CONCURRENT_STREAMS`. You described a situation whereby 250+ requests were processed before the initial `SETTINGS`. This would create at least 4 additional connections prior to adjusting the count. I am looking at other implementations, both client and server, to get a better idea of what defaults are being used. Envoy uses `1 << 29`, nginx uses 128, and Java went from 100 to `Integer.MAX_VALUE`. There are more examples of using a larger default over a smaller one.
I think our example is very much a worst-case scenario, being a load test specifically designed to request as many connections at startup as configured, which was 10k, resulting in 100 - 2000. Given this, ensuring we stay within the limit seems more important in real-world scenarios. If you compare this to HTTP/1, all would be individual connections, hence in the grand scheme of things 4 connections is not a huge number. So I guess the question is keep min at …
In my use case, I suddenly have thousands of requests that I must perform at once. We've lowered the default value to 1 to avoid seeing … I have seen servers in the wild that default to … I see in the code that … I see at least two possibilities: waiting for the …
Are there any restrictions that prevent setting a default value for clients in …? What if the origin server (or remote server) does have the ability to handle the concurrency, and they will never return a …? What would be the best practice on that?
Prevent the client from trying to establish more streams than the server is willing to accept during the initial lifetime of a connection by limiting `maxConcurrentStreams` to `100`, the HTTP/2 specification's recommended minimum, until we've received the initial `SETTINGS` frame from the server. Once a `SETTINGS` frame has been received, use the server's `MAX_CONCURRENT_STREAMS` if present, otherwise use `1000` as a reasonable value. For normal consumers this will have very little impact, allowing a decent level of concurrency from the start; for highly concurrent consumers or large bursts it will prevent a significant number of rejected streams being attempted, hence actually increasing performance.

Fixes golang/go#39389

Change-Id: I35fecd501ca39cd059c7afd1d44090b023f16e1e
GitHub-Last-Rev: 0d1114d3a558cefed17008aba3e4a4d7b2ad3866
GitHub-Pull-Request: golang/net#73
Reviewed-on: https://go-review.googlesource.com/c/net/+/236497
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Trust: Brad Fitzpatrick <bradfitz@golang.org>
Trust: Joe Tsai <joetsai@digital-static.net>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
What version of Go are you using (`go version`)?

Does this issue reproduce with the latest release?

Yes

What operating system and processor architecture are you using (`go env`)?

`go env` Output

What did you do?
Highly concurrent requests to an http2 endpoint.
Sample reproduction code:
In our tests we ran the above with `-max=1000` to trigger the issue, as the server has `MAX_CONCURRENT_STREAMS=256`.

What did you expect to see?
All requests succeed.
What did you see instead?
http2: server sent GOAWAY and closed the connection; LastStreamID=533, ErrCode=PROTOCOL_ERROR
Debugging so far
After doing some debugging, the issue comes down to the fact that the transport configures `maxConcurrentStreams` to 1000, and with lots of very quick requests as generated by the above, a connection ends up getting more than the max streams before it processes the `SETTINGS` frame from the server, which includes `MAX_CONCURRENT_STREAMS=256`.

The result of this is that all streams > 256 get refused with a mixture of:

RST_STREAM ErrCode=REFUSED_STREAM
GOAWAY ErrCode=PROTOCOL_ERROR

The former is handled by `http2canRetryError` but GOAWAY isn't, as there are two distinct types of `GOAWAY` errors, `http2errClientConnGotGoAway` and `http2GoAwayError`, and it's `http2GoAwayError` I'm seeing. Which begs the question: should `http2GoAwayError` be handled? Hence that fix is simply:
With this in place I now see retries don't get `GOAWAY` errors but start getting `unexpected EOF`.

Finally, if I change `maxConcurrentStreams` to 256, as desired by the server, everything works perfectly.

Reading the HTTP/2 spec, it seems valid for the client to send additional frames after its connection preface, and in particular before it has received the server's connection preface, so setting up additional streams is valid.
Given all this, I wonder if a lower `SETTINGS_MAX_CONCURRENT_STREAMS`, such as the minimum recommended by the spec of 100, should be used until a SETTINGS frame has been received, at which point it could be upgraded to either the value sent or the current default if no `SETTINGS_MAX_CONCURRENT_STREAMS` is present. This should have minimal impact on normal operation, but in highly concurrent situations it avoids the significant extra overhead of trying to set up streams which will never be accepted.

To put this into context, I've seen the Go client hit 500 streams before it processes the SETTINGS frame, so in our case, which is talking to a Cloudflare endpoint, up to half the streams need to be retried on different connections.
Thoughts?