Size http2 buffers to allow concurrent streams #67902
Conversation
/sig scalability
@kubernetes/sig-api-machinery-pr-reviews @kubernetes/sig-scalability-pr-reviews /cc @deads2k
cc @jpbetz
Thanks @liggitt ! /lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: liggitt, wojtek-t. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Automatic merge from submit-queue (batch tested with PRs 67694, 64973, 67902). If you want to cherry-pick this change to another branch, please follow the instructions here.
does it make sense to add a test for this?
I have a reproducer for a golang issue queued up, but exercising this bug requires an aggregated API server whose rest handler makes a call to another aggregated API… pretty involved for a bug I'd rather see fixed by the runtime defaults.
…2-upstream-release-1.9 [1.9] Automated cherry pick of #67902: Size http2 buffers to allow concurrent streams
…2-upstream-release-1.10 [1.10] Automated cherry pick of #67902: Size http2 buffers to allow concurrent streams
…2-upstream-release-1.11 [1.11] Automated cherry pick of #67902: Size http2 buffers to allow concurrent streams
http/2 requests from a given client are multiplexed over a single connection via streams, with data chopped into frames.
The amount of data the client is allowed to send for a given stream and for the overall connection before acknowledgement is determined by the server's MaxUploadBufferPerStream and MaxUploadBufferPerConnection settings respectively, both defaulting to 1MB.
The number of concurrent streams a client is allowed to send over a single connection is determined by the server's MaxConcurrentStreams setting, defaulting to 250.
We observed a starvation issue with the kube aggregator's proxy client if handling of a POST through the aggregator to a backend server exceeded the 1MB buffer size AND the backend server required a second POST request through the aggregator to be handled before it could drain the first request's body.
Logically, if concurrent streams are allowed in a single connection, the connection buffer should be MaxUploadBufferPerStream*MaxConcurrentStreams to allow individual streams to make progress even when one stream is blocked.
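A minimal sketch of that sizing rule, using the default stream count (250) and the per-stream buffer size this PR chooses (256KB); the function and variable names here are illustrative, not taken from the actual change:

```go
package main

import "fmt"

// connectionWindow returns the connection-level upload buffer needed so that
// every allowed concurrent stream can fill its own window without any stream
// starving the others: perStream bytes for each of maxStreams streams.
func connectionWindow(perStream, maxStreams int) int {
	return perStream * maxStreams
}

func main() {
	perStream := 256 << 10 // 256KB per-stream window, as chosen by this PR
	maxStreams := 250      // http2 server default for MaxConcurrentStreams

	fmt.Printf("per-stream window: %d bytes\n", perStream)
	fmt.Printf("connection window: %d bytes\n", connectionWindow(perStream, maxStreams))
}
```

With these values the connection window works out to 256KB × 250 = 65,536,000 bytes, versus the 1MB default that let a single blocked stream stall the whole connection.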
This PR shrinks the MaxUploadBufferPerStream size to 256KB (which is still large enough to allow all the resources we saw in our test clusters to be sent in a single frame), and grows the MaxUploadBufferPerConnection to accommodate concurrent streams. I'm also opening a golang issue, reproducer, and fix for the defaults.
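For illustration, this is roughly how such buffer sizes are wired into Go's `golang.org/x/net/http2` server; this is a hedged config sketch based on the values described above, not the PR's actual diff:

```go
package main

import (
	"net/http"

	"golang.org/x/net/http2"
)

func main() {
	srv := &http.Server{Addr: ":8443"}

	h2 := &http2.Server{
		// Default number of concurrent streams per connection.
		MaxConcurrentStreams: 250,
		// 256KB per stream, as described in this PR.
		MaxUploadBufferPerStream: 256 << 10,
		// Size the connection window so every stream can make progress
		// even while one stream's buffer is completely full.
		MaxUploadBufferPerConnection: 250 * (256 << 10),
	}

	if err := http2.ConfigureServer(srv, h2); err != nil {
		panic(err)
	}
	// srv.ListenAndServeTLS(certFile, keyFile) would follow in a real server.
}
```

Configuring the fields explicitly like this overrides the package defaults (1MB for both upload buffers), which is exactly what tying the connection window to the stream count requires.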