
Stream updates happen very slowly when clients hit the MaxConcurrentStreams of the k8s API #456

Closed
krasi-georgiev opened this issue Aug 27, 2018 · 4 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@krasi-georgiev

golang/net@1c05540
After this commit, when the client reaches MaxConcurrentStreams, any updates using the stream happen very slowly (1-2 per minute), so on systems with constant changes the client can never catch up.

Currently if the http2.Transport hits SettingsMaxConcurrentStreams for a
server, it just makes a new TCP connection and creates the stream on the
new connection. This CL updates that behavior to instead block RoundTrip
until a new stream is available.

As per the commit message, the old behaviour was that the client created a new connection; after golang/net@1c05540 it just waits until a stream becomes available.
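
For reference, newer versions of golang.org/x/net/http2 expose a StrictMaxConcurrentStreams field that switches between exactly these two behaviours. A minimal sketch of that knob follows; it is not from this issue, and it configures a plain http2.Transport rather than the transport client-go builds internally, so treat it as illustrative only:

```go
// Sketch only: configuring golang.org/x/net/http2's Transport directly.
// client-go builds its own transport, so this is not a drop-in fix for the
// kubernetes client, just an illustration of the http2 package setting.
package main

import (
	"crypto/tls"
	"net/http"

	"golang.org/x/net/http2"
)

func newHTTP2Client(tlsCfg *tls.Config) *http.Client {
	t := &http2.Transport{
		TLSClientConfig: tlsCfg,
		// false (the default): dial extra TCP connections once the server's
		// SETTINGS_MAX_CONCURRENT_STREAMS is reached (the pre-1c05540 style).
		// true: keep one connection and block RoundTrip until a stream frees up.
		StrictMaxConcurrentStreams: false,
	}
	return &http.Client{Transport: t}
}
```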

Not sure if we can have some configurable settings in the k8s client, or whether the http2 package needs to allow some tuning.

More details about the issue are in the troubleshooting of prometheus/prometheus#4528.
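
For anyone experimenting with client-side workarounds, client-go does let you wrap the transport it builds via rest.Config.WrapTransport. The sketch below is hypothetical (the inflightCounter type is made up for illustration) and only counts in-flight requests to make the pile-up visible; the actual stream-limit behaviour still lives in the http2 package:

```go
// Hypothetical sketch: wrap client-go's transport to log how many requests are
// in flight, which helps show whether the client is queueing behind the API
// server's MaxConcurrentStreams limit. Not taken from the issue.
package main

import (
	"log"
	"net/http"
	"sync/atomic"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// inflightCounter is an illustrative RoundTripper that logs the number of
// requests currently in flight before delegating to the real transport.
type inflightCounter struct {
	next     http.RoundTripper
	inFlight int64
}

func (c *inflightCounter) RoundTrip(req *http.Request) (*http.Response, error) {
	n := atomic.AddInt64(&c.inFlight, 1)
	defer atomic.AddInt64(&c.inFlight, -1)
	log.Printf("in-flight API requests: %d", n)
	return c.next.RoundTrip(req)
}

func newInstrumentedClientset(cfg *rest.Config) (*kubernetes.Clientset, error) {
	cfg.WrapTransport = func(rt http.RoundTripper) http.RoundTripper {
		return &inflightCounter{next: rt}
	}
	return kubernetes.NewForConfig(cfg)
}
```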

liggitt (Member) commented Aug 27, 2018

That certainly seems like behavior the http/2 package should allow choosing between.

liggitt (Member) commented Aug 27, 2018

See also a buffer starvation issue we just found and opened a fix for: kubernetes/kubernetes#67902

That's only relevant if the blocked requests have bodies (POST/PUT, etc.), so it may not be related to your issue.

krasi-georgiev (Author) commented Aug 27, 2018

Thanks, I will open an issue with the http library.

I haven't looked, but I think these are all GET requests, so I don't think this hits the buffer starvation bug. It all works as expected when I start minikube with:

minikube start --extra-config=apiserver.http2-max-streams-per-connection=10000

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 25, 2018