
nghttp2 alpine v1.28.0 does not load balance request #1086

Closed
truongdo opened this issue Dec 14, 2017 · 3 comments

truongdo commented Dec 14, 2017

Hi,

I'm not sure whether this is a bug in nghttp2 v1.28.0 or in the Alpine packages.
The problem I'm facing is that nghttp2 v1.28.0 does not load-balance gRPC requests across multiple backends: all requests go to one specific backend.
The problem disappears when downgrading to nghttp2 v1.22.0.

Here is the alpine repository that I used to install nghttp2:
https://pkgs.alpinelinux.org/packages?name=nghttp2&branch=&repo=&arch=&maintainer=

tatsuhiro-t (Member) commented:

I think f507b5e changes the behaviour.

In short, the current implementation prefers an existing backend connection. HTTP/2 multiplexes requests over a single connection, and each connection has a limit on the number of concurrent requests (streams).
Currently nghttpx reuses the existing connection as long as it has not saturated its concurrent-stream limit.
Keeping and establishing multiple TCP connections is a heavy burden, and load testing shows that this algorithm performs about 3 times faster.

But if a backend server is slow, and/or the backend server advertises a large concurrent-streams limit (e.g., 1000), then it is possible that only one server is ever used.

I'm OK to revert the change if it defeats the purpose of load balancing.

truongdo (Author) commented:

In my test setup, I have two backends: one with a very good connection to nghttpx and one with a slow connection. I haven't tested it carefully, but it seems that all requests go to the one with the good connection.

You mentioned that the current implementation is 3 times faster than before, but if all requests go to one backend, that is a much bigger problem. So I think it is better to revert the change.

tatsuhiro-t (Member) commented:

Reverted a4e27d7
