Feature Request : Load balancing - Proxy - Cluster #775
H2O is an amazing tool!
Today it can challenge Varnish with its cache-aware server-push feature (http2-casper), which moves cache logic and memory consumption to the client. That is a great improvement.
Tomorrow, with the ability to proxy requests to a cluster of backends, it could even challenge load balancers like HAProxy or Nginx.
Here is a proposal for a very basic and naive cluster declaration:
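(The original configuration block did not survive here; the following is a hypothetical sketch of what such a declaration might look like. The `cluster` directive and the `cluster:` URL scheme do not exist in H2O; every name below is invented for illustration.)

```yaml
# Hypothetical syntax -- "cluster" is NOT a real H2O directive.
cluster:
  my-backends:
    - "127.0.0.1:8081"
    - "127.0.0.1:8082"
    - "127.0.0.1:8083"

hosts:
  "example.com:443":
    paths:
      "/":
        # proxy to the cluster instead of a single backend URL
        proxy.reverse.url: "cluster:my-backends"
```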
Bonus: we could even listen on a cluster:
...But that supposes to separate
Here is an alternative, more extensible but also more verbose syntax:
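(Again, the original block is missing; a hypothetical sketch of the more verbose, per-node syntax being proposed — every key is invented, and the `node` entries match the naming discussed in the note below:)

```yaml
# Hypothetical, more extensible syntax -- none of these keys
# exist in H2O; hosts, ports and weights are illustrative.
cluster:
  my-backends:
    node:
      - host: "127.0.0.1"
        port: 8081
        weight: 2
      - host: "127.0.0.1"
        port: 8082
        weight: 1
```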
Note: I wrote "node", but it could be "server", "entry", or whatever is more semantic.
Thank you for the suggestion.
FWIW, you can configure your DNS server to return multiple IP addresses (in your case
But I agree that we can improve this a lot by directly supporting features for load balancing within H2O.
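(A sketch of the DNS-based workaround being suggested, assuming invented hostnames and addresses: publish several A records for one backend name, then point `proxy.reverse.url` at that name so resolution spreads connections across the addresses.)

```yaml
# DNS zone (illustrative):
#   backend.example.com.  60  IN  A  10.0.0.11
#   backend.example.com.  60  IN  A  10.0.0.12
hosts:
  "example.com:80":
    paths:
      "/":
        # the name resolves to multiple addresses (round-robin DNS)
        proxy.reverse.url: "http://backend.example.com:8080/"
```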
Thank you for the clarification.
My understanding is that the general answer for load-balancing a set of server processes on the same machine is to create a single daemon (governed by systemd, etc.) that binds to a Unix socket file and then forks multiple worker processes listening on the same Unix socket. That way, idle worker processes can pick up incoming connections.
I'd suggest taking that approach, since it would improve the responsiveness of your web site compared to the current one.
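(A minimal sketch of the H2O side of that setup, assuming an invented socket path; H2O's `proxy.reverse.url` accepts a `http://[unix:/path]/` form for Unix sockets. The application daemon, managed by systemd, binds the socket and forks workers that all accept on it.)

```yaml
hosts:
  "example.com:80":
    paths:
      "/":
        # forward to the local daemon's Unix socket;
        # idle workers behind it pick up the connections
        proxy.reverse.url: "http://[unix:/var/run/app/app.sock]/"
```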
This is a roadblock that keeps me from using H2O in production instead of Nginx. I want to take advantage of the HTTP/2 server-push feature H2O offers. Right now, I have this setup in Nginx that needs to be reproduced in H2O:
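(The commenter's actual configuration was not included; the following is only an illustrative Nginx `upstream` setup of the kind being described — names, ports, and addresses are invented.)

```nginx
# Illustrative only -- not the commenter's real configuration.
upstream backends {
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
}

server {
    listen 443 ssl http2;

    location / {
        # load-balance across the upstream group
        proxy_pass http://backends;
    }
}
```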
What solution do you recommend?
I like the solution with
Now imagine if it could also check whether each proxy.reverse.url host is alive and/or retry in round-robin fashion given a 5-second timeout or, in my dreams, even send two requests to distinct hosts and respond with the quickest ;)
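(A hypothetical sketch of what such health-check and balancing options might look like in H2O's config — none of these keys exist today, they only illustrate the request above:)

```yaml
# Hypothetical extension -- no such keys exist in H2O.
proxy.reverse.url:
  backends:
    - "http://10.0.0.11:8080/"
    - "http://10.0.0.12:8080/"
  balance: round-robin
  health-check:
    interval: 5   # seconds between liveness probes
    timeout: 5    # give up and retry the next backend
```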
+1 for @dtruffaut's original topic, because H2O's performance appears to be throttled by other proxy applications placed in front of it, as is evident from the benchmarks here: https://www.techempower.com/benchmarks/#section=data-r15&hw=ph&test=fortune
@kazuho san: no matter how fast H2O is as a server, if one has to deploy a slower frontend load balancer, the entire purpose of H2O as a backend server gets defeated, right?! Thus, this feature is very desirable for production deployment.