Modifying/Replacing Swarm Mode Load Balancer Service #23813
Comments
I think you're supposed to have a single load balancer container in your swarm cluster. This way it'd always go to this container, since there's only one.
Ah, but requests from the load balancer would still go to different hosts...
ping @mrjana
SRV records from the docker daemon's DNS would allow one to build such an external router. So would a routing policy framework/plugin, one imagines.
But isn't it TCP-level load balancing?
As opposed to a Layer 7 load balancer, you have no control over it.
It's also called "persistence" or "affinity":
I was assuming Layer 7, as that is what I am used to building (and it would require externally accessible ports), but if IPVS supports stickiness (and I would be surprised if it did not), there should be a relatively easy way to configure it.
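IPVS does in fact ship a source-hashing scheduler (`sh`) that gives this kind of stickiness, although swarm does not expose it. The core idea can be sketched in a few lines of Python; the backend addresses below are made up for illustration:

```python
import hashlib

def pick_backend(client_ip, backends):
    """Hash the client's source IP so the same client always maps to
    the same backend (source-IP stickiness / session affinity)."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]

backends = ["10.0.0.3:80", "10.0.0.4:80", "10.0.0.5:80"]
# The same source IP is always routed to the same backend.
print(pick_backend("203.0.113.7", backends))
```

As long as the backend list stays the same, the mapping is stable; real schedulers add weights and connection counts on top of this.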
It might be a good idea to provide an explicit API endpoint in addition to DNS for service discovery. While DNS works, it can cause problems for many clients that don't obey TTLs. On the other hand, one could build tools around grabbing the list of backends from SRV records directly, but that may not work well for applications that expose multiple ports.
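As a sketch of what a tool consuming SRV records might do (the record name and task targets here are hypothetical), turning answers into a backend list is straightforward:

```python
def srv_to_backends(answers):
    """Turn SRV answer tuples (priority, weight, port, target) into a
    list of (target, port) backends, lowest priority first."""
    return [(target, port) for _, _, port, target in sorted(answers)]

# Hypothetical answers to a query for _web._tcp.myservice
answers = [
    (10, 5, 8080, "task2.myservice"),
    (10, 5, 8080, "task1.myservice"),
]
print(srv_to_backends(answers))
```

The multiple-ports problem shows up here: a service exposing several ports needs one SRV name per port, which is exactly why named ports help.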
We're using a custom consul + nginx setup for load balancing. Making the whole load balancing component of swarm optional would do the trick for us.
We would just need to expose more of IPVS's scheduler options through the API.
@kelseyhightower I agree that API endpoints on top of DNS SRV records would be a real nice-to-have (a la Consul). As for services exposing multiple ports, I am drawing a blank coming up with an elegant solution. I would like to hear what the Docker folks have to say on it.
@BenHall We would very much like to support sticky sessions (based on source IP), but it just isn't part of the API yet. In the meantime you could easily configure your service to run in DNS RR mode (so when you do an A query for the service name you get all the backend IPs) and then stand up an nginx that uses that information to choose your backends however you like.
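A minimal sketch of that pattern follows; the service name and IPs are hypothetical, and the assumption is that with DNS RR an A query for the service name returns every task IP rather than a single VIP:

```python
import itertools
import socket

def resolve_backends(service_name):
    """With DNS RR endpoint mode, an A query for the service name is
    assumed to return all task IPs instead of a single VIP."""
    _, _, ips = socket.gethostbyname_ex(service_name)
    return ips

def round_robin(ips):
    """Cycle through the resolved IPs; an nginx upstream does the same."""
    return itertools.cycle(ips)

rr = round_robin(["10.0.0.3", "10.0.0.4"])
print([next(rr) for _ in range(4)])
```

In practice nginx would be configured with the resolved IPs as an `upstream` block and reloaded when the task set changes.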
@dweomer We do have an option to provide a name for ports, so we can do an SRV lookup on _<port_name>._tcp.<service_name> or retrieve it using APIs. So yes, it is possible to support multiple ports in the future. The design is already in place; it just isn't exposed in the API/UI yet.
@mrjana what about swarm's routing mesh? How can it be disabled in swarm mode? As far as I understand from the demo, even if you choose a static IP address (provided by DNS) it would still balance packets between multiple containers. Did I misunderstand something?
@Vanuan You could do two things.
With that, you can drive the L7 LB configuration by querying the service name (of your actual service). Long story short, you don't have to use the swarm's routing mesh if you don't need it.
@mrjana Is there some documentation on how to use the VIP allocated for the service? Specifically, how to use this information with external LBs? I could run IPVS on the master node itself, so it would be nice to see an example that ties the VIP information seen in docker service inspect to ipvsadm commands. Additionally, I would like to try an L4 LB, like F5 or ELB on AWS, but in all those cases the VIP doesn't make much sense. I can configure external LBs with my worker nodes' private IPs and the service port to load balance traffic. That should work fine, right? Again, I would like to understand the VIP as seen in docker service inspect and its association with ELB configuration as I mention here.
@rhim The VIP is only useful within the cluster. It has no meaning outside the cluster because it is a private non-routable IP. The routing mesh uses port-based service discovery and load balancing, so to reach any service from outside the cluster you need to expose ports and reach them via the PublishedPort. If you would like to use an L7 LB, you need to point it to any (or all, or some) node IPs and the PublishedPort. This is only if your L7 LB cannot be made part of the cluster. If the L7 LB can be made part of the cluster, by running the L7 LB itself as a service, then it can just point to the service name itself (which will resolve to a VIP). I feel this is such a common question that a separate blog article is needed to clarify things and explain recipes on how exactly to integrate an external LB with the services in the cluster.
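For the "point the external LB at node IPs and the PublishedPort" recipe, here is a hedged sketch of the selection logic an external LB performs; the node IPs, port, and health probe are all placeholders:

```python
import random

SWARM_NODES = ["10.1.0.11", "10.1.0.12", "10.1.0.13"]  # placeholder node IPs
PUBLISHED_PORT = 8080                                   # the service's PublishedPort

def is_healthy(node_ip):
    # Placeholder: a real external LB would probe node_ip:PUBLISHED_PORT here.
    return True

def pick_target():
    """Pick any healthy node; the routing mesh forwards from that node
    to a task, wherever in the cluster the task actually runs."""
    candidates = [n for n in SWARM_NODES if is_healthy(n)]
    return random.choice(candidates), PUBLISHED_PORT
```

The point of the recipe is that any node works as an entry point, so the external LB only needs node-level health checks, not task-level ones.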
@mrjana: the blog post will be fantastic!
@mrjana: I don't really feel comfortable with having a single point of failure in an otherwise highly available cluster infrastructure. Isn't there a way to address a single container out of a service from outside the cluster? (Or to disable the routing between nodes for some services, so you could run the LB as a global service and use the nodes' IP addresses in the external LB.) This would allow multiple load balancers inside the cluster while still ensuring sticky sessions.
It'd be cool if Docker had a built-in, simple on/off option for sticky sessions based on IP address.
Check out https://github.com/stevvooe/sillyproxy for an example of L7 load balancer integration.
Question on SO requesting this: http://stackoverflow.com/questions/41587128/browser-services-container-in-docker-swarm-mode/41609205#41609205. I've also noticed a few open source projects starting (https://github.com/vfarcic/docker-flow-proxy). Is there any roadmap/timeline for this?
Docker 1.13 introduces a `mode=host` option for publishing a service's ports. Ports published in host mode are bound directly on the node where the task runs, bypassing the routing mesh. Keep in mind that, as a consequence, only a single task of that service can run on a node. On Docker 1.13 and up, the following example creates a service whose tasks publish container port 80 as port 8080 directly on the host they run on:

```
docker service create \
  --name=myservice \
  --publish mode=host,target=80,published=8080,protocol=tcp \
  nginx:alpine
```

Contrary to tasks that publish ports through the routing mesh, the port mapping shows up in `docker ps` on the node where the task runs:

```
CONTAINER ID   IMAGE                                                                          COMMAND                  CREATED         STATUS         PORTS                           NAMES
acca053effcc   nginx@sha256:30e3a72672e4c4f6a5909a69f76f3d8361bd6c00c2605d4bf5fa5298cc6467c2   "nginx -g 'daemon ..."   3 seconds ago   Up 2 seconds   443/tcp, 0.0.0.0:8080->80/tcp   myservice.1.j7nbqov733mlo9hf160ssq8wd
```

Hope this helps
Thanks for the update! This might be getting slightly off topic (happy to create a separate issue instead), but I'm hoping it will save someone else from running into the same problem. I tried your suggestion on Docker for AWS.
The container starts as expected and the port mapping looks ok to me,
but I cannot access the web server from the worker node.
The web server is working though. Did I miss anything?

UPDATE: It works if I curl from the manager node instead. I recall something about Docker for AWS running the shell in a container; I suppose that affected the outcome.
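For debugging cases like this, a small TCP reachability check (pure stdlib, making no assumptions about the swarm setup) tells you whether the published port accepts connections from a given vantage point:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. compare port_open("<node-ip>", 8080) run from the worker vs. the manager
```

Running the same check from different nodes quickly narrows the problem down to host firewalling or, as here, the environment the shell itself runs in.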
@datacarl thanks for the report; I'm not completely familiar with the docker for aws internals, so could you open an issue in the Docker for AWS issue tracker? https://github.com/docker/for-aws/issues |
Sure! docker-archive/for-aws#7 |
@thaJeztah thanks a lot for the `mode=host` option
@gabrielfsousa have you tried it?
Didn't test it on stack deploy.
@Vanuan Can you please explain if both your tasks are required for your workaround solution?
I need to disable the docker swarm LB as well. My situation is this:

external Nginx LB for 3 docker swarm nodes <-> jwilder/proxy nginx service containers with SSL termination <-> nginx web head service containers, one per docker swarm node

I cannot use host mode because I have over 50 instances of the same service listening on each node; they are all separate apps, and host mode only allows one instance of a port to be exposed on each node. Any ideas on how to disable the docker swarm LB in this case? I need the docker swarm node endpoint to send the request to the container on that exact docker swarm node and not LB to another node, regardless of container health.

I am running into an issue where the docker swarm LB sends requests to healthy containers that respond with 504 gateway errors. These are likely due to jwilder/proxy, but force-updating the service fixes the 504 errors (even though all the containers in the service were reporting healthy).
@dreusskis I think this is meant for @mrjana |
Please, I need advice or suggestions on how to achieve this setup using Docker swarm to orchestrate containers.

I am placing 3 containers serving as an Nginx/Haproxy load balancer / API gateway, each on a manager node running with the host network, alongside a Consul server; with the help of Consul-template and Registrator, the Nginx configuration file can be reloaded to hold the list of healthy container IPs and ports. The API gateway, which runs with the host network, needs to join each and every existing application overlay network, given that I have different sets of overlay networks for specific kinds of microservice containers to attach to. Now the API gateway can reach out to web server containers of different microservices running on other hosts that are in their own respective overlay networks.

I want to be able to modify the swarm load balancer IPVS to send HTTP requests to the API gateways on the manager nodes, and replies from microservices through their web servers should be sent directly to the user via the gateway on the manager nodes; some sort of direct server return. And when one microservice wants to communicate with another, it should be an HTTP request that comes in like an external request: it hits the swarm IPVS load balancer and passes through the normal process to an available API gateway, then to an available web server for the very microservice it wants to reach.

I need a direction to enable me to set up this kind of configuration.
Let me close this ticket for now, as it looks like it went stale. |
Is it possible to modify or replace the load balance service in Swarm Mode?
Motivation: One of my applications requires "sticky sessions". Bouncing around between the nodes would cause it errors.
Ideal World: I can use the `docker service` functionality to customise the routing setup, allowing for sticky sessions.

Less Ideal World: Some way I can combine the `docker service` command line with nginx-proxy or similar, allowing me to define my own routing approach.