Drain a backend server #41
Many load-balancers have this feature. Example for AWS: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/config-conn-drain.html
I would like to be able to set `/backends/awesomebackend/servers/server2/drain` to `1`.
The effect would be that no new clients are sent to this server. Ideally, the proxy should report back when no existing clients remain.
A deployment server could launch the new environment, set the old environment to `drain = 1`, and that way we could deploy without disturbing existing connections.
This sounds a bit out of scope for an auto-configured reverse proxy, but if you're going to implement #5, then maybe consider this as well.
A workaround currently possible (at the price of increased latency) is to enable retrying and make sure there are enough surplus servers available to compensate for the "draining" server.
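The retry part of that workaround can be enabled in Traefik's static configuration. A minimal sketch of a v1 `traefik.toml` excerpt (the attempt count is illustrative):

```toml
# Retry a failed request against other servers of the same backend.
[retry]
# Number of attempts before giving up (here: one retry after the first failure).
attempts = 2
```

With enough healthy surplus servers, requests that land on the draining server get re-dispatched, at the cost of the extra round trip.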
Specific to the Kubernetes provider, the canonical way to achieve draining is to have a readiness probe implemented by the backing pods. Support from Traefik won't be necessary.
Marathon also supports readiness probes (called readiness checks there), but some degree of support might be needed in Traefik.
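For the Kubernetes case, a sketch of what such a readiness probe might look like on a backing pod (all names, ports, and paths here are illustrative, not from this issue):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: awesomebackend
spec:
  containers:
  - name: app
    image: example/app:latest
    ports:
    - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 5
```

When the probe starts failing, the pod is removed from its Service's endpoints, so Traefik (which routes to endpoints) stops sending it new traffic without any drain-specific support.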
Would it be possible to add a `--drain=true` flag to a Docker swarm service which Traefik picks up and stops routing new requests to?
I'm wondering what the current status of this issue is.
Is there a way to achieve this using the current version of Træfik, or are there any plans on implementing this functionality?
My use case is Docker Swarm.
I just looked through the source code for the Docker provider and I noticed this comment https://github.com/containous/traefik/blob/master/provider/docker/docker.go#L159.
I was thinking about implementing a Docker events listener in Træfik, replacing the current mechanism that polls the list of services every 15 seconds (by default), assuming no one is already working on this issue.
I'm just wondering whether building the support right into the Docker provider is the right way to go about this, or whether a more generic solution is needed (if that's at all possible to do properly)?