Swarm does not consider health checks while updating #23962
What happens if you |
@cpuguy83 |
@otbe So it sounds like swarm is not taking into account the "starting" state specifically. |
@cpuguy83 yes, I think so. That's what I observe while using |
Testing this myself, I'm not sure that swarm is taking into account healthchecks at all atm. |
It is not mentioned explicitly in the docs, but it was my expectation. :) When exiting with 1 it works for me. Swarm will reschedule unhealthy containers. This is my most wanted feature because it allows native zero downtime deployments for the first time. |
It's also important for update to take into account health check because if the image code has a bug you now just released an unhealthy image to your whole service which means downtime. It would be nice if update would quit if a container gets an unhealthy status response to avoid this scenario. |
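For reference, later Docker releases added update flags that address exactly this concern; a hedged sketch (the flag names come from releases after the 1.12 timeframe of this thread, and `myimage`/`web` are made-up names). The command is only assembled and printed here, since actually running it requires a Docker daemon in swarm mode:

```shell
# Hypothetical invocation; --update-failure-action arrived after Docker 1.12,
# and the image/service names are made up for illustration.
# "pause" halts the rollout as soon as an updated task fails its health check,
# instead of pushing the broken image to every replica.
FLAGS="--update-parallelism 1 --update-failure-action pause"
echo "docker service update $FLAGS --image myimage:v2 web"
```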
@otbe maybe you could try As far as I know, instead of checking health_status of the container (e.g. |
@runshenzhu |
If @runshenzhu is right and the 'move-forward' indicator is |
I built docker from master, which should include the PR (moby/swarmkit#1122) mentioned, but it still takes down the containers, no matter the health-check status.... :(
I created a stack like this:
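The stack file itself was lost from the thread; a hypothetical sketch of what it may have looked like (service name, image tag, and the curl-based check are assumptions; the compose v3 `deploy`/`healthcheck` syntax shown here landed shortly after 1.12, which is plausible given the build from master):

```yaml
# Hypothetical stack file; all names and values are made up for illustration.
version: "3"
services:
  web:
    image: myimage:v1
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/"]
      interval: 5s
      timeout: 3s
      retries: 3
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 5s
```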
if I then update the service:
It does not wait for the health-check, but iterates through all tasks while the one at hand is still in |
@ChristianKniep health check in Docker's built-in swarm mode is different from that in swarmkit. It hasn't been enabled in Docker's built-in swarm mode yet. The first step is to implement health checks in the container's running state, as described in #24139 |
@runshenzhu Ahh, OK... I thought that this change would directly affect the engine, i.e. that it is a dependency of the engine, but understood. Thanks for the clarification. |
@runshenzhu: Will this be fixed by #24545? |
@aaronlehmann Yes. Once #24545 gets merged, swarm service updating will take health check into account. |
@runshenzhu: Great! I went ahead and edited #24545 to include "Fixes: #23962" so that this issue will be closed automatically when that PR is merged. |
@aaronlehmann oh, thanks! I didn't know about this great GitHub feature. |
Hi,
I'm playing around with Docker 1.12 and the HEALTHCHECK directive in Dockerfiles. What I do is:
docker swarm init
Dockerfile
check_running.sh
Build some images from this Dockerfile:
Start a service with 5 replicas
Wait some time (5s) until all replicas are healthy. Now I want to update the image to v2. The update procedure should be rolling with a parallelism of 1. So I run:
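The commands behind these steps were lost from the thread; a hypothetical reconstruction (the image tag, service name, and build context are assumptions — only the 5 replicas, parallelism of 1, and 5s delay come from the text above). The commands are only assembled and printed here, since executing them needs a Docker daemon in swarm mode:

```shell
# Hypothetical reconstruction; "myimage" and "web" are made-up names.
IMAGE_V1="myimage:v1"
IMAGE_V2="myimage:v2"
SERVICE="web"

echo "docker swarm init"
echo "docker build -t $IMAGE_V1 ."
echo "docker service create --name $SERVICE --replicas 5 \
  --update-parallelism 1 --update-delay 5s $IMAGE_V1"
echo "docker service update --image $IMAGE_V2 $SERVICE"
```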
From my understanding swarm should execute the update based on this algorithm:
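The algorithm itself was also lost from the thread; presumably it was along these lines (a sketch of the reporter's expectation, not Docker's documented behaviour at the time):

```
for each batch of <parallelism> tasks:
    stop the old task(s)
    start replacement task(s) with the new image
    wait until each replacement's health check reports "healthy"
    only then move on to the next batch
```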
But what I get is:
Sometimes all replicas are in starting state, which results in downtime for this service. My expectation was that I could get "native" zero-downtime deployments with HEALTHCHECK and rolling updates on my swarm cluster. :) What's wrong with my attempt?
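The check_running.sh referenced in the steps above was also elided; a minimal sketch of such a health-check script, assuming the app writes its pid to a pidfile (the pidfile path and function name are made-up conventions, not from the issue):

```shell
# Hypothetical check_running.sh: report healthy (exit 0) when the process
# recorded in the pidfile is alive, unhealthy (non-zero exit) otherwise.
check_running() {
    pidfile="${1:-/tmp/app.pid}"
    # kill -0 sends no signal; it only tests whether the pid exists.
    [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null
}
```

In the Dockerfile this would be wired up with something like `HEALTHCHECK --interval=5s CMD /check_running.sh`, so the container flips from `starting` to `healthy` or `unhealthy` based on the script's exit code.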
Thanks!
Update: Sorry for closing/opening this issue. My mobile is somehow broken today :/