depends_on Should Obey {{ State.Healthcheck.Status }} Before Launching Services #3754
Yes, we should probably do this. It will involve:
There are some important questions to answer, e.g.
I like the idea of not joining a network until healthy as well; that would allow containers to be auto-joined via the service DNS record. Is it possible to also have a mechanism where containers leave the service DNS record if the healthcheck is failing? I think this part might be a question for Docker Engine.
Yes, that's the problem with implementing it in Compose - its state can't be monitored. I think that suggests we shouldn't look at a container's health before connecting it to the network.
I think that the health state of a service is separate from whether it is connected to the network. A service might need to connect to other services first before it becomes healthy. The health check should also be done at the service level, not the container level. It makes more sense now to think in terms of services than containers, especially with Docker 1.12 (Service has become a first-class citizen). A consumer of the service should not have to care whether the service is made up of 1, 2, or dozens of containers. What counts is that the service as a whole is considered ready to accept requests.
Quoting @dnephin in #3815 (comment):
I agree, so the next step here would be to open a PR against Engine to implement that functionality if it doesn't already exist. It's important to get it into Engine first and as early as possible, because we have a policy of supporting at least two versions of Engine. If we can get the API endpoint into Engine 1.13, then we can get healthcheck support into Compose 1.10.
What about using the event system to do that? You can filter based on health_status events.
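As an illustrative sketch (not from the thread), consuming those events with docker-py might look like this; `health_status` is the event type the Engine emits whenever a container's health flips:

```python
import docker

client = docker.from_env()

# Stream only health transitions; each event's "status" field reads
# "health_status: healthy" or "health_status: unhealthy".
for event in client.events(decode=True, filters={"event": "health_status"}):
    name = event["Actor"]["Attributes"].get("name", "<unknown>")
    print(name, event["status"])
```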
I tweaked service.py and parallel.py and was able to make one container wait for another until it is healthy. Basically, every container that has dependencies on other containers (as I see it, dependencies are inferred from links, volumes_from, depends_on... see the get_dependency_names() method, line 519 in service.py) will wait until those containers are healthy. Regarding the API: docker-compose uses docker-py, and the health check can be performed as follows (in service.py):
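A hedged sketch of such a check (the method name and placement are assumptions, not the commenter's actual code; `self.client` is the docker-py APIClient a Compose service carries):

```python
def is_healthy(self, container_id):
    # Inspect the container and read its health state, if any.
    state = self.client.inspect_container(container_id)['State']
    health = state.get('Health')
    if health is None:
        # No HEALTHCHECK defined: fall back to the plain running flag.
        return state.get('Running', False)
    return health['Status'] == 'healthy'
```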
Then in parallel.py I just added one more check before firing the producer for the object. The iteration for the pending loop now looks like this:
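A hedged sketch of that idea (all names here are illustrative, not Compose's actual internals): the producer fires only once every dependency has both finished and reports healthy.

```python
def feed_pending(pending, finished, get_deps, is_healthy, producer):
    # One pass over the pending objects: start anything whose
    # dependencies have all completed AND report as healthy.
    for obj in list(pending):
        if all(dep in finished and is_healthy(dep) for dep in get_deps(obj)):
            pending.remove(obj)
            producer(obj)
```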
Can anyone from the maintainers comment on it? It works with the up command, and the down command also has no issues. However, might it break something?
This test case hangs =\
I don't think we need to use the healthcheck for connecting containers to networks. I think it would be good to only use healthchecks for depends_on.
Sounds like a good idea @dnephin
Is anyone working on this issue?
Any progress on this?
This issue should be re-opened, as it is not working:
When executing:
this reports as if the RabbitMQ server is not started, even though the container status of the RabbitMQ service shows it as healthy.
@lucasvc the healthchecks will not help you ensure the start-up order of your system. That is a design decision: system components should be implemented in a way that they can reconnect/retry if something is not up (or has died). Otherwise, the system will be prone to cascading failures. P.S. The issue will not be re-opened; the feature is behaving as expected.
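As an illustration of that reconnect/retry advice, with a hypothetical `connect` callable standing in for whatever client the component actually uses:

```python
import time

def connect_with_retry(connect, attempts=30, delay=2.0):
    # Keep retrying the dependency instead of relying on start-up order.
    for _ in range(attempts):
        try:
            return connect()
        except ConnectionError:
            time.sleep(delay)  # not up yet (or died); back off and retry
    raise RuntimeError("dependency never became available")
```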
@earthquakesan I must be missing something; the OP states:
what is the feature doing for us now?
Also, to be clear, we're not using this feature to keep our applications waiting on their dependencies. I have a docker-compose.yml defining an app under test, a database, and a "tester" container that tests the app. The app under test will handle waiting for and reconnecting to the database; that's no problem. The issue is the "tester" container: I'd like for it to just run once the app is healthy. It seems unnecessary for a tester container, which is just running a test suite, to implement its own retry logic. Hope this makes sense.
Please refer to the docs on how to declare health check dependencies:
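The docs in question cover the compose file format 2.1 condition syntax, which looks roughly like this (image names and the healthcheck command here are illustrative assumptions):

```yaml
version: "2.1"
services:
  rabbitmq:
    image: rabbitmq:3-management
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
  app:
    image: myapp   # hypothetical application image
    depends_on:
      rabbitmq:
        condition: service_healthy
```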
It's unfortunate they pulled the condition form of depends_on from the v3 file format.
@ags799 I've been following the discussions here and the timeline was as follows:
As the most up-to-date (and the best) way to deploy docker-compose files is with swarm mode, where depends_on conditions are not honored, here is a better "wait-for-it" version. The code is licensed under MIT, so feel free to reuse it.
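A minimal Python stand-in for the wait-for-it idea (a sketch, not the linked script): block until a TCP endpoint accepts connections, then exec the real command.

```python
import os
import socket
import sys
import time

def wait_for(host, port, timeout=60.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, int(port)), timeout=2):
                return  # the port is accepting connections
        except OSError:
            time.sleep(1)
    sys.exit("timed out waiting for {}:{}".format(host, port))

if __name__ == "__main__":
    # usage: wait_for.py HOST PORT CMD [ARG...]
    host, port, *cmd = sys.argv[1:]
    wait_for(host, port)
    os.execvp(cmd[0], cmd)  # hand off to the real entrypoint
```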
@shin- about
Why should I maintain two healthchecks, when
I'm not sure what you mean; there's no need to maintain two healthchecks.
Yes, the
Waiting for a container to become ready does seem like it should be native functionality. Including a wait script in every image feels like a workaround.
It would need something like Kubernetes readinessProbes, but as "waitForItProbes", which doesn't really solve the problem: if container A dies/restarts after signaling "ready" to container B, then container B will crash anyway because A is not available. I suggest closing this issue, because the only way to "do" this is to have "wait-for-it.sh" or a similar setup.
Hello! Where can I read about this secret knowledge? Maybe an issue, a blog post, or official docs? Thanks!
Is it possible for the depends_on parameter to wait for a service to be in a "Healthy" state before starting services, if a healthcheck exists for the container? At the moment in 1.8.0-rc2, the depends_on parameter only verifies that the container is in a running state, via {{ State.Status.running }}. This would allow for better dependency management in docker-compose.