
Add support for service health checks #827

Closed

Daniel-ltw opened this issue Apr 27, 2017 · 11 comments · May be fixed by #4444 or #1325

Comments

@Daniel-ltw

Daniel-ltw commented Apr 27, 2017

There are options on docker service that would be useful to users but are not currently available in Portainer:

--health-cmd          Command to run to check health
--health-interval     Time between running the check (ns|us|ms|s|m|h)
--health-retries      Consecutive failures needed to report unhealthy
--health-timeout      Maximum time to allow one check to run (ns|us|ms|s|m|h)

Could we please have them, so that rolling updates work better for users?
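
For reference, this is how those flags are used on the Docker CLI today; the service name and health command below are purely illustrative:

docker service create --name web \
  --health-cmd "curl -f http://localhost/ || exit 1" \
  --health-interval 30s \
  --health-timeout 5s \
  --health-retries 3 \
  nginx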

@Daniel-ltw
Author

Daniel-ltw commented Apr 27, 2017

As suggested by @deviantony, I have opened this as a new ticket.

#825

@deviantony changed the title from "New service options" to "Add support for service health checks" on Apr 28, 2017
@deviantony
Member

I've created a separate issue for the support of update-monitor in #829

@akusei

akusei commented Jun 13, 2020

This seems like a fundamental feature that's missing from Portainer. Will it be added? This issue has been open for three years, and there's still no way to add custom health checks or adjust intervals.

I just started using Portainer, and based on this issue, the number of other outstanding issues, the fact that this one has been stale for three years, and the large number of unmerged PRs, I'm a little hesitant to actually use it in my environment.

@ncresswell
Member

ncresswell commented Jun 13, 2020 via email

@akusei

akusei commented Jun 13, 2020

The docker healthcheck functionality isn't considered a core function of docker? I wouldn't consider that an enhancement at all. Kubernetes support, on the other hand, I would consider more of an enhancement than healthchecks. You guys have done an amazing job with this, but without feature parity with Docker, a Docker management tool just won't work in anything more than a fairly simple deployment.

@deviantony
Member

We're now considering this issue and will add it to our backlog.

Requirements

  • Add the ability for a user to configure a healthcheck when creating or updating a service

  • Add the ability for a user to edit an already existing healthcheck associated with a service

  • Display the status of a healthcheck for services

Required changes

Service list view

Introduce a new Healthcheck column in the service datatable

(mockup screenshot: service list with the new Healthcheck column)

If the service does not have a configured healthcheck, display "-". If the healthcheck is OK, display a green Healthy label; if it is KO, display a red Unhealthy label.

Service creation view

Update the Command & Logging tab and add a new Healthcheck section:

(mockup screenshot: Command & Logging tab with the new Healthcheck section)

This section should be composed of:

  • An information message with the following text: "Configuring a healthcheck for this service will allow Swarm to automatically restart a container marked as failing."

  • Healthcheck input (text input)

  • Interval input (text input, should support human friendly duration format e.g. 30s, 5m...)

  • Timeout input (text input, should support human friendly duration format e.g. 30s, 5m...)

  • Retries (number input)

Service details view

  • Introduce a new Healthcheck link to the quick navigation panel below Restart policy

(mockup screenshot: quick navigation panel with the new Healthcheck link)

  • Introduce a new panel in the view allowing the user to configure the healthcheck (see mockup below for style and texts)

(mockup screenshot: healthcheck configuration panel in the service details view)
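
For context (this mapping is not part of the spec above, and the service name and command are illustrative), these fields correspond to the existing Docker CLI flags, so configuring them through the UI would be equivalent to:

docker service update \
  --health-cmd "curl -f http://localhost/ || exit 1" \
  --health-interval 30s \
  --health-timeout 5s \
  --health-retries 3 \
  my-service

The interval and timeout flags already accept the human-friendly duration format mentioned above (30s, 5m, ...).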

@Xuorg
Contributor

Xuorg commented Nov 10, 2020

@deviantony To determine the healthcheck status, we need to compute it from the statuses of all of a service's containers, and we can only fetch all Swarm containers through the agent proxy. Which behaviour do you want for non-agent connections?
Also, the unhealthy colour is orange throughout the app, while your spec uses red for unhealthy. Should I keep red as specced, or use orange for consistency?

@deviantony
Member

deviantony commented Nov 10, 2020

To determine the healthcheck status, we need to compute it from the statuses of all of a service's containers, and we can only fetch all Swarm containers through the agent proxy. Which behaviour do you want for non-agent connections?

@Xuorg actually, ignore my mockup about showing the status of the healthcheck associated with the service; it's irrelevant in Swarm.

The entire purpose of the Swarm healthcheck is to let Swarm reschedule tasks when one becomes unhealthy.

I think the best approach would be to provide a visual indicator showing whether a healthcheck is actually set on a service. I would recommend re-using something similar to the resource pool UI/UX for that:

(mockup screenshot: healthcheck indicator on the service, similar to the resource pool UI)

The unhealthy colour is orange throughout the app, while your spec uses red for unhealthy. Should I keep red as specced, or use orange for consistency?

This is now irrelevant and can be ignored.
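
As an aside (not from the thread; the service name is illustrative), this rescheduling can be observed from the CLI. Tasks whose containers fail their healthcheck are shut down and replaced, which shows up in the task history:

docker service ps my-service

and the health of an individual container can be read with:

docker inspect --format '{{.State.Health.Status}}' <container-id>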

@joefernwright

Hi dev team, when can this be made available?

My user story:
I am not able to use the --no-healthcheck option when running a Node-RED container via Portainer. When I execute the following via SSH, the --no-healthcheck option works fine, which means the container gets listed in Portainer and runs without the healthcheck enabled (status "running"):

docker run -it -p 1880:1880 --no-healthcheck -v /srv/dev-disk-by-uuid-95f9376f-4b40-4f41-a85b-615712219901/data/nodered:/data --name node-red nodered/node-red

When the same container is stopped and restarted via Portainer, --no-healthcheck is no longer applied and the container gets status "healthy", which means the healthcheck is enabled again. I want to use the --no-healthcheck option because the Node-RED container has a healthcheck enabled by default, and it causes excessive logging to syslog, e.g.:
May 23 13:41:14 nebulonserver systemd[1]: run-docker-runtime\x2drunc-moby-00d4120caead6372631b47f1e0d98c2c0c138679fb21f63b15f930a60e58b4e3-runc.iPwfl5.mount: Succeeded.
May 23 13:41:45 nebulonserver systemd[968]: run-docker-runtime\x2drunc-moby-00d4120caead6372631b47f1e0d98c2c0c138679fb21f63b15f930a60e58b4e3-runc.CBunCG.mount: Succeeded.
May 23 13:41:45 nebulonserver systemd[1]: run-docker-runtime\x2drunc-moby-00d4120caead6372631b47f1e0d98c2c0c138679fb21f63b15f930a60e58b4e3-runc.CBunCG.mount: Succeeded.
May 23 13:42:16 nebulonserver systemd[968]: run-docker-runtime\x2drunc-moby-00d4120caead6372631b47f1e0d98c2c0c138679fb21f63b15f930a60e58b4e3-runc.86GJsi.mount: Succeeded.
May 23 13:42:46 nebulonserver systemd[968]: run-docker-runtime\x2drunc-moby-00d4120caead6372631b47f1e0d98c2c0c138679fb21f63b15f930a60e58b4e3-runc.c1IvKK.mount: Succeeded.
May 23 13:43:17 nebulonserver systemd[968]: run-docker-runtime\x2drunc-moby-00d4120caead6372631b47f1e0d98c2c0c138679fb21f63b15f930a60e58b4e3-runc.YILx1f.mount: Succeeded.
May 23 13:43:47 nebulonserver systemd[1]: run-docker-runtime\x2drunc-moby-00d4120caead6372631b47f1e0d98c2c0c138679fb21f63b15f930a60e58b4e3-runc.26eNHT.mount: Succeeded.
May 23 13:43:47 nebulonserver systemd[968]: run-docker-runtime\x2drunc-moby-00d4120caead6372631b47f1e0d98c2c0c138679fb21f63b15f930a60e58b4e3-runc.26eNHT.mount: Succeeded.

This will wear out my SSD sooner rather than later...
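
One way to verify whether a healthcheck is actually disabled on a container (an aside, using the container name from the command above):

docker inspect --format '{{json .Config.Healthcheck}}' node-red

A container created with --no-healthcheck reports {"Test":["NONE"]}; if the image's default healthcheck is in effect, its test command is shown instead.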

@kinsi55

kinsi55 commented Aug 29, 2022

Bumping this as I'm experiencing the exact same problem right now. I looked around all of Portainer but couldn't find any way to disable the healthchecks of containers, quite the surprise.

@libook

libook commented Jul 21, 2023

I need to override the health check command while creating or updating containers.
Is this feature still in progress?
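
For reference, this kind of override is what the Docker CLI supports at container-creation time; the image name and command here are only examples:

docker run -d \
  --health-cmd "curl -f http://localhost:8080/health || exit 1" \
  --health-interval 1m \
  --health-timeout 10s \
  my-image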

@portainer locked and limited conversation to collaborators on Jul 27, 2023
@jamescarppe converted this issue into discussion #9337 on Jul 27, 2023

This issue was moved to a discussion. You can continue the conversation there.
