Discuss: Support user-defined healthchecks in docker image #25829

Closed
dchen1107 opened this Issue May 18, 2016 · 8 comments

@dchen1107
Member

dchen1107 commented May 18, 2016

Docker is adding support for user-defined healthchecks as part of the Dockerfile; the proposal is at moby/moby#22719. We should think about whether we want to support that feature and, if so, how.

cc/ @thockin @bgrant0607 @kubernetes/goog-node @derekwaynecarr
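
For context, the proposal adds a HEALTHCHECK instruction to the Dockerfile. A minimal sketch of what an image author might write (the endpoint and option values here are illustrative, not taken from the proposal text):

```dockerfile
# Run this check inside the container every 30s; after 3 consecutive
# failures the Docker daemon marks the container as "unhealthy".
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost:8080/healthz || exit 1
```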

@derekwaynecarr

Member

derekwaynecarr commented May 19, 2016

My initial bias is that health checks are a layer above the container runtime interface, but if we did support this, it must be reflected in the pod spec and not hidden from view when introspecting the pod.

We have interfaces today that ask the user to add health checks when none are specified on the pod spec; it would be confusing for those checks to be hidden in the image definition alone.

I don't think it makes sense to inherit health checks from previous image layers. I would also find it odd if we supported both pod spec liveness probes and container image health checks without unifying them, since I would expect a consistent set of events around liveness failures in the user experience.
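
For comparison, liveness today is declared explicitly on the container in the pod spec, so it is always visible when introspecting the pod. A minimal sketch (the image name, path, and port are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-example
spec:
  containers:
  - name: app
    image: example.com/app:latest   # hypothetical image
    livenessProbe:                  # visible in the pod spec, not hidden in the image
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
```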

@yujuhong

Contributor

yujuhong commented May 19, 2016

I agree with @derekwaynecarr's points, and I expressed similar concerns in the brief discussion today. It's not desirable to have the kubelet run health checks for a container without that being explicitly expressed in the pod spec.

@bgrant0607

Member

bgrant0607 commented May 19, 2016

Trying the Socratic method. :-)

Why are any container properties represented in image metadata?

What would we have done if this had been supported by Docker from the beginning?

What did we do for other container properties, such as ports?

How are ports different from entrypoint and command?

Why don't we use Docker's restart policy even though it mimics ours?

What would we do about readiness probes if we supported Docker's health checks?

What are the other facets of an application's management/introspection surface? Would they make sense as image properties?

What about other runtimes?

Why did Docker add this feature?
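
On the ports and entrypoint/command questions above: Kubernetes already mirrors (and can override) those image-declared properties in the pod spec rather than relying on image metadata alone. A rough sketch, with a hypothetical image, of how that surface looks today:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: surface-example
spec:
  containers:
  - name: app
    image: example.com/app:latest   # hypothetical image
    command: ["/bin/app"]           # overrides the image ENTRYPOINT
    args: ["--listen=:8080"]        # overrides the image CMD
    ports:
    - containerPort: 8080           # declared in the spec, analogous to EXPOSE in the image
```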

@fejta-bot

fejta-bot commented Dec 15, 2017

Issues go stale after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@fejta-bot

fejta-bot commented Jan 14, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@fejta-bot

fejta-bot commented Apr 16, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@fejta-bot

fejta-bot commented May 16, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@fejta-bot

fejta-bot commented Jun 15, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
