
add event when a readiness probe becomes healthy after being unhealthy #53817

Open
djschny opened this Issue Oct 12, 2017 · 10 comments

djschny commented Oct 12, 2017

FEATURE REQUEST:
/kind feature

What happened:
A readiness probe logs an event when it changes from a success to a failed state. However, it never adds an event when the probe goes back to ready (and the pod is again included in traffic for any Services it is part of).

What you expected to happen:
An event to be logged stating that the probe is healthy again, ideally including how long the pod was unhealthy.

How to reproduce it (as minimally and precisely as possible):
Run the following YAML and then watch the events. The Deployment creates a pod that alternates availability every 30 seconds.

https://gist.github.com/djschny/a220dc4b828efaa05e6cccfff6130579
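
For reference, here is a minimal sketch of the kind of manifest described (illustrative only — the linked gist is the canonical repro; names like `flapping-readiness` are made up). The readiness probe flips between success and failure every 30 seconds based on the wall clock:

```yaml
# Illustrative only -- see the gist above for the actual repro.
apiVersion: apps/v1beta1  # Deployments were apps/v1beta1 in 1.7.x
kind: Deployment
metadata:
  name: flapping-readiness
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: flapping-readiness
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        readinessProbe:
          exec:
            # Succeeds for 30s, fails for 30s, repeating.
            command: ["sh", "-c", "test $(( $(date +%s) / 30 % 2 )) -eq 0"]
          periodSeconds: 5
```

Watching with `kubectl get events -w`, you see Unhealthy events on each failing transition but nothing when the probe recovers.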

Anything else we need to know?:
From my research this does not appear to be a bug: there is simply no attempt in the code to log an event when a probe succeeds again:

https://github.com/kubernetes/kubernetes/blob/v1.7.6/pkg/kubelet/prober/prober.go#L94
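
To make the request concrete, here is a minimal, self-contained Go sketch of the transition-based event logic being asked for. This is not the actual kubelet code — `Recorder`, `proberState`, and `observe` are hypothetical names; the real kubelet uses client-go's `record.EventRecorder`. Today only the failure branch exists; the feature request is the healthy branch, which also reports how long the container was unhealthy:

```go
package main

import (
	"fmt"
	"time"
)

// Recorder stands in for the kubelet's event recorder (hypothetical name).
type Recorder interface {
	Eventf(eventType, reason, format string, args ...interface{})
}

type stdoutRecorder struct{}

func (stdoutRecorder) Eventf(eventType, reason, format string, args ...interface{}) {
	fmt.Printf("%s %s: %s\n", eventType, reason, fmt.Sprintf(format, args...))
}

// proberState tracks the last observed result so events fire only on
// transitions, not on every probe run.
type proberState struct {
	recorder       Recorder
	healthy        bool
	unhealthySince time.Time
}

func (s *proberState) observe(success bool, now time.Time) {
	switch {
	case !success && s.healthy:
		// healthy -> unhealthy: this event exists today.
		s.healthy = false
		s.unhealthySince = now
		s.recorder.Eventf("Warning", "Unhealthy", "Readiness probe failed")
	case success && !s.healthy:
		// unhealthy -> healthy: the missing event this issue requests,
		// including the duration the container was unhealthy.
		s.healthy = true
		s.recorder.Eventf("Normal", "Healthy",
			"Readiness probe succeeded again after %s", now.Sub(s.unhealthySince))
	}
}

func main() {
	s := &proberState{recorder: stdoutRecorder{}, healthy: true}
	t := time.Now()
	s.observe(false, t)                     // transition: Warning event
	s.observe(false, t.Add(10*time.Second)) // still unhealthy: no event
	s.observe(true, t.Add(30*time.Second))  // recovery: Normal event
}
```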

Environment:

  • Kubernetes version (use kubectl version): 1.7.6
  • Cloud provider or hardware configuration: n/a
  • OS (e.g. from /etc/os-release): n/a
  • Kernel (e.g. uname -a): n/a
  • Install tools: n/a
  • Others:

djschny commented Oct 12, 2017

It is not clear to me what the appropriate group/label is, so I have intentionally left both off so that someone who knows can triage this appropriately.


cblecker commented Oct 12, 2017

/sig node


wackxu commented Oct 13, 2017

I would like to help fix this.


fejta-bot commented Jan 11, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale


fejta-bot commented Feb 11, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale


cblecker commented Feb 12, 2018

/remove-lifecycle rotten


fejta-bot commented May 13, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale


djschny commented May 13, 2018

/remove-lifecycle stale


fejta-bot commented Aug 11, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale


fejta-bot commented Sep 10, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
