When api server /healthz fails, put info for diagnosis into its log. #56856
Is this a BUG REPORT or FEATURE REQUEST?:
I had a problem in my cluster which caused the API server's `/healthz` endpoint to fail.
I logged in to the master node and had a look at the log that catches the API server's output.
What I found there was (after unquoting) only the bare failure, with nothing that helped diagnose it.
What you expected to happen:
I hoped to find diagnostic information in the log, but there was none.
How to reproduce it (as minimally and precisely as possible):
Whether this is reproducible, I don't know, but here is what we did: We messed up our cluster by running an application pod with a bad health check of its own. This stayed "deployed" for a long time on that (non-master) node.
But any way to mess up the cluster sufficiently that the API server's `/healthz` fails should reproduce the situation.
Anything else we need to know?:
I checked the source code and found a mention of "detailed checks". I speculate "detailed checks" means another REST resource, so I checked the API reference, but I could not find anything pertinent there.
From a user experience point of view: a failing `/healthz` check should write enough information into the log to diagnose the failure.
@aknrdureegaesr I think
Issues go stale after 90d of inactivity.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.