kube_pod_container_status_waiting_reason strange behaviour #468
Comments
Yes, this is the correct behavior: kube-state-metrics always reflects the state of the Kubernetes API, so when the API changes the state of the Pod, the metric for that Pod changes with it. You can solve this by using |
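(The specific suggestion is elided above. For context, one common way to handle a flapping waiting-reason series is to smooth it with a range function such as `max_over_time`, so a rule keeps firing across short gaps. A sketch, assuming the standard metric name; the 5m window is an arbitrary choice:)

```promql
# Treat the pod as failing if the waiting reason was ImagePullBackOff
# or ErrImagePull at any point in the last 5 minutes.
max_over_time(
  kube_pod_container_status_waiting_reason{reason=~"ImagePullBackOff|ErrImagePull"}[5m]
) > 0
```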
ok, but when you say that kube-state-metrics always reflects the state of the K8s API, I don't really understand why my pod's state isn't stuck in the "ImagePullBackOff" state. Because when I do a
So why |
It might be due to the version of kube-state-metrics you are using; the |
ah ok .... I'm using
Edit: That solved the problem! Thank you
Hi,
I'm using Prometheus to watch the state of running pods on my k8s cluster, via the `kube_pod_container_status_waiting_reason` metric.
To test this, I create a deployment with a non-existing image in it to force an error:
Then, on my prometheus UI, I launch this query :
During the first minute I have this result:
So, `kube-state-metrics` reports that my pod is in the "ContainerCreating" state.
Then, for about one minute I have this result:
`kube-state-metrics` reports that my pod is now in the "ErrImagePull" state (as expected).
My problem is that this status does not persist for more than one or two minutes: if I refresh my query, I get a "no data" response even though my deployment is still in the ImagePullBackOff state.
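(A likely explanation, for anyone hitting the same thing: the kubelet alternates between `ErrImagePull`, while a pull attempt is failing, and `ImagePullBackOff`, while it waits before retrying. The series for one reason stops being reported when the other takes over, which shows up as "no data" if the query matches only one reason. Provided your kube-state-metrics version exports the `ImagePullBackOff` reason, matching both avoids the gap; a sketch:)

```promql
# Match either phase of the image-pull retry loop.
kube_pod_container_status_waiting_reason{reason=~"ErrImagePull|ImagePullBackOff"} == 1
```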
Is this normal behaviour?
Thank you for your help