Kubelet doesn't update static pod status #61717
I don't think this is a bug. The fact that it has the …
I confirm that this behaviour is still happening. Having pods with status …
/sig node
I agree with @yujuhong that this is working as intended.
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
@dashpole @yujuhong I understand that the source of truth is currently the API server and that details can be retrieved there. We have, however, found this to be a scaling challenge.

In our use case, we are a monitoring application that runs as a DaemonSet. Monitoring systems such as ours need to continuously refresh the PodList, because Kubernetes operators want real-time feedback on application health via metrics, logs and traces. Having each monitoring node query the Kubernetes apiserver for each of the pod statuses results in high load on the API server. This is exacerbated, and becomes difficult to scale, as clusters grow to a large number of nodes. We have, however, found that having each node query its own kubelet API …

Other benefits of using the kubelet API are that: …

If there is another approach or API that you'd suggest, we're happy to rethink how we collect this data.
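To illustrate the pattern above: a node-local agent can decode the kubelet's /pods response and pick out static pods via the kubernetes.io/config.source annotation the kubelet sets on them. This is a minimal sketch, not the poster's actual agent; it parses a hard-coded sample instead of performing the HTTP GET, and the struct covers only the fields needed here.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PodList is a minimal subset of the kubelet /pods response.
type PodList struct {
	Items []struct {
		Metadata struct {
			Name        string            `json:"name"`
			Annotations map[string]string `json:"annotations"`
		} `json:"metadata"`
		Status struct {
			Phase string `json:"phase"`
		} `json:"status"`
	} `json:"items"`
}

// isStaticPod reports whether a pod came from a manifest file, based on
// the kubernetes.io/config.source annotation the kubelet sets.
func isStaticPod(annotations map[string]string) bool {
	return annotations["kubernetes.io/config.source"] == "file"
}

// summarize maps each static pod's name to its reported phase.
func summarize(raw []byte) (map[string]string, error) {
	var list PodList
	if err := json.Unmarshal(raw, &list); err != nil {
		return nil, err
	}
	phases := make(map[string]string)
	for _, p := range list.Items {
		if isStaticPod(p.Metadata.Annotations) {
			phases[p.Metadata.Name] = p.Status.Phase
		}
	}
	return phases, nil
}

func main() {
	// In a real agent this JSON would come from an HTTP GET against the
	// local kubelet's /pods endpoint rather than a literal.
	sample := []byte(`{"items":[{"metadata":{"name":"etcd-node1","annotations":{"kubernetes.io/config.source":"file"}},"status":{"phase":"Pending"}}]}`)
	phases, err := summarize(sample)
	if err != nil {
		panic(err)
	}
	// The bug described in this issue: a running static pod still shows Pending.
	fmt.Println(phases["etcd-node1"])
}
```

The printed phase is exactly what this issue is about: the kubelet keeps reporting Pending for a static pod even once it is running.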
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale |
I am also hitting this issue.
Reporting the status as "Pending" might be consistent, but it's incorrect. If the pod is created statically and its health checks are passing, it's not "Pending". The kubelet API is otherwise very useful, but the status field makes it much less so, since it's not possible to tell from the …
Hi team, I'm reaching out to see if we can revisit this issue. We need this to be able to monitor pods locally without having agents reach out to the apiserver. Currently the pod is reported in the wrong state (Pending when it's actually Running), and we're missing all status information (pod IP, container statuses, etc.). We would be happy and willing to update #57106 to solve the issue if we get the green light that it will be accepted.
I think as long as we fix the test failures the change caused, we can re-introduce it. Feel free to assign me on a PR which accomplishes this, and make sure the reboot test passes. |
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
In the kubelet, when using --pod-manifest-path, the kubelet creates static pods but doesn't update their status accordingly in the PodList (kubelet /pods endpoint). You can find that kind of pod with the following annotation:
The status will stay stuck like this:
What you expected to happen:
The status is updated.
How to reproduce it (as minimally and precisely as possible):
Use a static pod.
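For reference, a minimal static pod manifest that reproduces this (the file path, pod name, and image below are placeholders; drop the file into the directory passed to --pod-manifest-path and the kubelet will create the pod):

```yaml
# Example path: /etc/kubernetes/manifests/static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
```

Once the container is running, compare the pod's phase in the kubelet's /pods output against what the apiserver reports.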
Anything else we need to know?:
I tried to fix it in #57106 but the test-grid continuously failed at the reboot phase (see #59889).
I had to revert it (#59948, #59892).
I'm creating this issue to track this bug.
The problem in the e2e tests probably comes from this function.
This is an old output of the failure: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-reboot/20693?log#log