Pod running on NotReady node all the time #98511
@CaoDonghui123: There are no sig labels on this issue. Please add an appropriate label by using one of the following commands:
Please see the group list for a listing of the SIGs, working groups, and committees available. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@CaoDonghui123: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the appropriate triage label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I found the doc.
I'm not sure if this needs to be fixed.
There are two vars.
Please try the support channels:
/kind support
@neolit123: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@lunhuijie I think it has nothing to do with these two vars. I added this in my YAML;
it may fix it.
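The YAML the commenter added is not shown in the thread. A common fix for pods lingering on a NotReady node is to shorten the pod's taint-based eviction tolerations; the fragment below is only an illustrative sketch (the 30-second values are hypothetical, not from the thread):

```yaml
# Hypothetical pod-spec fragment: tolerate the not-ready/unreachable
# NoExecute taints only briefly, so the pod is evicted sooner than the
# default 300 seconds after the node goes NotReady.
tolerations:
- key: "node.kubernetes.io/not-ready"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 30
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 30
```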
What I mean is that this is a normal scenario.
You mean that if I wait for 5m0s, the pod's status should have changed? I waited overnight and the pod was still Running.
A mistaken kubelet parameter can cause this to happen (I wanted to set nodeStatusUpdateFrequency = 4s but set nodeStatusUpdateFrequency = "4s"). When I checked the kubelet logs, I found nothing, and the pod would not change its status. Check your parameters again; if there is no problem, maybe you can open another issue and mention me. Thanks a lot!
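For context, nodeStatusUpdateFrequency lives in the KubeletConfiguration file; a minimal sketch of where it sits (field names from the kubelet.config.k8s.io/v1beta1 API, the 4s value taken from the comment above):

```yaml
# KubeletConfiguration fragment (sketch). nodeStatusUpdateFrequency
# controls how often the kubelet reports node status to the API server;
# the controller manager's node-monitor-grace-period must be longer
# than this interval for the node to be marked NotReady correctly.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
nodeStatusUpdateFrequency: "4s"
```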
What happened:
I have three nodes. When I shut down cdh-k8s-3.novalocal, the pods running on it stay Running all the time.
What you expected to happen:
Pod status changes to Unknown.
How to reproduce it (as minimally and precisely as possible):
step:
Anything else we need to know?:
Environment:
- kubectl version:
- cat /etc/os-release:
- uname -a: