
Static pod Terminating status cannot be brought back #22625

Closed
xinxiaogang opened this issue Mar 7, 2016 · 5 comments
Labels
area/kubelet sig/node Categorizes an issue or PR as relevant to SIG Node.

Comments

@xinxiaogang
Contributor

If the mirror pod of a static pod is deleted via the apiserver, the mirror pod's status gets stuck at Terminating. Even restarting the kubelet does not fix it.

The kubelet should sync the running static pod's status back to the mirror pod.

@adohe-zz

adohe-zz commented Mar 7, 2016

@kubernetes/sig-node

@Random-Liu
Member

@xinxiaogang Currently the kubelet ignores all modifications to the mirror pod as long as the annotation remains unchanged. There is more mirror-pod-related discussion in #16627; you can follow up there if you are interested. :)

However, I think what you suggested is reasonable.

@yujuhong except for the annotation change, if the DeletionTimestamp of the mirror pod is set, shouldn't we recreate it?

@Random-Liu Random-Liu added area/kubelet sig/node Categorizes an issue or PR as relevant to SIG Node. labels Mar 7, 2016
@xinxiaogang
Contributor Author

Thanks @Random-Liu. I have a PR with the fix, and it works in our cluster now: #22626

Actually, all of our static pods (etcd, apiserver, etc.) are running fine. The problem with pods stuck in Terminating is that their endpoints are removed from my service, so no traffic is dispatched to them.

@Random-Liu
Member

@xinxiaogang Thank you very much for the fix. :) We'll review it.

@yujuhong
Contributor

yujuhong commented Mar 7, 2016

@xinxiaogang, as a quick workaround, you can force-delete the mirror pods (`kubectl delete pod <pod_name> --grace-period=0`), and the kubelet will recreate all of them.

@yujuhong except for the annotation change, if the DeletionTimestamp of the mirror pod is set, shouldn't we recreate it?

Yes, we should do that. Ideally we would resolve the mirror pod issue #16627 altogether, but this seems like a relatively isolated fix.

Thanks @Random-Liu . I have a PR for the fix and it works in our cluster now. #22626

Thanks for the PR!

k8s-github-robot pushed a commit that referenced this issue Mar 7, 2016
- During `kubelet` `syncPod`, check the mirror pod's `DeletionTimestamp` value to determine whether to re-create the mirror pod for the running static pod.