
vSphere seed: Rebooting nodes requires manual detaching of volumes #1571

Closed · alvaroaleman opened this issue Jul 17, 2018 · 3 comments
Labels: kind/bug

@alvaroaleman (Contributor)

When using a vSphere seed and rebooting a node that runs one or more pods with attached PVs, that node cannot be started anymore, because the cloud provider does not remove the volume binding from the old node even though the pod gets rescheduled to another node.

This means the vSphere instance cannot be started again until an operator manually removes the volume binding from the node inside vSphere.

Upstream issue: kubernetes/kubernetes#63577

A possible quick fix would be to implement a systemd unit that taints and drains a node before it reboots and removes the taint again on startup; a sketch is below.
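For illustration, a minimal sketch of such a unit, assuming `kubectl` is installed on the node, a node-scoped kubeconfig exists at `/etc/kubernetes/kubeconfig`, and the Kubernetes node name matches the machine hostname (`%H`); the unit name, paths, and drain flags are placeholders to adapt:

```ini
# /etc/systemd/system/drain-on-reboot.service (hypothetical name)
[Unit]
Description=Drain node before shutdown/reboot, uncordon it again on boot
# Start after kubelet and networking so they are still running when ExecStop
# fires during shutdown (units stop in reverse start order).
After=network-online.target kubelet.service
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=true
# On boot: make the node schedulable again.
ExecStart=/usr/bin/kubectl --kubeconfig=/etc/kubernetes/kubeconfig uncordon %H
# On shutdown/reboot: cordon the node and evict its pods so attached PVs can
# be detached before the VM goes down.
ExecStop=/usr/bin/kubectl --kubeconfig=/etc/kubernetes/kubeconfig drain %H \
    --ignore-daemonsets --delete-local-data --force

[Install]
WantedBy=multi-user.target
```

Draining already cordons the node; whether that is sufficient or an explicit taint is also needed depends on how quickly the attach/detach controller reacts, so this is only a starting point rather than a tested fix.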

alvaroaleman added the kind/bug label Jul 17, 2018
@mrIncompetent (Contributor)

Looking at kubernetes/kubernetes#63413 (comment), we might simply wait for 1.12 to land?

@alvaroaleman (Contributor, Author)

Yes. For the time being we should communicate this to Kubermatic + vSphere users and leave this issue open as an internal tracking issue, but not act on it.

@mrIncompetent (Contributor)

Closed in favor of kubermatic/docs#44.
