When using a vSphere seed and rebooting a node that runs one or more pods with attached PVs, that node cannot be started anymore, because the cloud provider does not remove the volume binding from the old node even though the pod gets rescheduled to another node.
This means the vSphere instance cannot be started again until an operator manually removes the binding from the node inside vSphere.
Yes, we should communicate this to Kubermatic + vSphere users for the time being and leave this issue open as an internal tracking issue, but not act on it.
Upstream issue: kubernetes/kubernetes#63577
A possible quick fix would be a systemd unit that cordons and drains nodes prior to rebooting them and makes them schedulable again on startup.
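A minimal sketch of what such a unit could look like. Everything here is an assumption, not part of the issue: the unit name, the kubeconfig path, and that the Kubernetes node name matches the machine hostname (`%H` is systemd's hostname specifier). `ExecStop` runs on shutdown/reboot and drains the node so pods and their PV attachments move elsewhere first; `ExecStart` runs on boot and uncordons it.

```ini
# /etc/systemd/system/drain-on-reboot.service  (hypothetical name/path)
[Unit]
Description=Drain node before shutdown, uncordon on boot
After=network-online.target kubelet.service
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
# Assumed kubeconfig location; adjust to the actual node setup.
Environment=KUBECONFIG=/etc/kubernetes/kubelet.conf
# On boot: make the node schedulable again.
ExecStart=/usr/bin/kubectl uncordon %H
# On shutdown: cordon and drain so pods (and their PV attachments)
# are rescheduled to another node before the reboot happens.
ExecStop=/usr/bin/kubectl drain %H --ignore-daemonsets --delete-local-data --timeout=120s

[Install]
WantedBy=multi-user.target
```

After installing, it would be enabled with `systemctl enable --now drain-on-reboot.service`. This only mitigates planned reboots; an unclean node crash would still leave the stale volume binding behind.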