Force restart of a VM exposes the risk of data corruption #9830
I believe this is expected behavior. What do you mean by …
I think the issue is valid. As outlined by @xiesheng211, going with a grace period of 1 would be closer to what users understand as a force-restart: if the system is unhealthy (e.g. the node is unresponsive), the pod will still be stuck until a force delete; in any other case, the VM would still restart practically right away.
I see your point. We don't wait for confirmation that the Pod was deleted.
@xpivarc Does it sound reasonable to internally transform the 0 request into a 1 request on pod delete?
What happened:
When a VM gets force-restarted with GracePeriod == 0, the virt-launcher pod is force-deleted.
Code link
This could cause data corruption if triggered while the node is under a network partition, since two instances of the same VM could end up running at the same time.
What you expected to happen:
SIGKILL should be confirmed to have succeeded before the VM teardown is considered complete.
One workaround is to set GracePeriod = 1, which is similar to the workaround used by kubectl (link).
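For comparison, the kubectl analog of that workaround looks like the following; the pod name here is a placeholder:

```shell
# Hypothetical invocation: a grace period of 1, instead of a force delete
# (--grace-period=0 --force), lets the kubelet confirm the container
# processes are gone before the pod object is removed from the API server.
kubectl delete pod virt-launcher-myvm-abcde --grace-period=1
```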
cc. @rmohr