
Attached PVC vmdks get deleted when a node is removed #180

Closed
RaceFPV opened this issue Apr 5, 2019 · 6 comments

@RaceFPV

RaceFPV commented Apr 5, 2019

When a node is removed via the provider, there are cases where the still-attached PVCs get deleted from disk as well, causing loss of both the data and the vmdk file. This then causes k8s to fail to bring up the container on a new node, as the underlying PVC is gone.

This could be resolved by deleting the VM without deleting its attached PVC vmdk files, or by flagging the PVC vmdks at creation time so they are not deleted along with the VM.

This has already caused loss of data, including SQL databases, without warning, as there are no prompts or safeguards explaining what actions will occur on node removal/deletion.
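For reference, one way to check whether the backing vmdk actually still exists after a node removal is to list the volume folder on the datastore. This sketch assumes govc is already configured; the datastore name and folder below are only example names (the in-tree vSphere provisioner typically places dynamically provisioned volumes under a kubevols folder):

```sh
# List dynamically provisioned volume files on the datastore
# ("datastore1" and "kubevols" are example names; adjust for your setup).
govc datastore.ls -ds datastore1 kubevols

# Compare against the PersistentVolumes Kubernetes still believes exist.
kubectl get pv -o wide
```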

@codenrhoden
Contributor

/assign @dvonthenen

@dvonthenen
Contributor

dvonthenen commented Apr 10, 2019

@RaceFPV Some clarification... you said "When a node is removed via the provider"... what provider are you talking about?

I'm trying to understand how the VM is being deleted, since we don't have any functionality that deletes VMs.

If you can provide steps to reproduce (including any commands used, such as kubectl), that would be great.

@LinAnt

LinAnt commented Apr 11, 2019

Note that this is probably an issue with the in-tree cloud provider.

1. Drain a node.
2. The node reports that it is drained.
3. The VMDKs remain attached to the node.
4. If you delete the node, the VMDKs are deleted as well, which is expected (see the command sketch below).
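Roughly, the sequence that triggers this looks like the following (node-1 is just an example name; the actual VM removal normally happens through the provider or cluster tooling rather than kubectl):

```sh
# 1. Drain the node. kubectl reports the node as drained, but the VMDKs
#    backing any attached PVCs are still attached to the underlying VM.
kubectl drain node-1 --ignore-daemonsets

# 2. Remove the node. Deleting the underlying VM while those VMDKs are
#    still attached also deletes the VMDKs, and the PV data with them.
kubectl delete node node-1   # removes only the Node object; the VM itself
                             # is deleted by the provider/cluster tooling
```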

This issue seems to have been present in the in-tree cloud provider forever.

rancher/rancher#15065
rancher/rancher#18221

I think there is a KEP for 1.15 about fixing the drain command :)

@dvonthenen
Contributor

This repo is for the out-of-tree (external) CCM and CSI driver. I would recommend re-filing this issue on the k/k repo so the correct people are informed of the bug.

@dvonthenen
Contributor

/close

@k8s-ci-robot
Contributor

@dvonthenen: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
