
VMDK file not deleted when deleting K8s PVC - because vmdk still attached to worker node #351

Closed
guillierf opened this issue Oct 26, 2017 · 9 comments

@guillierf

I used the 'Guestbook Application with Dynamic Provisioning' example
(https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/guestbook.html)

steps:
1/ kubectl create -f redis-sc.yaml
2/ kubectl create -f redis-master-claim.yaml -f redis-slave-claim.yaml
3/ kubectl create -f guestbook-all-in-one.yaml

The PV, PVC, and VMDK were correctly created in step 2/.
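(For reference, a quick way to verify this is to list the claims and volumes; the claim names below are the ones from the guestbook example:)

$ kubectl get pvc redis-master-claim redis-slave-claim
$ kubectl get pv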

now, I did:

$ kubectl delete pvc redis-master-claim redis-slave-claim
persistentvolumeclaim "redis-master-claim" deleted
persistentvolumeclaim "redis-slave-claim" deleted

kubectl get pv gives:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
pvc-050e8a4c-ba7d-11e7-9288-005056841f8e   2Gi        RWO            Delete           Failed   default/redis-master-claim   thin-disk               9m
pvc-051077ed-ba7d-11e7-9288-005056841f8e   2Gi        RWO            Delete           Failed   default/redis-slave-claim    thin-disk               9m

On vCenter, I see many attempts to delete the VMDK files, but they fail because the VMDK files are still attached to the worker nodes.

So it looks like the step of detaching the VMDKs from the worker nodes is missing here.

@guillierf

more info:
the root cause is that I didn't delete the redis-master and redis-slave Pods BEFORE deleting the PVCs.

Deleting the Pods automatically detaches the VMDKs from the worker nodes where those Pods were running.

I don't know whether this is a K8s issue or not, but this particular scenario should have some kind of protection.

@divyenpatel

@guillierf There is no issue here.

The sequence should be: delete the Pod first, then delete the PVC.

When you delete a PVC, Kubernetes will try to delete the associated PV, but since the disks are still attached to the Node VMs, they cannot be deleted. You can ignore the errors you are seeing on vCenter; this is expected.

Kubernetes keeps retrying failed operations, so once you delete the Pod, the VMDK will be detached from the Node VM and the PVs will eventually be deleted.
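For example, with the manifests from the original report, a teardown in the correct order would look roughly like this (a sketch; the file and claim names are taken from the guestbook example above):

$ kubectl delete -f guestbook-all-in-one.yaml                 # removes the Pods using the volumes, which triggers the VMDK detach
$ kubectl delete pvc redis-master-claim redis-slave-claim     # the PVs and their VMDKs can now be reclaimed
$ kubectl get pv                                              # should eventually show no leftover PVs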

Let me know if we can close this issue.

@guillierf

guillierf commented Oct 27, 2017

@divyenpatel

I replayed the test:
1/ delete PVC
2/ delete PV (the PVs were not deleted automatically upon PVC deletion)
3/ delete POD

I can see the worker nodes get reconfigured (the VMDK disks are detached from the worker node VMs),

but the VMDK files still remain on the datastore (it has been 1 hour now).
Do you know how long it takes for these files to be permanently removed?

@divyenpatel

@guillierf You just need to delete the PVC, not the PV resource. Kubernetes will delete the PV associated with the PVC, along with its VMDK.

Can you try the following: just delete the Pod and then the PVC, and watch the tasks on vCenter.
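One way to follow this from the Kubernetes side while the vCenter tasks run (a sketch):

$ kubectl get pv -w                               # watch the PVs disappear once the VMDKs are detached and deleted
$ kubectl get events --sort-by=.lastTimestamp     # recent events, including volume attach/detach failures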

@guillierf

@divyenpatel
OK I will try it.

But the MAIN point of this thread/issue is that there is not much of a safeguard implemented here.
By not following the correct procedure, I can easily leave VMDK files on the datastore forever (and it will be difficult to detect whether they are really in use or not). They consume physical resources and will be extremely difficult to get rid of when cleanup needs to be done.
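(As an aside, one way to spot leftover dynamically provisioned VMDKs is to list the volume folder on the datastore with govc. The datastore name below is a placeholder, and kubevols is, as far as I know, the default folder used by the vSphere Cloud Provider:)

$ govc datastore.ls -ds=<your-datastore> kubevols     # lists the VMDKs backing dynamically provisioned PVs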

I believe stronger safeguards should be implemented to make sure users cannot perform undesired actions.

@divyenpatel

"I believe stronger safeguards should be implemented to make sure users cannot perform undesired actions."

@guillierf Yes, these limitations will be addressed in Kubernetes. You can refer to the relevant upstream proposals.

CC: @BaluDontu @SandeepPissay @tusharnt

@guillierf

thanks @divyenpatel !!!
this is very useful

@guillierf

@divyenpatel

"Can you try with
Just delete the POD and then PVC and see the tasks on VC."

Yes, this sequence works 100% fine. The VMDK is correctly deleted from the datastore.

@divyenpatel

Thank you for confirming. I am closing this issue.
