Ceph-CSI: Data is retained if PV is deleted #4651
Comments
@code-chris have you created and deleted the PV or PVC?
Both created and both deleted
@code-chris did you create the PV or PVC yourself?
I did not create the PV or PVC manually. The CSI driver did.
Have you deleted the PV manually? If yes, it's not an issue; if no, this is a CephFS issue.
I deleted the PV manually, as the ReclaimPolicy of the StorageClass is Retain.
The user should not delete the PV object (the provisioner has to delete the PV object after deleting the backend image). Check the provisioner logs.
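A minimal sketch of how to inspect those logs, assuming a Rook deployment where the external-provisioner sidecar runs in the `csi-rbdplugin-provisioner` (RBD) or `csi-cephfsplugin-provisioner` (CephFS) deployment in the `rook-ceph` namespace; adjust names to your installation:

```
# Tail the external-provisioner sidecar for PV provision/delete activity
kubectl -n rook-ceph logs deploy/csi-rbdplugin-provisioner -c csi-provisioner --tail=100

# For CephFS-backed volumes, check the CephFS provisioner instead
kubectl -n rook-ceph logs deploy/csi-cephfsplugin-provisioner -c csi-provisioner --tail=100
```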
Yes, there are logs which indicate that the provisioner tries to sync the folders with PVs. I will test that.
Yeah, sorry, I didn't notice the reclaim policy. In that case, even if you delete the PV and PVC, the admin needs to manually clean up the backend storage. This is not a bug; it is working as expected, see https://kubernetes.io/docs/concepts/storage/persistent-volumes/#retain
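To illustrate what Retain implies, here is a minimal sketch of the manual cleanup the Kubernetes docs describe; the PVC, PV, pool, and image names below are placeholders:

```
# With reclaimPolicy: Retain, deleting the PVC leaves the PV in the
# "Released" phase and the backend image untouched
kubectl delete pvc my-pvc

# The admin then has to remove both the PV object and the backend image
kubectl delete pv pvc-3f1c0a7e              # placeholder PV name
rbd rm -p replicapool csi-vol-3f1c0a7e      # placeholder pool/image, run from the Ceph toolbox
```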
Ah ok, it seems I missed that. Thanks.
I made the same mistake and deleted the PV manually. How can I clean up Ceph?
@ehassan1312 try https://www.mrajanna.com/tracking-pv-rados-omap-in-cephcsi/; it shows a way to track down the mapping between the RBD image in the pool and the PV. If the PV is not present, delete the RBD image and the RADOS object.
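A rough sketch of that approach, assuming the default Ceph-CSI journal object names and a pool named `replicapool` (both assumptions; run from the Ceph toolbox):

```
# List the Ceph-CSI bookkeeping objects in the pool
rados -p replicapool ls | grep csi

# csi.volumes.default holds omap entries mapping PV names to volume UUIDs
rados -p replicapool listomapvals csi.volumes.default

# Once a UUID is confirmed to have no matching PV, remove the image and
# its journal object (UUID left as a placeholder)
rbd rm -p replicapool csi-vol-<uuid>
rados -p replicapool rm csi.volume.<uuid>
```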
I've written a script that cross-references your existing Rook-Ceph PVs with Ceph RBD images and lists the images that are stale/orphaned and can be removed: https://github.com/reefland/find-orphaned-rbd-images
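The core idea behind such a cross-reference can be sketched as follows; this is not the linked script, and the `imageName` volume attribute and pool name are assumptions:

```
# Collect the RBD image names referenced by current PVs
kubectl get pv -o jsonpath='{range .items[*]}{.spec.csi.volumeAttributes.imageName}{"\n"}{end}' \
  | sort > pv-images.txt

# Compare against the images that actually exist in the pool; output lines
# exist in Ceph but are referenced by no PV, i.e. stale/orphan candidates
rbd ls -p replicapool | sort > pool-images.txt
comm -13 pv-images.txt pool-images.txt
```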
Is this a bug report or feature request?
Deviation from expected behavior:
Expected behavior:
How to reproduce it (minimal and precise):
File(s) to submit:
Environment:
- Kernel (e.g. `uname -a`): 4.15.0-1044-gke
- Rook version (use `rook version` inside of a Rook Pod): 1.2.1
- Storage backend version (e.g. for Ceph use `ceph -v`): ceph/ceph:v14.2.5-20191210
- Kubernetes version (use `kubectl version`): v1.14.8-gke.17
- Storage backend status (e.g. for Ceph use `ceph health` in the Rook Ceph toolbox): HEALTHY