Infrastructure volume not deleted if I delete the PV on AWS, Azure, GCP, OpenStack #57496
Comments
/sig storage
Hi @vlerenc, it is intentional behavior to keep the backing infrastructure volume. Deleting the backing infrastructure volume is a dangerous operation because it may lead to data loss. Only when you create a PVC with … As you can see from your … You can get a detailed description of …
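The comment above appears to refer to the PV's `persistentVolumeReclaimPolicy` field, which governs whether the backing volume is removed. As a hedged sketch only (the names, volume ID, and AWS volume source here are placeholders, not values from this issue), a PV spec carries the policy like this:

```yaml
# Hypothetical PV manifest; names and IDs are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  # "Delete" removes the backing volume when the *claim* is released;
  # "Retain" keeps the backing volume after the claim is gone.
  persistentVolumeReclaimPolicy: Delete
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0   # placeholder
    fsType: ext4
```

Note that the reclaim policy is triggered by releasing the claim (deleting the PVC), not by deleting the PV object itself, which matches the behavior the reporter observed.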
Who am I to argue, but I find that surprising/misleading. If I delete a PV, I expect the backing volume to be gone. I am sure there are better ways to let the user control the behaviour. Also, e.g. Bosh keeps such disks around for a few (configurable) days and a GC deletes them after a grace period. The way PVs are implemented in Kubernetes right now, people don't even notice they have disks a.) with potentially sensitive data that they have forgotten about and b.) generating unintended costs. I think discussions like #23032 prove the point that the current behaviour is counter-intuitive.

Also, the above was only a shortcut. What I actually did at first was to delete the PVC while its PV was mounted by a pod. Obviously, the backing volume couldn't be deleted, but the deletion of the PVC passed without an error. That surprised me. When I deleted the pod and then the PV, I noticed nothing happened. By then Kubernetes had forgotten about the backing volume. Then I repeated the experiment, but this time I deleted the PV right away. This all makes no sense to me, especially if the idea is to bind the PV to the PVC. It is far too easy to end up in a situation with orphan volumes, at least for my liking.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened: When I delete a PV, the backing infrastructure volume is not deleted (and the PVC status turns to Lost). Only if I delete the PVC that led to the PV and the backing infrastructure volume in the first place are all three entities, i.e. the PVC and PV in Kubernetes and the backing infrastructure volume, deleted.

What you expected to happen: If I delete a PV (which I did accidentally, which is why I never noticed this behaviour before), I expect it to delete the backing infrastructure volume; otherwise the volume becomes orphaned, remains in the infrastructure and generates costs.
How to reproduce it (as minimally and precisely as possible):
I used the following super simple pvc.yaml:

Console output:
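The reporter's actual manifest and console output did not survive this copy of the issue. As a hedged reconstruction only (the claim name, storage class defaulting, and size are assumptions, not the original values), a "super simple" PVC of the kind described typically looks like:

```yaml
# Hypothetical reconstruction; not the reporter's original pvc.yaml.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce          # single-node read/write access
  resources:
    requests:
      storage: 1Gi           # size is an assumed placeholder
  # No storageClassName set: the cluster's default StorageClass
  # dynamically provisions the PV and the backing volume.
```

With dynamic provisioning like this, creating the PVC creates both the PV object and the infrastructure volume, which is why deleting the PVC (but not the PV directly) cleans up all three.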
Anything else we need to know?:
I had self-created vanilla Kubernetes clusters on AWS, Azure, GCP and OpenStack. It happened on all four infrastructures. Deleting the PVC always deleted itself, the PV and the backing infrastructure volume. Deleting the PV directly never deleted the backing infrastructure volume on any of them.
Environment:
Kubernetes version (use kubectl version): …