Race condition: terminating pod destroys PV mount on new pod #601
Comments
Please get back to me if you were able to replicate the behavior, or if you know how I can fix it on my side. Thanks!
The new pod with the NFS volume should have a standalone NFS mount to the remote NFS server; if you delete the old pod, only its own NFS mount is unmounted. Per your description, an existing NFS mount on the node becomes stale when another NFS mount on the same node is unmounted. I think you could try to reproduce this issue without using Kubernetes. Also, please provide the NFS CSI driver logs from the node, following https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/docs/csi-debug.md#case2-volume-mountunmount-failed
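For reference, a plain-NFS reproduction along the lines the maintainer suggests could look like this. This is a minimal sketch: the server name and export path are placeholders, and it assumes two independent mounts of the same export on one node.

```shell
# Two independent mounts of the same export on the same node
# (nfs-server.example.com:/export is a placeholder).
mkdir -p /mnt/a /mnt/b
mount -t nfs nfs-server.example.com:/export /mnt/a
mount -t nfs nfs-server.example.com:/export /mnt/b

ls /mnt/a /mnt/b   # both mounts should list the export

# Simulate the old pod going away:
umount /mnt/b

# If this now fails with "Stale file handle", the problem is
# reproducible outside Kubernetes:
ls /mnt/a
```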
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
What happened:
I seem to have a race condition between pods...
I have a simple debug pod that mounts an NFS share as a ReadWriteMany (RWX) PVC/PV, using the NFS CSI driver as the storage driver/class. The first time I deployed the pod, everything went well.
Now when I delete the pod to start a new one (e.g., to remount the share after export settings change on the server side), the mount inside the new pod goes stale the moment the old pod actually terminates (disappears).
When I instead set the replica count of the deployment to 0, wait for the pod to terminate, and only then set it back to 1, so that there are no overlapping pending/terminating pods but a clean, undisturbed new debug pod, the mount inside the pod remains stable (see the commands below).
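In kubectl terms, the workaround looks roughly like this (assuming the debug pod is managed by a Deployment named `debug` carrying the label `app=debug`; both names are hypothetical):

```shell
# Scale down and wait until the old pod is fully gone ...
kubectl scale deployment debug --replicas=0
kubectl wait --for=delete pod -l app=debug --timeout=120s

# ... then scale back up so the new pod starts with no overlap.
kubectl scale deployment debug --replicas=1
```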
What you expected to happen:
The mount inside the new pod should remain stable, even when the old pod terminates and unmounts its own PV/PVC binding (remember: this is RWX).
How to reproduce it:
1. Deploy a pod (replicas: 1, managed by a Deployment) that mounts an NFS share as an RWX PVC/PV through the NFS CSI driver.
2. Delete the pod so that the Deployment starts a replacement while the old pod is still terminating.
3. Watch the mount inside the new pod: it goes stale the moment the old pod disappears.
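A minimal sketch of the failing path, assuming the same hypothetical Deployment and label as above, with the PVC mounted at /mnt/nfs:

```shell
# Delete the pod; the Deployment schedules a replacement while the
# old pod is still terminating.
kubectl delete pod -l app=debug

# Once the new pod is Ready and the old one has disappeared, the
# mount inside the new pod is stale:
kubectl wait --for=condition=Ready pod -l app=debug
kubectl exec deploy/debug -- ls /mnt/nfs   # fails with "Stale file handle"
```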
Anything else we need to know?:
Storage Class:
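The storage class itself wasn't captured above; for illustration only, a typical NFS CSI StorageClass looks like the following (the server and share values are placeholders, not the reporter's actual settings):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io        # the NFS CSI driver
parameters:
  server: nfs-server.example.com   # placeholder NFS server
  share: /export                   # placeholder export path
reclaimPolicy: Retain
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1
```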
Environment:
- Kubernetes version (`kubectl version`): 1.28.3
- Kernel (`uname -a`): vanilla kernel of the corresponding distro/OS, no changes here.