Volumes can get mounted multiple times, preventing pod deletion #695
Comments
/remove-lifecycle stale
This is still plaguing the driver and causes quite some problems to us.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/kind bug
What happened?
Trying to delete a pod that uses multiple EFS volumes fails very frequently because the EFS volumes cannot be unmounted. This seems to happen because some of the volumes get mounted multiple times (with different stunnel ports) and kubelet never attempts to unmount the extra mounts.
What you expected to happen?
If I ask for 10 EFS volumes, the driver should mount exactly 10 volumes, kubelet should unmount all of them on pod deletion, and the deletion should not hang.
How to reproduce it (as minimally and precisely as possible)?
Using a StorageClass for dynamic EFS provisioning:
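The manifests from the original report were not preserved in this copy; the following is a minimal sketch of such a StorageClass, assuming dynamic provisioning via EFS access points (the file system ID is a placeholder):

```yaml
# Illustrative only: the exact StorageClass from the original report was not preserved.
# fileSystemId is a placeholder and must point to an existing EFS file system.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-0123456789abcdef0
  directoryPerms: "700"
```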
Create 10 EFS volumes:
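Again as a sketch (the original PVC definitions are not included above): ten PersistentVolumeClaims of this shape, each bound to the StorageClass. The claim names, access mode, and size are assumptions.

```yaml
# Illustrative sketch: create ten claims like this one, varying only the name
# (efs-claim-0 ... efs-claim-9). The requested size is ignored by EFS but must be set.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim-0
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 1Gi
```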
Create a pod that uses the volumes:
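A sketch of the test pod. The pod name busybox-test is taken from the reproduction step below; the image, mount paths, and claim names are assumptions matching the sketches above.

```yaml
# Illustrative sketch: repeat the volume/volumeMount pair for all ten claims,
# efs-claim-0 ... efs-claim-9.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-test
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: efs-vol-0
          mountPath: /data/vol-0
        - name: efs-vol-1
          mountPath: /data/vol-1
        # ...and so on up to /data/vol-9
  volumes:
    - name: efs-vol-0
      persistentVolumeClaim:
        claimName: efs-claim-0
    - name: efs-vol-1
      persistentVolumeClaim:
        claimName: efs-claim-1
    # ...and so on up to efs-claim-9
```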
Wait for the pod to start (it may not necessarily succeed, due to a bug in efs-utils), then try to delete it with `kubectl delete pod busybox-test`. Depending on the machine load, there is a good chance the pod deletion will hang indefinitely. Looking at the CSI node driver container, the logs contain something like the following (parts unrelated to the problematic volume cut out, EFS endpoints redacted):
So, we have the `pvc-acbe1d27-ed03-4b52-b5e3-bc861cfa7413` PV mounted twice, with different stunnel ports. It seems that kubelet is aware of only one of the mounts and will not attempt to unmount the other one when deleting the pod. It is possible to rsh into the node driver container, run `umount` manually, and get the pod deleted cleanly.
Anything else we need to know?
The problem seems to appear when a volume takes too long to mount: kubelet has a reconciliation loop, and if a volume mount operation times out, it will attempt to call mount again. After some time both attempts succeed, but only one of them is then taken into account. Here are some snippets from the kubelet logs (different test run, so the timestamps and PV names won't match the logs above):
Environment
- Kubernetes version (use `kubectl version`): v1.23.5+9ce5071
- Driver version: reproduced with v1.1.1 and v1.3.7