File exists when re-using mounts after scaling pods #240
Comments
What's your PV/PVC/deployment config? If you only have one PV, the …
PV:
PVC:
The deployment itself is managed by the OpenLiberty operator and just contains a simple reference to the PVC.
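For context, a minimal sketch of what a statically provisioned PV/PVC pair for this driver typically looks like; the share URL, secret name, and sizes below are placeholders, not our actual config:

```sh
# Hypothetical/representative values only -- not the actual PV/PVC from our cluster.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-smb
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: example-smb-server/share   # must be unique across the cluster
    volumeAttributes:
      source: //example-smb-server/share
    nodeStageSecretRef:
      name: smbcreds
      namespace: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-smb
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: pv-smb
  resources:
    requests:
      storage: 10Gi
EOF
```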
Could you reproduce this issue with this example (https://github.com/kubernetes-csi/csi-driver-smb/blob/master/deploy/example/deployment.yaml) in your cluster? I tried it, and it keeps working after a few rounds of scaling up/down quickly.
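For anyone else trying to reproduce this, a sketch of the scale-down/up cycle with the linked example; the deployment name (deployment-smb) is taken from that example and may differ in your setup:

```sh
# Deploy the upstream example, then scale it down and back up quickly so the
# replacement pod can be scheduled onto the node that still holds the old mount.
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/deploy/example/deployment.yaml
kubectl scale deployment deployment-smb --replicas=0
kubectl scale deployment deployment-smb --replicas=1
kubectl get pods -o wide -w   # watch which node the new pod lands on
```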
Hi @andyzhangx, it looks like the issue was caused by having this driver and the old deprecated driver installed at the same time. We installed both drivers to make the transition easier, as we have a lot of applications that had to be tested. The issue can be closed for me, unless there are actions you want to take.
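For reference, a quick way to check which SMB drivers are registered on a cluster (a sketch; the exact name of the deprecated driver depends on how it was installed):

```sh
# Lists all registered CSI drivers; both the new and the old SMB driver would show up here.
kubectl get csidrivers
# Node plugin pods of this driver (label/namespace as in the default install manifests).
kubectl -n kube-system get pods -l app=csi-smb-node -o wide
```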
ok, thanks
f8c8cc4c7 Merge pull request kubernetes-csi#237 from msau42/prow
b36b5bfdc Merge pull request kubernetes-csi#240 from dannawang0221/upgrade-go-version
adfddcc9a Merge pull request kubernetes-csi#243 from pohly/git-subtree-pull-fix
c4650889d pull-test.sh: avoid "git subtree pull" error
7b175a1e2 Update csi-test version to v5.2.0
987c90ccd Update go version to 1.21 to match k/k
2c625d41d Add script to generate patch release notes
f9d5b9c05 Merge pull request kubernetes-csi#236 from mowangdk/feature/bump_csi-driver-host-path_version
b01fd5372 Bump csi-driver-host-path version up to v1.12.0
984feece4 Merge pull request kubernetes-csi#234 from siddhikhapare/csi-tools
1f7e60599 fixed broken links of testgrid dashboard

git-subtree-dir: release-tools
git-subtree-split: f8c8cc4c7414c11526f14649856ff8e6b8a4e67c
What happened:
We have an issue that seems to happen when scaling pods down/up quickly in OpenShift (due to a deployment change, for example changes to the liveness probe).
If the terminated pod was running on the same node that is used for the new pod, the node already has the mount point configured and the driver fails to re-use it; the new mount fails with a "file exists" error.
We can work around this by marking that specific node as unschedulable and deleting the pod (the mount is then created on a new node), which works fine but is a manual intervention we would like to avoid.
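The manual workaround looks roughly like this (node and pod names are placeholders):

```sh
# Cordon the node that still holds the stale mount, delete the pod so it is
# rescheduled elsewhere and mounts cleanly, then re-enable scheduling.
kubectl cordon <node-with-stale-mount>
kubectl delete pod <affected-pod>
kubectl uncordon <node-with-stale-mount>
```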
What you expected to happen:
Scaling down/up quickly should not cause any issues with re-using or re-initializing mounts.
How to reproduce it:
Install the latest driver, then edit a deployment (for example its liveness probe) so that pods terminate and are recreated (possibly on the same node). Most of the time, 1 out of 2 pods hits this issue.
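One way to trigger the terminate/recreate cycle without touching replicas is to patch the liveness probe; the deployment name and probe field below are hypothetical:

```sh
# Any change to the pod template forces a rollout; here we bump the liveness
# probe period on a hypothetical deployment called my-app.
kubectl patch deployment my-app --type=json \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/livenessProbe/periodSeconds", "value": 15}]'
kubectl get pods -o wide -w   # check whether the replacement pod lands on the same node
```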
Anything else we need to know?:
Environment:
CSI Driver version:
latest
Kubernetes version (use kubectl version):
Install tools:
Installed via kubectl