/kind bug

What happened:

I have a pod with an in-tree inline vSphere volume.
When CSI migration is enabled, kubelet and the CSI driver are able to run such a pod. However, when the pod is deleted, its volumes remain mounted and the pod stays in Terminating forever:
```
$ mount | grep 1641379409741383902
/dev/sde on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/csi.vsphere.vmware.com-[WorkloadDatastore] 5137595f-7ce3-e95a-5c03-06d835dea807/e2e-vmdk-1641379409741383902.vmdk/globalmount type ext4 (rw,relatime,seclabel)
/dev/sde on /var/lib/kubelet/pods/00541d84-1c59-4788-bb9b-e97ae5a6ecbe/volumes/kubernetes.io~csi/csi.vsphere.vmware.com-[WorkloadDatastore] 5137595f-7ce3-e95a-5c03-06d835dea807~e2e-vmdk-1641379409741383902.vmdk/mount type ext4 (rw,relatime,seclabel)
```
What you expected to happen:
The volumes are unmounted.
How to reproduce it (as minimally and precisely as possible):
Enable CSI migration and run in-tree tests for inline vsphere volumes.
Anything else we need to know?:
The root cause is that /proc/mounts and /proc/self/mountinfo escape the space character (" ") in mount paths as \040. The mount comparison in the driver (vsphere-csi-driver/pkg/csi/service/osutils/linux_os_utils.go, line 1013 at 4cbd430) therefore never finds the volume mounted, and I can see in the logs:
```
NodeUnpublishVolume: Target /var/lib/kubelet/pods/00541d84-1c59-4788-bb9b-e97ae5a6ecbe/volumes/kubernetes.io~csi/csi.vsphere.vmware.com-[WorkloadDatastore] 5137595f-7ce3-e95a-5c03-06d835dea807~e2e-vmdk-1641379409741383902.vmdk/mount not present in mount points. Assuming it is already unpublished
NodeUnstageVolume: Target path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/csi.vsphere.vmware.com-[WorkloadDatastore] 5137595f-7ce3-e95a-5c03-06d835dea807/e2e-vmdk-1641379409741383902.vmdk/globalmount" is not mounted. Skipping unstage.
```
https://github.com/akutz/gofsutil is no longer maintained. We could consider moving some of its util functions into the vsphere-csi-driver repo so that we can fix such bugs ourselves.
Environment:
- Kernel (uname -a): 4.18.0-305.30.1.el8_4.x86_64