Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
A PV provisioned before CSI migration was enabled fails to be attached after the migration is enabled:
rpc error: code = Internal desc = failed to get VolumeID from volumeMigrationService for volumePath: "[WorkloadDatastore] 5137595f-7ce3-e95a-5c03-06d835dea807/jsafrane-ws95r-dynamic-pvc-e54172ad-ba68-4876-8d24-a1a1fc5c855c.vmdk"
What you expected to happen:
The PV is attached and the Pod can be used.
How to reproduce it (as minimally and precisely as possible):
1. Dynamically provision a PV without CSI migration enabled. I got this PV:
2. Enable CSI migration (and wait for everything to restart).
3. Use the PV in a Pod.
Anything else we need to know?:
ControllerPublish logs from the CSI driver:
The syncer container has very similar logs (note that this is from a different test, with a different PV!):
It is quite possible I have misconfigured something, but an InvalidArgument error without any indication of what is wrong does not help at all.
Note that I am able to dynamically provision an in-tree PV after CSI migration is enabled and use that PV in a Pod without any issues. The PV looks like:
Notice _0002/_00d0 in the volume path. This is not present in PVs provisioned by the in-tree volume plugin. I'm not sure whether it's related to the issue.
Environment:
csi-vsphere version: 2.4.0
vsphere-cloud-controller-manager version:
Kubernetes version: 1.23
vSphere version: 7.0.2
OS (e.g. from /etc/os-release): RHEL 8
Install tools: OpenShift
Trim takes its second argument as a set of characters, so if my datastore name is [WorkloadDatastore], it trims any character from that name off the ends of VolumePath. I.e., from a path ending in .vmdk it cuts the d and the k, and a file with a .vm suffix is then missing on the CNS side.
BTW, returning InvalidArgument without any description of what is wrong is bad - can you fix the CNS side of things? And unit tests would help a lot, too.