Make dynamically provisioned volumes (PVC-PV-ceph) recognizably related #780
Comments
With kubernetes-csi/external-provisioner#399 in external-provisioner, Kubernetes is now passing the PVC name and namespace to the CSI driver.
Has anyone been working on this feature? If not, I would like to take it.
I plan to fill the volumeID field of the response with "PVC name + namespace + kubernetes cluster (id)" so that external-provisioner can pick it up as the VolumeHandle field. Is there anything wrong with that? @Madhu-1
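To make the proposal above concrete, here is a minimal sketch. It assumes the csi.storage.k8s.io/pvc/name and csi.storage.k8s.io/pvc/namespace parameter keys that external-provisioner passes when extra create metadata is enabled, plus a hypothetical clusterID taken from driver configuration; it is not taken from the actual ceph-csi code.

```go
package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// composeVolumeHandle builds an identifier from the PVC metadata that
// external-provisioner adds to CreateVolumeRequest.Parameters when extra
// create metadata is enabled. The parameter keys and the clusterID argument
// are assumptions for illustration, not confirmed against ceph-csi.
func composeVolumeHandle(req *csi.CreateVolumeRequest, clusterID string) string {
	pvcName := req.GetParameters()["csi.storage.k8s.io/pvc/name"]
	pvcNamespace := req.GetParameters()["csi.storage.k8s.io/pvc/namespace"]

	// e.g. "mycluster-default-mypvc"; note the next comment: the CSI spec
	// limits the volume ID length, so arbitrary concatenation can exceed it.
	return fmt.Sprintf("%s-%s-%s", clusterID, pvcNamespace, pvcName)
}

func main() {
	req := &csi.CreateVolumeRequest{
		Name: "pvc-0123",
		Parameters: map[string]string{
			"csi.storage.k8s.io/pvc/name":      "mypvc",
			"csi.storage.k8s.io/pvc/namespace": "default",
		},
	}
	fmt.Println(composeVolumeHandle(req, "mycluster"))
}
```

The reply below explains why overloading the volume ID this way is problematic.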
The volume ID is what ends up as the VolumeHandle in the PV. This also has a length limitation as per the CSI spec, and hence cannot encode arbitrary values into the same. Overloading the volume ID is therefore not a good option. From a CSI plugin perspective, the ability to reflect information in a kubernetes PV is limited. The options are to add a more recognizable field in the volume context, which shows up as spec.csi.volumeAttributes on the PV. A typical PV's volumeAttributes already include the backing image name.
With the above, even with image names that are different than the default (IOW, with a non-default prefix), the PV already carries the actual image name in its volumeAttributes.
@ShyamsundarR Thank you for your answer. According to my understanding, external-provisioner …
The volume context returned in CreateVolumeResponse is what the external-provisioner records as volumeAttributes on the PV. I would look at modifying the code in rbd (linked above) and the corresponding code for CephFS to return additional parameters in the volume context. One caveat about the volume context …
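A minimal sketch of that approach follows: it returns the extra values in the CreateVolumeResponse volume context so they surface as spec.csi.volumeAttributes on the PV. The pvcName/pvcNamespace key names here are hypothetical examples, not the keys ceph-csi actually uses.

```go
package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// buildCreateVolumeResponse returns a CreateVolumeResponse whose VolumeContext
// carries extra, human-recognizable values. Everything placed in VolumeContext
// is recorded by the external-provisioner as spec.csi.volumeAttributes on the
// provisioned PV. The pvcName/pvcNamespace keys are hypothetical examples.
func buildCreateVolumeResponse(volumeID, imageName, pvcName, pvcNamespace string, sizeBytes int64) *csi.CreateVolumeResponse {
	return &csi.CreateVolumeResponse{
		Volume: &csi.Volume{
			VolumeId:      volumeID,
			CapacityBytes: sizeBytes,
			VolumeContext: map[string]string{
				"imageName":    imageName,    // backing RBD image
				"pvcName":      pvcName,      // hypothetical extra key
				"pvcNamespace": pvcNamespace, // hypothetical extra key
			},
		},
	}
}

func main() {
	resp := buildCreateVolumeResponse("example-volume-handle", "csi-vol-1234", "mypvc", "default", 1<<30)
	fmt.Printf("PV volumeAttributes will contain: %v\n", resp.Volume.VolumeContext)
}
```

Since the provisioner copies the volume context into the PV as-is, nothing else is needed on the Kubernetes side for these values to show up in kubectl describe pv.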
#957 got merged, can this be closed now? |
closing as this is fixed in #957 |
Describe the feature you'd like to have
Right now it is somewhat cumbersome to relate Pods, PVCs, PVs, and Ceph images to each other. A common unique prefix, if not the entire name, based on PVC name + namespace + kubernetes cluster (id) would make human interaction and debugging much easier.
Alternatively, a least-effort solution would be to have a volumeName field in PVs under spec.csi.volumeAttributes (or similar), like there was in the old rbd-provisioner.
What is the value to the end user? (why is it a priority?)
Simplified debugging of storage issues from the pod to (or via) Ceph (down to PG/OSD/...).
How will we know we have a good solution? (acceptance criteria)
There is a unique, identifiable prefix connecting the pod all the way to the ceph image.
Or, as a minimal viable solution:
Every resource contains the name of its direct parent and child in its spec.
Additional context
The old rbd-provisioner had the actual image name in its spec at spec.rbd.image.
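To illustrate the debugging flow this enables, here is a minimal client-go sketch that follows a PVC to its PV and prints the backing image name. It assumes the driver records the image under an imageName volume attribute, which is an assumption rather than something stated in this issue.

```go
package main

import (
	"context"
	"flag"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := flag.String("kubeconfig", "", "path to kubeconfig")
	namespace := flag.String("namespace", "default", "PVC namespace")
	pvcName := flag.String("pvc", "", "PVC name")
	flag.Parse()

	config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()

	// PVC -> PV: the claim's spec.volumeName points at the bound PV.
	pvc, err := client.CoreV1().PersistentVolumeClaims(*namespace).Get(ctx, *pvcName, metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	pv, err := client.CoreV1().PersistentVolumes().Get(ctx, pvc.Spec.VolumeName, metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if pv.Spec.CSI == nil {
		log.Fatalf("PV %s is not a CSI volume", pv.Name)
	}

	// PV -> image: "imageName" is an assumed attribute key for illustration.
	fmt.Printf("PVC %s/%s -> PV %s -> image %s\n",
		*namespace, *pvcName, pv.Name, pv.Spec.CSI.VolumeAttributes["imageName"])
}
```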