Make dynamically provisioned volumes (PVC-PV-ceph) recognizably related #780

Closed
Tatsu-Kishi opened this issue Jan 10, 2020 · 8 comments

@Tatsu-Kishi

Describe the feature you'd like to have

Right now it is somewhat cumbersome to relate Pods, PVCs, PVs, and Ceph images to one another. A common unique prefix, if not the entire name, based on the PVC name + namespace + Kubernetes cluster (ID), would make human interaction and debugging much easier.

Alternatively, the least-effort solution would be to have a volumeName field in PVs under spec.csi.volumeAttributes (or similar), as there was in the old rbd-provisioner.

What is the value to the end user? (why is it a priority?)

Simplified debugging of storage issues from pod to (or via) Ceph (down to PG/OSD/...).

How will we know we have a good solution? (acceptance criteria)

There is a unique, identifiable prefix connecting the pod all the way to the Ceph image.

Or, as a minimal viable solution:
Every resource contains the name of its direct parent and child in its spec.

Additional context

The old rbd-provisioner had the actual image name in its spec at spec.rbd.image:

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: ceph.com/rbd
    rbdProvisionerIdentity: ceph.com/rbd
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: dynamic-pvc-test-0
    namespace: default
    resourceVersion: "5897738"
    uid: 00509e74-57a6-11e9-bad8-a6446271379b
  mountOptions:
  - discard
  persistentVolumeReclaimPolicy: Delete
  rbd:
    image: kubernetes-dynamic-pvc-00545857-57a6-11e9-8856-56af7cfe6524
    keyring: /etc/ceph/keyring
    monitors:
    - 10.0.0.1:6789
    - 10.0.0.2:6789
    - 10.0.0.3:6789
    pool: k8s-images
    secretRef:
      name: rbd-secret-k8s
    user: k8s-images
  storageClassName: rbd
  volumeMode: Filesystem
@Madhu-1
Collaborator

Madhu-1 commented Feb 25, 2020

With kubernetes-csi/external-provisioner#399 in the external-provisioner, Kubernetes now passes the PVC name and namespace to the CSI driver.
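
As a rough sketch of the driver side (assuming the csi.storage.k8s.io/pvc/name and csi.storage.k8s.io/pvc/namespace parameter keys from that change; the package and helper names below are illustrative, not actual ceph-csi code), those values arrive in the CreateVolume parameters:

// Illustrative sketch only, not actual ceph-csi code: reading the PVC
// metadata that the external-provisioner passes as CreateVolume parameters.
package driver // hypothetical package name

import (
	"github.com/container-storage-interface/spec/lib/go/csi"
)

// pvcMetadataFromRequest returns the PVC name and namespace if the
// provisioner supplied them, and empty strings otherwise.
func pvcMetadataFromRequest(req *csi.CreateVolumeRequest) (name, namespace string) {
	params := req.GetParameters()
	return params["csi.storage.k8s.io/pvc/name"], params["csi.storage.k8s.io/pvc/namespace"]
}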

@Madhu-1 Madhu-1 added the help wanted Extra attention is needed label Feb 25, 2020
@chenxu1990
Contributor

Has anyone been working on this feature? If not, I would like to take it.

@chenxu1990
Contributor

I plan to fill the volumeID field of the response with "PVC name + namespace + kubernetes cluster (id)" so that the external-provisioner can pick it up as the VolumeHandle field. Is there anything wrong with that? @Madhu-1

@ShyamsundarR
Contributor

I plan to fill the volumeID field of the response with "PVC name + namespace + kubernetes cluster (id)" so that the external-provisioner can pick it up as the VolumeHandle field. Is there anything wrong with that? @Madhu-1

The volumeID, as it is currently encoded, is needed to identify where the volume's backing image is, among other things. This encoding helps identify the backing volume/image using just the volumeID (for example, in the DeleteVolume or DeleteSnapshot calls).

The volumeID also has a length limitation per the CSI spec, so arbitrary values cannot be encoded into it. Overloading the volumeID would therefore not be the preferred solution for getting a recognizable image name (or the real image name) into the PV.

From a CSI plugin perspective, the ability to reflect information in a Kubernetes PV is limited. The option is to add a more recognizable field to spec.csi.volumeAttributes at provision time, i.e. a field that can contain the image name.

A typical PV's volumeAttributes looks like the following (edited for terseness), and can hence be extended with one more parameter, say imageName (a name common to CephFS subvolumes and RBD images still needs to be found), that contains the required information.

  spec:
    csi:
      driver: rbd.csi.ceph.com
      volumeAttributes:
        clusterID: rook-ceph
        pool: replicapool
        imageName: csi-vol-f8c5366c-7456-11ea-8a88-76475b6d39cf <=== suggestion
      volumeHandle: 0001-0009-rook-ceph-0000000000000003-f8c5366c-7456-11ea-8a88-76475b6d39cf

With the above, even image names that differ from the default (IOW, from the csi-vol- prefix) can be easily identified (this would be the child ref mentioned in the issue).

The PV already carries the claimRef section with the PVC name and namespace, hence repeating this in volumeAttributes would duplicate the information (IOW, the parent link mentioned in the issue).

@chenxu1990
Contributor

@ShyamsundarR Thank you for your answer. As I understand it, the external-provisioner calls the 'CreateVolume' gRPC interface and then creates the PV object. Could you please tell me where ceph-csi can reflect a field into 'spec.csi.volumeAttributes'? I can't find the code; I only found where the external-provisioner stores the 'volumeID' in the 'VolumeHandle' field (https://github.com/kubernetes-csi/external-provisioner/blob/3d6bea496c305929b2d9b816ed8cfdfad7e4f5ce/pkg/controller/controller.go#L695).

@ShyamsundarR
Contributor

The PV's VolumeAttributes is where the CSI-returned VolumeContext is stored.

The VolumeContext is returned in the CreateVolume response and is, as of now, a direct map of the input parameters, but it is extensible.

I would look at modifying the code referenced above for RBD, and the corresponding code for CephFS, to return additional parameters in the volume context.

One caveat about the VolumeContext is that we should be able to regenerate it based only on the volume handle (as per the CSI spec). For the parameter currently under discussion that is possible, so it should not impact this ability in the future.
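
A rough sketch of what that could look like (hypothetical helper and package names, not the actual ceph-csi code); the external-provisioner then copies VolumeContext into the PV's spec.csi.volumeAttributes and VolumeId into spec.csi.volumeHandle:

// Illustrative sketch only, not actual ceph-csi code.
package driver // hypothetical package name

import (
	"github.com/container-storage-interface/spec/lib/go/csi"
)

// buildCreateVolumeResponse echoes the request parameters into the
// VolumeContext (the current behaviour) and adds the generated image name,
// e.g. "csi-vol-<uuid>", as the proposed imageName entry.
func buildCreateVolumeResponse(volumeID, imageName string, sizeBytes int64, params map[string]string) *csi.CreateVolumeResponse {
	volCtx := make(map[string]string, len(params)+1)
	for k, v := range params {
		volCtx[k] = v
	}
	volCtx["imageName"] = imageName

	return &csi.CreateVolumeResponse{
		Volume: &csi.Volume{
			VolumeId:      volumeID,
			CapacityBytes: sizeBytes,
			VolumeContext: volCtx,
		},
	}
}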

@nixpanic
Member

#957 got merged, can this be closed now?

@nixpanic nixpanic added component/cephfs Issues related to CephFS enhancement New feature or request component/rbd Issues related to RBD and removed help wanted Extra attention is needed labels Apr 20, 2020
@Madhu-1
Collaborator

Madhu-1 commented Apr 20, 2020

Closing as this is fixed in #957.

@Madhu-1 Madhu-1 closed this as completed Apr 20, 2020