
The dynamic PV name does not match the CEPH RBD image name. #69324

Open
yinwenqin opened this Issue Oct 2, 2018 · 3 comments


yinwenqin commented Oct 2, 2018

/kind feature
/sig storage

What happened:
We use Ceph RBD through dynamic PV provisioning. Everything works, but the dynamic PV names do not match the RBD image names, so I cannot tell which pod corresponds to which RBD image. This makes it hard to check the capacity usage of each pod and is inconvenient for operations and maintenance.

Here are the dynamic PVs:

root@yksp009027:~/mytest/ceph/rbd/test# kubectl get pv | grep pvc
pvc-65fbc1c0-c55a-11e8-878f-141877468256   100Gi      RWO            Retain           Bound     default/accountsystemdbdm        cephrbd                  22h
pvc-f692f7b1-c55b-11e8-878f-141877468256   100Gi      RWO            Retain           Bound     default/amazoninfosysdbdm        cephrbd                  22h
pvc-f6c6a2a3-c55b-11e8-878f-141877468256   100Gi      RWO            Retain           Bound     default/dmclgsdbdm               cephrbd                  22h
pvc-f6f971c7-c55b-11e8-878f-141877468256   100Gi      RWO            Retain           Bound     default/erpcenterdbdm            cephrbd                  22h
pvc-f72ea03b-c55b-11e8-878f-141877468256   100Gi      RWO            Retain           Bound     default/lazadatooldbdm           cephrbd                  22h
pvc-f75e1c9a-c55b-11e8-878f-141877468256   100Gi      RWO            Retain           Bound     default/newordersconverterdbdm   cephrbd                  22h
pvc-f78e3eae-c55b-11e8-878f-141877468256   100Gi      RWO            Retain           Bound     default/yksowdbdm                cephrbd                  22h

Here are the rbd images:

[root@yksp020114 ~]# rbd ls rbd
kubernetes-dynamic-pvc-6613ab8f-c55a-11e8-b2df-0a58ac1a084e
kubernetes-dynamic-pvc-f6985fc6-c55b-11e8-b2df-0a58ac1a084e
kubernetes-dynamic-pvc-f6cdd9c2-c55b-11e8-b2df-0a58ac1a084e
kubernetes-dynamic-pvc-f711dc1d-c55b-11e8-b2df-0a58ac1a084e
kubernetes-dynamic-pvc-f7c90b57-c55b-11e8-b2df-0a58ac1a084e
kubernetes-dynamic-pvc-f825062d-c55b-11e8-b2df-0a58ac1a084e
kubernetes-dynamic-pvc-f8802d6d-c55b-11e8-b2df-0a58ac1a084e

The PV names do not match the RBD image names.

What you expected to happen:
What is the mapping between a dynamic PV's name and the name of its RBD image? Knowing it would let me track the capacity usage of each pod.

Environment:

  • Kubernetes version: v1.9.3
  • OS: Ubuntu 16.04
  • Kernel: 4.4.0-87-generic
  • Ceph version: 10.2.11

ictfox commented Dec 11, 2018

This issue relates to the rbd provisioner code here:
https://github.com/kubernetes-incubator/external-storage/blob/master/ceph/rbd/pkg/provision/provision.go#L124

I found that the related PV name is passed in as a parameter (options.PVName), but it is not used for the image name; there was probably not much concern before about keeping the two UUIDs matching.
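
For context, here is a minimal sketch (in Go) of the naming behavior described above. The helper name provisionImageName is hypothetical, not the identifier used in provision.go; only the "kubernetes-dynamic-pvc-" prefix is taken from the rbd ls output in this issue.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/uuid"
)

// provisionImageName is a hypothetical stand-in for the provisioner's image
// naming step: the image name is derived from a freshly generated UUID rather
// than from the PV name that the controller passes in as options.PVName.
func provisionImageName(pvName string) string {
	_ = pvName // available to the provisioner, but unused for the image name
	return fmt.Sprintf("kubernetes-dynamic-pvc-%s", uuid.NewUUID())
}

func main() {
	pvName := "pvc-65fbc1c0-c55a-11e8-878f-141877468256"
	fmt.Println("PV name:  ", pvName)
	fmt.Println("RBD image:", provisionImageName(pvName))
	// The two UUIDs differ, which is exactly the mismatch reported above.
}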


ictfox commented Dec 12, 2018

In the Kubernetes CSI code, the volume provisioner passes the volume name to the SP's CSI controller.
In the Ceph CSI code, the controller checks the volume name parameter and, if its length is non-zero, uses that name to create the RBD image.
So under the CSI framework, the Kubernetes PV name stays the same as the RBD image name.
https://github.com/ceph/ceph-csi/blob/master/pkg/rbd/controllerserver.go#L94
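
Roughly, the check being referred to could be sketched as follows. This is a simplified illustration, not the actual ceph-csi code: createVolumeRequest and pickImageName are stand-ins for the CSI CreateVolumeRequest handling, and error handling plus the actual rbd image creation are omitted.

package main

import "fmt"

// createVolumeRequest stands in for the CSI CreateVolumeRequest message.
type createVolumeRequest struct {
	Name string // set by the CSI external-provisioner, e.g. "pvc-<uid>"
}

// pickImageName returns the requested name when it is non-empty, so the RBD
// image ends up with the same name as the Kubernetes PV; otherwise it falls
// back to a generated name.
func pickImageName(req createVolumeRequest, generated string) string {
	if len(req.Name) > 0 {
		return req.Name
	}
	return generated
}

func main() {
	req := createVolumeRequest{Name: "pvc-65fbc1c0-c55a-11e8-878f-141877468256"}
	fmt.Println("RBD image:", pickImageName(req, "kubernetes-dynamic-pvc-generated-fallback"))
}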


fejta-bot commented Mar 12, 2019

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
