The dynamic PV name does not match the CEPH RBD image name. #69324

Closed
yinwenqin opened this issue Oct 2, 2018 · 8 comments
@yinwenqin yinwenqin commented Oct 2, 2018

/kind feature
/sig storage

What happened:
We use Ceph RBD through dynamic PVs. Everything works, but the dynamic PV names do not match the RBD image names, so I cannot clearly see which pod corresponds to which RBD image. This makes it very inconvenient to check the capacity usage of each pod and to operate and maintain the cluster.

Here are the dynamic PVs:

root@yksp009027:~/mytest/ceph/rbd/test# kubectl get pv | grep pvc
pvc-65fbc1c0-c55a-11e8-878f-141877468256   100Gi      RWO            Retain           Bound     default/accountsystemdbdm        cephrbd                  22h
pvc-f692f7b1-c55b-11e8-878f-141877468256   100Gi      RWO            Retain           Bound     default/amazoninfosysdbdm        cephrbd                  22h
pvc-f6c6a2a3-c55b-11e8-878f-141877468256   100Gi      RWO            Retain           Bound     default/dmclgsdbdm               cephrbd                  22h
pvc-f6f971c7-c55b-11e8-878f-141877468256   100Gi      RWO            Retain           Bound     default/erpcenterdbdm            cephrbd                  22h
pvc-f72ea03b-c55b-11e8-878f-141877468256   100Gi      RWO            Retain           Bound     default/lazadatooldbdm           cephrbd                  22h
pvc-f75e1c9a-c55b-11e8-878f-141877468256   100Gi      RWO            Retain           Bound     default/newordersconverterdbdm   cephrbd                  22h
pvc-f78e3eae-c55b-11e8-878f-141877468256   100Gi      RWO            Retain           Bound     default/yksowdbdm                cephrbd                  22h

Here are the rbd images:

[root@yksp020114 ~]# rbd ls rbd
kubernetes-dynamic-pvc-6613ab8f-c55a-11e8-b2df-0a58ac1a084e
kubernetes-dynamic-pvc-f6985fc6-c55b-11e8-b2df-0a58ac1a084e
kubernetes-dynamic-pvc-f6cdd9c2-c55b-11e8-b2df-0a58ac1a084e
kubernetes-dynamic-pvc-f711dc1d-c55b-11e8-b2df-0a58ac1a084e
kubernetes-dynamic-pvc-f7c90b57-c55b-11e8-b2df-0a58ac1a084e
kubernetes-dynamic-pvc-f825062d-c55b-11e8-b2df-0a58ac1a084e
kubernetes-dynamic-pvc-f8802d6d-c55b-11e8-b2df-0a58ac1a084e

The PV names do not match the RBD image names.

What you expected to happen:
What is the mapping or pattern between the dynamic PV name and the RBD image name? Knowing it would let me track the capacity usage of each pod.

Environment:

  • Kubernetes version: v1.9.3
  • OS: Ubuntu 16.04
  • Kernel: 4.4.0-87-generic
  • Ceph version: 10.2.11
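
One way to recover the mapping asked about above is to read it back from the PV objects themselves: the rbd provisioner records the image it is bound to in the PV's .spec.rbd.image field. Below is a minimal client-go sketch (the program itself is illustrative and not part of the original report):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the usual kubeconfig (~/.kube/config); adjust for in-cluster use.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pvs, err := client.CoreV1().PersistentVolumes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pv := range pvs.Items {
		if pv.Spec.RBD == nil {
			continue // not an RBD-backed PV
		}
		claim := "unbound"
		if pv.Spec.ClaimRef != nil {
			claim = pv.Spec.ClaimRef.Namespace + "/" + pv.Spec.ClaimRef.Name
		}
		// Spec.RBD.RBDImage is the image name used on the Ceph side,
		// e.g. kubernetes-dynamic-pvc-6613ab8f-...
		fmt.Printf("%s\t%s\t%s\n", pv.Name, claim, pv.Spec.RBD.RBDImage)
	}
}

Even though the PV name and the image name carry different UUIDs, the PV object already stores which image it points at, so the PVC-to-pod-to-image mapping can be rebuilt from the API.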
@ictfox ictfox commented Dec 11, 2018

This issue relates to the rbd provisioner code here:
https://github.com/kubernetes-incubator/external-storage/blob/master/ceph/rbd/pkg/provision/provision.go#L124

The PV name is passed in through the options.PVName parameter, but it is not used for the image name; apparently there was not much concern before about keeping the two UUIDs matching.
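
A minimal sketch of the naming behaviour described above (a hypothetical simplification, not the actual provision.go code): the provisioner receives the PV name but generates a fresh UUID for the RBD image, which is why the two identifiers never line up.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/uuid"
)

// imageNameFor mimics the behaviour described above: the PV name chosen by
// Kubernetes is ignored and the RBD image gets a freshly generated UUID.
// (Function name and structure are illustrative only.)
func imageNameFor(pvName string) string {
	_ = pvName // only names the PersistentVolume object, never the image
	return fmt.Sprintf("kubernetes-dynamic-pvc-%s", uuid.NewUUID())
}

func main() {
	// Prints kubernetes-dynamic-pvc-<new uuid>, which does not match the PV name.
	fmt.Println(imageNameFor("pvc-65fbc1c0-c55a-11e8-878f-141877468256"))
}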

@ictfox ictfox commented Dec 12, 2018

In the Kubernetes CSI code, the volume provisioner passes the volume name to the SP's CSI controller.
In the Ceph CSI code, the controller checks the volume name parameter; if its length is not zero, it uses that name to create the RBD image.
So under the CSI framework, the Kubernetes PV name stays the same as the RBD image name.
https://github.com/ceph/ceph-csi/blob/master/pkg/rbd/controllerserver.go#L94
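
A simplified sketch of that check (an assumed shape, not the actual controllerserver.go code; the fallback prefix is made up for illustration):

package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"k8s.io/apimachinery/pkg/util/uuid"
)

// rbdImageName sketches the behaviour described above: the CSI
// external-provisioner passes the PV name as req.Name, and when it is
// non-empty the driver uses it directly as the RBD image name, so the
// PV name and the image name stay in sync.
func rbdImageName(req *csi.CreateVolumeRequest) string {
	if name := req.GetName(); len(name) != 0 {
		return name
	}
	// Fallback when no name was supplied (prefix is hypothetical).
	return fmt.Sprintf("csi-rbd-%s", uuid.NewUUID())
}

func main() {
	req := &csi.CreateVolumeRequest{Name: "pvc-65fbc1c0-c55a-11e8-878f-141877468256"}
	fmt.Println(rbdImageName(req)) // prints the PV name itself
}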

@fejta-bot fejta-bot commented Mar 12, 2019

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@fejta-bot fejta-bot commented Apr 11, 2019

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@fejta-bot fejta-bot commented May 11, 2019

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot k8s-ci-robot commented May 11, 2019

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ejmarten ejmarten commented Jul 19, 2019

/reopen

@k8s-ci-robot k8s-ci-robot commented Jul 19, 2019

@ejmarten: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
