
The dynamic PV name does not match the CEPH RBD image name. #69324

Closed
yinwenqin opened this issue Oct 2, 2018 · 8 comments
Labels: kind/feature, lifecycle/rotten, sig/storage

Comments


yinwenqin commented Oct 2, 2018

/kind feature
/sig storage

What happened:
We use Ceph RBD through dynamically provisioned PVs. Everything works, but the dynamic PV names do not match the RBD image names, so I cannot tell which pod corresponds to which RBD image. This makes it very inconvenient to check each pod's capacity usage and to operate and maintain the cluster.

Here are the dynamic pvs:

root@yksp009027:~/mytest/ceph/rbd/test# kubectl get pv | grep pvc
pvc-65fbc1c0-c55a-11e8-878f-141877468256   100Gi      RWO            Retain           Bound     default/accountsystemdbdm        cephrbd                  22h
pvc-f692f7b1-c55b-11e8-878f-141877468256   100Gi      RWO            Retain           Bound     default/amazoninfosysdbdm        cephrbd                  22h
pvc-f6c6a2a3-c55b-11e8-878f-141877468256   100Gi      RWO            Retain           Bound     default/dmclgsdbdm               cephrbd                  22h
pvc-f6f971c7-c55b-11e8-878f-141877468256   100Gi      RWO            Retain           Bound     default/erpcenterdbdm            cephrbd                  22h
pvc-f72ea03b-c55b-11e8-878f-141877468256   100Gi      RWO            Retain           Bound     default/lazadatooldbdm           cephrbd                  22h
pvc-f75e1c9a-c55b-11e8-878f-141877468256   100Gi      RWO            Retain           Bound     default/newordersconverterdbdm   cephrbd                  22h
pvc-f78e3eae-c55b-11e8-878f-141877468256   100Gi      RWO            Retain           Bound     default/yksowdbdm                cephrbd                  22h

Here are the rbd images:

[root@yksp020114 ~]# rbd ls rbd
kubernetes-dynamic-pvc-6613ab8f-c55a-11e8-b2df-0a58ac1a084e
kubernetes-dynamic-pvc-f6985fc6-c55b-11e8-b2df-0a58ac1a084e
kubernetes-dynamic-pvc-f6cdd9c2-c55b-11e8-b2df-0a58ac1a084e
kubernetes-dynamic-pvc-f711dc1d-c55b-11e8-b2df-0a58ac1a084e
kubernetes-dynamic-pvc-f7c90b57-c55b-11e8-b2df-0a58ac1a084e
kubernetes-dynamic-pvc-f825062d-c55b-11e8-b2df-0a58ac1a084e
kubernetes-dynamic-pvc-f8802d6d-c55b-11e8-b2df-0a58ac1a084e

The PV names do not match the RBD image names.

What you expected to happen:
What is the mapping or pattern between the name of a dynamically provisioned PV and the name of its RBD image, so that I can track the capacity usage of each pod?
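
(Note: whatever name the provisioner generates is recorded in the PV object itself, so the PV-to-image mapping can be recovered from the API. Below is a minimal client-go sketch, assuming in-tree RBD volumes and a kubeconfig at the default location; it is an illustration, not code from this issue.)

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config and build a clientset.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Print each PV name next to the RBD image recorded in its spec.
	pvs, err := clientset.CoreV1().PersistentVolumes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pv := range pvs.Items {
		if pv.Spec.RBD != nil {
			fmt.Printf("%s -> %s\n", pv.Name, pv.Spec.RBD.RBDImage)
		}
	}
}

The same mapping should also be visible with kubectl describe pv <name>, which prints the RBD image in the volume source section.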

Environment:

  • Kubernetes version: v1.9.3
  • OS: Ubuntu 16.04
  • Kernel: 4.4.0-87-generic
  • Ceph version: 10.2.11
@k8s-ci-robot added the kind/feature, needs-sig, and sig/storage labels and removed the needs-sig label on Oct 2, 2018

ictfox commented Dec 11, 2018

This issue relates to the RBD provisioner code here:
https://github.com/kubernetes-incubator/external-storage/blob/master/ceph/rbd/pkg/provision/provision.go#L124

I found that the PV name is passed in via the parameter options.PVName, but it is not used for the image name; it seems there was not much concern before about keeping the two names' UUIDs matching.
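
For context, the naming logic is roughly the following (a paraphrased sketch, not the verbatim provisioner source): the PV controller derives the PV name from the PVC's UID, while the provisioner generates a fresh UUID for the image, so the two identifiers can never match.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/uuid"
)

func main() {
	// The PV name is built by the PV controller as "pvc-" plus the
	// PVC's UID (a fresh UUID stands in for that UID here)...
	pvName := fmt.Sprintf("pvc-%s", uuid.NewUUID())

	// ...while the provisioner generates a brand-new UUID for the image,
	// which is why "pvc-65fbc1c0-..." ends up backed by something like
	// "kubernetes-dynamic-pvc-6613ab8f-...".
	imageName := fmt.Sprintf("kubernetes-dynamic-pvc-%s", uuid.NewUUID())

	fmt.Println(pvName, "->", imageName)
}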


ictfox commented Dec 12, 2018

In the Kubernetes CSI code, the volume provisioner passes the volume name to the storage provider's CSI controller. The Ceph CSI code then checks the volume-name parameter and, if its length is non-zero, uses that name to create the RBD image. So under the CSI framework the Kubernetes PV name stays the same as the RBD image name:
https://github.com/ceph/ceph-csi/blob/master/pkg/rbd/controllerserver.go#L94
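
A paraphrased sketch of that check (simplified here; the linked controllerserver.go differs in detail). The external-provisioner sidecar passes the PV name as the CreateVolume request name, and the driver reuses it as the image name:

package rbd

import (
	"context"
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"k8s.io/apimachinery/pkg/util/uuid"
)

type ControllerServer struct{}

func (cs *ControllerServer) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
	// The external-provisioner sets req.Name to the PV name
	// (e.g. "pvc-65fbc1c0-..."), so reusing it keeps the RBD image
	// name identical to the PV name.
	volName := req.GetName()
	if len(volName) == 0 {
		// Hypothetical fallback for this sketch only.
		volName = fmt.Sprintf("csi-rbd-%s", uuid.NewUUID())
	}

	// ...the real driver creates the RBD image named volName here...

	return &csi.CreateVolumeResponse{
		Volume: &csi.Volume{VolumeId: volName},
	}, nil
}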

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Mar 12, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Apr 11, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ejmarten

/reopen

@k8s-ci-robot

@ejmarten: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
