
Add ability to use the Cinder volume plugin w/o OpenStack provider #33944

Closed
wants to merge 2 commits

Conversation

@Quentin-M (Contributor) commented Oct 3, 2016

What this PR does / why we need it:

Introduced a few weeks back at SIG Storage, this PR adds to the existing Cinder plugin the ability to manage and provision OpenStack Cinder volumes without needing the OpenStack provider. Instead, users simply specify a secret containing their own OpenStack Keystone credentials, along with a volume identifier.

While the plugin currently supports only the rbd backend (other Cinder backends can be implemented as needed moving forward), it differs from the existing rbd driver by abstracting away technical details and sensitive configuration (e.g. ceph.conf, ceph.client.admin.keyring) from the end user, leaving that responsibility to the cluster administrator, thereby respecting the separation of knowledge/privilege established between OpenStack administrators and users. The plugin also supports dynamic provisioning.

Below are examples/excerpts of manifests that use this plugin.

Keystone credentials

apiVersion: v1
kind: Secret
metadata:
  name: cinder-auth
type: kubernetes.io/cinder
data:
  authURL: aHR0cDovL2lkZW50aXR5Lm9wZW5zdGFjay5jbHVzdGVyOjMwMDgwL3YzLw==
  username: YWRtaW4=
  password: YWRtaW4=
  project: YWRtaW4=
  domain: ZGVmYXVsdA==

(plus optional caCert and cert/key fields for TLS authentication. As usual for Secret data, each value above is base64-encoded: YWRtaW4= decodes to admin, and the authURL decodes to http://identity.openstack.cluster:30080/v3/.)
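
A sketch of the same Secret extended with those optional TLS fields (the caCert, cert, and key names come from the note above; treating their values as base64-encoded PEM material is an assumption):

apiVersion: v1
kind: Secret
metadata:
  name: cinder-auth
type: kubernetes.io/cinder
data:
  authURL: aHR0cDovL2lkZW50aXR5Lm9wZW5zdGFjay5jbHVzdGVyOjMwMDgwL3YzLw==
  username: YWRtaW4=
  password: YWRtaW4=
  project: YWRtaW4=
  domain: ZGVmYXVsdA==
  caCert: <base64-encoded PEM CA bundle>
  cert: <base64-encoded PEM client certificate>
  key: <base64-encoded PEM client private key>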

Defining a Cinder volume for a pod mount

  - name: cinder-volume
    cinder:
      volumeID: cdf62f61-3120-4946-b68d-5371894cc890
      secretRef: cinder-auth
      fsType: ext4
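
For context, a minimal hypothetical Pod wrapping the excerpt above (pod name, image, and mount path are illustrative; the cinder block is as proposed):

apiVersion: v1
kind: Pod
metadata:
  name: cinder-example
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: cinder-volume
          mountPath: /data
  volumes:
    - name: cinder-volume
      cinder:
        volumeID: cdf62f61-3120-4946-b68d-5371894cc890
        secretRef: cinder-auth
        fsType: ext4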

StorageClass for dynamic provisioning

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: cinder-volumes
provisioner: kubernetes.io/cinder
parameters:
  secretRef: cinder-auth

Claiming a new Cinder volume for a pod mount

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cinder-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "cinder-volumes"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
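
And a minimal hypothetical Pod consuming that claim (names and mount path are illustrative; this part is standard Kubernetes):

apiVersion: v1
kind: Pod
metadata:
  name: cinder-claim-consumer
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: cinder-claim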

Special notes for your reviewer:

The PR has a huge line count because the gophercloud/gophercloud dependency is introduced and because there are changes in the generated code. Of course, rackspace/gophercloud is already vendored in Kubernetes, but that project is currently in a state of feature freeze. Unfortunately, we couldn't simply use rackspace/gophercloud in all good conscience because multi-attach support isn't there and because new features such as BeginDetaching are theoretically needed. On the other hand, we couldn't entirely replace the rackspace/gophercloud dependency because the new repository doesn't contain the rackspace/ directory, which is currently used by the rackspace cloud provider. The feature-freeze issue mentions that the directory will be moved to its own repository. I am sure the owners/maintainers of the rackspace provider (no OWNERS file is present in the directory) would be able to tell me more and may come up with a follow-up PR.

Thank you for your time.



@k8s-ci-robot (Contributor):

Can a kubernetes member verify that this patch is reasonable to test? If so, please reply with "@k8s-bot ok to test" on its own line.

Regular contributors should join the org to skip this step.

@googlebot:

We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for the commit author(s). If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google.

@k8s-github-robot added the kind/api-change, size/XXL, and release-note-label-needed labels Oct 3, 2016
@lavalamp assigned saad-ali and unassigned lavalamp Oct 3, 2016
@k8s-github-robot added the size/XS label and removed the size/XXL label Oct 4, 2016
@googlebot:

CLAs look good, thanks!

@googlebot added the cla: yes label and removed the cla: no label Oct 7, 2016
@rootfs (Contributor) commented Oct 10, 2016:

@kubernetes/sig-openstack

There is a request to migrate from rackspace to gophercloud #30404.

@@ -0,0 +1,432 @@
/*
Copyright 2015 The Kubernetes Authors.
Contributor:

2016 (and for other new files)

Contributor Author:

Changed.

DeleteVolume(deleter *cinderVolumeDeleter) error
}

// cinderPersistentDisk volumes are disk resources provided by C3
Contributor:

C3?

Contributor Author:

Changed.

type cinderVolume struct {
volName string
// Specifies the partition to mount
//partition string
Contributor:

are you going to support it?

Contributor Author:

I do not believe so in the near future as it is not even available in gophercloud and there is no use-case right now. I dropped the comments.

}

// SetUp bind mounts to the volume path.
func (m *cinderVolumeMounter) SetUpAt(dir string, fsGroup *int64) error {
Contributor:

need to implement detach/attach

Contributor Author:

What exactly do you mean? SetUpAt calls out to cdManager.AttachDisk, which does the required interactions with Cinder and then uses the appropriate backend-driver handler for the actual attachment (volHandler.AttachDisk), if the volume is not already attached locally and if it can be attached (it checks for any other attachments and multi-attach support). The same goes for the detachment process.

@rootfs (Contributor) Oct 10, 2016:

Implement attach/detach interface and make it an attachable volume, see https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/cinder/attacher.go

@eyakubovich:

AFAIK, the Attacher and Detacher interfaces are not called on the kubelet but rather on the controller. When the implementation ends up calling the cloud provider, that's exactly what one wants. In this case the attachment needs to happen on the target kubelet. It's exactly the same situation as with the current rbd volume driver.

Contributor:

@eyakubovich we are addressing this issue in rbd, see #33660

type cinderVolumeMounter struct {
*cinderVolume
readOnly bool
pod *api.Pod
Contributor:

why do we need the entire pod? would podUID do?

Contributor Author:

Changed.

return secret, nil
}

func newCinderClient(host volume.VolumeHost, secret *api.Secret) (*gophercloud.ServiceClient, error) {
Contributor:

nit: please split the file into openstack util and cinder util

Contributor Author:

Afraid it would only contain this function?

@k8s-github-robot added the needs-rebase label Oct 12, 2016
@khrisrichardson:

@Quentin-M: not having read the code or knowing the differences between the rackspace and gophercloud libs, do you know whether gophercloud is susceptible to the Cinder device-mapping issue, which @kiall is attempting to work around with https://github.com/kubernetes/kubernetes/pull/33270/files?

@anguslees (Member):

@khrisrichardson: That's an issue with the information cinder server provides (the "device" value is mostly lies and wishful thinking) - and not something the client can fix. The same workarounds will be necessary with both gophercloud/gophercloud and rackspace/gophercloud.

@k8s-github-robot removed the needs-rebase label Oct 13, 2016
pkg/api/types.go Outdated
@@ -243,6 +243,8 @@ type VolumeSource struct {
VsphereVolume *VsphereVirtualDiskVolumeSource `json:"vsphereVolume,omitempty"`
// AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod.
AzureDisk *AzureDiskVolumeSource `json:"azureDisk,omitempty"`
// CinderLocal represents a cinder volume attached and mounted on kubelets host machine without involing the cloud-provider.
Member:

typo: "involing"

pkg/api/types.go Outdated
@@ -287,6 +289,8 @@ type PersistentVolumeSource struct {
VsphereVolume *VsphereVirtualDiskVolumeSource `json:"vsphereVolume,omitempty"`
// AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod.
AzureDisk *AzureDiskVolumeSource `json:"azureDisk,omitempty"`
// CinderLocal represents a cinder volume attached and mounted on kubelets host machine without involing the cloud-provider.
Member:

typo: "involing"

@anguslees (Member):

Looks fine to me from a casual read. I'm disappointed we couldn't share more of the code with the existing cinder volume plugin.

In particular, it seems like the main difference vs existing cinder plugin is that the keystone auth credentials now come from the (user-specified) volume metadata rather than the (admin-specified) provider config. If this is the ~only difference, could cinder_local be basically a thin wrapper around creating an openstack provider instance with different credentials, and then using the existing cinder driver to do the attach/detach? (Or if there is a gap, is there something easy we can do to make that possible rather than maintaining two separate codebases?)
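
A rough sketch of that wrapper idea, in case it helps the discussion (every name below is hypothetical and the real provider/plugin interfaces differ; this only illustrates the shape of the suggestion):

package cinderlocal

import "fmt"

// cinderProvider is a hypothetical stand-in for the subset of the cloud
// provider interface the existing cinder plugin relies on for attach/detach.
type cinderProvider interface {
	AttachDisk(instanceID, volumeID string) (string, error)
	DetachDisk(instanceID, volumeID string) error
}

// newProviderFromSecret would build a provider from the user-supplied
// Keystone credentials instead of the admin-configured cloud config, so
// the existing cinder attach/detach code could be reused unchanged.
func newProviderFromSecret(secret map[string][]byte) (cinderProvider, error) {
	// Validate that the secret carries the fields shown in the PR description.
	for _, k := range []string{"authURL", "username", "password", "project", "domain"} {
		if _, ok := secret[k]; !ok {
			return nil, fmt.Errorf("secret is missing key %q", k)
		}
	}
	// ...authenticate against Keystone here and wrap the resulting client
	// in something that satisfies cinderProvider...
	return nil, fmt.Errorf("illustration only, not implemented")
}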

@rootfs (Contributor) commented Oct 14, 2016:

@anguslees +1
I would love to have a single cinder volume that works with openstack provider and also directly with keystone. Having two cinder volumes is confusing.

Is there anything the existing cinder volume cannot do that local cinder can? I see the flexible keystone config is interesting, but unless a node can join multiple OpenStack clouds, how is it different from using the cloud config?

Can multi-attach support be considered in the cloud provider? @kubernetes/sig-openstack

@feiskyer (Member) commented Oct 14, 2016:

@Quentin-M Really nice idea, supporting cinder volumes without a cloud provider.

I would love to have a single cinder volume that works with openstack provider and also directly with keystone. Having two cinder volumes is confusing.

+1. We reuse one cinder volume for both cases in hypernetes. The main differences are:

  • Auth: I think it's OK to get credentials from a secret.
  • How to mount the cinder backend volumes: without a cloud provider, the plugin can't attach the volume via Nova.

The other logic is generally the same as the existing cinder volume. I think we could make a small change to the Cinder volume plugin to support this feature.

@k8s-github-robot added the needs-rebase label Oct 18, 2016
@k8s-github-robot added the size/XXL label and removed the size/XS label Oct 20, 2016
@xsgordon:

@rootfs multi-attach isn't fully complete from an OpenStack perspective (some work is still outstanding in Nova), but I'm not sure whether that actually impacts our ability to use it for these use cases?

@rootfs (Contributor) commented Oct 21, 2016:

@xsgordon thanks for the clarification. I agree multi-attach is not a must-have for the cinder volume; we've marked cinder as RWO.

@@ -1,5 +1,5 @@
/*
Copyright 2015 The Kubernetes Authors.
Copyright 2016 The Kubernetes Authors.
Member:

Leave copyright year with original creation date

globalPDPath := makeGlobalPDName(cdl.plugin.host, volumeSource.VolumeID)
glog.V(4).Infof("Cinder (%v): Attach %v", cdl.GetName(), volumeSource.VolumeID)

client, err := newCinderClientFromSecret(volumeSource.SecretRef, cdl.plugin.host)
Member:

Why not call newCinderClientFromSecret(...) in getManager(...) and stick the resulting gophercloud.ServiceClient in the cdManagerLocal struct, instead of creating it on every call?
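
A sketch of that suggestion (cdManagerLocal and newCinderClientFromSecret are names from the PR; the signatures here are assumptions):

package cinderlocal

import "github.com/gophercloud/gophercloud"

// cdManagerLocal caches the service client built once at construction time,
// so AttachDisk/DetachDisk reuse it instead of recreating it on every call.
type cdManagerLocal struct {
	client *gophercloud.ServiceClient
}

// newManagerLocal takes a client factory, e.g. a closure wrapping
// newCinderClientFromSecret(secretRef, host), and caches the result.
func newManagerLocal(newClient func() (*gophercloud.ServiceClient, error)) (*cdManagerLocal, error) {
	c, err := newClient()
	if err != nil {
		return nil, err
	}
	return &cdManagerLocal{client: c}, nil
}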

}

type cinderPlugin struct {
host volume.VolumeHost
// Guarding SetUp and TearDown operations
volumeLocks keymutex.KeyMutex
// For tests purposes, otherwise detected using host:
cinderProvider CinderProvider
Member:

cinderProvider is specific to the cloud provider implementation. Why stick it in this common cinderPlugin struct? Wouldn't it make more sense in cdManagerCloud?

return "", fmt.Errorf("failed to list openstack instances")
}
instanceid, err := instances.InstanceID(nodeName)
manager, err := getManager(attacher.plugin, spec, nil)
Member:

Should this manager be cached in cinderVolume, which already has a manager cinderManager field? If not, should that field be removed?

Contributor Author:

The Deleter and the Provisioner cache the manager inside cinderVolume, as their constructors receive either volume.Spec or volume.VolumeOptions, letting us determine which manager should be used right away (their instances are volume-specific). Unfortunately, the Attacher and Detacher only get volume.Spec during the actual method calls, so we have to determine the manager at that time (their instances are not constructed as volume-specific, even though that might be the case in practice).

@k8s-github-robot removed the needs-rebase label Mar 3, 2017
@Quentin-M (Contributor Author):

Rebased. I can fully re-test this PR once it gets a little further in the review process. Thanks.

@k8s-github-robot added the needs-rebase label Mar 4, 2017
@k8s-github-robot removed the needs-rebase label Mar 11, 2017
@e0ne commented Mar 17, 2017:

I'm not an expert in Kubernetes or Go, but this patch looks good from Cinder's point of view. I'll try to spend more time playing with it soon.

@k8s-github-robot:

@Quentin-M PR needs rebase

@k8s-github-robot added the needs-rebase label Mar 23, 2017
@grodrigues3 added the needs-ok-to-test label Apr 5, 2017
@dims (Member) commented May 5, 2017:

@Quentin-M : will you be revisiting this PR?

@dims (Member) commented May 5, 2017:

@k8s-bot ok to test

@k8s-ci-robot removed the needs-ok-to-test label May 5, 2017
@k8s-ci-robot (Contributor):

@Quentin-M: The following test(s) failed:

Test name                  Commit   Rerun command
Jenkins Bazel Build        e1567a0  @k8s-bot bazel test this
Jenkins kops AWS e2e       e1567a0  @k8s-bot kops aws e2e test this
Jenkins verification       e1567a0  @k8s-bot verify test this
Jenkins GCE etcd3 e2e      e1567a0  @k8s-bot gce etcd3 e2e test this
Jenkins Kubemark GCE e2e   e1567a0  @k8s-bot kubemark e2e test this
Jenkins GCE Node e2e       e1567a0  @k8s-bot node e2e test this
Jenkins unit/integration   e1567a0  @k8s-bot unit test this

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@xrl commented Jun 1, 2017:

Does this work mean cinder requests can connect volumes to dedicated hardware?

@mkutsevol (Contributor):

Hey @xrl! I'm not the author/expert here, but while implementing v2 support I explored this one a bit. This work is for supporting cinder volumes without a cloud provider per se: a standalone mode for cinder volumes.

@j-griffith (Contributor):

@Quentin-M let me know if you think you'll be picking this back up; if not, I'd be happy to volunteer to help out with it, assuming the community feels OK about moving forward with it. Personally, I think it's valuable to have this supported.

@Quentin-M (Contributor Author):

@j-griffith Thank you for following up. I do think this has a lot of value as well, just like various other members of the community. I think this is not getting traction, though, due to the interface change, which might be unsafe across kubelet restarts, and because of the shift towards having the drivers outside of the repository. I might be able to give it some work if the community wants to move forward too.

@j-griffith (Contributor):

@Quentin-M Awesome! Something I started thinking about a little bit today (it might prove to be a bad idea, so I'm floating it here for all the world to chime in on): rather than trying to modify the existing Cinder code and also make the interface changes you have here, what about creating an independent, volume-specific driver that keeps in line with the existing volume provider interface?

Honestly, there's some room for cleanup in the existing Cinder code anyway (deprecating/removing V1 as well as some consistency things), and this might be a good opportunity to tackle some of that. At the same time, we could work out the standalone Cinder model without introducing any risk or heavy lifting on the existing cloud-provider-related side of it, then work out changes to the openstack cloudprovider (whether it's in or out of tree) to build on/use the volume plugin.

In terms of attach, I just wanted to note that there is the os-brick extension in cinderclient now, which should enable all of the supported connectors in Cinder/Nova for localhost. Granted, right now it wouldn't be the most elegant (the client being a Python lib), but it's certainly doable, and it might at least be an interim solution if people are interested. That means, in response to some of the earlier questions, you would then have the ability to attach ALL of the supported Cinder transport types (iSCSI-CHAP, FC, LIO, ISER, RBD, etc.). Long term, there is some interest from folks in building a brick golang package that would make things much nicer, but that could be a ways off. Anyway, just some thoughts for anybody/everybody interested here.

@mtanino commented Jul 17, 2017:

@j-griffith @Quentin-M
Hi, I think this PR needs to be discussed at the sig-storage meeting again. The next meeting will be on July 20 (Thursdays at 16:00 UTC).

Features for v1.8 are currently in the planning phase. Feel free to put the PR in the section "PRs to discuss or that need attention", then attend the meeting!

@FengyunPan:

/cc

@k8s-github-robot:

Adding do-not-merge/release-note-label-needed because the release note process has not been followed.
One of the following labels is required "release-note", "release-note-action-required", "release-note-experimental" or "release-note-none".
Please see: https://github.com/kubernetes/community/blob/master/contributors/devel/pull-requests.md#write-release-notes-if-needed.

@k8s-github-robot added the do-not-merge/release-note-label-needed label Aug 30, 2017
@fabiand (Contributor) commented Sep 25, 2017:

I wonder if this issue can be closed, considering that kubernetes-retired/external-storage#368 has landed; it allows consuming Cinder-managed volumes using the transport protocols Kubernetes already supports.

@Quentin-M closed this Sep 25, 2017
@Quentin-M deleted the volume-cinder-local branch December 17, 2017 07:27
@jianglingxia (Contributor):

Hi, k8s now supports cinder volumes in the IaaS scenario. Is there a plan to let k8s support cinder volumes on bare metal, or how can it be used? Thanks to all of you, and thanks for your replies! @rootfs @feiskyer @dims @FengyunPan

@dims (Member) commented Mar 28, 2018:

@jianglingxia yes, you can use the standalone cinder provisioner with bare metal - https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-standalone-provisioner.md

Labels
area/provider/openstack  Issues or PRs related to openstack provider
cncf-cla: yes  Indicates the PR's author has signed the CNCF CLA.
do-not-merge/release-note-label-needed  Indicates that a PR should not merge because it's missing one of the release note labels.
kind/api-change  Categorizes issue or PR as related to adding, removing, or otherwise changing an API
needs-rebase  Indicates a PR cannot be merged because it has merge conflicts with HEAD.
sig/storage  Categorizes an issue or PR as relevant to SIG Storage.
size/XXL  Denotes a PR that changes 1000+ lines, ignoring generated files.