Add ability to use the Cinder volume plugin w/o OpenStack provider #33944
Conversation
Can a kubernetes member verify that this patch is reasonable to test? If so, please reply with "@k8s-bot ok to test" on its own line. Regular contributors should join the org to skip this step.
We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for the commit author(s). If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google.
CLAs look good, thanks!
@kubernetes/sig-openstack There is a request to migrate from rackspace to gophercloud #30404.
pkg/volume/cinder_local/cinder.go (Outdated)
@@ -0,0 +1,432 @@
/*
Copyright 2015 The Kubernetes Authors.
2016 (and for other new files)
Changed.
pkg/volume/cinder_local/cinder.go (Outdated)
DeleteVolume(deleter *cinderVolumeDeleter) error
}

// cinderPersistentDisk volumes are disk resources provided by C3
C3?
Changed.
pkg/volume/cinder_local/cinder.go (Outdated)
type cinderVolume struct {
	volName string
	// Specifies the partition to mount
	//partition string
are you going to support it?
I do not believe so in the near future as it is not even available in gophercloud and there is no use-case right now. I dropped the comments.
pkg/volume/cinder_local/cinder.go (Outdated)
}

// SetUp bind mounts to the volume path.
func (m *cinderVolumeMounter) SetUpAt(dir string, fsGroup *int64) error {
need to implement detach/attach
What exactly do you mean? SetUpAt calls out to cdManager.AttachDisk, which then does the required interactions with Cinder and uses the adequate handler for the backend driver to perform the actual attachment (volHandler.AttachDisk), if the volume is not already attached locally and if it can be attached (it checks for any other attachments and multi-attach support). The same goes for the detachment process.
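The attach flow described in this reply can be sketched roughly as follows. This is a minimal stand-alone illustration under stated assumptions; the type and function names are made up for the sketch, not the PR's actual code:

```go
package main

import "fmt"

// An illustrative sketch (not the PR's actual code) of the flow described
// above: SetUpAt delegates to the manager's AttachDisk, which checks the
// volume's state with Cinder before handing off to the backend handler.
type volumeState struct {
	attachedHere bool // already attached on this kubelet
	attachments  int  // attachments Cinder reports elsewhere
	multiAttach  bool // whether the volume supports multi-attach
}

func attachDisk(v *volumeState) (string, error) {
	if v.attachedHere {
		return "already attached locally", nil
	}
	if v.attachments > 0 && !v.multiAttach {
		return "", fmt.Errorf("volume is attached elsewhere and multi-attach is unsupported")
	}
	// Here volHandler.AttachDisk would perform the backend-specific attachment.
	v.attachedHere = true
	return "attached", nil
}

func main() {
	if _, err := attachDisk(&volumeState{attachments: 1}); err != nil {
		fmt.Println(err)
	}
	s, _ := attachDisk(&volumeState{})
	fmt.Println(s) // → attached
}
```

Detach follows the mirror-image path: the manager checks with Cinder, then the backend handler tears down the local attachment.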
Implement attach/detach interface and make it an attachable volume, see https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/cinder/attacher.go
AFAIK, the Attacher and Detacher interfaces are not called on the kubelet but rather on the controller. When the implementation ends up calling the cloud provider, that's exactly what one wants. In this case, though, the attachment needs to happen on the target kubelet. It is exactly the same situation as with the current rbd volume driver.
@eyakubovich we are addressing this issue in rbd, see #33660
pkg/volume/cinder_local/cinder.go (Outdated)
type cinderVolumeMounter struct {
	*cinderVolume
	readOnly bool
	pod      *api.Pod
why do we need the entire pod? would podUID do?
Changed.
	return secret, nil
}

func newCinderClient(host volume.VolumeHost, secret *api.Secret) (*gophercloud.ServiceClient, error) {
nit: please split the file into openstack util and cinder util
Afraid it would only contain this function?
@Quentin-M having not read the code or knowing the variances between the rackspace and gophercloud libs, do you know whether gophercloud is susceptible to the Cinder device mapping issue, which @kiall is attempting to workaround with https://github.com/kubernetes/kubernetes/pull/33270/files?
@khrisrichardson: That's an issue with the information the cinder server provides (the "device" value is mostly lies and wishful thinking) - and not something the client can fix. The same workarounds will be necessary with both gophercloud/gophercloud and rackspace/gophercloud.
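For context, the workaround class being discussed usually amounts to ignoring the "device" value Cinder reports and instead locating the disk by its serial under /dev/disk/by-id. A minimal sketch, assuming a libvirt/KVM guest where virtio exposes the first 20 characters of the volume UUID as the serial; the exact link patterns are illustrative assumptions, not the PR's code:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// candidateLinks derives possible /dev/disk/by-id symlinks for a Cinder
// volume from its UUID, since the device path Cinder reports is unreliable.
func candidateLinks(volumeID string) []string {
	serial := volumeID
	if len(serial) > 20 {
		serial = serial[:20] // virtio exposes only the first 20 characters
	}
	return []string{
		"/dev/disk/by-id/virtio-" + serial,
		"/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_" + serial,
	}
}

// probeVolumePath resolves the first candidate symlink that exists on the host.
func probeVolumePath(volumeID string) (string, error) {
	for _, link := range candidateLinks(volumeID) {
		if _, err := os.Stat(link); err == nil {
			return filepath.EvalSymlinks(link)
		}
	}
	return "", fmt.Errorf("no device found for volume %s", volumeID)
}

func main() {
	fmt.Println(candidateLinks("0df75e95-6a7f-4b2a-8c9a-0123456789ab")[0])
}
```

Since the probing happens entirely on the attaching host, it is indeed independent of which gophercloud fork performs the API calls.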
pkg/api/types.go (Outdated)
@@ -243,6 +243,8 @@ type VolumeSource struct {
	VsphereVolume *VsphereVirtualDiskVolumeSource `json:"vsphereVolume,omitempty"`
	// AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod.
	AzureDisk *AzureDiskVolumeSource `json:"azureDisk,omitempty"`
	// CinderLocal represents a cinder volume attached and mounted on kubelets host machine without involing the cloud-provider.
typo: "involing"
pkg/api/types.go (Outdated)
@@ -287,6 +289,8 @@ type PersistentVolumeSource struct {
	VsphereVolume *VsphereVirtualDiskVolumeSource `json:"vsphereVolume,omitempty"`
	// AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod.
	AzureDisk *AzureDiskVolumeSource `json:"azureDisk,omitempty"`
	// CinderLocal represents a cinder volume attached and mounted on kubelets host machine without involing the cloud-provider.
typo: "involing"
Looks fine to me from a casual read. I'm disappointed we couldn't share more of the code with the existing cinder volume plugin. In particular, it seems like the main difference vs existing cinder plugin is that the keystone auth credentials now come from the (user-specified) volume metadata rather than the (admin-specified) provider config. If this is the ~only difference, could cinder_local be basically a thin wrapper around creating an openstack provider instance with different credentials, and then using the existing cinder driver to do the attach/detach? (Or if there is a gap, is there something easy we can do to make that possible rather than maintaining two separate codebases?)
@anguslees +1. Is there anything the existing cinder volume cannot do that local cinder can? I see the flexible keystone config is interesting, but unless a node can join multiple OpenStack clouds, how is it different from using cloud config? Can multi-attach support be considered in the cloudprovider? @kubernetes/sig-openstack
@Quentin-M Really nice idea of supporting cinder volumes without a cloud provider.
+1. We reuse one cinder volume for both cases in hypernetes. The main differences are:
Other logic is generally the same as the existing cinder volume. I think we could make a small change to the Cinder volume plugin for this feature.
@rootfs multi-attach isn't quite fully complete from an OpenStack perspective (still some work outstanding in Nova) but I'm not sure if that actually impacts our ability to use it for these use cases?
@xsgordon thanks for clarification. I agree multi-attach is not a must-have for cinder volume. We've marked cinder as RWO.
pkg/volume/cinder/cinder.go (Outdated)
@@ -1,5 +1,5 @@
 /*
-Copyright 2015 The Kubernetes Authors.
+Copyright 2016 The Kubernetes Authors.
Leave copyright year with original creation date
pkg/volume/cinder/manager_local.go (Outdated)
globalPDPath := makeGlobalPDName(cdl.plugin.host, volumeSource.VolumeID)
glog.V(4).Infof("Cinder (%v): Attach %v", cdl.GetName(), volumeSource.VolumeID)

client, err := newCinderClientFromSecret(volumeSource.SecretRef, cdl.plugin.host)
Why not call newCinderClientFromSecret(...) in getManager(...) and stick the resulting gophercloud.ServiceClient in the cdManagerLocal struct, instead of creating it on every call?
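The caching this reviewer suggests could look roughly like the sketch below. It is a stand-alone illustration under stated assumptions: serviceClient stands in for *gophercloud.ServiceClient, and the endpoint string is a made-up example.

```go
package main

import (
	"fmt"
	"sync"
)

// A sketch of the suggested caching: construct the client once per manager
// instead of on every AttachDisk call.
type serviceClient struct{ endpoint string }

type cdManagerLocal struct {
	once   sync.Once
	client *serviceClient
	err    error
}

// getClient lazily builds the client on first use and reuses it afterwards.
func (m *cdManagerLocal) getClient() (*serviceClient, error) {
	m.once.Do(func() {
		// The real code would call newCinderClientFromSecret(...) here.
		m.client = &serviceClient{endpoint: "https://keystone.example:5000/v3"}
	})
	return m.client, m.err
}

func main() {
	m := &cdManagerLocal{}
	c1, _ := m.getClient()
	c2, _ := m.getClient()
	fmt.Println(c1 == c2) // → true: both calls share the cached instance
}
```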
pkg/volume/cinder/cinder.go (Outdated)
}

type cinderPlugin struct {
	host volume.VolumeHost
	// Guarding SetUp and TearDown operations
	volumeLocks keymutex.KeyMutex
	// For tests purposes, otherwise detected using host:
	cinderProvider CinderProvider
cinderProvider is specific to the cloud provider implementation. Why stick it in this common cinderPlugin struct? Wouldn't it make more sense in cdManagerCloud?
pkg/volume/cinder/attacher.go (Outdated)
	return "", fmt.Errorf("failed to list openstack instances")
}
instanceid, err := instances.InstanceID(nodeName)
manager, err := getManager(attacher.plugin, spec, nil)
Should this manager be cached in cinderVolume, which already has a cinderManager field? If not, should that field be removed?
The Deleter and the Provisioner cache the manager inside cinderVolume, as their constructors receive either volume.Spec or volume.VolumeOptions, letting us determine which manager should be used right away (their instances are volume specific). Unfortunately, the Attacher and Detacher only get a volume.Spec during the actual method calls, and we therefore need to determine the manager at that time (their instances are not constructed as volume specific, even though that might be the case in practice).
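The per-call selection being described could be sketched like this. All names here are assumptions made for illustration, not the PR's exact signatures: the idea is only that a Spec carrying user credentials routes to the local manager, anything else to the cloud-provider manager.

```go
package main

import "fmt"

// An illustrative sketch of per-call manager selection: the Attacher only
// sees the volume Spec at call time, so getManager picks the cloud-provider
// or local (secret-based) manager from the Spec on every call.
type volumeSpec struct {
	secretRef string // non-empty means standalone/local credentials
}

type cdManager interface{ name() string }

type cdManagerCloud struct{}

func (cdManagerCloud) name() string { return "cloud" }

type cdManagerLocal struct{}

func (cdManagerLocal) name() string { return "local" }

func getManager(spec volumeSpec) cdManager {
	if spec.secretRef != "" {
		return cdManagerLocal{}
	}
	return cdManagerCloud{}
}

func main() {
	fmt.Println(getManager(volumeSpec{secretRef: "cinder-secret"}).name()) // → local
	fmt.Println(getManager(volumeSpec{}).name())                          // → cloud
}
```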
Rebased. I can re-test that PR fully once it gets a little further in the review process. Thanks.
I'm not an expert in Kubernetes and Go, but this patch looks good from Cinder's point of view. I'll try to spend more time playing with it soon.
@Quentin-M PR needs rebase
@Quentin-M : will you be revisiting this PR?
@k8s-bot ok to test
@Quentin-M: The following test(s) failed:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Does this work mean cinder requests can connect volumes to dedicated hardware?
Hey @xrl! I'm not the author/expert here, but when implementing v2 support I explored this one a bit. This work is for supporting cinder volumes without a cloud provider per se, like a standalone mode for cinder volumes.
@Quentin-M let me know if you think you'll be picking this back up, if not I'd be happy to volunteer to help out with it, assuming it's something the community feels ok about moving forward with. Personally, I think it's valuable to have this as supported.
@j-griffith Thank you for following up. I do think this has a lot of value as well, just like various other members of the community. I think that this is not getting traction, though, due to the interface change, which might be unsafe for kubelet restarts, and because of the shift towards having the drivers outside of the repository. I might be able to give it some work if the community wants to move forward too.
@Quentin-M Awesome! Something I started thinking about a little bit today (and it might prove to be a bad idea, so floating it here for all the world to chime in): rather than trying to modify the existing Cinder code and also make the changes you have here to the interfaces, what about creating an independent volume-specific driver, keeping in line with the existing volume provider interface? Honestly, there's some room for clean-up in the existing Cinder code anyway (deprecate/remove V1 as well as some consistency things), and this might be a good opportunity to tackle some of that. At the same time, we could work out the standalone Cinder model without having to introduce any risk or heavy lifting on the existing cloud-provider-related side of it, then work out changes to the openstack cloudprovider (whether it's in or out of tree) to build on/use the volume plugin.

In terms of attach, I just wanted to note that there is the os-brick extension in cinderclient now that should enable all of the supported connectors in Cinder/Nova for localhost. Granted, right now it wouldn't be the most elegant (the client being a python lib), but it's certainly doable, and it might at least be an interim solution if people are interested. That means, in response to some of the earlier questions, you then have the ability to attach ALL of the supported Cinder transport types (iSCSI-CHAP, FC, LIO, ISER, RBD etc.). Long term, there is some interest from folks in building a brick golang package that would make things much nicer, but that could be a ways off.

Anyway, just some thoughts for anybody/everybody interested here.
@j-griffith @Quentin-M The feature process for v1.8 is currently in the planning phase. Feel free to put the PR in the section "PRs to discuss or that need attention" and then attend the meeting!
/cc
Adding do-not-merge/release-note-label-needed because the release note process has not been followed.
I wonder if this issue can be closed, considering that kubernetes-retired/external-storage#368 landed; it allows consuming Cinder-managed volumes using the transport protocols already supported by Kubernetes.
Hi, k8s now supports cinder volumes in the IaaS scenario. Is there a plan for k8s to support cinder volumes on bare metal, or how can it be used? Thanks to all of you, and thanks for your reply! @rootfs @feiskyer @dims @FengyunPan
@jianglingxia yes, you can use the standalone cinder provisioner with bare metal - https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-standalone-provisioner.md
What this PR does / why we need it:
Introduced a few weeks back at SIG Storage, this PR adds the ability to the existing Cinder plugin to manage and provision OpenStack Cinder volumes without the need to use the OpenStack provider. Instead, users can simply specify a secret containing their own OpenStack Keystone credentials, along with a volume identifier.
While the plugin currently only supports the rbd backend (other Cinder backends can be implemented as necessary moving forward), it differs from the existing rbd driver by abstracting away any technical details and sensitive configuration from the end user, leaving that responsibility to the cluster administrator (e.g. ceph.conf, ceph.client.admin.keyring); it thereby respects the separation of knowledge/privilege established between OpenStack administrators and users. The plugin also supports dynamic provisioning.
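The credential flow just described — the plugin reading user-supplied Keystone credentials out of a Secret instead of the admin-level cloud-provider config — could be sketched as below. The secret key names (authUrl, username, password, tenantName) are illustrative assumptions, not the PR's actual schema:

```go
package main

import "fmt"

// A hypothetical sketch of building Keystone auth options from the data of a
// user-supplied Secret; authOptions stands in for gophercloud's auth options.
type authOptions struct {
	IdentityEndpoint string
	Username         string
	Password         string
	TenantName       string
}

func authOptionsFromSecret(data map[string][]byte) (authOptions, error) {
	var opts authOptions
	required := map[string]*string{
		"authUrl":  &opts.IdentityEndpoint,
		"username": &opts.Username,
		"password": &opts.Password,
	}
	for key, dst := range required {
		v, ok := data[key]
		if !ok {
			return authOptions{}, fmt.Errorf("secret is missing key %q", key)
		}
		*dst = string(v)
	}
	// Tenant/project is treated as optional in this sketch.
	opts.TenantName = string(data["tenantName"])
	return opts, nil
}

func main() {
	opts, err := authOptionsFromSecret(map[string][]byte{
		"authUrl":  []byte("https://keystone.example:5000/v3"),
		"username": []byte("demo"),
		"password": []byte("s3cret"),
	})
	fmt.Println(opts.Username, err) // → demo <nil>
}
```

The resulting options would then be used to authenticate a per-volume client, rather than reusing the single provider-wide client.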
Below are examples/excerpts of manifests that use this plugin:

- Keystone credentials (and optional caCert, cert/key fields for TLS authentication)
- Defining a Cinder volume for a pod mount
- StorageClass for dynamic provisioning
- Claiming a new Cinder volume for a pod mount
Special notes for your reviewer:

The PR has a huge line count because the gophercloud/gophercloud dependency is introduced and because there are changes in the generated code. Of course, rackspace/gophercloud is already vendored in Kubernetes, but the project is currently in a state of feature freeze. Unfortunately, we couldn't simply use rackspace/gophercloud in all good conscience, because multi-attach support isn't there and because new features such as BeginDetaching are theoretically needed. On the other hand, we couldn't entirely replace the rackspace/gophercloud dependency, because the new repository doesn't contain the rackspace/ directory, currently used by the rackspace cloud provider. The feature freeze issue mentions that the directory will be moved to its own repository. I am sure that the owners/maintainers of the rackspace provider (no file present in the directory) would be able to tell me more and may come up with a follow-up PR.

Thank you for your time.