Add support for flex volume. #13840
Conversation
Thanks for your pull request. It looks like this may be your first contribution to a Google open source project, in which case you'll need to sign a Contributor License Agreement (CLA). 📝 Please visit https://cla.developers.google.com/ to sign. Once you've signed, please reply here (e.g. "I signed it!") so we can verify.
Can one of the admins verify that this patch is reasonable to test? (reply "ok to test", or if you trust the user, reply "add to whitelist") If this message is too spammy, please complain to ixdy.
Force-pushed fc489de to ecab812.
@thockin @markturansky @bgrant0607 Please review
Hi @NELCY thanks for the PR. What kind of storage do you envision a FlexVolume provisioning/attaching/mounting? Can you describe the use case for this volume?
Looks very interesting.
Hi Mark, We are planning to use this plugin to support our own volume type. But it is generic enough to support any kind of volume, without requiring code changes in Kubernetes.
Force-pushed 49e35c5 to 6d2f755.
I signed it! Please verify the CLA.
CLAs look good, thanks!
Labelling this PR as size/XL
// NewBuilder is the builder routine to build the volume.
func (plugin *flexVolumePlugin) NewBuilder(spec *volume.Spec, pod *api.Pod, _ volume.VolumeOptions, mounter mount.Interface) (volume.Builder, error) {
	return plugin.newBuilderInternal(spec, pod, &flexVolumeUtil{}, mounter, exec.New())
}
Are you planning to support SELinux with this volume type? Currently the SELinux context of the volumes directory is passed to plugins via the VolumeOptions arg to NewBuilder.
You might also be interested in: #12944
Thanks @pmorie.
Yes, I went through #12944. Looks very interesting. I would extend this plugin and add support for SELinux after your PR is in.
Thinking more, the provider plugin can verify whether the provider's filesystem implementation supports SELinux or not. For example, some NFS implementations do not support SELinux and some do.
How does an admin manage security of custom volumes? A volume can potentially gain root access, so we need some way of limiting or granting access to the volume type. In addition, command is highly insecure and would have to be controlled. For network plugins we require them to be registered on each kubelet. Is there a reason not to enforce that for these sorts of volume plugins as well?
Thanks @smarterclayton for the feedback. Volumes still need to be authenticated with the storage server to be attached to a Kubelet node. The authentication parameters are plugin specific and are passed to the plugin using options. The custom plugin executable still needs to be installed on each Kubelet node in the plugin path. My current assumption is that this path is secure and only admin-supported plugins are available, with a restricted set of permissions. If we want to control it more, I can add support for a 'white list of volume types' controlled by the cluster admin. Let me know WDYT? The network plugins are probed today during Kubelet bootstrap. I moved away from this to be able to support dynamically installed plugins without requiring a Kubelet restart. If this is a concern and we deliberately want to restrict the list, I will add support for it.
I'm curious to hear what Tim and Brian think. No doubt you've seen plugins.go where each binary includes which plugins are compiled into it. The comments there deliberately say there's no magic or dynamic loading of plugins. This is purposeful. I tend to think this PR breaks that pattern. If this can work for any volume, as you say, then why have volume plugins at all? We just need this one. That's why I was asking for your use case and type of storage you want to use. Why not a plugin like the rest?
Hi Mark, the intent is not to replace the existing plugins. It is not practical to add all proprietary volume types, so this plugin is intended to add the flexibility to support them. There are many other good things in the current plugins, like argument validation, which are tough to cover using a flexible volume plugin. We wanted to support our own volume type using this plugin. We could add our volume type to the Kubelet code like the rest, but it is not a generic volume type and can only be used with our storage; testing and validating it there is going to be a problem. I initially started with an HTTP plugin, but ended up with an exec based plugin after hearing from @bgrant0607 that we want to follow one model for external plugins, and we already use the exec based plugin model for networking. @thockin @bgrant0607 PTAL
I doubt I'll have time to look at this in the 1.1 timeframe -- the code-complete deadline is Friday. Extensibility is likely to be a priority for 1.2. I'd like to consider this then, together with several other extension proposals.
For an exec based plugin, I would recommend not allowing anything to be […]
Hi Clayton, there are a multitude of options specific to each volume from different providers, like number of replicas, snapshot interval, etc., and my concern is that we cannot cover all of them in a single spec without exposing a generic key-value pair, like our annotations today.
GCE e2e build/test failed for commit fa76de7.
@k8s-bot test this
GCE e2e test build/test passed for commit fa76de7.
@k8s-bot unit test this
The author of this PR is not in the whitelist for merge, can one of the admins add the 'ok-to-merge' label?
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e test build/test passed for commit fa76de7.
@k8s-bot unit test this
@k8s-bot test this please
GCE e2e test build/test passed for commit fa76de7.
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e test build/test passed for commit fa76de7.
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e test build/test passed for commit fa76de7.
@k8s-bot unit test this please
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e test build/test passed for commit fa76de7.
Automatic merge from submit-queue
Auto commit by PR queue bot
@NELCY lots of code in this PR. nice job on getting it merged.
Thanks @markturansky. Really appreciate the support from @saad-ali, you and @thockin.
This changelist adds support for a new type of volume, 'flex volume'.
The goal of this change is to enable third-party providers to supply their own executable to set up and tear down volumes in Kubernetes.
The plugin takes the parameters shown in the example definition below.
The executable command is invoked from the kubelet to set up and tear down the volumes.
Setup Path:
The executable is invoked with "setup", "volumeID" and "json encoded options" as arguments. Setup returns the block device where the volume is set up, e.g. /dev/mapper/vg-vol1.
Teardown Path:
The executable is invoked with "teardown" and "block device" as arguments.
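A driver implementing this setup/teardown contract could be sketched roughly as below. This is an illustration only, written as a shell function for easy testing; the function name, device path scheme, and provisioning steps are placeholders, not part of this PR:

```shell
# Hypothetical flex volume driver sketch. A real driver would be a
# standalone executable installed in the kubelet plugin path.
flexvol_driver() {
  op="$1"
  case "$op" in
    setup)
      volume_id="$2"
      # "$3" would carry the JSON-encoded options (size, replicas, qos, ...).
      # A real driver would provision and attach the volume here, then
      # print the resulting block device for the kubelet to consume.
      echo "/dev/mapper/vg-${volume_id}"
      ;;
    teardown)
      # A real driver would detach and clean up the block device in "$2".
      ;;
    *)
      echo "usage: setup <volumeID> <json-options> | teardown <device>" >&2
      return 1
      ;;
  esac
}
```

In the setup path, the line printed on stdout is what the kubelet would treat as the block device to mount; teardown receives that same device back as its argument.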
Example flex volume definition:
{
"provider": "blue",
"command": "setup.sh",
"volumeID": "vol1",
"fsType": "ext4",
"options": {
"size": "10g",
"replicas": 3,
"qos": "high"
}
}