Experimental feedback - Volume plugins #13420
I went through the docs and I've got to admit, the plugin system looks pretty interesting; I haven't tried it myself though. One note I could think of is that I find the API between plugin and engine too simple. E.g. I would like to see the ID of the container that is asking for volume creation/mounting. I also assume that configuration is completely up to the plugin and there is no way to pass anything from the engine to the plugin (e.g. arguments for mounting). Also, are you guys planning to do other plugin types? E.g.
That would make plugins very powerful and I can imagine a lot of ongoing problems would get solved (e.g. secrets/mounts during build, layer/cache control). |
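For context on why the API feels "too simple": the experimental volume plugin protocol is a handful of JSON-over-HTTP calls keyed only by a volume name. The following minimal sketch (the `ToyVolumeDriver` class and the `/tmp/toy-volumes` root are hypothetical, not from the docs) makes the limitation visible: no request carries a container ID or any engine-supplied arguments.

```python
import os

class ToyVolumeDriver:
    """Hypothetical in-process sketch of a volume plugin's request handler.

    Mirrors the shape of the experimental JSON protocol
    (/VolumeDriver.Create, .Mount, .Unmount, .Remove): every payload
    carries only a volume Name -- the limitation raised above.
    """

    def __init__(self, root="/tmp/toy-volumes"):
        self.root = root
        self.volumes = {}

    def handle(self, endpoint, payload):
        name = payload.get("Name")
        path = os.path.join(self.root, name) if name else None
        if endpoint == "/VolumeDriver.Create":
            self.volumes[name] = path
            return {"Err": ""}
        if endpoint == "/VolumeDriver.Mount":
            if name not in self.volumes:
                return {"Err": "no such volume"}
            return {"Mountpoint": path, "Err": ""}
        if endpoint == "/VolumeDriver.Unmount":
            return {"Err": ""}
        if endpoint == "/VolumeDriver.Remove":
            self.volumes.pop(name, None)
            return {"Err": ""}
        return {"Err": "unsupported endpoint"}
```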
Is someone working on anything rsync-related regarding volume drivers? For the client side, I started working on https://github.com/synack/docker-rsync. |
The suggested syntax (
But assuming, say, a "blockdev" volume driver, you could do this:
Or a "tmpfs" volume driver (Hi, Dan), like this:
And you could combine these:
|
I agree, I don't like --volume-driver. |
As I commented in my tmpfs volume patch, I don't like --volume-driver either; I would rather use syntax like docker run -ti -v tmpfs:/run -v tmpfs:/tmp fedora /bin/sh
|
I like the idea of passing the mount options on the volume command as well. With tmpfs the big one is size. Another potential one would be noexec. I think nodev should be the default for all of these file systems |
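As a sketch of how a driver might turn such user-supplied options into mount flags (the helper function is hypothetical; `size` and `noexec` mirror standard tmpfs options from mount(8), with `nodev` on by default as suggested above):

```python
def tmpfs_mount_opts(opts):
    """Hypothetical translation of user volume options into tmpfs flags."""
    flags = ["nodev"]  # default-on for safety, per the suggestion above
    if opts.get("noexec"):
        flags.append("noexec")
    if "size" in opts:
        flags.append("size=%s" % opts["size"])
    return ",".join(flags)
```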
@larsks Not sure you need the tmpfs::/mount/point form. tmpfs:/mount/point would be easier for users and not hard to implement. |
@rhatdan I am proposing that the syntax is:
Where the interpretation of I'm in favor of an explicit vs. implicit syntax. |
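To illustrate the explicit form being discussed (e.g. tmpfs::/tmp alongside the existing /foo:/bar and /mnt forms), here is a hypothetical parser sketch; the exact field order and the `parse_volume_spec` name are assumptions, not an agreed design:

```python
def parse_volume_spec(spec):
    """Hypothetical parser for the -v forms discussed in this thread:

        /container/path            -> anonymous volume, default driver
        /host/path:/container/path -> bind mount via the local driver
        driver::/container/path    -> explicit driver, e.g. tmpfs::/tmp
    """
    parts = spec.split(":")
    if len(parts) == 1:
        return {"driver": "local", "source": None, "target": parts[0]}
    if len(parts) == 2:
        return {"driver": "local", "source": parts[0], "target": parts[1]}
    if len(parts) == 3:
        return {"driver": parts[0] or "local",
                "source": parts[1] or None,
                "target": parts[2]}
    raise ValueError("unrecognized volume spec: %r" % spec)
```

Note that the shorthand tmpfs:/run is ambiguous with a host-path bind mount under this scheme, which is the argument for the explicit double-colon form.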
@larsks Right now it's |
Hello all, and thanks for the feedbacks! Let me try to answer some of the questions here.
You're right that the
I hope this clears up some of the questions. 1 Truth is, there already is a hackish way to use multiple volume drivers for a single container using |
@cpuguy83 I am fine with -v tmpfs::/tmp. @icecrime I find the whole --volume-driver to be a huge failure on the ease-of-use scale. One of the huge advantages of Docker is its ease of use from the CLI, and I see this as a huge step backwards. Compare docker run --volume-from=tmpfs /tmp --volume-from=local /mnt fedora /usr/sbin/yuck with docker run -v tmpfs::/tmp -v /foo:/bar -v /mnt fedora /usr/bin/nice |
@icecrime I totally get not wanting to do much more than So enable a user to change the default driver (at the daemon level), thus providing the desired minimal syntax for most cases, and then allow people to have finer control on the |
@rhatdan But if you consider this as a transition path toward
How would you feel about that? One of the reasons the feature is experimental is that we know this is not the final UX, so I'm glad we can discuss this now. |
From a usability point of view it's not as easy as what we have. I have no problem with that, but I still prefer the shorthand. Lots of different applications will be using tmpfs volumes, each vying for the same name tmpfs, or worse, each creating its own slightly different volume name. It breaks our atomic command, since we currently expect there to be one command to start an application; if I use tmpfs now I need two commands. If my volume creation tool fails, what do I do? Do I know whether it failed because the content already existed versus other errors? Do I need to start doing docker volume list to figure out if the volume already exists? |
Bottom line, I would want both. Yours is good for setting up advanced volumes, but it is harder for the general use case. |
@icecrime I was thinking of implementing secrets during build. |
In playing with https://github.com/SvenDowideit/docker-volumes-nfs I think we need more meta-info. Because the plugin doesn't know anything about the daemon, or the container that is being requested, there's no way to make a unique mount point per container, or to mount the volume into the actual docker graphdriver dir (either matching the non-root partition, or the correct docker daemon if there is more than one), and if the same thing is mounted into 2 containers, we're forced to add reference counting. BUT... it works :) |
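The reference counting a plugin is forced into could be sketched like this (a hypothetical helper, not part of the plugin protocol; a real plugin would also need to persist these counts to survive daemon restarts, as noted later in the thread):

```python
class MountRefCounter:
    """Hypothetical per-volume refcount for shared mounts."""

    def __init__(self):
        self.refs = {}

    def mount(self, name):
        # Returns True only for the first user: perform the real mount then.
        self.refs[name] = self.refs.get(name, 0) + 1
        return self.refs[name] == 1

    def unmount(self, name):
        # Returns True only for the last user: perform the real unmount then.
        count = self.refs.get(name, 0)
        if count <= 1:
            self.refs.pop(name, None)
            return True
        self.refs[name] = count - 1
        return False
```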
@SvenDowideit Plz don't mount to the graphdriver dir :) |
What happens if we want to use different volume drivers on the same container? For example, an NFS volume and a local volume. Would we use multiple --volume-driver flags before the actual -v? +1 to @larsks's proposal |
Hi. We just wrote a (crude) implementation of the docker volume plugin using Ceph RBD. Quick feedback: when creating new volumes for Ceph, we need to pass in the desired volume size (among other things). Currently, we have to hard-code that or provide it as a configuration option but that will be applied to all volumes. IOW, we might want to think about some way of passing driver-specific parameters (say, part of the |
@cpuguy83 atm, the daemon isn't sending a unique id/name, which is part of the problem :) |
Is snapshotting an intended feature? |
@ioggstream What is the intention of snapshotting?
|
@cpuguy83 a typical OpenStack use case is snapshotting a machine and its volumes for backup/scale-up. In your 2nd scenario, I expect to pass some hints to trigger the cloning... How would you implement the following use case with Docker?
|
Will this support a containerized volume client? If I wanted to keep the Ceph RBD client stack (or any other Docker volume client) in a container and not on the host, would that be possible? |
FWIW, I'd leave the management functionality out of the picture for now (ie snapshots). Docker isn't/shouldn't be an Infrastructure management platform. Just a plugin to make things easier to consume. |
As @SvenDowideit points out, passing in the container_id as part of the mount/unmount request will help. If the same volume is mounted in more than one container, then it gets a little messy for the plugin to keep track of refcounts across restarts and reboots. |
The compromise is to add a unique volume id for each volume-name+container pair - see #14737 |
I agree with @cpuguy83's comment that generally people will use the same volume driver for a container. @icecrime How about an option where the user can specify a config file to be used? Use case: say, for example, the user wants a specific size and layout for the volume. Each volume driver provider can have its own format for the config file. This config option could also be used for plugins other than the volume plugin. |
I'm a bit worried about the "one volume driver per container should be enough" assumption. Is it driven by how difficult the implementation is, or what's the rationale? |
I agree with @aisrael. The volume driver API only accepts a volume name. We are currently unable to specify other volume characteristics like volume size, IOPS, volume layout, pool name, snapshots, etc. How are we planning to add these storage requirements to the volume driver API? |
@mauri This is a temporary issue. @Patiljn See above. The top-level API will allow setting volume-driver specific options when creating a volume as just a map of key/value pairs. |
@cpuguy83 These opts are on a per-volume basis, right? E.g. if we want to create more than one volume with the same volume driver, we should be able to specify different opts for each volume depending on our requirements. |
@cpuguy83 so what about supporting snapshots like this?
|
@ioggstream So far these opts just get passed directly to the volume driver and are not parsed by docker at all, so yeah that's totally possible, but must be implemented by the driver. |
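A driver-side sketch of consuming such a pass-through opts map (the `create_volume` function and the option keys `size` and `snapshot-of` are made-up, driver-defined names; the point is that Docker forwards the map opaquely and the driver does all parsing):

```python
def create_volume(name, opts):
    """Hypothetical Create handler interpreting driver-defined opts."""
    opts = opts or {}
    size = opts.get("size", "1G")      # driver-chosen default
    parent = opts.get("snapshot-of")   # clone from an existing volume if given
    return {
        "name": name,
        "size": size,
        "cloned_from": parent,
    }
```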
Is there any orchestration tool already supporting something like that? |
@ioggstream Being that the api is not part of Docker yet? I'd say no. |
@ioggstream The volume API was just merged, so 1.9 will include it, and hopefully orchestration systems will follow suit. Also, re-reading my last comment, it probably seemed a bit rude... totally not intended! Sorry about that. |
2015-08-26 22:59 GMT+02:00 Brian Goff notifications@github.com:
Looking ...
Peace, |
I have the habit of using the I am also happy to share with you my first Go program: docker-volume-rsync |
Hi! I'm confused by volume plugin: |
2015-09-09 7:57 GMT+02:00 zhijian notifications@github.com:
The plugin will:
So you shouldn't have to care about how an EBS volume gets bound to your server.
So: |
@cooljiansir The process of making a volume available to a Docker host in EC2 has a handful of steps.
.. and the reverse when the container stops. So you can think of the steps that a volume driver accomplishes as a workflow that simplifies preparing external volumes for use with Docker. For example, see the following example where we use Docker Machine to fire up a host with a volume driver configured. We can then have a single command that enables the container to use the new volume; the same command applied to another host would swing the volume to that host.
|
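The workflow described above can be sketched as a small simulation (everything here, the `provision_volume` function, the `cloud` state dict, and the step names, is a hypothetical stand-in for real cloud and OS calls, not an actual driver):

```python
def provision_volume(cloud, host, name):
    """Hypothetical create/attach/format/mount workflow for an EBS-style driver.

    Returns the list of steps the driver would perform for this request.
    """
    steps = []
    vol = cloud["volumes"].get(name)
    if vol is None:
        vol = cloud["volumes"][name] = {"attached_to": None, "formatted": False}
        steps.append("create")
    if vol["attached_to"] != host:
        if vol["attached_to"] is not None:
            steps.append("detach")      # swing the volume off the old host
        vol["attached_to"] = host
        steps.append("attach")
    if not vol["formatted"]:
        vol["formatted"] = True
        steps.append("mkfs")            # only a brand-new volume needs a filesystem
    steps.append("mount")
    return steps
```

Running the same request against a second host shows the "swing" behavior described above: the volume is detached and re-attached rather than recreated.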
@icecrime I think we can close this now that the volume plugins moved out of experimental with 1.9? |
closing per my comment above |
This is a placeholder issue to collect feedback on the volume plugins experimental feature shipped as part of Docker 1.7.0.