[feature] Allow for named volumes to specify host mount point #19990

Closed
CWSpear opened this Issue Feb 4, 2016 · 66 comments

@CWSpear
Contributor

CWSpear commented Feb 4, 2016

With docker-compose 1.6 coming out soon and the new v2 syntax, I've been learning more about docker volume, which came out in Docker 1.9.

My thought was that docker volumes could be created to replace data-only containers and --volumes-from. Specifically, they allow for cleaner docker ps output and let us mount volumes on two different containers at different places.

But it doesn't let us persist that data as well as data-only containers could.

Is there a particular reason you can't use a mount (or perhaps bind is a better word?) point outside of /var/lib/docker?

My proposal: would it be possible for us to add an option for the local driver to specify a bind/mount point on the host?

My use-case is I use Docker for smaller things that don't need massive scaling, and something like Flocker is mega overkill. However, I have had multiple times where I needed a volume on at least 2 containers and to have it persisted, and it got pretty messy with data-only containers, and it could be solved nicely with my proposal.

Specific use case: I have a letsencrypt docker container that creates certificates. I need the certificates to persist, but I also need my nginx container to have access to them. I've had similar setups with images, but I also wanted to have them mounted at different points within the specific containers, something not possible with data-only containers.

I could get some of this working by mounting each container to the same volume on the host, but then my containers are more host-dependent, and I'd rather move that to a dedicated volume whose job it is to persist things, and then I only have one place to change it and my other containers don't need to care about that.

Anyway, hopefully we could add something to help here, please let me know if I can add any clarification, etc. Thanks!
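
For readers skimming the thread: the discussion below eventually lands on options of the built-in local driver that cover this use case (added around Docker 1.12, per later comments). A minimal sketch of the letsencrypt/nginx example using those options; the volume name, paths, and letsencrypt image are illustrative, and the host path must already exist because the driver will not create it:

# a named volume backed by an existing host directory
docker volume create --name certs \
    --opt type=none \
    --opt device=/persistent/letsencrypt \
    --opt o=bind

# the same volume mounted at different paths in different containers
docker run -d --name letsencrypt -v certs:/etc/letsencrypt some/letsencrypt-image
docker run -d --name nginx -v certs:/etc/nginx/certs:ro nginx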

@qq690388648
Contributor

qq690388648 commented Feb 4, 2016

If I have not misunderstood, the command docker run -v /path/to/hostfile:/path/to/containerfile image_name will help you.

@vdemeester
Member

vdemeester commented Feb 4, 2016

@CWSpear You could have a docker volume plugin that does just that 😉 (one that lets you create a named volume that does bind mounting).

@cpuguy83
Contributor

cpuguy83 commented Feb 4, 2016

As @qq690388648 mentioned, if you want to bind a path, use the full host path in the first part of the -v.

cpuguy83 closed this Feb 4, 2016

@CWSpear
Contributor

CWSpear commented Feb 4, 2016

@qq690388648 you have misunderstood; I tried to explain why that isn't good enough.

@cpuguy83 I know about that, and as I tried to explain, I'm trying to avoid doing that. Perhaps I wasn't clear enough, or perhaps I was too verbose and people didn't read carefully enough.

@vdemeester I was thinking of creating a whole new plugin, but it seemed as though it'd be easier (and even more appropriate) to just add an opt to the local plugin.

@cpuguy83 To try and be more clear, I want to avoid linking my non-data containers to the host directly.

I want to change something like this:

docker create -v /persistent/images:/images --name images debian:jessie /bin/true
docker run -d --volumes-from images --name container1 some_image
docker run -d --volumes-from images --name container2 some_image2
docker run -d --volumes-from images --name container3 some_image3

(which leaves a dead container in docker ps -a and less clear output in docker volume ls)

with:

docker volume create --opt mount=/persistent/images --name images
docker run -d -v images:/path/to/a --name container1 some_image
docker run -d -v images:/path/to/b --name container2 some_image
docker run -d -v images:/path/to/c --name container3 some_image

which:

  1. allows me to map images to different places on each container without each of those containers specifically relying on the host; they just rely on the volume, whose job it is to find a place to persist the files
  2. cleans up docker ps -a
  3. has a more meaningful output for docker volume ls (which could also have a column for mount point?)
  4. when dealing with docker-compose, allows for a cleaner separation in the v2 syntax, where it's clear which entries are dedicated volumes

@cpuguy83 am I making more sense?

@cpuguy83
Contributor

cpuguy83 commented Feb 4, 2016

@CWSpear We won't support this in the built-in driver. You are more than welcome to build a plugin to handle this.

@CWSpear
Contributor

CWSpear commented Feb 4, 2016

@cpuguy83 Why not? If I were to create a plugin, the code would be 99% the same. I don't know go very well (read: at all), but I've got a proof-of-concept (would need some error checking, etc) almost working that (I think) handles it and it doesn't require very much code.

@CWSpear
Contributor

CWSpear commented Feb 8, 2016

Well, I still feel like it could definitely find a place in the core, but I did create a plugin as @vdemeester suggested: https://github.com/CWSpear/local-persist

Since I literally learned Go and wrote it just this weekend, I wouldn't mind some helpful eyes =)

I'm calling this a 1.0-beta. It probably needs a few tweaks to validate name and mountpoint, and I've gotta figure out binaries and probably a starter upstart script or something, but we're getting there! It works as intended and I plan on using it in production this week =)

@briceburg

briceburg commented Feb 24, 2016

@CWSpear thanks, I found this useful as well!

We have 100s of application environments -- each has a media folder > 80 GB. I don't duplicate this media folder per environment -- it's slow and cumbersome. Instead I bind a UFS [overlay||aufs] export of the media, so each environment can write to media without side effects in the other environments ++ we don't need to copy media ever.

These UFS exports get set up on the docker host, and I'd like to reference them with named volumes for later use in docker-compose. The named volume pattern helps us keep things consistent and organized. E.g.

# doesn't work
docker volume create --name qa-3-media  --mountpoint /var/UFS/exports/qa-3-media

# now possible w/ https://github.com/CWSpear/local-persist
docker volume create --name qa-3-media  -o mountpoint=/var/UFS/exports/qa-3-media -d local-persist 

Great work !!! && call me crazy for trying to avoid data containers after migrating to docker 1.10 & compose 1.6

@zokier

zokier commented Aug 15, 2016

I find it surprising that named volumes do not have feature parity with anonymous volumes. Personally I would have thought docker volume create /hostdir:vname && docker run -v vname:/contdir ... would be at least roughly equivalent to docker run -v /hostdir:/contdir .... Of course the exact syntax is irrelevant here; -o mountpoint etc. is probably a better choice. Also, isn't this one major feature which prevents fully deprecating the use of data containers?

While I really appreciate @CWSpear creating local-persist, I would prefer an official plugin instead, to ensure long-term viability and compatibility. With no offense intended, one-man projects sadly have a high risk of becoming unmaintained at some point.

@cpuguy83
Contributor

cpuguy83 commented Aug 15, 2016

@zokier /hostdir is not a volume, it's a host mount.
Why create a volume for something that already exists on the host?
Why would one use data-containers for something that lives and is addressable directly on the host?

@zokier

zokier commented Aug 15, 2016

In my case specifically? I'm working with docker-compose (learning as I go, so I might be way off), and I thought of using named volumes as a neat abstraction to avoid putting unnecessary host-specific information in docker-compose.yml, or at least encapsulating it into one section. That in turn would make it easier to swap in different host-paths, or even completely different backends.

Overall, I just find that thinking of mounts as a special case of volumes would be more elegant than having them as a distinct "top-level" concept. They are already quite thoroughly mixed together now, being defined with the same -v flag or in the same volumes sub-section in docker-compose, etc.

@CWSpear
Contributor

CWSpear commented Aug 16, 2016

@zokier you know how to help protect open source "one-man projects" from "becoming unmaintained at some point?"

You help contribute when you find an issue.

Rather than just stay away from projects you (supposedly) believe in, help improve/offer support =)

In this case tho, the scope of the plugin is very narrow and it's really quite basic and straightforward. I use Docker in dozens of projects, and use my plugin myself on many of them. I'm not going to lose interest in it any time soon, and the maintenance demand is quite minimal, so it's not a huge burden.

That all being said, I definitely feel that this functionality should be in core. Which was my first proposal. The plugin was in response to the Docker team feeling it shouldn't be in core. It'd be many fewer lines if it were in core, that's for sure.

@shrikeh

shrikeh commented Sep 13, 2016

+1 for this. I have a data container which, for added security, I mount into various containers as read-only. The source code is PHP, and some of the applications are not very well written (while there are some very, very good PHP developers, there are also many with very spotty security knowledge).

I therefore put their code in a data container, and then mount that code with volumes_from into my php-fpm container. This literally means that no matter how weird the code is, it can't be rewritten by any sort of cunning exploit.

Ideally volumes_from would allow me a choice of where in the other container it would be mounted to, though.

@cpuguy83
Contributor

cpuguy83 commented Sep 13, 2016

@shrikeh This is what named volumes do. Create a volume, give it a name, and use -v important_data:/foo:ro
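
A minimal sketch of that suggestion applied to the setup described above (container names, images, and paths are illustrative; the seeding step assumes the code image ships its code under /var/www):

docker volume create --name app_code

# seed the volume once from an image that contains the code
docker run --rm -v app_code:/seed some/code-image cp -a /var/www/. /seed/

# mount it read-only, at whatever path each consumer expects
docker run -d --name web -v app_code:/usr/share/nginx/html:ro nginx
docker run -d --name fpm -v app_code:/var/www/html:ro php:fpm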

@shrikeh

shrikeh commented Sep 13, 2016

@cpuguy83 the bit I haven't figured out is: where does the code come from then in the above?

@cpuguy83
Contributor

cpuguy83 commented Sep 13, 2016

@shrikeh WDYM?

Please feel free to hop on IRC to discuss. GH issues is not the best place.

@shrikeh

shrikeh commented Sep 13, 2016

So I reread your solution and we're solving different problems. Essentially I don't want to mount a volume from the host. Here's the current composed stack:

{CloudFlare} -[HTTPS]->|nginx1|-[HTTP]->|varnish|-[HTTP]->|nginx2|-[TCP]->|php-fpm01|

Not shown above is a data container, app. nginx01 and php-fpm01 share the same data container. Also not shown above is an nginx03/php-fpm02 combo for administrators, with various tweaks allowing higher memory usage but fewer users, etc. It, too, uses the same data container.

The data container has the code, static assets, and the build tools (because you don't minify or fetch vendors in production), and doesn't even run. But it does give me some advantages:

  • I can never get into the situation where any of the three containers that need the code or assets are out of sync, which can play havoc with caches because it creates unnecessary race conditions.
  • I can tag changes to my data container referencing GitHub commits/branches/tags and know that it's just code changes. It's very easy to roll back.
  • Similarly, any changes to the other containers are config changes only, as they don't have the code themselves.
  • The data container itself is mounted read-only in the other containers where possible, adding to overall security (I'm looking at you, WordPress). I don't even need access to the repo to run it; it's just a volume I can easily move around. Otherwise, changing one line of code would require rebuilding three containers.

All of the above relies on volumes_from. Which is working just fine. I just wish I had a little bit more control over where it mounted code to within the various containers that use it.

@xeor

xeor commented Sep 22, 2016

I would like this option as well.
As it is now, every volume created with docker volume create will fill up the / partition if you don't have /var/lib/docker on its own partition. In my setup, I use devicemapper and lvm-thinpooldev for my storage driver, but as the documentation says, docker volume create abc would bypass this.

The local volume driver already supports some options when creating a volume. This is done for nfs, for example: docker volume create --driver local --opt type=nfs --opt o=addr=10.1.2.3,rw --opt device=:/docker --name nfsdatavolume. Can we use the same for making e.g. a bind-mount?

Another option would be to symlink/bindmount /var/lib/docker/volumes/ to where you actually want it. Not sure about the consequences of doing that, though.

I would much rather have a named volume with the correct options created once than do what feels like hacks (e.g. variables in my docker-compose > volumes).

The use cases are many! It feels like it belongs in the local driver, which already supports stuff like this.

@cpuguy83
Contributor

cpuguy83 commented Sep 22, 2016

@xeor docker volume create --opt type=none --opt device=<host path> --opt o=bind

If the host path does not exist, it will not be created.
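
Spelled out a bit more, with illustrative names and paths, that looks like:

# the host directory must already exist
mkdir -p /srv/app-data

docker volume create --name app-data \
    --opt type=none \
    --opt device=/srv/app-data \
    --opt o=bind

# anything written to /data in the container ends up in /srv/app-data on the host
docker run --rm -v app-data:/data busybox sh -c 'echo hello > /data/hello.txt'
cat /srv/app-data/hello.txt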

@xeor

xeor commented Sep 22, 2016

@cpuguy83 that's great! Worked perfectly, thanks!

@CWSpear isn't this exactly what you wanted in the beginning, before making the plugin?

@CWSpear
Contributor

CWSpear commented Sep 22, 2016

@xeor I think so...!

@cpuguy83 Can you explain those opts? Where are they documented? How long has this been possible?

@cpuguy83
Contributor

cpuguy83 commented Sep 22, 2016

It was added in 1.12, I think, maybe 1.11.
Opts are (generally) just the same options you pass to the mount command.

@xeor

xeor commented Sep 22, 2016

Docker volume create --magic could use some more documentation for stuff like this. "Generally just the same as mount" is very vague... (but thanks) :)

If I do a bind-mount manually;

[root@d3 /]# mkdir /source /dest
[root@d3 /]# mount -o bind /source /dest
[root@d3 /]# touch source/a
[root@d3 /]# ls dest/
a
[root@d3 /]# mount | grep /dest
/dev/mapper/vg1-root on /dest type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
[root@d3 /]# cat /etc/mtab | grep /dest
/dev/mapper/vg1-root /dest xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0

It is not easy to get the options.. type=none, device=.., o=bind?...
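
If it helps to map the two, my reading (not official documentation) is that device is the source path, o carries the mount options (here just bind), and type is none because a bind mount has no filesystem type of its own:

# manual bind mount
mount -o bind /source /dest

# roughly equivalent named volume; the volume's mountpoint plays the role of /dest
docker volume create --name mydata \
    --opt type=none \
    --opt device=/source \
    --opt o=bind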

@CWSpear
Contributor

CWSpear commented Sep 22, 2016

Yeah, I've been digging... I'm quite confused, too. I'm not a mount master, but I'm with @xeor. Some clarification would be dandy... link to code or docs would be swell as well.

I'm looking through code a bit, but day job is calling... if I find anything, I'll post.

Cc @cpuguy83

@cpuguy83
Contributor

cpuguy83 commented Sep 22, 2016

Options are passed in literally to the mount syscall. We may add special cases for certain "types" because they are awkward to use... like the nfs example referenced above.
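
To make the mapping concrete (my paraphrase, not official docs): type, o, and device line up with the -t, -o, and device arguments of mount, and the mountpoint is the directory docker manages under /var/lib/docker/volumes/<name>/_data. For instance, a tmpfs-backed volume:

# hand-rolled equivalent:  mount -t tmpfs -o size=100m,uid=1000 tmpfs <mountpoint>
docker volume create --name tmpvol \
    --driver local \
    --opt type=tmpfs \
    --opt device=tmpfs \
    --opt o=size=100m,uid=1000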

@xeor

xeor commented Sep 22, 2016

@cpuguy83 should the docs be updated with the bind-mount example and some more clarifications? I imagine there are a ton of usecases for different types of mount options.

@Skrath

Skrath commented Nov 4, 2016

I'm still finding the disparity between regular volume definitions and named volume definitions to be uh... odd.

For example

# Specify an absolute path mapping
- /opt/data:/var/lib/mysql

# Path on the host, relative to the Compose file
- ./cache:/tmp/cache

lets you map a volume between a host location and a location on the container (HOST:CONTAINER:ro). But if you attempt to use a named volume, this is no longer possible. Instead the format changes to simply (NAME:CONTAINER). I understand that volumes and mount points are inherently different concepts, but this still results in a lack of feature parity. I would have thought that named volumes would work in an identical fashion, with the extra benefit of having a name (and lacking the requirement of a specific container).

It seems like everyone is leaning in the direction of using various driver_opts, but I have yet to see this actually work from within docker-compose (and it feels like something that should be unnecessary).

@thaJeztah
Member

thaJeztah commented Nov 5, 2016

the "regular" volume you're describing is a bind-mount, not a volume; you specify a path from the host, and it's mounted in the container. No data is copied from the container to that path, because the files from the host are used.

For a volume, you're asking docker to create a volume (persistent storage) to store data, and copy the data from the container to that volume.

Volumes are managed by docker (or through a plugin), and the storage path (or mechanism) is an implementation detail, as all you're asking for is storage that's managed.
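
A small side-by-side of the two, with arbitrary images and paths:

# bind mount: the host path is the source of truth, nothing is copied into it
docker run -d -v /srv/www:/usr/share/nginx/html nginx

# named volume: docker-managed storage; on first use, the image's content at the
# mount path is copied into the (empty) volume
docker volume create --name www-data
docker run -d -v www-data:/usr/share/nginx/html nginx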

@CWSpear
Contributor

CWSpear commented Nov 15, 2016

It also appears that it does not create the directory... I am pretty sure it used to? But now it won't and then you'll get a (kind of ambiguous) error:

ERROR: for mariadb  Cannot create container for service mariadb: no such file or directory

(Note that the error is mentioning the container's name, and not the volume's name.)

Using this relevant snippet from docker-compose.yml:

volumes:
  database:
    driver_opts:
      type: none
      device: /path/is/writable/dir-does-not-exist/
      o: bind

(The parent directory was writable to the user running docker.)

@cpuguy83
Contributor

cpuguy83 commented Nov 15, 2016

@CWSpear This has never created the directory. You are asking the volume driver to bind mount a dir. This is different than creating a bind-mount directly.
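
So the practical workaround is simply to create the host directory yourself before bringing the stack up (the path below is the placeholder from the snippet above):

mkdir -p /path/is/writable/dir-does-not-exist
docker-compose up -d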

@CWSpear
Contributor

CWSpear commented Nov 15, 2016

@cpuguy83 yeah, I guess I misread your earlier comment, sorry!

@cpuguy83
Contributor

cpuguy83 commented Dec 23, 2016

Nope. Are you sure a volume with that same name didn't already exist?

@shmendo

shmendo commented Dec 23, 2016

@cpuguy83 Oops, sorry, I should have specified these are not actually volumes; they are bind-mounts to the host file system. I realize this does create a volume that shows in docker volume ls; however, I was able to reproduce this multiple times with different volume names in the docker-compose file. It only occurred when a volume name was 19 characters or more in length. I should also mention I'm on OS X running Docker for Mac. Not sure if that has anything to do with it.

@thaJeztah
Member

thaJeztah commented Dec 23, 2016

@shmendo are you seeing the same issue when using docker directly (i.e. not through docker-compose)? This works for me;

version: "2.0"

services:
  foo:
    image: "nginx:alpine"
    volumes:
      - "arbitrary-repo-name:/one"
      - "arbitrary-repo-namee:/two"

volumes:
  arbitrary-repo-name:
    driver_opts:
      type: none
      device: /var/log
      o: bind
  arbitrary-repo-namee:
    driver_opts:
      type: none
      device: /var/log
      o: bind

@shmendo

shmendo commented Dec 27, 2016

@thaJeztah I'm not 100% sure what you mean by not using docker-compose, but I will test again using the same setup as above. I'm in the middle of a ticket, so it may be a little while before I respond, but I will respond with results when I do test it, as well as version #s of everything.

@flaccid
Contributor

flaccid commented Jan 2, 2017

@thaJeztah I'm trying your solution from the above compose, and also just docker volume create --opt type=none --opt device=<host path> --opt o=bind with an existing host path. The volume is created, however it is empty. I was expecting that inspecting the volume would show Mountpoint to be the host path, or a symlink, or something.
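
For what it's worth, this appears to be expected: the bind configuration shows up under Options in docker volume inspect, Mountpoint stays under /var/lib/docker, and the host path is only bind-mounted there once a container actually uses the volume. Roughly, with the output abridged and illustrative:

$ docker volume inspect app-data
[
    {
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/app-data/_data",
        "Name": "app-data",
        "Options": {
            "device": "/srv/app-data",
            "o": "bind",
            "type": "none"
        },
        "Scope": "local"
    }
]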

@evilive3000

evilive3000 commented Jan 4, 2017

Should a bound volume disappear (unmount) after docker-compose down? (If it should not, how do I make it unmount on down?)

I still have a copy of the bound volume after calling down, and the next up does not change its content.

I bind via driver_opts.

@shmendo

shmendo commented Jan 10, 2017

@thaJeztah

OK, I found my problem. It has nothing to do with the volume name character length. What was happening under the hood when using docker-compose with named volumes is that it actually creates a named volume that shows in docker volume ls. I wasn't aware of this. I thought it just mounted the volume with a name that existed as long as the docker container was running. Running the docker-compose up command the first time created a volume named arbitrary-repo-name which used the path /var/log/currently_exists.

volumes:
  arbitrary-repo-name:
    driver_opts:
      type: none
      device: /var/log/currently_exists
      o: bind

Later on, I moved the src to a new folder (re-cloned a second project folder), and the name of the volume used was the same as before: arbitrary-repo-name. I had changed the path in the device block, but since the volume already existed, it was used again instead of a new volume with the same name being created. The example below appears to re-use the volume mapping to /var/log/currently_exists, but that folder no longer exists.

volumes:
  arbitrary-repo-name:
    driver_opts:
      type: none
      device: /var/log/new_folder
      o: bind

Hopefully this helps someone else. It was my mistake in understanding what is happening under the hood.
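
A practical consequence of the behaviour described above: driver_opts are only applied when the volume is first created, so picking up a changed device path means removing and recreating the volume, for example:

# remove the project's named volumes along with its containers...
docker-compose down -v

# ...or remove just the one volume (compose typically prefixes it with the project name)
docker volume rm <project>_arbitrary-repo-name

docker-compose up -d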

@emboss

emboss commented Jan 19, 2017

The solution proposed by @cpuguy83 works perfectly for directories for me. However, when I try to create a volume for a single file with

docker volume create --name my_nginx_conf \
--opt type=none \
--opt device=/etc/nginx/nginx.conf \
--opt o=bind

and I try to use that volume in a container, e.g.

docker run --rm -it -v my_nginx_conf:/test/nginx.conf busybox

it will give me an error: docker: Error response from daemon: not a directory.

I thought it would work just like a normal mount where I can choose between a directory or a single file. Any idea what I might be missing?

@cpuguy83
Contributor

cpuguy83 commented Jan 19, 2017

@emboss I'm at a loss as to why you'd even want to do this.
-v /etc/nginx/nginx.conf:/test/nginx.conf should suffice.

@emboss

emboss commented Jan 19, 2017

@cpuguy83 Thanks for the quick response! I'm looking for a way to define a named volume seeded with a single file located on the host. I can't use -v <path_to_file_on_host>:<target> in this specific situation because the docker run is executed in a CI pipeline which itself runs in a Docker container and has no access to the file on the host.

I thought I could solve this with a named volume bind-mounting the host file - the CI process could then reference the volume instead of the file. Would that be possible using volume create?

@cpuguy83
Contributor

cpuguy83 commented Jan 19, 2017

"-v" has access to the same fs that the volume driver does.

@emboss

emboss commented Jan 19, 2017

@cpuguy83 I must be doing something wrong then. Thanks for the help!

@cpuguy83
Contributor

cpuguy83 commented Jan 24, 2017

@emboss Feel free to ping me in #general on slack if you'd like to discuss further.

@emboss

emboss commented Jan 24, 2017

@cpuguy83 Thank you, much appreciated! I pinged you on #docker-dev IRC since I couldn't figure out how to join the Slack channel :)

@thenewguy

thenewguy commented Jun 9, 2017

This is a problem for me. I want docker to mount particular data dirs in the container on particular mirrored storage accessible by file path on the host. If I do this using the syntax that works under volumes (path/on/host:path/on/container), I encounter permission errors, because I am running a 3rd-party container and cannot set the uid of the user a script runs as. If I let docker manage the volume with the local driver, everything works as expected.

@tkvw

tkvw commented Jul 10, 2017

is this supposed to work on Windows as well?

If I specify this:

version: '3'

services:
  test: 
    image: busybox
    volumes:
      - data:/test
volumes:
  data: 
    driver_opts:
      type: none
      device: C:\Users\scd\src\github\tkvw\docker\data\test
      o: bind

I get this error:

ERROR: for d5132bd8836f_compose_test_1  Cannot start service test: error while mounting volume '/var/lib/docker/volumes/compose_data/_data': error while mounting volume with options: type='none' device='C:\Users\scd\src\github\tkvw\docker\data\test' o='bind': no such file or directory

ERROR: for test  Cannot start service test: error while mounting volume '/var/lib/docker/volumes/compose_data/_data': error while mounting volume with options: type='none' device='C:\Users\scd\src\github\tkvw\docker\data\test' o='bind': no such file or directory
ERROR: Encountered errors while bringing up the project.

The directory does exist and C: is shared in Docker.

@cpuguy83
Contributor

cpuguy83 commented Jul 10, 2017

@tkvw you would have to supply the unix path... I suspect this would not work.. but why would you not just use a normal bind in this case?

@tkvw

tkvw commented Jul 10, 2017

I am new to docker, but I was thinking this is a nice way to separate volumes between different environments:

compose-common.yml:

services:
  test: 
    image: busybox
    volumes:
      - data:/test

compose-dev.yml:

volumes:
  data: 
    driver_opts:
      type: none
      device: C:\Users\scd\src\github\tkvw\docker\data\test
      o: bind

compose-prod.yml:

volumes:
  data: 

But maybe I am doing this all wrong?

@cpuguy83
Contributor

cpuguy83 commented Jul 10, 2017

you would have to supply the unix path

But no, this is not really a good separation. It's adding a lot of platform-dependent details for something that you can just do:

volumes:
    - C:\Users\scd\src\github\tkvw\docker\data\test:/test

@tkvw

tkvw commented Jul 10, 2017

Thanks @cpuguy83, I solved my issue with your unix path tip (the Windows drive is mounted at /C).
Can you elaborate a bit on your last comment? Consider:

# common.yml
services: 
  proxy:
    ....
    volumes:
      - shared:/shared
  foo:
    ....
    volumes:
      - shared:/shared
  bar: 
    ....
    volumes:
      - shared:/shared

Now I'm allowed to write:

# dev.yml
volumes:
  shared:
      driver_opts:
         type: none
         device: /C/Users/scd/src/github/tkvw/docker/data/test/
         o: bind

I think this is much cleaner and more platform-independent than explicitly setting the (platform-dependent) mounts in all the services (because volumes_from is not allowed in v3). Or am I missing something obvious?
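
For completeness, this kind of split is usually consumed by layering the files, something like the following (prod.yml is a hypothetical counterpart that declares the same volume without driver_opts, as in the earlier compose-prod.yml example):

# development: layer the bind-backed volume definition over the common file
docker-compose -f common.yml -f dev.yml up -d

# production: layer the default, docker-managed volume definition instead
docker-compose -f common.yml -f prod.yml up -d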

@johnharris85
Contributor

johnharris85 commented Nov 3, 2017

The use case for this (for us at least) is wanting to be host-independent. We can pre-create our named volumes (pointing to different mount points on each host if necessary) and then use the same stack files with the named volume. We're using this as part of a workaround for btrfs issues (unable to mount a non-btrfs filesystem at /var/docker/volumes for ... reasons), so we can create these volumes outside of /var/docker (which is btrfs) to work around the negative effects of volumes on btrfs.

@cpuguy83
Contributor

cpuguy83 commented Nov 3, 2017

@johnharris85 How is this host independent? In any case, you can make a volume driver to mount a btrfs subvolume pretty easily or you can just use a bind-mount.

@matthew-hickok

matthew-hickok commented Nov 21, 2017

I am completely new to Linux and even more new to Docker....so please bear with me.

I have a secondary disk sitting at, let's say, /dev/sdb; I've created a partition and formatted it as ext4. This secondary disk is where I want all of the data inside the /var/garbage directory in my container to live. How do I use a named volume in this case?

Something like

docker volume create --opt type=none --opt device=/dev/sdb1 my_volume

sudo docker run -v my_volume:/var/garbage -it image/someimage

Am I even looking in the right general direction? I'm guessing I need to do something with the partition before I try to create the volume mount with Docker but I have no idea. (I'm on CoreOS btw)

Thanks in advance :)

@cpuguy83

Contributor

cpuguy83 commented Nov 21, 2017

@matthew-hickok You'd have to set type=ext4, presumably.
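
For completeness, a minimal sketch of what that might look like, reusing the /dev/sdb1, my_volume and image/someimage names from the question above (since the options are passed through to mount, the partition must already contain an ext4 filesystem):

docker volume create --driver local \
  --opt type=ext4 \
  --opt device=/dev/sdb1 \
  my_volume

sudo docker run -v my_volume:/var/garbage -it image/someimage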

@ian-axelrod

ian-axelrod commented Nov 22, 2017

Perhaps this question is not 100% on-topic for this issue, so apologies in advance if that is the case.

I am wondering if the feature proposed in this issue could address problems I have been having using Docker for local development. I have a Dockerfile that installs node and python dependencies for an app, and then copies the app source itself. Pretty standard setup. The problem is that I cannot find a way to give an IDE access to the dependencies for debugging purposes, nor a way to modify both the source code and the dependencies (which live in the same directory as the source) while containers are running.

I cannot simply bind-mount, e.g., ./node_modules:/app/node_modules, and then bind-mount ./:/app, because the dependencies in /app/node_modules will be hidden by the (empty) ./node_modules folder. I can change node_modules to be a named volume, but then there is no way to actually make changes to the contents of node_modules from the host. I do think this makes sense for named volumes, but I think bind-mounts must then have some option to populate an empty host directory with the contents of the container's directory. Without this ability, I fail to see how you can support code watching + dependency introspection. I think both are essential.

Am I missing anything? Is there a simple way to achieve this with existing mechanisms in docker? (Installing dependencies at runtime is not an option, imo. It adds a significant amount of complexity to the development setup if you want to ensure fast startup times. npm is slow af, even with caching...)

@cpuguy83

Contributor

cpuguy83 commented Nov 23, 2017

@ian-axelrod If you use a named volume with the above example for binding a host dir to the volume, the data in the image will be copied over.
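
To illustrate that pattern, a rough sketch of a compose file for the node_modules case; the service name, paths and compose version are assumptions, the host directory has to exist up front, and ${PWD} relies on the shell providing that variable:

# docker-compose.yml (hypothetical)
version: "3.4"
services:
  app:
    build: .
    volumes:
      - ./:/app                           # source code, plain bind mount
      - node_modules:/app/node_modules    # named volume shadows the empty host folder

volumes:
  node_modules:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ${PWD}/node_modules         # must already exist on the host

On the first run, while the volume is still empty, the image's /app/node_modules should be copied into it (the behaviour described above), after which the host's ./node_modules mirrors what the container sees.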

@ian-axelrod

ian-axelrod commented Nov 27, 2017

Hi @cpuguy83,

I did use the solution you gave earlier in this issue, in fact. The one blocker for me is the fact that it does not automatically create folders, which I know you specifically say it will not. That is why I am trying to find an alternative setup that does create folders, yet still has the advantages of the solution you proposed.

My team is using docker for local development, which means we have a set of requirements that need to be satisfied for development to go smoothly. First, we need dependencies visible on the host (inside the IDE workspace, actually) so that IDE integrations work properly. Second, we need a simple approach to correctly initialize the apps for new, junior developers, who may not be familiar with docker initially, to ease the onboarding process. I created a set of command-line tools that accomplishes this; however, I also want to give developers that are familiar with docker complete control over their environment. This means that I cannot build any extra logic into the aforementioned utilities that would force their use over simple docker-compose commands. I really want to avoid placing folder creation commands in the utilities, for instance, as that would mean devs not using the utility would have to manually create dependency folders for each new service they create. We have been creating quite a few new services as of late, so you can imagine that would become annoying.

Hopefully this gives you more insight. Is there anything I can do, or am I stuck creating the dependency folders manually for new services?

Cheers,

-Ian

@matthew-hickok

matthew-hickok commented Dec 6, 2017

@cpuguy83

Just wanted to pick this back up as I have a very specific question...

I did what you said and created the volume like so:

docker volume create --opt type=ext4 --opt device=/storage/data my_volume

sudo docker run -v my_volume:/var/garbage -it image/someimage

It fails to perform the mount.

The only way I got it to work was by adding the bind option, like this:

docker volume create --opt type=ext4 --opt device=/storage/data --opt o=bind my_volume

If I'm doing it this way, am I discarding the benefits of using the newer volume mounts and pretty much just using old-school bind mounts?

@cpuguy83

Contributor

cpuguy83 commented Dec 6, 2017

@matthew-hickok Just FYI, type=ext4 isn't doing anything there. And yes, you are just bind-mounting, though there are benefits and trade-offs to using a volume instead of a straight-up -v /foo:/bar...

But I'm not sure what you are hoping to accomplish in general here.
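
As a rough sketch of the two variants being discussed, reusing the device and path names from the commands above (whether /storage is where the partition is actually mounted is an assumption about the setup):

# variant 1: let the volume mount the raw partition; type/device go straight to mount,
# so device must be a block device, not a directory
docker volume create --opt type=ext4 --opt device=/dev/sdb1 my_volume

# variant 2: the partition is already mounted on the host (say at /storage),
# and the volume just bind-mounts a directory on it
docker volume create --opt type=none --opt o=bind --opt device=/storage/data my_volume

# the container side looks the same either way
sudo docker run -v my_volume:/var/garbage -it image/someimage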

@matthew-hickok

matthew-hickok commented Dec 6, 2017

@cpuguy83 All I want is to provide persistent storage to my containers, with that storage living on a secondary disk.

For example, I want Elasticsearch data to be stored in /storage/es_data, which sits on /dev/sdb. Since I am completely new to Docker (and to Linux, actually), I am not sure what the best way to accomplish that is.

I've heard that bind-mounts are bad because of things like permission issues. But it seems that if I want the persistent data to live outside of the default Docker location used by volume mounts, I need to use a bind mount.

I could be going about this completely backwards; I really have no idea.

konstin added a commit to meine-stadt-transparent/meine-stadt-transparent that referenced this issue Dec 17, 2017

Serve static and media with nginx in docker compose
Unfortunately there is no feasible way to mount data from inside a container to an outside folder (moby/moby#19990). Also, media and cache are confirmed to work now.

@mikeyjk

mikeyjk commented Jun 12, 2018

I found that I needed to manually delete the existing Docker volumes before @cpuguy83's solution would work.
Glad to have finally found this solution.

Does anyone have any thoughts on the best way of handling 'n' shared volumes in this fashion?
The model here is a development environment, where we may want 'n' containers on 'n' different changesets.

Or any general thoughts on whether a clustered filesystem would be more appropriate, be it on the host machine or remote?
