Docker volumes do not respect container folder contents #4361

Closed
renanvaz opened this Issue Feb 26, 2014 · 26 comments

@renanvaz

When I use a volume, the contents of the folder in my container are deleted.
Is there any configuration to merge the files? Is this a bug?

Ubuntu version: 13.10 (Saucy)
Docker version: Docker version 0.8
VirtualBox version: 4.3.6
Vagrant version: 1.4.3

@pda


pda Feb 28, 2014

Contributor

If you mean that you're mounting a volume over an existing directory, and expecting to see the files from the existing directory… you won't.

This is consistent with the general behavior of mount:

The previous contents (if any) and owner and mode of dir become invisible


@renanvaz


renanvaz Feb 28, 2014

This behavior is bad. It would be nice if there were an option for this, because if a container stops running, I'll lose my data. It would be ideal if I could keep the files centralized on the host and have only the software in the container. An example would be a MySQL database that writes its files to the host: if I run a MySQL container and start it with a volume, I'll lose the default tables and settings from MySQL.


@pda


pda Feb 28, 2014

Contributor

There can't be an option for this — Docker volumes are subject to the mechanics of unix filesystem mounting.

But you can certainly work around it. Some useful links:

I'd suggest you close this issue, and use public forums / IRC etc for further help.


@renanvaz


renanvaz Feb 28, 2014

Thanks @pda! I'll check those links.

But couldn't Docker simply copy the contents of the container folder to the host before creating the volume? That would greatly simplify the workflow.


@SvenDowideit


SvenDowideit Feb 28, 2014

Contributor

Or perhaps warn that it's about to mount into a non-empty directory. It's quite fun to watch when someone succeeds in docker cp-ing a file from a directory, only to find out that the file comes from the original docker build, and they didn't even realise there was a volume covering it.

Mmm, actually, a warning will help, but I think I also need to add some more documentation about that.


@pda


pda Feb 28, 2014

Contributor

I think:

  • Volumes are, and should be treated like, mount.
  • The Share Directories via Volumes page could mention that this is how mount works.
  • No warnings or automatic copying should happen; same as mount.
  • This issue should be closed as a non-issue.
@renanvaz


renanvaz Mar 1, 2014

I understand that volumes are treated as mounts, but I believe many people need this as a new feature, because it would greatly simplify things and make it possible to have an environment that is already configured, rather than running a container with volumes and only then configuring it.


@paislee


paislee Sep 4, 2015

Wait, so is this not true? From the Docker docs:

Volumes are initialized when a container is created. If the container’s base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization.


@pda


pda Sep 4, 2015

Contributor

This issue was from Docker 0.8 days. The information in it may or may not have been true at the time, but I wouldn't pay much attention to it now, either way.


@jelazos7


jelazos7 Oct 6, 2015

@paislee, there is a note further down specific to mounting host directories:

https://docs.docker.com/userguide/dockervolumes/#mount-a-host-directory-as-a-data-volume

Note: If the path /opt/webapp already exists inside the container’s image, its contents will be replaced by the contents of /src/webapp on the host to stay consistent with the expected behavior of mount


@beetree


beetree Dec 23, 2015

As @paislee pointed out, the documentation (https://docs.docker.com/engine/userguide/dockervolumes/) is wrong:

$ docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py

This command mounts the host directory, /src/webapp, into the container at /opt/webapp. If the path /opt/webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content.

Instead the last sentence should read:

If the path /opt/webapp already exists inside the container’s image it will be removed and replaced by the /src/webapp mount.

The quote by @jelazos7 seems to have been removed.

/b3


@thaJeztah


thaJeztah Dec 23, 2015

Member

@beetree the content isn't removed from the container, though; it is "masked", because the mounted directory is mounted on top of the existing files. The files are still in the container, just not reachable.


@beetree


beetree Dec 24, 2015

@thaJeztah I might be missing the subtle difference between "removed/replaced" and "masked". If the files can't be seen, read, or written to, aren't they practically non-existent/deleted? Do the files reappear if you delete the mounted directory from within the container, or if you unmount the volume(s) from the (running?) container?

I understand the technical difference in the layering/storage, but to a user "removed/replaced" seems identical to "masked".


@thaJeztah


thaJeztah Dec 24, 2015

Member

No, they're not deleted. The volume is mounted "over" them, but the files in the container are untouched (not deleted). For example, take this Dockerfile:

FROM ubuntu:latest
RUN mkdir -p /test/
RUN echo hello > /test/hello

Build the Dockerfile, and tag the image "example";

root@dockr:~/projects/masked# docker build -t example .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM ubuntu:latest
 ---> 6d4946999d4f
Step 2 : RUN mkdir -p /test/
 ---> Using cache
 ---> f260463d3794
Step 3 : RUN echo hello > /test/hello
 ---> Running in 8f863e8e2f59
 ---> cef8a1760237
Removing intermediate container 8f863e8e2f59
Successfully built cef8a1760237

Run the container, using the current directory as a bind-mounted volume.

note: I'm using --privileged here, otherwise we're not allowed to
umount from inside a container. Don't do this; --privileged is insecure
as it gives the process in the container far too many privileges.
It's just to demonstrate the idea here.

root@dockr:~/projects/masked# docker run -it --rm --privileged -v $(pwd):/test/ example

Inside the /test/ directory, you'll see the contents of the mounted volume (in this case, the current directory, which only has a Dockerfile):

root@362dc1ce612b:/# ls /test
Dockerfile

However, un-mounting the volume reveals the content that is still in the container (only not accessible, because the volume is mounted over it)

root@362dc1ce612b:/# umount /test
root@362dc1ce612b:/# ls /test
hello

As you can see, the "hello" file is still there.


@beetree


beetree Dec 24, 2015

Ah, that makes a lot of sense. Thanks @thaJeztah for explaining this! Appreciate it!


@thaJeztah


thaJeztah Dec 24, 2015

Member

@beetree you're welcome!


@yefim


yefim Jan 14, 2016

Is there a way to run commands after the volume is mounted? I want to mount the folder that has my package.json, then run npm install to install all the dependencies into the container.


@kilpatty


kilpatty Apr 15, 2016

@yefim If you use an ENTRYPOINT script, it will run at container startup, after the volumes have been mounted.

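A minimal sketch of that approach (the /app path, image contents, and CMD here are illustrative, not from this thread): an ENTRYPOINT script that installs dependencies into the mounted folder before handing off to the main command.

```shell
#!/bin/sh
# entrypoint.sh -- copied into the image and wired up in the Dockerfile, e.g.:
#   COPY entrypoint.sh /entrypoint.sh
#   ENTRYPOINT ["/entrypoint.sh"]
#   CMD ["npm", "start"]
# It runs at container startup, after Docker has mounted any volumes.
set -e
cd /app          # /app is the bind-mounted folder containing package.json
npm install      # runs now that the mount is in place
exec "$@"        # hand off to CMD, so the main process replaces this shell
```

Using exec "$@" means the CMD becomes the container's main process, so signals like SIGTERM reach it directly.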

@tcollavo


tcollavo May 2, 2016

So I had an existing postgresql db container that I started with the following command:
docker run --name=postgresql-redmine -d --env='DB_NAME=redmine_production' --env='DB_USER=thisredmine' --env='DB_PASS=' --volume=/srv/docker/redmine/postgresql:/var/lib/postgresql sameersbn/postgresql:9.4-17

I was messing around with networking and started another container with the following command:
docker run --name postgresql-redmine-prod -d --env='DB_NAME=redmine_production' --env='DB_USER=thisredmine' --env='DB_PASS=' --volume=/srv/docker/redmine/postgresql:/var/lib/postgresql --net mynet --ip 10.0.0.1 sameersbn/postgresql:9.4-17

The second container failed to start, and now when I try to start the first container via "docker start postgresql-redmine", it fails to start as well. Did I overwrite the first container's volume or just mount over it? Any idea how I can recover the volume and restart the container?

Thanks for any assistance...


@thaJeztah


thaJeztah May 2, 2016

Member

@tcollavo you started both containers with the same bind-mounted host directory; this resulted in two PostgreSQL servers working / writing to the same directory. I can't tell if this resulted in data being overwritten by the second PostgreSQL server, permissions being changed, or something else.


@mangalaman93


mangalaman93 Dec 23, 2016

I am also running a Jenkins container using docker-compose. Every time I reboot my machine, my container gets restarted when the host comes back up, but the host directory where I have mounted the container's /var/jenkins_home gets recreated from scratch and I lose all my data. I thought, as per this thread, that this issue only affected earlier versions of docker-compose, whereas I am running docker-compose 1.9. Here are further details:

$ docker info
Containers: 2
 Running: 2
 Paused: 0
 Stopped: 0
Images: 208
Server Version: 1.12.5
Storage Driver: devicemapper
 Pool Name: docker-253:2-268646682-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 15.96 GB
 Data Space Total: 107.4 GB
 Data Space Available: 49.66 GB
 Metadata Space Used: 13.83 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.134 GB
 Thin Pool Minimum Free Space: 10.74 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.135-RHEL7 (2016-09-28)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: overlay null bridge host
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 3.10.0-514.2.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 32
Total Memory: 251.7 GiB
Name: <HOST_NAME>
ID: <ID>
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Insecure Registries:
 127.0.0.0/8
$ docker-compose version
docker-compose version 1.9.0, build 2585387
docker-py version: 1.10.6
CPython version: 2.7.9
OpenSSL version: OpenSSL 1.0.1t  3 May 2016

@arvenil


arvenil Oct 26, 2017

What is missing here is a simple example of how one can use a host directory over a non-empty VOLUME.

I understand the mechanics behind Linux mount, but for Docker this creates inconsistent behavior.
If you use this Dockerfile:

FROM ubuntu:latest
RUN mkdir -p /test/
RUN echo hello > /test/hello
VOLUME /test

and run it without passing any volume params, e.g. docker run example, it will create a volume with a copy of /test. You can read that in the docs: https://docs.docker.com/engine/reference/builder/#volume
So with this your data is safe; you won't lose anything. But if at any point the user decides he wants to use his own local directory and bind mounts it, everything crashes, because now the whole app gets an empty dir.

I went through a bazillion closed issues, and it looks like another one should be opened, because over and over again people are asking for this, and so far all of those issues are closed without even a workaround being provided.

I'm not really sure what's so hard about allowing an additional param that copies the contents of the guest directory to the host directory before mounting, the same way it's copied into a volume before the volume is mounted.

Or at least, what's the workaround to copy from image to host and then bind mount?
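One possible workaround along those lines (a sketch, using the example image and /test path from the Dockerfile above; the host directory name is arbitrary) is to copy the directory out of the image with docker cp before bind mounting:

```shell
# 1. Create (but don't start) a throwaway container from the image.
docker create --name seed example

# 2. Copy the directory's contents to a host directory.
#    The trailing "/." copies the contents rather than the folder itself.
mkdir -p ./test-data
docker cp seed:/test/. ./test-data/

# 3. Remove the throwaway container.
docker rm seed

# 4. Bind mount the now-populated host directory as usual.
docker run --rm -v "$(pwd)/test-data":/test example ls /test
```

docker cp works on a created container without ever starting it, so this adds no runtime side effects before the real run.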


@mattacular


mattacular Nov 7, 2017

I would also like to know if anyone has a good work-around for "copy from image to host and THEN bind mount" for volumes. It's extremely useful for local development with containers.


@kehao95


kehao95 Dec 12, 2017

I will show the inconsistent behavior of mounting a host filesystem directory compared to mounting a volume.

I will do the experiment using the image library/ubuntu, and I have created a folder in /tmp which I will use as the mount folder:

$ ls /tmp/myfolder
myfile  myfile2

Mount nothing:

It shows the content of ubuntu's /usr/ (some folders), which is fine.

$ docker run -it ubuntu ls /usr/
bin  games  include  lib  local  sbin  share  src

Mount my folder from the local FS:

When I mount myfolder into the container, the original content of /usr from ubuntu is deleted (or hidden) and replaced with my folder.

$ docker run -v /tmp/myfolder/:/usr/ -it ubuntu ls /usr/
myfile	myfile2

Mount a volume:

However, with a Docker volume the behavior is totally different from mounting the local FS. It's more like mounting nothing.

$ docker run -v foo:/usr/ -it ubuntu ls /usr/
bin  games  include  lib  local  sbin  share  src

So that means that with the identical syntax -v [host-src:]container-dest, Docker's behavior is totally different.


@iraklisg


iraklisg Dec 14, 2017

@kehao95 I believe this has to do with the following (taken from docker docs):

If you mount an empty volume into a directory in the container in which files or directories exist, these files or directories will be propagated (copied) into the volume. Similarly, if you start a container and specify a volume which does not already exist, an empty volume is created for you.

So what is happening here is that the contents of the /usr directory inside your container have been copied into the empty volume foo that was automatically created by Docker.

This behavior has also been discussed in #18670


@thaJeztah


thaJeztah Dec 14, 2017

Member

Correct; even though the "mechanics" behind bind-mounted host directories and volumes are similar, they have a different purpose and follow different semantics:

When using a (named) volume, you tell Docker to create a new storage space (if it doesn't exist) for a container to persist files outside of the container's filesystem. If that volume is empty, files from the container are copied to the volume's storage location, and the volume is mounted inside the container. Mounting a volume in the container "masks" the files that were already in that location, so what you see inside the container is the files that are present in the volume.
If a volume already contains data, or the volume is used with the :nocopy option, the "copy" step is skipped, and the volume is just mounted in the container.

When bind mounting a host directory, you give the container permission to access a given path on the host, and any content in that path. If that directory happens to be empty, you give the container access to an empty directory. Access to the directory can be :ro (read-only), in which case the container can only read from it, or :rw (read-write; the default), in which case the container can also write to that directory.
Given that you only give "permission" to access the data on the host, Docker will not touch/alter the content, which means it will not copy the content from the container to the host (even if the path on the host is empty).
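The two behaviors can be compared side by side with a quick sketch (assuming the example image built earlier in this thread; the volume names and host path are arbitrary):

```shell
# Named volume: an empty volume is populated from the image on first use,
# so the container still sees the image's /test/hello.
docker run --rm -v mydata:/test example ls /test

# Named volume with :nocopy: the population step is skipped.
docker run --rm -v mydata2:/test:nocopy example ls /test

# Bind mount: the (empty) host directory simply masks /test;
# nothing is copied in either direction.
mkdir -p /tmp/empty
docker run --rm -v /tmp/empty:/test example ls /test
```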

