Base Images & Volume & mkdir #3639

Open
coderlol opened this Issue Jan 17, 2014 · 36 comments

Comments

@coderlol

coderlol commented Jan 17, 2014

If the base image's Dockerfile has a VOLUME /var/lib directive, then an image derived from the base image cannot do RUN mkdir -p /var/lib/somedir.

The mkdir command has no effect. I think it would be convenient for the base image to export a VOLUME and let derived images create their own structure without having to declare their own VOLUME.

As-is, the base image must not issue VOLUME /var/lib, or the derived images won't be able to create a directory in /var/lib.
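
A minimal pair of Dockerfiles matching the report above (the base image tag "mybase" is hypothetical, used only for illustration):

# base/Dockerfile, built and tagged as "mybase"
FROM ubuntu
VOLUME /var/lib

# derived/Dockerfile
FROM mybase
# has no effect: /var/lib/somedir is missing from the resulting image
RUN mkdir -p /var/lib/somedir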

@crosbymichael

Contributor

crosbymichael commented Mar 14, 2014

I'm not sure I understand what the expected behavior should be. Can you give an example Dockerfile to reproduce?

@fdemmer

fdemmer commented Apr 22, 2014

I've run into this too; here's an example:

First Dockerfile:

FROM ubuntu:latest
VOLUME /etc

This Dockerfile is then used to build the image "base":

# docker build -t base .
Uploading context  2.56 kB
Uploading context 
Step 0 : FROM ubuntu:latest
 ---> c0fe63f9a4c1
Step 1 : VOLUME /etc
 ---> Running in 8957e1485c31
 ---> 17879464e604
Successfully built 17879464e604
Removing intermediate container 8957e1485c31

... looks good, works.

Second Dockerfile, which should add more config and create a directory in /etc for later use:

FROM base
RUN mkdir /etc/ssl
ADD nginx.conf /etc/nginx/nginx.conf

Again we build:

# docker build -t nginx .
Uploading context 3.072 kB
Uploading context 
Step 0 : FROM base
 ---> 17879464e604
Step 1 : RUN mkdir /etc/ssl
 ---> Running in ef0693690e0f
 ---> d94dba1a25f4
Step 2 : ADD nginx.conf /etc/nginx/nginx.conf
 ---> d4debccd60b9
Successfully built d4debccd60b9
Removing intermediate container ef0693690e0f
Removing intermediate container 3da3c324ab95

... looks good, but did not work:

# docker run -i -t nginx  bash
root@e158ee019e16:/# ls -la /etc/ssl
ls: cannot access /etc/ssl: No such file or directory
root@e158ee019e16:/# ls -la /etc/nginx/
total 8
drwxr-xr-x  2 root root 4096 Apr 22 21:11 .
drwxr-xr-x 57 root root 4096 Apr 22 21:12 ..
-rw-r--r--  1 root root    0 Apr 22 21:10 nginx.conf

As you see above, the ADD command worked (my sample nginx.conf really is empty; that's OK). The RUN command to create the "ssl" directory appears to have executed during the build, but the directory is missing from the container at run time.

Edit: same problem with RUN rm. I was trying to remove sites-enabled/default and ADD my own site.conf, but default always remained. Overwriting default with ADD works, but it's not a very nice solution.

@fdemmer

fdemmer commented Apr 27, 2014

So I ran into this again just now with a different scenario, and it's making me doubt whether I'm using VOLUME and Dockerfiles right at all...

  • I created a base image that just installs OpenLDAP on top of an Ubuntu image. In that Dockerfile I set VOLUME /var/lib/ldap to keep the database outside of the container root fs (see the sketch below).
  • I then use this base image with FROM in another Dockerfile to add some custom config (as recommended in method 2 here: #2022 (comment)). The custom config is set via -e parameters on run and applied inside using dpkg-reconfigure, which modifies the database (on the volume).

That causes all kinds of weird behaviour in the LDAP server, e.g. the domain reconfiguration is applied, but other default objects are missing from the new base DN. Anyway, the point is: as soon as I remove the VOLUME for the database path from my base image, everything works as expected.

Volumes really sound great the way the docs and articles like http://crosbymichael.com/advanced-docker-volumes.html describe them, but if I cannot modify their contents when "inheriting" them FROM other images, how is the "base image + config image" pattern supposed to work?
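
A rough sketch of the base Dockerfile described in the first bullet above (exact package names are assumptions; the relevant part is the VOLUME line):

FROM ubuntu
# installs the OpenLDAP server (package names are an assumption)
RUN apt-get update && apt-get install -y slapd ldap-utils
# keep the database outside of the container root fs
VOLUME /var/lib/ldap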

@LK4D4

Contributor

LK4D4 commented Jun 6, 2014

+1 on this issue, I think this is clearly a bug

@cpuguy83 cpuguy83 referenced this issue Jun 18, 2014

Closed

Collected issues with Volumes #6496

7 of 12 tasks complete
@e-max

e-max commented Jun 30, 2014

I've made the simplest example to demonstrate this issue:

Parent image - create file

[e-max@e-max docker]$ cat ./parent/Dockerfile 
FROM ubuntu
RUN mkdir /tmp/docker/
RUN echo "hello" > /tmp/docker/hello
VOLUME ["/tmp/docker/"]

Child image - remove file

[e-max@e-max docker]$ cat ./child/Dockerfile 
FROM parent
RUN rm /tmp/docker/hello

Build

[e-max@e-max docker]$ docker build -t parent ./parent/
Sending build context to Docker daemon  2.56 kB
Sending build context to Docker daemon 
Step 0 : FROM ubuntu
 ---> 5cf8fd909c6c
Step 1 : RUN mkdir /tmp/docker/
 ---> Using cache
 ---> 0925b684892f
Step 2 : RUN echo "hello" > /tmp/docker/hello
 ---> Using cache
 ---> c63b75c84ecc
Step 3 : VOLUME ["/tmp/docker/"]
 ---> Using cache
 ---> 71a00868d3a2
Successfully built 71a00868d3a2
[e-max@e-max docker]$ docker build -t child ./child/
Sending build context to Docker daemon  2.56 kB
Sending build context to Docker daemon 
Step 0 : FROM parent
 ---> 71a00868d3a2
Step 1 : RUN rm /tmp/docker/hello
 ---> Using cache
 ---> dcde5cf04e6b
Successfully built dcde5cf04e6b

And the test - the file still exists in the child image.

[e-max@e-max docker]$ docker run -i -t --rm child cat /tmp/docker/hello
hello
[e-max@e-max docker]$
@tiborvass

Collaborator

tiborvass commented Jun 30, 2014

@e-max Thanks. I could reproduce with master.

@tiborvass tiborvass self-assigned this Jun 30, 2014

@LK4D4

Contributor

LK4D4 commented Jul 7, 2014

Okay, I did a little research, and it seems that a volume is immutable between Dockerfile instructions.
Here is an even smaller Dockerfile for testing:

FROM busybox

RUN mkdir /tmp/volume
RUN echo "hello" > /tmp/volume/hello
VOLUME ["/tmp/volume/"]
RUN [[ -f /tmp/volume/hello ]]
RUN rm /tmp/volume/hello
RUN [[ ! -e /tmp/volume/hello ]]

On each instruction we create a new volume and copy content from the original volume.
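
Roughly what the builder does for each RUN step once the volume is declared (a simplified sketch, not the builder's actual code; image and container IDs are placeholders):

# each build step is essentially a docker run against the previous layer
docker run -v /tmp/volume <previous-image-id> rm /tmp/volume/hello
# the rm happens inside a freshly initialized anonymous volume mounted over
# the image's /tmp/volume, and the commit captures only the container's rootfs,
# so /tmp/volume/hello is still present in the resulting layer
docker commit <container-id>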

cpuguy83 added a commit to cpuguy83/docker that referenced this issue Oct 27, 2014

Fixes #3639 Do not create new volumes on build
During builds when `VOLUME` is declared it is not treated any
differently than a normal volume.  As such, for each new container
created as part of the build there is a new volume created for that
container.

Because of the way volumes work, this essentially makes a declared
volume an immutable directory within the image, since changes to the
data in the volume are not committed back to disk. This in turn makes
VOLUME declarations highly positional, and they must be declared at the
end of the Dockerfile.

The current behavior can also create a lot of added overhead if there is
any significant amount of data in the image at the declared volume
location, since this data gets automatically copied out onto the host as
part of the volume initialization.

This change introduces, for the first time, a differentiation between a
build container and a normal container.
This means a new HostConfig field called `BuildOnly` has been added.
This field is a boolean that can be used to change behavior of the
container creation depending on if it is for a build or not. In the case
of this change it checks if `container.hostConfig.BuildOnly` is true,
and if so, does not fully initialize the volume and instead just creates
the dir within the container's fs (if it doesn't exist).

Signed-off-by: Brian Goff <cpuguy83@gmail.com>

cpuguy83 added a commit to cpuguy83/docker that referenced this issue Oct 28, 2014

Fixes #3639 Do not create new volumes on build
During builds when `VOLUME` is declared it is not treated any
differently than a normal volume.  As such, for each new container
created as part of the build there is a new volume created for that
container.

Because of the way volumes work, this essentially makes a declared
volume an immutable directory within the image, since changes to the
data in the volume are not committed back to disk. This in turn makes
VOLUME declarations highly positional, and they must be declared at the
end of the Dockerfile.

The current behavior can also create a lot of added overhead if there is
any significant amount of data in the image at the declared volume
location, since this data gets automatically copied out onto the host as
part of the volume initialization.

This change introduces, for the first time, a differentiation between a
build container and a normal container.
This means a new Config field called `BuildOnly` has been added.
This field is a boolean that can be used to change behavior of the
container creation depending on if it is for a build or not. In the case
of this change it checks if `container.Config.BuildOnly` is true,
and if so, does not fully initialize the volume and instead just creates
the dir within the container's fs (if it doesn't exist).

Signed-off-by: Brian Goff <cpuguy83@gmail.com>

cpuguy83 added a commit to cpuguy83/docker that referenced this issue Oct 29, 2014

Fixes #3639 Do not create new volumes on build

cpuguy83 added a commit to cpuguy83/docker that referenced this issue Oct 29, 2014

Fixes #3639 Do not create new volumes on build

cpuguy83 added a commit to cpuguy83/docker that referenced this issue Nov 10, 2014

Fixes #3639 Do not create new volumes on build
@jessfraz

Contributor

jessfraz commented Jan 14, 2015

@cpuguy83 is this fixed?

@cpuguy83

Contributor

cpuguy83 commented Jan 14, 2015

@jfrazelle No, see #7133

@jessfraz

Contributor

jessfraz commented Jan 14, 2015

ah ok bummmeerrrrr

@tomfotherby

Contributor

tomfotherby commented Feb 11, 2015

I just ran into this issue. I was trying to extend the official mongodb image from the Hub and seed the database with my app's skeleton data, but because the parent image uses VOLUME /data/db, any mongo data I add in my extended image doesn't persist through the build. Shame 😞.

(I blogged about my workaround.)


@pmoust

Contributor

pmoust commented Feb 26, 2015

Relates #8647

cpuguy83 added a commit to cpuguy83/docker that referenced this issue Mar 5, 2015

builder filter volumes on run
Fixes #3639

This makes the builder disable volumes on `RUN` commands; this way
anything written to dirs that have a volume will persist in the image,
as is the expected behavior in builds.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>

@cpuguy83 cpuguy83 self-assigned this Mar 5, 2015

cpuguy83 added a commit to cpuguy83/docker that referenced this issue Mar 6, 2015

builder filter volumes on run
Fixes #3639


@spf13 spf13 added kind/bug exp/expert and removed exp/expert bug labels Mar 21, 2015

@resouer

Contributor

resouer commented Jul 15, 2015

@LK4D4 @cpuguy83 Can anyone explain why docker creates an intermediate container for each instruction? Why don't we use only one container to run all the instructions?

 docker build --rm=true -t test:v1  .
Sending build context to Docker daemon 4.608 kB
Sending build context to Docker daemon 
Step 0 : FROM busybox
 ---> 4986bf8c1536
Step 1 : RUN mkdir /tmp/volume
 ---> Running in 44d16cfe2789
 ---> 5b43e0b0bd25
Removing intermediate container 44d16cfe2789
Step 2 : RUN echo "hello" > /tmp/volume/hello
 ---> Running in 3bb06fc6430f
 ---> f6c2df31942d
Removing intermediate container 3bb06fc6430f
Step 3 : VOLUME /tmp/volume/
 ---> Running in 772ecf1b514b
 ---> 20c101f04d84
Removing intermediate container 772ecf1b514b
Step 4 : RUN [[ -f /tmp/volume/hello  ]]
 ---> Running in a7b3d25253c1
 ---> fdcb7ad521b4
Removing intermediate container a7b3d25253c1
Step 5 : RUN rm /tmp/volume/hello
 ---> Running in f7eacc6fe381
 ---> 81723d425b39
Removing intermediate container f7eacc6fe381
Step 6 : RUN [[ ! -e /tmp/volume/hello  ]]
 ---> Running in b3770d7a9e46
The command '/bin/sh -c [[ ! -e /tmp/volume/hello  ]]' returned a non-zero code: 1

@cpuguy83

Contributor

cpuguy83 commented Feb 7, 2016

@simonvanderveldt I've attempted to fix this multiple times.
The last fix was fairly reasonable, imo, but was ultimately rejected.

Maybe a new attempt with a fresh set of eyes will help?

VOLUME is basically already a no-op during build; it's when RUN is called and it sees the volume in the config that the volume is created as normal, since RUN is basically the same thing as docker run.

holybit added a commit to ReturnPath/gocd-docker that referenced this issue Feb 16, 2016

Remove config dirs from Dockerfile
Per docker issue 3639, a Dockerfile that constitutes a base image should not
set config dirs w/ RUN instruction.

moby/moby#3639

Issue #8

Change-Id: Ic26beedd58c1f96c98deaf278955743fae75b959

holybit added a commit to ReturnPath/gocd-docker that referenced this issue Feb 16, 2016

Remove config dirs from VOLUME
Per docker issue 3639, a Dockerfile that constitutes a base image should not
set config dirs w/ VOLUME, as downstream Dockerfiles cannot modify them w/ RUN
instructions.

moby/moby#3639

Issue #8

lexi-lambda added a commit to lexi-lambda/docker-factorio that referenced this issue Apr 6, 2016

Remove VOLUMES declaration for the mods directory in the Dockerfile
This is totally unnecessary and is problematic when trying to create
images using this as the base image that place mods into that directory
(see moby/moby#3639).

lexi-lambda added a commit to lexi-lambda/docker-factorio that referenced this issue Apr 8, 2016

Remove VOLUMES declaration for the mods directory in the Dockerfile

z3cka added a commit to z3cka/Grav-PHP-Nginx that referenced this issue Apr 19, 2016

alexharrington referenced this issue in xibosignage/xibo-docker Jun 23, 2016

Merge pull request #26 from alexharrington/master
Move towards Docker best practices. With thanks to @brodkin for his input and support.
@habfast

habfast commented Jun 29, 2016

Basically, don't declare VOLUME unless you are done with that directory.

Yes, but then we are quite screwed when it comes to extending official images. There might be a case for not using official images, but I thought using official images was the recommended way to work? The proof that this is broken is all the references to this issue from external git repositories: they had to remove their VOLUME declarations so that people could extend their Dockerfiles.

In any case, +1 for this feature too.
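
When you do control the Dockerfile, the positional workaround implied by the commit message above is to declare the volume only after every instruction that touches the directory (paths below are illustrative):

FROM ubuntu
# all RUN/ADD/COPY steps that populate the directory come first
RUN mkdir -p /var/lib/app && echo seed > /var/lib/app/data
# declared last, so the contents above are baked into the image
VOLUME /var/lib/app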

@sirlatrom

sirlatrom commented Jun 29, 2016

Here is a non-exhaustive list of VOLUME instructions in official images which prevent creating child images for use in e.g. integration tests, seeding data for developers, etc. in any trivial way.

Granted, some of them have instructions on how to work around this, but most involve copying content over from other non-VOLUME directories in your child image (such as in the "Installing more tools" section of the Jenkins image page on Docker Hub), which often adds unnecessarily to container startup time and probably causes other issues too.
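
A sketch of the workaround pattern described above: stage the content in a non-VOLUME directory at build time and copy it into the volume when the container starts (the parent image, paths, and script name are all assumptions):

FROM jenkins
# staged outside the declared volume, so it survives the build
COPY seed/ /usr/share/seed/
COPY copy-seed.sh /usr/local/bin/copy-seed.sh
# copies /usr/share/seed into the volume at startup, which is what adds
# to container start time (a real script would then exec the parent
# image's original entrypoint)
ENTRYPOINT ["/usr/local/bin/copy-seed.sh"]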

@mlosev

mlosev commented Oct 4, 2016

Being unable to override a VOLUME from the parent image in any child image - that is very annoying :(
Hope it will be fixed soon.

@gotgenes

gotgenes commented Oct 7, 2016

@sirlatrom You can add JHipster to your list. See jhipster/generator-jhipster#4277.

@sirlatrom

sirlatrom commented Nov 16, 2016

@gotgenes It doesn't look like JHipster is an official image.

jjethwa referenced this issue in jjethwa/icinga2 Dec 7, 2016

modax added a commit to modax/puppet-in-docker that referenced this issue Feb 24, 2017

Cut down the number of directories in VOLUME.
VOLUME makes it impossible to add additional data to that directory
while building a downstream docker image.

There is a 3-year-old docker bug about that (see below), and I really
like the following advice:

Basically, don't declare VOLUME unless you are done with that directory.

moby/moby#3639 (comment)

In particular, the culprit for me is
/opt/puppetlabs/server/data/puppetserver/, where I want to install
additional gems. For all intents and purposes, the only VOLUME here is
ssldir (and maybe code to some extent). But since volumes can be defined
at run time (and mostly everybody would do that anyway), I would just
remove the VOLUME directive completely if I were you.

The problem was introduced by PR
puppetlabs#15 (author @ms1111)

modax added a commit to modax/puppet-in-docker that referenced this issue Feb 24, 2017

Cut down the number of directories in VOLUME.
@Neirda24

Neirda24 commented Apr 6, 2017

+1 for this feature

@Wilfred

Wilfred commented Apr 12, 2017

Would it be feasible to error or at least warn in this situation? Silently failing is unfortunate.

@pjweisberg

pjweisberg commented Dec 15, 2017

I've been relying on the behavior described in #21728 for several months; I had no idea it was a bug until I helped a colleague figure out why the data he was putting in /var/lib/mysql was getting discarded. He spent hours trying to figure out what was blowing away his data.

He might end up having to just copy mysql's Dockerfile into his own instead of basing his image on theirs. Maybe I'm supposed to do the same, in case #21728 ever gets "fixed".

At minimum, child Dockerfiles should have a way to un-VOLUME a directory that was declared as a VOLUME by the parent. At least long enough to modify its contents.

@thaJeztah

Member

thaJeztah commented Dec 15, 2017

@pjweisberg see #3465 for that topic 👍
