Base Images & Volume & mkdir #3639

Open

coderlol opened this issue Jan 17, 2014 · 40 comments
@coderlol commented Jan 17, 2014

If the base image's Dockerfile has a VOLUME /var/lib directive, then an image derived from the base image cannot do a RUN mkdir -p /var/lib/somedir

The mkdir command has no effect. It would be convenient for the base image to export a VOLUME while derived images create their own structure underneath it, without having to declare their own VOLUME explicitly.

As-is, the base image must not declare VOLUME /var/lib, or the derived images won't be able to create a directory in /var/lib

@crosbymichael (Member) commented Mar 14, 2014

I'm not sure I understand what the expected behavior should be. Can you give an example Dockerfile to reproduce?

@fdemmer commented Apr 22, 2014

I've run into this too; here's an example:

first Dockerfile:

FROM ubuntu:latest
VOLUME /etc

this Dockerfile is then used to build image "base":

# docker build -t base .
Uploading context  2.56 kB
Uploading context 
Step 0 : FROM ubuntu:latest
 ---> c0fe63f9a4c1
Step 1 : VOLUME /etc
 ---> Running in 8957e1485c31
 ---> 17879464e604
Successfully built 17879464e604
Removing intermediate container 8957e1485c31

... looks good, works.

The second Dockerfile should add more config and create a directory in /etc for later use:

FROM base
RUN mkdir /etc/ssl
ADD nginx.conf /etc/nginx/nginx.conf

again we build:

# docker build -t nginx .
Uploading context 3.072 kB
Uploading context 
Step 0 : FROM base
 ---> 17879464e604
Step 1 : RUN mkdir /etc/ssl
 ---> Running in ef0693690e0f
 ---> d94dba1a25f4
Step 2 : ADD nginx.conf /etc/nginx/nginx.conf
 ---> d4debccd60b9
Successfully built d4debccd60b9
Removing intermediate container ef0693690e0f
Removing intermediate container 3da3c324ab95

... looks good, but did not work:

# docker run -i -t nginx  bash
root@e158ee019e16:/# ls -la /etc/ssl
ls: cannot access /etc/ssl: No such file or directory
root@e158ee019e16:/# ls -la /etc/nginx/
total 8
drwxr-xr-x  2 root root 4096 Apr 22 21:11 .
drwxr-xr-x 57 root root 4096 Apr 22 21:12 ..
-rw-r--r--  1 root root    0 Apr 22 21:10 nginx.conf

As you can see above, the ADD command worked (my sample nginx.conf really is empty; that's fine), but although the RUN command to create the "ssl" directory was apparently executed during the build, the directory is missing from the container when it is run.

Edit: same problem with RUN rm. I was trying to remove sites-enabled/default and ADD my own site.conf, but default always remained. Overwriting default with ADD works, but is not a very nice solution.

@fdemmer commented Apr 27, 2014

I ran into this again with a different scenario just now, and it's making me doubt whether I'm using VOLUME and Dockerfiles right at all...

  • I created a base image that just installs openldap on top of an ubuntu image. In that Dockerfile I set VOLUME /var/lib/ldap to keep the database outside of the container's root fs.
  • I then use this base image with FROM in another Dockerfile to add some custom config (as recommended in method 2 here: #2022 (comment)). The custom config is set via -e parameters on run and applied inside using dpkg-reconfigure, which modifies the database (on the volume).

That causes all kinds of weird behaviour in the LDAP server, e.g. the domain reconfiguration is applied, but other default objects are missing from the new base DN. Anyway, the point is: as soon as I remove the VOLUME for the database path from my base image, everything works as expected.

Volumes really sound great the way the docs and articles like http://crosbymichael.com/advanced-docker-volumes.html describe them, but if I cannot modify their contents when "inheriting" them FROM other images, how is the "base image + config image" pattern supposed to work?

@LK4D4 (Contributor) commented Jun 6, 2014

+1 on this issue, I think this is clearly a bug

@cpuguy83 mentioned this issue Jun 18, 2014
@e-max commented Jun 30, 2014

I've made a minimal example to demonstrate this issue:

Parent image - create file

[e-max@e-max docker]$ cat ./parent/Dockerfile 
FROM ubuntu
RUN mkdir /tmp/docker/
RUN echo "hello" > /tmp/docker/hello
VOLUME ["/tmp/docker/"]

Child image - remove file

[e-max@e-max docker]$ cat ./child/Dockerfile 
FROM parent
RUN rm /tmp/docker/hello

Build

[e-max@e-max docker]$ docker build -t parent ./parent/
Sending build context to Docker daemon  2.56 kB
Sending build context to Docker daemon 
Step 0 : FROM ubuntu
 ---> 5cf8fd909c6c
Step 1 : RUN mkdir /tmp/docker/
 ---> Using cache
 ---> 0925b684892f
Step 2 : RUN echo "hello" > /tmp/docker/hello
 ---> Using cache
 ---> c63b75c84ecc
Step 3 : VOLUME ["/tmp/docker/"]
 ---> Using cache
 ---> 71a00868d3a2
Successfully built 71a00868d3a2
[e-max@e-max docker]$ docker build -t child ./child/
Sending build context to Docker daemon  2.56 kB
Sending build context to Docker daemon 
Step 0 : FROM parent
 ---> 71a00868d3a2
Step 1 : RUN rm /tmp/docker/hello
 ---> Using cache
 ---> dcde5cf04e6b
Successfully built dcde5cf04e6b

And the test: the file still exists in the child image.

[e-max@e-max docker]$ docker run -i -t --rm child cat /tmp/docker/hello
hello
[e-max@e-max docker]$
@tiborvass (Collaborator) commented Jun 30, 2014

@e-max Thanks. I could reproduce with master.

@tiborvass self-assigned this Jun 30, 2014
@LK4D4 (Contributor) commented Jul 7, 2014

Okay, I did a little research, and it seems that a volume is immutable between Dockerfile instructions.
Here is an even smaller Dockerfile for testing:

FROM busybox

RUN mkdir /tmp/volume
RUN echo "hello" > /tmp/volume/hello
VOLUME ["/tmp/volume/"]
RUN [[ -f /tmp/volume/hello ]]
RUN rm /tmp/volume/hello
RUN [[ ! -e /tmp/volume/hello ]]

On each instruction we create a new volume and copy the content from the original volume.
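
Given that each build step re-creates the volume, the practical workaround quoted later in this thread ("don't declare VOLUME unless you are done with that directory") is to finish all writes before the VOLUME instruction. A minimal sketch, with illustrative paths:

```dockerfile
FROM busybox

# Do all filesystem work first; these changes are committed
# into the image layers as usual.
RUN mkdir /tmp/volume
RUN echo "hello" > /tmp/volume/hello

# Declare the volume only once the directory contents are final.
VOLUME ["/tmp/volume/"]

# Any RUN after this point that modifies /tmp/volume is silently discarded.
```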

cpuguy83 added a commit to cpuguy83/docker that referenced this issue Oct 27, 2014
During builds when `VOLUME` is declared it is not treated any
differently than a normal volume.  As such, for each new container
created as part of the build there is a new volume created for that
container.

Because of the way volumes work, this essentially makes a declared
volume an immutable directory within the image, since changes to the
data in the volume are not committed back to disk. This in turn makes
VOLUME declarations highly positional: they must be declared at the
end of the Dockerfile.

The current behavior can also create a lot of added overhead if there is
any significant amount of data in the image at the declared volume
location, since this data gets automatically copied out onto the host as
part of the volume initialization.

This change introduces for the first time a differentiation from a build
container and a normal container.
This means a new HostConfig field called `BuildOnly` has been added.
This field is a boolean that can be used to change behavior of the
container creation depending on if it is for a build or not. In the case
of this change it checks if `container.hostConfig.BuildOnly` is true,
and if so, does not fully initialize the volume and instead just creates
the dir within the container's fs (if it doesn't exist).

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
cpuguy83 added a commit to cpuguy83/docker that referenced this issue Oct 28, 2014
cpuguy83 added a commit to cpuguy83/docker that referenced this issue Oct 29, 2014
cpuguy83 added a commit to cpuguy83/docker that referenced this issue Oct 29, 2014
cpuguy83 added a commit to cpuguy83/docker that referenced this issue Nov 10, 2014
@jessfraz (Contributor) commented Jan 14, 2015

@cpuguy83 is this fixed?

@cpuguy83 (Contributor) commented Jan 14, 2015

@jfrazelle No, see #7133

@jessfraz (Contributor) commented Jan 14, 2015

ah ok bummmeerrrrr

@tomfotherby (Contributor) commented Feb 11, 2015

I just ran into this issue. I was trying to extend the official mongodb image from the hub and seed the database with my app's skeleton data, but because the parent image uses VOLUME /data/db, any mongo data I add in my extended image doesn't persist past the build. Shame 😞.

(I blogged about my workaround.)

This was referenced Feb 21, 2015
@pmoust (Contributor) commented Feb 26, 2015

Relates #8647

cpuguy83 added a commit to cpuguy83/docker that referenced this issue Mar 5, 2015
Fixes moby#3639

This makes the builder disable volumes on `RUN` commands, so that
anything written to dirs that have a volume will persist in the image,
as is the expected behavior in builds.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
@mlosev commented Oct 4, 2016

Being unable to override a VOLUME from a parent image in any child image is very annoying.
Hope it will be fixed soon.

@gotgenes commented Oct 7, 2016

@sirlatrom You can add JHipster to your list. See jhipster/generator-jhipster#4277.

@sirlatrom (Contributor) commented Nov 16, 2016

@gotgenes It doesn't look like jhipster is an official image.

jjethwa referenced this issue in jjethwa/icinga2 Dec 7, 2016
modax added a commit to modax/puppet-in-docker that referenced this issue Feb 24, 2017
VOLUME makes it impossible to add additional data to that directory
while building a downstream Docker image.

There is a 3-year-old Docker bug about this (see below), and I really
like the following advice:

Basically, don't declare VOLUME unless you are done with that directory.

moby/moby#3639 (comment)

In particular, the culprit for me is
/opt/puppetlabs/server/data/puppetserver/, where I want to install
additional gems. For all intents and purposes, the only VOLUME here is
ssldir (and maybe code to some extent). But since volumes can be defined
at run time (and mostly everybody would do it anyway), I would just
remove the VOLUME directive completely if I were you.

The problem was introduced by PR
puppetlabs#15 (author @ms1111)
modax added a commit to modax/puppet-in-docker that referenced this issue Feb 24, 2017
@Neirda24 commented Apr 6, 2017

+1 for this feature

@Wilfred commented Apr 12, 2017

Would it be feasible to error or at least warn in this situation? Silently failing is unfortunate.

@pjweisberg commented Dec 15, 2017

I've been relying on the behavior described in #21728 for several months; I had no idea it was a bug until I helped a colleague figure out why the data he was putting in /var/lib/mysql was getting discarded. He spent hours trying to figure out what was blowing away his data.

He might end up having to just copy mysql's Dockerfile into his own instead of basing his image on theirs. Maybe I'm supposed to do the same, in case #21728 ever gets "fixed".

At minimum, child Dockerfiles should have a way to un-VOLUME a directory that was declared as a VOLUME by the parent. At least long enough to modify its contents.
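
For what it's worth, one possible workaround since multi-stage builds (Docker 17.05) is to copy the parent's filesystem into a fresh stage: COPY does not carry over VOLUME metadata, so the directory becomes an ordinary, modifiable path again. A rough sketch, with illustrative tags and paths:

```dockerfile
# Stage 1: the parent image whose VOLUME gets in the way.
FROM mysql:5.7 AS parent

# Stage 2: a fresh base with no VOLUME declared.
FROM debian:stretch
# Copy the parent's entire filesystem; /var/lib/mysql arrives
# as a plain directory because COPY does not propagate volumes.
COPY --from=parent / /
# Build-time changes under /var/lib/mysql now persist in the image.
RUN touch /var/lib/mysql/.seeded
```

Note that this also drops the parent's ENTRYPOINT, CMD, ENV, and EXPOSE settings along with the VOLUME, so those have to be re-declared by hand.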

@thaJeztah (Member) commented Dec 15, 2017

@pjweisberg see #3465 for that topic 👍

@dominikzalewski commented Apr 10, 2019

So this issue is, what, 5 years old? I just lost 2h 45m trying to figure out what's wrong. Is it ever going to be fixed?

@cpuguy83 (Contributor) commented Apr 10, 2019

Use DOCKER_BUILDKIT=1. The new builder does not exhibit this behavior.
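
For reference, BuildKit can be enabled per invocation or daemon-wide (Docker 18.09 or later); a sketch:

```shell
# Per build:
DOCKER_BUILDKIT=1 docker build -t child ./child/

# Or daemon-wide via /etc/docker/daemon.json, then restart the daemon:
#   { "features": { "buildkit": true } }
```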

@dominikzalewski commented Apr 11, 2019

Thank you very much for your answer/hint. This resolves all of my issues! As you can see below, the default behavior does not work, and the one you proposed does:

[Screenshot: the same docker build failing with the default builder and succeeding with DOCKER_BUILDKIT=1]

That's a very simple Dockerfile that I'm using:

FROM wordpress:latest
ARG UPLOAD_DIR=/var/www/html/wp-content/uploads

RUN mkdir -p $UPLOAD_DIR
RUN ls -lhd $UPLOAD_DIR

I wish this stood out more in the documentation. I've been developing with Docker for quite a while, and only now have I started using this: https://docs.docker.com/develop/develop-images/build_enhancements/

Perhaps it would be better if this 'new' builder were the default?

@thaJeztah (Member) commented Apr 11, 2019

I wish this was standing out from the documentation more

I thought we had a note in the documentation about this behaviour, but perhaps it got lost during a rewrite (I can't find it in a quick search of https://docs.docker.com/engine/reference/builder/).

Perhaps it's better if this 'new' builder was the default?

That's definitely the goal 👍
