Proposal: Dockerfile add INCLUDE #735

Closed
dysinger opened this Issue May 28, 2013 · 127 comments

@dysinger

dysinger commented May 28, 2013

No description provided.

@shykes

Collaborator

shykes commented Jun 21, 2013

+1

@keeb

Contributor

keeb commented Jun 21, 2013

+1

@ptone

ptone commented Jun 21, 2013

Yes this would be great to see +1

@jpfuentes2

jpfuentes2 commented Aug 9, 2013

I think this would be a great feature as I want to leverage some of my knowledge/experience with systems like Chef whereby you can compose complex builds using smaller/simpler building blocks.

Does anyone have a way they're implementing this now?

@crosbymichael

Contributor

crosbymichael commented Aug 11, 2013

Can someone give me a few examples on how they would use this? ping @ptone @keeb @dysinger

@ptone

ptone commented Aug 12, 2013

Let's say I have build-file snippets for different components of a web architecture.

I may want to include one or more of them in a single container image - or, in general, bundle things that always go together.

Simple examples might be that nginx and varnish always go in the same container, or redis and pyredis.

I might have a "Scientific Python" list of Python packages and linked libs that I may want to include in other images.

The problem with FROM and base images is that you can't remix things the same way you can with includes. A base image is all or nothing - if you don't want something, your only hope is that you can go back to a 'lower' base image, and re-add stuff manually that you want, skipping the stuff you don't.

It is essentially a case of inheritance vs composition in many ways.

@binocarlos

binocarlos commented Aug 29, 2013

+1 to this - @crosbymichael @ptone @keeb @dysinger @shykes

I am at this exact point where I have an appserver base image (which is just node.js).

I also have a collection of Dockerfiles representing little bits of services that have deps - so:

ImageMagick:

from appserver
run apt-get install imagemagick

RedisSession:

from appserver
run apt-get install redis-server

To have a container that is both ImageMagick and RedisSession:

from appserver
run apt-get install imagemagick
run apt-get install redis-server

Whereas the following syntax means I can build up a folder of modules and include them by name in the application Dockerfile:

from appserver
include ./modules/imagemagick
include ./modules/redis-server

Now, because Docker is so darn brilliantly good : ) this is currently trivial to do (i.e. read module file and inject here) and so I'm not sure if for me this is a required feature.

However - it would make a user app's Dockerfile (i.e. the last thing in the chain for this scenario) much more about composing modules together than about creating a strict inheritance tree where some combinations are not possible - my 2 cents on what is a joy to work with otherwise :)
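The expansion step binocarlos describes ("read module file and inject here") can be sketched in a few lines of portable shell. This is an illustration only: the lowercase include syntax, the modules/ directory, and Dockerfile.in are taken from the comment above or invented for this sketch; none of it is a Docker feature.

```shell
# Naive include expander (hypothetical syntax; file names are illustrative).
mkdir -p modules
printf 'run apt-get install imagemagick\n' > modules/imagemagick
printf 'run apt-get install redis-server\n' > modules/redis-server

cat > Dockerfile.in <<'EOF'
from appserver
include ./modules/imagemagick
include ./modules/redis-server
EOF

# Replace each 'include <path>' line with the contents of the named file.
while IFS= read -r line; do
  case "$line" in
    include\ *) cat "${line#include }" ;;
    *)          printf '%s\n' "$line" ;;
  esac
done < Dockerfile.in > Dockerfile
```

The generated Dockerfile is the same flat three-line file binocarlos writes out by hand above.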

@Thermionix

Contributor

Thermionix commented Sep 11, 2013

+1. It would also be good to be able to reference generic blocks externally (possibly via a GitHub raw link?)

@frankscholten

frankscholten commented Oct 3, 2013

+1

flavio added a commit to flavio/docker that referenced this issue Oct 15, 2013

docker build: initial work on the include command
Added the 'include' command to dockerfile build as suggested by issue #735.
Right now the include command works only with files in the same directory as
the main Dockerfile, or with remote ones.
@emtq

emtq commented Oct 21, 2013

I'll re-implement this on top of #2266.

@prologic

Contributor

prologic commented Feb 10, 2014

+1 Turns Docker and the Dockerfile into a portable, rudimentary configuration management system for the construction of portable Docker images :)

@ghost

ghost commented Feb 14, 2014

This would help a lot. +1

@deeky666

deeky666 commented Feb 26, 2014

+1

@peenuty

peenuty commented Mar 10, 2014

+1

@newhoggy

newhoggy commented Mar 10, 2014

+1

@jfinkhaeuser

jfinkhaeuser commented Mar 11, 2014

Not to sound negative here (pun intended), but -1.

I completely get the use cases for an include statement. But then I also get the need for parametrized Dockerfiles, and then for conditionals, etc. Continue down that path and you'll end up implementing a programming language in Dockerfiles, which may even become Turing-complete. The cynicism in that statement is free of charge, by the way ;)

Why not use a preprocessing step instead? You don't even have to program your own, you could use m4 (used in autotools for this purpose) or the C preprocessor (used in IMake, which does a similar job as autotools but is pretty much defunct these days).

Makefile:

Dockerfile: Dockerfile.in *.docker
  cpp -o Dockerfile Dockerfile.in

build: Dockerfile
  docker build -rm -t my/image .

Dockerfile:

FROM ubuntu:latest
MAINTAINER me

#include "imagemagick.docker"
#include "redis.docker"

Run make and it'll re-build the Dockerfile if any input file changed. Run make build and it'll re-build the Dockerfile if any input file changed, and continue to build the image.

Look ma, no code!

@prologic

Contributor

prologic commented Mar 11, 2014

On Tue, Mar 11, 2014 at 6:45 PM, Jens Finkhaeuser notifications@github.com wrote:

Not to sound negative here (pun intended), but -1.

I completely get the use cases for an include statement. But then I also
get the need for parametrized Dockerfiles, and then for conditionals, etc.
Continue down that path, and you'll end up implementing a programming
language in Dockerfiles, which may even become turing complete. The
cynicism in that statement is free of charge, by the way ;)

Why not use a preprocessing step instead? You don't even have to program
your own, you could use m4 (used in autotools for this purpose) or the C
preprocessor (used in IMake, which does a similar job as autotools but is
pretty much defunct these days).

I'm in agreement with this. Having toyed with the design and implementation of a few languages myself over the years, turning Dockerfile(s) into a "scripting" language, even if it's not Turing-complete, sounds like something that Docker should not do.

As Jens clearly points out, there are better, more appropriate tools for this job.

cheers
James


@hunterloftis

hunterloftis commented Apr 9, 2014

+1

The slippery-slope argument for a Turing-complete scripting language in Dockerfiles seems a bit extreme. INCLUDE (or FROM ./relative/path) just lets you create a common base image locally so you can reference a file system (for example, in your app's repository) instead of relying on the Docker registry for what should be a self-contained app definition.

@prologic

Contributor

prologic commented Apr 9, 2014

On Wed, Apr 9, 2014 at 12:15 PM, Hunter Loftis notifications@github.com wrote:

The slippery-slope argument for a turing-complete scripting language in
Dockerfiles seems a bit extreme. INCLUDE (or FROM ./relative/path) just
lets you create a common base image locally so you can reference a file
system (for example, in your app's repository) instead of relying on the
Docker registry for what should be a self-contained app definition.

I don't agree with this. The very notion of referencing and building a base image is already there, and it only accesses the public registry (or a private one, if you're so inclined) if you don't have said image.

I'm still -1 on this -- it adds more complexity for little gain. I'd rather see Docker pick up some YAML-style configuration format for "configuring one or more containers", à la fig et al.

cheers
James


@jfinkhaeuser

jfinkhaeuser commented Apr 9, 2014

The slippery-slope argument stems from experience with a bunch of other DSLs. There's a general trend for DSLs to become Turing-complete over time.

The include statement in itself presents little danger here, but consider that in almost every language, include or import statements are linked to the concept of an include path, quite often set via an environment variable.

There's a reason for that: having include statements means you can collect building blocks into reusable libraries, and having reusable libraries means you'll want to use the same building blocks in multiple Dockerfiles. The best way to do that is to be agnostic to the actual file location in your include statement, but instead have a "canonical" name for a building block, which is usually a file path relative to some externally provided include path.

So what's wrong with that? Nothing, except (and I'm quoting you here):

(...) instead of relying on the Docker registry for what should be a self-contained app definition.

I agree. A Dockerfile should be a self-contained app definition. Once you include stuff from any location that's shared between projects, though, you have anything but that - the needs of one project will lead to modifications of the shared file and those may not reflect the needs of the other project any longer. Worse, any kind of traceability - this version of the Dockerfile should build the same image again - is completely gone.

Besides, once you have re-usable building blocks, parametrizing them is the obvious next step. That's how libraries work.

@hunterloftis

hunterloftis commented Apr 9, 2014

Then follow a common, well-understood example that has certainly not become Turing-complete:

https://developer.mozilla.org/en-US/docs/Web/CSS/@import

@ryedog

ryedog commented Apr 14, 2014

+1 on include (even variables that could be set on the command line at build time would be helpful)

@ChristianKniep

ChristianKniep commented Apr 23, 2014

+1 on include as well

I have to deal with one site where I have to set http_proxy and one without.
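The per-site http_proxy case above is exactly the kind of thing build-time variables handle; Docker later added ARG and docker build --build-arg for this. A sketch, written here as a generated file so it is self-contained (the base image and proxy address are placeholders):

```shell
# Write a Dockerfile that takes the proxy as a build argument.
cat > Dockerfile.proxy <<'EOF'
FROM ubuntu
ARG http_proxy=""
RUN http_proxy="$http_proxy" apt-get update
EOF
cat Dockerfile.proxy
# At the proxied site:  docker build --build-arg http_proxy=http://proxy:3128 -f Dockerfile.proxy .
# Elsewhere:            docker build -f Dockerfile.proxy .
```

With no --build-arg, the ARG default (an empty proxy) applies, so one Dockerfile serves both sites.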

duglin added a commit to duglin/docker that referenced this issue Dec 30, 2016

Support INCLUDE in Dockerfiles
Note that as of now this is just a syntax feature. Meaning it just pulls
in the included Dockerfile and continues parsing. If the target Dockerfile
has a FROM then it is processed and new image will be created.

After this we could consider adding a flag like:
   INCLUDE --no-from ...
which will tell it to skip the FROM so that the modifications will be done
within the same/current image.

Also, note that you can do:  INCLUDE myfile.${foo} where 'foo' is an
env var or a build arg

Closes: #735

Signed-off-by: Doug Davis <dug@us.ibm.com>
@josephtyler

josephtyler commented Jan 27, 2017

+1000

@vito-c

vito-c commented Mar 16, 2017

@jfinkhaeuser

Your suggestion is flawed in several ways:

  1. It doesn't take advantage of image caching.
  2. It adds two FROM statements and two CMD statements to the output.
  3. /* appears in the *.docker files as a path expression (e.g. rm -rf /var/lib/apt/lists/*), but cpp interprets it as a comment opener, so it has to be escaped as /\* (most Dockerfiles don't do this).
  4. A comment in a Dockerfile starts with #, but cpp treats # as a preprocessing directive, so you have to strip all of your Dockerfiles of comments.

I do see your point about a preprocessor though and I also think that INCLUDE is a bad idea.

Original Suggestion:
Makefile:

Dockerfile: Dockerfile.in *.docker
    cpp -o Dockerfile Dockerfile.in

build: Dockerfile
    docker build -rm -t my/image .

Dockerfile.in:

FROM ubuntu:latest
MAINTAINER me

#include "imagemagick.docker"
#include "redis.docker"

imagemagick.docker: https://hub.docker.com/r/acleancoder/imagemagick-full/~/dockerfile/
redis.docker: https://github.com/docker-library/redis/blob/6cb8a8015f126e2a7251c5d011b86b657e9febd6/3.0/Dockerfile

@jfinkhaeuser

jfinkhaeuser commented Mar 16, 2017

@vito-c

  1. I don't see your first point. The generated Dockerfile would be the same if none of its input files change, which means image caching works just as well with as without generating the file. Maybe I misunderstand something?
  2. It only adds FROM and CMD multiple times if you intend to put them into the included files. I don't think that's good form for something that's essentially a fragment of a Dockerfile. An INCLUDE statement would not change that unless it explicitly strips these from the included file.
  3. So quote paths. That's good form anyway because of variable expansion and spaces: RUN rm -rf $SOME_PATH/lists/* might not do what you want if $SOME_PATH contains / /tmp. Quotes are your friends with anything related to shell scripts and variables.
  4. So use /* */ for comments.

Really, these arguments make sense if you expect to just #include any downloaded third-party Dockerfile. But if you're building your own setup, you just have to adapt to the choices you've made. If your choice is to use the C preprocessor, then these are the consequences.
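The quoting advice in point 3 above can be demonstrated harmlessly; SOME_PATH is the illustrative variable name from the comment, and nothing is removed here.

```shell
# Why unquoted 'rm -rf $SOME_PATH/lists/*' is dangerous: if SOME_PATH is
# empty (say, a typo in the variable name), the path collapses to /lists/*.
# Demonstrated with string expansion only.
SOME_PATH=""
echo "expands to: rm -rf $SOME_PATH/lists/*"
```

Quoting the expansion (and refusing to run on an empty variable) keeps the command's target predictable.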

@thangbn

thangbn commented Mar 23, 2017

+1

duglin added a commit to duglin/docker that referenced this issue Apr 4, 2017

Support INCLUDE in Dockerfiles
Note that as of now this is just a syntax feature. Meaning it just pulls
in the included Dockerfile and continues parsing. If the target Dockerfile
has a FROM then it is processed and new image will be created.

After this we could consider adding a flag like:
   INCLUDE --no-from ...
which will tell it to skip the FROM so that the modifications will be done
within the same/current image.

Also, note that you can do:  INCLUDE myfile.${foo} where 'foo' is an
env var or a build arg

Closes: #735

Signed-off-by: Doug Davis <dug@us.ibm.com>

@ghost

ghost commented Aug 10, 2017

Is this available already?
EDIT: In that case +1 :)

@duglin

Contributor

duglin commented Aug 10, 2017

@xanview nope

@Vanuan

Vanuan commented Aug 11, 2017

@xanview it is currently recommended to use FROM approach + multi-stage builds:
#2745 (comment)

@rulai-huajunzeng

rulai-huajunzeng commented Aug 12, 2017

Multi-stage builds are only a workaround, and not the best one in my opinion. They bring the additional burden of maintaining intermediate images and their versions, and you can easily forget to rebuild those images when the code is updated.

The best workaround so far is still cp someLib . && docker build . && rm someLib. Or, similarly, you can use six8's dockerfactory. Unfortunately these don't work when you want to do docker-compose build.

@Vanuan

Vanuan commented Aug 13, 2017

you can easily forget to rebuild those images when the codes are updated.

similarly, you can easily forget to cp someLib

@ghost

ghost commented Aug 13, 2017

@huajunzeng What do you mean by cp someLib . && docker build . && rm someLib? Why would you do that? Couldn't the Dockerfile wget it or something? How does this address splitting a Dockerfile into multiple smaller chunks?

@rulai-huajunzeng

rulai-huajunzeng commented Aug 13, 2017

@xanview My understanding is that this whole discussion is about multiple build contexts: whether to use INCLUDE, to COPY from a parent directory, etc. By using cp we can manually pack different directories into a single build context and send it to docker.

@Vanuan

Vanuan commented Aug 14, 2017

is all about multiple build context

No, I think it's about reusing multiple Dockerfiles. E.g. when you want to have nginx and node in the same image.

By using cp we can just manually pack different directory into a single build context and send to docker.

That's the thing, you can't copy something from an image. Consider you have nginx image:

FROM nginx

ADD nginx.conf /etc/nginx/nginx.conf

If you want to install node in addition to nginx:

FROM node

ADD . /src/
WORKDIR /src/
RUN yarn install

Copy command doesn't help you here anyhow. Only if you use multistage builds you can copy files between images:

FROM nginx

FROM node
ADD . /src/
WORKDIR /src/
RUN yarn install

COPY --from=0 /path/to/nginx/resources /path/to/nginx/resources
ADD nginx.conf /etc/nginx/nginx.conf

I agree that this doesn't cover other use cases like exposed ports, entrypoints (how would you combine them?), users (tricky), etc. And you also have to know which folders contain needed files and libraries. Base images also need to be compatible (library versions, link symbols, header locations). But it's a start.


What people really intend to do with INCLUDE is this:

Dockerfile1:

FROM ubuntu
RUN apt-get install -y nginx

Dockerfile2:

FROM debian
RUN apt-get install -y node

INCLUDE Dockerfile1 # => RUN apt-get install nginx

But that's kind of counter-intuitive. The better way is to put the steps in a shell script such as ./install-nginx.sh and copy it into whichever build needs it. That's probably what @huajunzeng meant.
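That script-based approach can be sketched like this (a minimal sketch only; the file name install-nginx.sh and its contents are assumptions, not something defined in this thread). The script lives in one place and is copied into every build that needs nginx:

```dockerfile
# install-nginx.sh (assumed contents, shown as a comment):
#   #!/bin/sh
#   set -e
#   apt-get update
#   apt-get install -y nginx
#   rm -rf /var/lib/apt/lists/*

FROM node

# Reuse the shared install script instead of an INCLUDE directive.
COPY install-nginx.sh /tmp/install-nginx.sh
RUN sh /tmp/install-nginx.sh && rm /tmp/install-nginx.sh

ADD . /src/
WORKDIR /src/
RUN yarn install

ADD nginx.conf /etc/nginx/nginx.conf
```

The same script can be copied into any other Dockerfile's context, which gives you reuse of the steps without reuse of the Dockerfile itself.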

@zh794390558

zh794390558 commented Aug 24, 2017

Is this feature available now?

@sergeytrasko sergeytrasko referenced this issue Sep 29, 2017

Merged

Docker support #6 #17

@davesque

davesque commented Jan 24, 2018

Here's a use case for INCLUDE:

Heroku's support for docker containers allows one to push multiple images for different process types in this manner:
https://devcenter.heroku.com/articles/container-registry-and-runtime#pushing-multiple-images

Say you have a Django app which defines some asynchronous Celery tasks. Then you need a web process type and a worker process type. But they should both have essentially the same runtime environment. It would be convenient to just have your base Dockerfile.web which looks something like this:

FROM python:3.6

...

CMD ["web-entrypoint.sh"]

and then have Dockerfile.worker look like this:

INCLUDE Dockerfile.web

CMD ["worker-entrypoint.sh"]

As it stands, Heroku provides no way to specify a different command using the same image for a different process type. And Docker provides no way to do simple Dockerfile templating. So I'm stuck with duplicating the same Dockerfile and changing only one line.
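Under those constraints, one workaround is to keep a single Dockerfile and select the command through a build argument (a sketch only; it assumes both entrypoint scripts from the example above are already in the image), so Dockerfile.web and Dockerfile.worker collapse into one file built twice:

```dockerfile
FROM python:3.6

# ... shared runtime setup ...

# Chosen at build time, e.g.:
#   docker build --build-arg PROCESS_ENTRYPOINT=worker-entrypoint.sh .
ARG PROCESS_ENTRYPOINT=web-entrypoint.sh

# CMD cannot expand an ARG at runtime, so persist it as an ENV first.
ENV PROCESS_ENTRYPOINT=${PROCESS_ENTRYPOINT}
CMD ["sh", "-c", "exec \"$PROCESS_ENTRYPOINT\""]
```

Each Heroku process type then gets its own image from the same file, differing only in the --build-arg passed to docker build.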

@duglin

Contributor

duglin commented Jan 24, 2018

@davesque you may want to continue to chat over here: #12749

@Stannieman

Stannieman commented Jul 14, 2018

I also felt the need for this for some multi stage dockerfiles where some of the stages are exactly the same for several dockerfiles. I know there are other ways to solve this, for example building the stages as separate images, but that's all a bit overkill for a relatively "simple" environment.

I ended up writing the PowerShell script attached.

IncludePreprocessor.txt

Call WritePreproccessedFile with the dockerfile to process and the output file as parameters.
It's fairly simple and maybe not super robust, but it works perfectly for me.

To include a file to your dockerfile do this:
INCLUDE path/to/../../other/file with spaces

This is recursive so you can have other includes in the included file.
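For anyone not on Windows, the same idea fits in a few lines of POSIX shell (a rough sketch, not Stannieman's script; the function name expand_includes is made up here). It replaces each `INCLUDE path` line with that file's contents, recursing so included files can themselves contain INCLUDE lines:

```shell
#!/bin/sh
# expand_includes FILE: print FILE with every "INCLUDE path" line replaced
# by the (recursively expanded) contents of path. Paths may contain spaces,
# since everything after "INCLUDE " is taken as the path.
expand_includes() {
  while IFS= read -r line; do
    case "$line" in
      "INCLUDE "*)
        expand_includes "${line#INCLUDE }"
        ;;
      *)
        printf '%s\n' "$line"
        ;;
    esac
  done < "$1"
}

# Usage: expand-includes.sh Dockerfile.in > Dockerfile
if [ -n "${1:-}" ]; then
  expand_includes "$1"
fi
```

Running it over a Dockerfile.in that contains INCLUDE lines emits a flattened Dockerfile that docker build can consume directly.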

@himslm01

himslm01 commented Aug 5, 2018

My use case is for reusing a block of ARGs with default values in multiple places.

I have one Dockerfile which downloads the source and libraries for a project. That Dockerfile needs to know the versions of the libraries to download.

I have a second Dockerfile for building the source into an executable. I build from the second Dockerfile multiple times to build for various architectures, having downloaded the source only once. The second Dockerfile needs all of the ARGs of the first Dockerfile for the versions of libraries, plus extra args for build parameters for the different architectures.

A multi-stage build doesn't help either, because the ARGs are reset after every FROM. Even there I have no single source of truth: I have to define the ARGs twice in one file.

This situation is not architecturally right!
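One partial mitigation for the multi-stage case (a sketch; the library name and version are invented): an ARG declared before the first FROM holds the default value once, and a bare ARG line inside each stage re-imports it. The redeclarations are still repeated, but the default value itself lives in a single place:

```dockerfile
# The single source of truth for the default value.
ARG LIBFOO_VERSION=1.2.3

FROM alpine AS download
# A bare ARG (no value) re-imports the global default into this stage.
ARG LIBFOO_VERSION
RUN wget "https://example.com/libfoo-${LIBFOO_VERSION}.tar.gz"

FROM alpine AS build
ARG LIBFOO_VERSION
COPY --from=download "libfoo-${LIBFOO_VERSION}.tar.gz" .
RUN tar xf "libfoo-${LIBFOO_VERSION}.tar.gz"
```

This still doesn't share the ARG block across separate Dockerfiles, which is the use case above, but it at least avoids duplicating the values within one file.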

@rodvdka


rodvdka commented Aug 13, 2018

+1

@bg172


bg172 commented Sep 8, 2018

+1

mido-gerrithub-sync pushed a commit to midonet/midonet-kubernetes that referenced this issue Sep 20, 2018

A script to generate Dockerfiles from templates
As these files are getting more complex, it's more of a burden
to keep 5 copies of them in sync manually.

Use m4 to preprocess them for now, as Docker itself doesn't
likely support INCLUDE anytime soon.  [1]

build.sh now checks if those Dockerfiles are up-to-date before
building images.

It's still developers' responsibility to re-generate these files
and commit them to the repo.  It's done this way because Docker Hub
automated build doesn't seem to allow to generate Dockerfiles
on demand.

Extra blank lines added to Dockerfiles are side effects of
m4 include.

[1] moby/moby#735

Signed-off-by: YAMAMOTO Takashi <yamamoto@midokura.com>
Change-Id: If9a962ae059c3218f1a93e5b8645cf7541dc678e