docker build: initial work on the include command #2108
This is some initial work to add the INCLUDE instruction to docker build, as suggested by issue #735.
When the client starts a build it creates a tarball of the whole directory containing the Dockerfile (the build context) and sends it to the daemon, so only files inside that directory are available at build time.
Including remote files works fine. Nested includes work fine too, as long as they do not reference files outside the build context.
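For instance (the URL and file names here are illustrative, not from the patch), a remote include and a local include inside the context both resolve, because everything needed is either shipped in the tarball or fetched over HTTP:

```dockerfile
FROM ubuntu:12.04
# Remote file: fetched at build time, so there is no build-context problem
INCLUDE http://example.com/snippets/python.docker
# Local file inside the build context: shipped in the tarball
INCLUDE snippets/postgres.docker
```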
I think we could instruct the client to look for INCLUDE statements that reference external files inside the Dockerfile and handle them by 1) temporarily copying those files into the working directory and 2) rewriting the path in the INCLUDE statement. I'm not completely satisfied with this approach, but I don't have other ideas...
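A sketch of what that preprocessing could look like (the paths and the `.include/` staging directory are assumptions, not part of this patch):

```dockerfile
# Before preprocessing: the Dockerfile references a file
# outside the build context, which the daemon cannot see
FROM ubuntu:12.04
INCLUDE ../common/postgres.docker

# After preprocessing: the client has copied the file into the
# context (step 1) and rewritten the path (step 2) before tarring
FROM ubuntu:12.04
INCLUDE .include/postgres.docker
```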
A couple of final notes:
http://docs.docker.io/en/latest/contributing/devenvironment/ explains how to run the tests.
Hey guys, I apologize for the slow review.
I am not comfortable merging an "INCLUDE" instruction as described here. It feels like C-style macros: it doesn't operate at the same level as the rest of the language, so it introduces an extra layer of complexity and can cause side effects. For example, if I include a snippet with a "FROM", it may reset the build process, or cause it to fail.
I agree there is a need to compose builds beyond what the current Dockerfile syntax allows.
Thanks guys for taking the time, and sorry for the disappointment. Always happy to discuss design on #docker-dev or the mailing list.
So I understand @shykes' position (in this thread, and more clearly enumerated on #3562), but I think some things are missing from the discussion. #3562 is not an effective replacement for a la carte Dockerfiles.
One case is multiple inheritance. Right now, in the real world, my Dockerfiles have common "units". For example, it's common to install postgres, it's common to install python, it's common to install supervisor... Right now my Dockerfile is organized like this:
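A hypothetical sketch of that layout (the exact packages and steps are illustrative): every image repeats the same blocks verbatim.

```dockerfile
FROM ubuntu:12.04
# postgres unit, copy-pasted into every image that needs it
RUN apt-get update && apt-get install -y postgresql
EXPOSE 5432
VOLUME ["/var/lib/postgresql"]
# python unit, also copy-pasted
RUN apt-get install -y python python-pip
# supervisor unit, also copy-pasted
RUN apt-get install -y supervisor
CMD ["supervisord", "-n"]
```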
whereas with import it could be:
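A hypothetical sketch of the composed version (the snippet file names, and the import/INCLUDE syntax itself, are assumptions):

```dockerfile
FROM ubuntu:12.04
# each snippet carries its own RUN/EXPOSE/VOLUME lines
INCLUDE snippets/postgres.docker
INCLUDE snippets/python.docker
INCLUDE snippets/supervisor.docker
CMD ["supervisord", "-n"]
```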
This allows me to better share that "how do I install postgres" work between different images. It's possible to achieve this today by adding and running shell scripts, but that's clumsy, and it doesn't handle things like volumes or ports (which are important, for example, in the postgres case). @shykes' solution, as I understand it, doesn't have the flexibility that would allow me to define high-level "install postgres" actions and insert them a la carte into a Docker container. Import does.
It's worth pointing out that the "how do I install [complicated package] in a separate file" problem is the one people keep filing issues about, which then get duped to this issue.
Any serious proposal in lieu of include should address this underlying motivation for include: composing Docker images from smaller units. #3562 doesn't do that, like, at all.
Over on #3763, I am trying to run two docker commands back-to-back, and I am annoyed that I have to save some GUID that's the output of one command and pipe it into the other. Like, the power of having two steps is great, but, in the common case, I don't need that power, and so there is this bookkeeping step of piping a GUID from one step to the next that overcomplicates a core workflow.
The solution in #3562, in addition to being nonresponsive to the core problem, has a similar bookkeeping problem: the common case is that I want to build one image that, for developer convenience, is split up into multiple files. But in order to do that I have to introduce this abstract notion of a source, and start to worry about what the source should be named and where it should live on the filesystem, and this clutters up listing images and such. Now I am at least five minutes into the README learning a constellation of nouns, and meanwhile all I wanted to do was refactor a few lines from my Dockerfile into another file.