Allow specifying a Dockerfile as a path, not piping it in. #2112

Closed
peterbraden opened this Issue Oct 7, 2013 · 160 comments

peterbraden (Contributor) commented Oct 7, 2013

It would be nice to be able to run docker build -t my/thing -f my-dockerfile . so that I could ADD files and also have multiple Dockerfiles.

SvenDowideit (Contributor) commented Oct 8, 2013

I was just looking into this:

Usage: docker build [OPTIONS] PATH | URL | -

So if you run

docker build -t my/thing my-dockerfile

it complains about the tar file being too short.

It seems to me that the PATH option isn't documented, so it might have some legacy meaning?

So - I wonder about detecting whether PATH is a file, and is not a tarfile.

Personally, I have a set of Dockerfiles that I use to test, and I would much rather have them all in one directory and also have a full context.
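For comparison, the invocations that did work looked roughly like this (a sketch; at the time, docker build - read a plain Dockerfile from stdin, with no build context):

docker build -t my/thing .                    # tars up the given directory as the context
docker build -t my/thing - < my-dockerfile    # Dockerfile only, so ADD of local files can't work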

tianon (Member) commented Oct 8, 2013

That PATH refers to a directory, not a specific Dockerfile.

SvenDowideit (Contributor) commented Oct 8, 2013

Oh, and then it tars that directory up to send to the server - cool!

So it's possible to detect that PATH is a file, tar up the directory it's in, and then replace the Dockerfile in that tarball with the specified file.

Or to use -f in the same way - allowing your Dockerfile definitions to live separately from the payload.

Now to work out how the tests work, try it out, and see if it works for me.
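A manual sketch of that idea, assuming GNU tar and that docker build - accepts a tarred-up context on stdin (as later comments in this thread confirm):

tmp=$(mktemp -d)
cp -r fooproj/. "$tmp"                  # copy the whole context
cp debug-dockerfile "$tmp/Dockerfile"   # swap in the chosen Dockerfile
tar -C "$tmp" -cf - . | docker build -t my/thing -
rm -rf "$tmp"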

peterbraden (Contributor) commented Oct 8, 2013

It doesn't tar anything up - the PATH is a directory in which it assumes there is a Dockerfile.

SvenDowideit (Contributor) commented Oct 8, 2013

That's not all it does with that PATH - reading the code, the 'context' is sent to the server by tarring up the directory.

(OK, so I still don't know Go, and I've only been reading the code for the last few minutes, so take it with a grain of skepticism.)

tianon (Member) commented Oct 8, 2013

Correct, so any non-remote files referenced via ADD must also be in that same directory, or the daemon won't be able to access them.
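To illustrate the constraint (paths here are hypothetical): with docker build fooproj, this line in fooproj/Dockerfile works, because the file is inside the tarred-up context:

ADD app.conf /etc/app.conf

while something like ADD ../secrets.conf /etc/secrets.conf cannot work - the parent directory is never sent to the daemon.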

peterbraden (Contributor) commented Oct 8, 2013

Ah, I see what you're saying - yes, that's exactly what I want: a way to specify a Dockerfile with -f, and then the directory PATH, which might be separate.

So I could have:

docker build -t my/thing fooproj
docker build -t my/thing -f debug-dockerfile fooproj
SvenDowideit (Contributor) commented Oct 8, 2013

#2108 (adding an include directive to Dockerfiles) adds an interesting wrinkle:

should the include be relative to the specified Dockerfile, or to the PATH? Not important yet, though.

Fun:

docker build -t my/thing fooproj
docker build -t my/thing -f ../../debug-dockerfile fooproj
docker build -t my/thing -f /opt/someproject/dockerfiles/debug-dockerfile fooproj
SvenDowideit (Contributor) commented Oct 8, 2013

As an extra bonus, there are no CmdBuild tests yet, so guess what I get to learn on first :)

peterbraden (Contributor) commented Oct 8, 2013

@SvenDowideit are you working on this? I was thinking of maybe hacking on it today.

SvenDowideit (Contributor) commented Oct 8, 2013

I'm slowly getting myself familiar with the code and Go, so go for it - I'm having too much fun just writing the unit tests (perhaps you can use the testing commits to help :)

peterbraden (Contributor) commented Oct 8, 2013

I will do :)

codeaholics (Contributor) commented Oct 17, 2013

I would like to see this functionality too. We have a system which can run in 3 different modes, and we'd like to deploy 3 different containers - one for each mode. That means 3 virtually identical Dockerfiles with just the CMD being different. But because the path can only be a directory, and that directory is the context for ADD commands, I cannot get this to work right now.

So: +1 from me!
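With the requested flag, that setup could look something like this (the mode names are hypothetical):

docker build -t my/app-web    -f Dockerfile.web    .
docker build -t my/app-worker -f Dockerfile.worker .
docker build -t my/app-cron   -f Dockerfile.cron   .

where the three Dockerfiles are identical apart from their final CMD line.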

peterbraden pushed a commit to peterbraden/docker that referenced this issue Oct 17, 2013

gabrtv commented Oct 17, 2013

Seems like this may have the same end goal as #1618 (though with different approaches). The idea there is to use a single Dockerfile with multiple TAG instructions that result in multiple images, versus multiple Dockerfiles and an include system as outlined here. Thoughts?

peterbraden (Contributor) commented Oct 17, 2013

It seems as though if you can pipe a Dockerfile in, you should be able to specify a path as well. Interested to see what comes of #1618, but I think this offers many more possibilities.

dregin commented Dec 3, 2013

I was thrown by the fact that the documentation doesn't state clearly that the directory containing the Dockerfile is the build context. I made the wrong assumption that the build context was the current working directory, so when I passed a path to the Dockerfile instead of having it in the current directory, files I tried to ADD from the current working directory bombed out with "no such file or directory" errors.

bscott commented Dec 11, 2013

I'm getting the same error. Any ideas?

docker build Dockerfile

Uploading context

2013/12/11 21:52:32 Error: Error build: Tarball too short

tianon (Member) commented Dec 11, 2013

@bscott try docker build . instead. Build takes a directory, not a file, and that's the "build context". :)

bscott commented Dec 11, 2013

Worked, thx! I'd just like to be able to choose between different Dockerfiles.

thedeeno commented Feb 18, 2014

+1 from me.

I need to create multiple images from my source. Each image is a separate concern that needs the same context to be built. Polluting a single Dockerfile (as suggested in #1618) is wonky. It'd be much cleaner for me to keep 3 separate <image-name>.docker files in my source.

I'd love to see something like this implemented.

thedeeno commented Feb 18, 2014

This is more difficult to implement than it would at first seem. It appears that ./Dockerfile is pretty baked in. After initial investigation, at least these files are involved:

api/client.go
archive/archive.go
buildfile.go
server.go

The client uses archive to tar the build context and send it to the server, which then uses archive to untar the build context and hand the bytes off to buildfile.

The easiest implementation seems like it'd involve changing client and archive to overwrite the tar's ./Dockerfile with the file specified via this option. I'll investigate further.
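A rough shell equivalent of that overwrite-the-tarball idea (assuming GNU tar, and that docker build - accepts a tar context on stdin; file names are illustrative):

tar -C fooproj --exclude=./Dockerfile -cf ctx.tar .                       # context minus its own Dockerfile
tar --append -f ctx.tar --transform='s|.*|Dockerfile|' Dockerfile.web    # add the variant under the standard name
docker build -t my/web - < ctx.tar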

crosbymichael (Contributor) commented Feb 18, 2014

@thedeeno I'll take a look really quick and show you where the change should be made. I think it is only in one place.

juanqui commented Mar 4, 2014

+1 from me!

I've been following both #1618 and #2112, and this is the most elegant solution.

There's one particular use case in my development where this feature would be incredibly handy: applications that have both "web" and "worker" roles. I would like to create two Dockerfiles for this situation, "Dockerfile-web" and "Dockerfile-worker". I could then build them both, tag them, and push them to my image repository. I would then run multiple web front-end instances behind a load balancer and multiple workers to handle the tasks being pushed into the queues.

bbradbury commented Mar 4, 2014

+1 as an alternative to #2745.

hunterloftis commented Mar 17, 2014

+1

I was astounded to find that Dockerfile is hardcoded in, and that the build context is forced to be the Dockerfile's directory and can't be overridden even with command-line flags. This severely limits the usefulness and flexibility of Docker as a development, test, and deployment tool.

lordi commented Mar 27, 2014

+1
I'd appreciate that change.

lqez commented Apr 3, 2014

+1
Really need this.

enokd added a commit to enokd/docker that referenced this issue Apr 4, 2014

#2112- Allow specifying a dockerfile with option --file
Docker-DCO-1.1-Signed-off-by: Djibril Koné <kone.djibril@gmail.com> (github: enokd)
enokd (Contributor) commented Apr 4, 2014

#5033 should allow this feature. cc/ @crosbymichael

crosbymichael (Contributor) commented Apr 7, 2014

@shykes what do you think about this change? I don't think you agreed, or maybe you know of a better solution for solving the same problem.

shykes (Collaborator) commented Apr 7, 2014

I'm hesitant.

On the one hand, I don't want to limit people's ability to customize their build.

On the other hand, I worry that the same thing will happen as with run -v /host:/container and expose 80:80. In other words, it will allow the 1% who know what they're doing to add a cool customization - and the other 99% will then shoot themselves in the foot quite badly.

For example, we have a lot of new Docker users who start out with host-mounted volumes instead of regular volumes. And we had to deprecate the expose 80:80 syntax altogether because too many people published images which couldn't be run more than once per host, for no good reason.

So my question is: don't we risk having lots of source repositories which cannot be built repeatably with docker build, because now you have to read a README which tells you to run a shell script which then runs 'docker build -f ./path/to/my/dockerfile', simply because you didn't feel like putting a Dockerfile at the root of the repository? Or perhaps because you're a beginner and just copy-pasted that technique from an unofficial tutorial?

Being able to drop into a source repo and have it be built automatically, without ambiguity or human discovery, is one of the reasons Dockerfiles are useful. Doesn't this pull request introduce the risk of breaking that in a lot of cases, for basically no good reason?

cap10morgan (Contributor) commented Apr 7, 2014

@shykes I'm running into the problem you describe because of this Dockerfile limitation. Here are a couple of use cases:

  1. I have a Docker-based build environment that produces an artifact (a JAR file in this case). The build environment is different from the run environment (different dependencies, larger image, etc.), so I don't want to inherit the build env into the runtime. It makes the most sense to me to have the Dockerfile build and run the runtime env around the JAR. So I have a separate Dockerfile.build file that builds and runs the build env and creates the JAR. But since I can't specify the Dockerfile, I had to create a scripts/build file that does a docker build < Dockerfile.build and then mounts the host volume w/ docker run -v ... to run the build (since I can't use ADD w/ piped-in Dockerfiles). What I'd like to do instead is just be able to run docker build -t foobar/builder -f Dockerfile.build, docker run foobar/builder, docker build -t foobar/runtime, docker run foobar/runtime, and just use ADD commands in both Dockerfiles (see the sketch after this list).
  2. With ONBUILD instructions, I'd like to be able to put Dockerfiles into subdirectories (or have Dockerfile.env, etc. files in the root) that have the root Dockerfile in their FROM instruction, but can still use the root of the project as their build context. This is useful for, for example, bringing in configuration parameters for different environments. The root Dockerfile would still produce a useful container, but the others would create the different variants that we need.

So I guess the thing I'd really like is to be able to separate the concepts of "build context" and "Dockerfile location/name". There are lots of potential ways of doing that, of course, but this seems like a relatively straightforward one to me.
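A sketch of that first workflow using the -f flag requested here (the artifact path and the -v mount are assumptions, not from the original comment):

docker build -t foobar/builder -f Dockerfile.build .
docker run -v "$PWD/out:/out" foobar/builder    # build env drops the JAR into ./out
docker build -t foobar/runtime .                # runtime Dockerfile ADDs the JAR from out/
docker run foobar/runtime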

cpuguy83 (Contributor) commented Oct 29, 2014

@ShawnMilo yes it is. Anything in the tar is within context.

ShawnMilo commented Oct 29, 2014

@cpuguy83 Sorry, my mistake. I hastily read it as just piping in the Dockerfile, not a whole tarball.

thedeeno commented Oct 30, 2014

@cpuguy83 Nice! I vote close, then, if it works like you suggest.

Side question: in my custom solution, timestamps busted the cache when tarring. Is that still an issue? If I tar the same folder multiple times, will it use the cache on build?

Keep up the great work!

davber commented Oct 30, 2014

Piping a whole context doesn't alleviate what we talk about above, at all.

What we want and need is a way to use the same context with various Dockerfiles, such as the aforementioned "build with logging" and "build without logging", as individual images. I have a lot of other use cases where this is needed.

I fail to see how tarballing a directory would help with this. Yes, one could create a special directory, copy the specific Dockerfile there and then the whole context directory, or tar and append a new Dockerfile and then gzip. But how is that easier than the [quite terrible] workaround we currently have to employ, of having a pre-processor script put the correct Dockerfile in place before running docker?

And this won't help the Docker ecology, as I noted above.

Have I missed something?

Again, a simple '-f' option. Please. Please, pretty please. Force it to be a relative path inside the context. That is fine.

cpuguy83 (Contributor) commented Oct 30, 2014

@davber What you really want is for Docker to handle tarballing the context and the Dockerfile for you.
And I'm not totally against this. Though I think nested builds may be a better solution to this.

davber commented Oct 30, 2014

@cpuguy83: yes, I want Docker to handle that for me, which would include picking the context part from one place and the Dockerfile from potentially another place, or with a non-standard name. I.e., to support a separate '-f' flag :-)

davber commented Oct 30, 2014

Nested builds don't solve the problems we face, which started this thread and keep it going.

I.e., we still want to use the same root context.

Yes, we can copy files, and yes, we do, to circumvent this (for us) strange coupling of the context with an exact Dockerfile named 'Dockerfile'. But that is not ideal, and setting up rsync to ensure the files are indeed identical to the original ones is just weird.

@cpuguy83: can you explain how nested builds would help me, or any other of the "whiners" in here? :-)

cpuguy83 (Contributor) commented Oct 30, 2014

@davber
My take on it is this:

FROM scratch
BUILD myname1 path/to/dockerfile
BUILD myname2 path/to/another/dockerfile

And this:

docker build -t myimage .

would yield 3 images: "myimage", "myimage-myname1", "myimage-myname2".
Each inner build would have access to the full build context, as absolute paths. Relative paths would be relative to the Dockerfile.
And "myimage" could have its own stuff as well, beyond just BUILD instructions.

davber commented Oct 30, 2014

As I mentioned earlier, a lot of the new tools in the greater (and great!) Docker ecology assume that each Dockerfile is associated with exactly one Docker image: the various "Fig"-like orchestrator tools out there, and a lot of the new and old cloud solutions with specific support for Docker, share this one-to-one assumption. Granted, in the world created by an '-f' option, they would have to fetch not only a context - as a tarball, for instance - but also a potentially separate Dockerfile. But each such upload would still correspond to exactly one Docker image.

If we go the route of potentially separating the Dockerfile from the context root, I hope these tools will start to live with this scenario:

Each deployment/use of a Docker image is done with an upload of either:

   1. a Dockerfile solely, when no contextual operations are needed
   2. a context tarball only, containing the context with a top-level Dockerfile
   3. both a context tarball and a separate Dockerfile

The longer we stay with this strong coupling of 'Dockerfile' at the top level of the context, the more ingrained it will be in the ecology. I.e., we should act now, as the Docker world moves swiftly, due to the general awesomeness.

And honestly, it is a reasonable and conceptually attractive assumption to have an isomorphism between Dockerfiles and images, even though the former would strictly be a product space of context directories (tarred up...) and Dockerfiles, defaulting to (null, file) if only a Dockerfile is provided and (context, context/'Dockerfile') if only a context is provided.

And even for local deployment: say that we want to use Fig for at least local orchestration: how would one go about doing that? One would have to pre-create the images from such a multi-build Dockerfile, and then refer to those images in Fig. Not optimal.

hanikesn commented Oct 30, 2014

"an isomorphism between Dockerfiles and images"

This assumption is already broken by how the people in this thread are using Dockerfiles, i.e. using scripting to replace the Dockerfile manually before executing docker build. They probably also have their own orchestration in place. This feature request isn't about changing the Docker landscape; it's about making Docker work for a specific kind of use.

davber commented Oct 30, 2014

@hanikesn: two comments:

  1. why is that assumption broken by having to copy Dockerfiles into place before building? It would still be one Dockerfile <-> one image.
  2. what I am arguing is that I want whatever solution we come up with here to work with the existing and growing Docker landscape, without too-big changes needed in that landscape; and I think by keeping the isomorphism mentioned, we do so.

Other suggestions here have been to have one multi-image Dockerfile, potentially calling out to sub-Dockerfiles. That wouldn't work with how most tools (Fig etc.) currently use Docker.

itsafire (Contributor) commented Oct 30, 2014

@davber

"Nested builds don't solve the problems we face, which started this thread, and keeps it going."

I propose the solution I already pointed out earlier:

$ docker-pre-processor [ --options ... ] . | docker build -

where --options are the rules by which the context in the (here) current directory is to be altered and passed to docker. This has to be done on the fly by creating a temporary tar archive containing the context. That way the source context can stay untouched. It's easier to change the pre-processor than the Dockerfile syntax.

davber commented Oct 30, 2014

@itsafire

What about the tools expecting Dockerfiles today? They are becoming more plentiful - Amazon, Google, and the like, plus Fig and similar orchestration frameworks.

We would then have to push a standardized 'docker-pre-processor' tool, and the use of such a tool, out to those frameworks, providers, and tools.

It would surely be much easier to have 'docker' proper support at least the option triggering this thread.

jakehow commented Oct 30, 2014

@itsafire everyone with this issue who has solved it is already using some sort of preprocessor or wrapper around docker build to achieve this goal.

The fragmentation around this situation is in conflict with the @docker team's stated goal of 'repeatability'. This discussion and the others are about resolving this issue.

unleashed commented Nov 6, 2014

1+ year and 130+ comments and counting for a simple issue affecting most users... I'm impressed. Keep up the good work, Docker!

zedtux commented Nov 16, 2014

+1

farwayer commented Nov 17, 2014

Tools should help people follow their own way, not impose the "right" way. A simple case that brought me to this discussion:

`-- project
     |-- deploy
     |    |-- .dockerignore
     |    |-- Dockerfile
     |    ...
     `-- src

My way is to keep the project root clean. But ADD ../src and -f deploy/Dockerfile don't work. For now I have Dockerfile and .dockerignore in the project root, but it is a pain for me.

zedtux commented Nov 17, 2014

On my side, I have built a script which prepares a folder with the required files and executes the standard command line docker build -t my/image ., since I encountered the issue that the .dockerignore file is ignored by ADD...

joegoggins commented Nov 24, 2014

+1, I sure would like to be able to have multiple Dockerfiles in a single repo. My use case: one image is for production use and deployment; another image is a reporting instance designed to use the same backend tools and database connectivity, but requiring no front-end, web, system service, or process supervision...

sporto commented Nov 28, 2014

+1 for this. I need to add files to different images from the same folder, for different servers.

jfgreen commented Dec 2, 2014

+1. I'm enjoying Docker so far, but due to the way my team is set up, I really need a way of building different deployables that share a good chunk of code from one repo. I'm not particularly keen to build them all into one uber-Docker container, as then their deployment/release cycles are needlessly tied together. What's the best practice for getting around this?

ShawnMilo commented Dec 2, 2014

@jfgreen: Put multiple Dockerfiles wherever you like, and name them whatever you like. Then have a bash script that copies them one at a time to the repo root as "./Dockerfile", runs "docker build", then deletes them. That's what I do for multiple projects, and it works perfectly. I have a "dockerfiles" folder containing files named things like database, base, tests, etc., which are all Dockerfiles.
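A minimal sketch of such a script (the image-naming convention here is an assumption):

#!/usr/bin/env bash
set -e
for f in dockerfiles/*; do
  cp "$f" Dockerfile                        # stage this variant at the repo root
  docker build -t "my/$(basename "$f")" .   # build with the whole repo as context
  rm Dockerfile                             # clean up before the next variant
done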

jfgreen commented Dec 2, 2014

@ShawnMilo Thanks, that seems like a very reasonable workaround.

andrewcstewart commented Dec 2, 2014

+1

galo commented Dec 16, 2014

+1

metcalfc commented Dec 17, 2014

+1 to a -f. There is a sane default: if the flag is omitted, Docker does the right thing. In the case that, for whatever reason, that very specific view of the right thing is the wrong thing for a user, there is -f. The comparisons above to tools like make are, I think, reasonable. make is also a very opinionated utility, originally written in 1976, and as such it has had a reasonable amount of time to stabilize a feature set. I think it's instructive that in man make the only flag that gets a mention in the very brief synopsis is... -f.

       make [ -f makefile ] [ options ] ... [ targets ] ...

There is no problem with opinionated, UX-centered utilities, but some things, like -f, are pragmatic nods to the real world users live in. A tool should make your life easier, not harder. If I have to write a Makefile or a shell script to work around a tool lacking a -f, that's a failure of the tool. Clearly, from the number of users who have taken the time to comment and weigh in with a +1, the feature has significant utility even a year after it was initially proposed.

aalexgabi commented Dec 20, 2014

+1

rehanift commented Dec 31, 2014

+1 (or an official blog post with the recommended workaround)

martinraison commented Jan 4, 2015

+1

rokcarl commented Jan 6, 2015

+1

duglin (Contributor) commented Jan 7, 2015

@crosbymichael I believe this can be closed now, due to #9707 being merged.

crosbymichael (Contributor) commented Jan 7, 2015

VICTORY.

If you all want to try this new feature out, you can download the binaries for master on:

master.dockerproject.com

crosbymichael (Contributor) commented Jan 7, 2015

Thanks, @duglin!

jakehow commented Jan 7, 2015

👏

zedtux commented Jan 7, 2015

Thank you @crosbymichael for the binary! 👍

palladius commented Feb 19, 2015

Well done, guys! 👏

olalonde commented Oct 29, 2015

So it's docker build -f Dockerfile.dev .? edit: yep
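Yes - usage of the merged flag looks like this (Dockerfile.dev is just an example name; in the initial implementation the -f path had to live inside the build context):

docker build -t my/thing .                       # default: ./Dockerfile
docker build -t my/thing -f Dockerfile.dev .     # alternate Dockerfile, same context
docker build -t my/thing -f dockerfiles/debug .  # variant kept in a subdirectory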
