Allow specifying of a dockerfile as a path, not piping in. #2112
Comments
I was just looking into this. The usage is `docker build [OPTIONS] PATH | URL | -`, so if you run `docker build -t my/thing my-dockerfile` it complains about the tar file being too short. It seems to me that the PATH option isn't documented, so it might have some legacy meaning? So I wonder about detecting whether PATH is a file and is not a tarfile. Personally, I have a set of Dockerfiles that I use to test, and would much rather have them all in one directory and also have a full context.
That PATH refers to a directory, not a specific Dockerfile.
oh, and then it tars that directory up to send to the server - cool! So it's possible to detect if PATH is a file, tar up the dir it's in, and then replace the Dockerfile in that tarball with the specified file. Or to use -f in the same way, allowing your Dockerfile definitions to live separately from the payload. Now to work out how the tests work, try it out, and see if it works for me.
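The tar-and-substitute idea described above can be mocked up with plain `tar` (a hypothetical sketch, not anything the docker client actually does; `CONTEXT_DIR`, `ALT_DOCKERFILE`, and the file contents are made-up fixtures):

```shell
#!/bin/sh
# Sketch: tar up a context directory, but with a separately specified
# Dockerfile taking the Dockerfile slot. All paths here are illustrative.
set -e

CONTEXT_DIR=./myapp
ALT_DOCKERFILE=./dockerfiles/Dockerfile.test

# Demo fixtures so the sketch is self-contained.
mkdir -p "$CONTEXT_DIR" ./dockerfiles
echo 'FROM busybox' > "$CONTEXT_DIR/Dockerfile"
printf 'FROM busybox\nRUN echo test\n' > "$ALT_DOCKERFILE"

# Copy the context to a staging dir, overwrite its Dockerfile, and tar
# the result (this tarball is what would be sent to the daemon).
STAGE=$(mktemp -d)
cp -R "$CONTEXT_DIR/." "$STAGE"
cp "$ALT_DOCKERFILE" "$STAGE/Dockerfile"
tar -C "$STAGE" -cf context.tar .
tar -tf context.tar
```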
It doesn't tar anything up - the PATH is a directory in which it assumes there is a Dockerfile.
that's not all it does with that PATH - reading the code, the 'context' is sent to the server by tarring up the directory. (ok, so I still don't know Go, and I've only been reading the code for the last few minutes, so take it with a grain of skepticism)
Correct, so any non-remote files referenced via ADD must also be in that same directory, or the daemon won't be able to access them.
Ah, I see what you're saying - yes, that's exactly what I want: a way to specify a Dockerfile with -f, and then a directory PATH that might be separate, so I could have:
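The example that followed did not survive extraction; given the rest of the thread, the proposed invocation presumably looked something like this (flag name and paths are illustrative; `DOCKER=echo` keeps it a daemon-free dry run):

```shell
# Hypothetical invocation being proposed: -f names the Dockerfile,
# the final argument is still the context directory.
DOCKER=${DOCKER:-"echo docker"}   # dry run; set DOCKER=docker for real
$DOCKER build -t my/thing -f dockerfiles/Dockerfile.test ./payload
```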
#2108 (adding an include directive to Dockerfiles) adds an interesting wrinkle: should the include be relative to the specified Dockerfile, or to the PATH? Not important yet, though.
as an extra bonus, there are no CmdBuild tests yet, so guess what I get to learn on first :)
@SvenDowideit are you working on this? I was thinking of maybe hacking on it today.
I'm slowly getting myself familiar with the code and Go, so go for it - I'm having too much fun just writing the unit tests (perhaps you can use the testing commits to help :)
I will do :)
I would like to see this functionality too. We have a system which can run in 3 different modes, and we'd like to deploy 3 different containers, one for each mode. That means 3 virtually identical Dockerfiles with just the CMD being different. But because the paths can only be directories, and the directories are the context for ADD commands, I cannot get this to work right now. So: +1 from me!
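The duplication described above looks roughly like this (image name, paths, and mode flags are invented for illustration): three Dockerfiles identical except for their CMD line.

```dockerfile
# Dockerfile.mode-a -- hypothetical; only the CMD differs between the three
FROM mycompany/base
ADD . /app
CMD ["/app/run", "--mode=a"]

# Dockerfile.mode-b is identical apart from:
#   CMD ["/app/run", "--mode=b"]

# Dockerfile.mode-c is identical apart from:
#   CMD ["/app/run", "--mode=c"]
```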
Seems like this may have the same end goal as #1618 (though with a different approach). The idea there is to use a single Dockerfile that can build multiple images.
It seems as though if you can pipe a Dockerfile in, you should be able to specify a path as well. Interested to see what comes of #1618, but I think this offers many more possibilities.
This is where I'm at: https://github.com/peterbraden/docker/compare/2112-specify-dockerfile?expand=1
I was thrown by the fact that the documentation doesn't state clearly that the directory containing the Dockerfile is the build context. I made the wrong assumption that the build context was the current working directory, so when I passed a path to the Dockerfile instead of having it in the current directory, files I tried to ADD from the current working directory bombed out with "no such file or directory" errors.
I'm getting the same error. Any ideas? Running `docker build Dockerfile` gives: `Uploading context 2013/12/11 21:52:32 Error: Error build: Tarball too short`
@bscott try …
Worked, thx! I just would like to choose between different Dockerfiles.
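For anyone hitting the same "Tarball too short" error: as noted earlier in the thread, the argument must be the context directory, not the Dockerfile itself. A dry-run illustration (`DOCKER=echo` so it runs without a daemon):

```shell
DOCKER=${DOCKER:-"echo docker"}   # dry run; set DOCKER=docker for real
# Wrong: passing the Dockerfile as PATH makes the client treat the file
# itself as a context tarball and fail with "Tarball too short":
#   docker build Dockerfile
# Right: pass the directory containing the Dockerfile.
$DOCKER build -t my/thing .
```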
+1 from me. I need to create multiple images from my source. Each image is a separate concern that needs the same context to be built. Polluting a single Dockerfile (as suggested in #1618) is wonky. It'd be much cleaner for me to keep 3 separate Dockerfiles. I'd love to see something like this implemented.
This is more difficult to implement than it would at first seem; it appears that …
The easiest implementation seems like it'd involve changing …
@thedeeno I'll take a look really quick and show you where the change should be made. I think it is only in one place.
@itsafire everyone with this issue who has solved it is already using some sort of preprocessor or wrapper around `docker build` to achieve this goal. The fragmentation around this situation is in conflict with the @docker team's stated goal of 'repeatability'. This discussion and the others are about resolving this issue.
1+ year and 130+ comments and counting for a simple issue affecting most of the users... I'm impressed. Keep up the good work, Docker!
+1
Tools should help people follow their own way, not impose the "right" way. A simple case that brought me to this discussion:
My way is to keep the project root clean. But …
On my side, I have built a script which prepares a folder with the required files and executes the standard `docker build` command line.
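A staging script along those lines might look like this (folder layout, names, and tag are invented; the `docker build` call is echoed so the sketch runs without a daemon):

```shell
#!/bin/sh
# Prepare a clean build folder containing only the required files,
# then run the standard build against it. All names are illustrative.
set -e
DOCKER=${DOCKER:-"echo docker"}   # dry run; set DOCKER=docker for real

mkdir -p src                            # demo fixture
touch src/app.py                        # demo fixture
echo 'FROM busybox' > Dockerfile.web    # demo fixture

# Stage only what the image needs, with the chosen Dockerfile at the root.
BUILD_DIR=$(mktemp -d)
cp -R src "$BUILD_DIR/src"
cp Dockerfile.web "$BUILD_DIR/Dockerfile"
$DOCKER build -t my/web "$BUILD_DIR"
```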
+1 sure would like to be able to have multiple Dockerfiles in a single repo. My use case: one image is for production use and deployment; another image is a reporting instance designed to use the same backend tools and database connectivity, but requires no front-end, web, system service, or process supervision...
+1 for this. I need to add files to different images from the same folder, for different servers.
+1 I'm enjoying Docker so far, but due to the way my team is set up, I really need a way of building different deployables that share a good chunk of code, from one repo. Not particularly keen to build them all into an uberdocker container, as then their deployment/release cycles are needlessly tied together. What's the best practice for getting round this?
@jfgreen: Put multiple Dockerfiles wherever you like, and name them whatever you like. Then have a bash script that copies them one at a time to the repo root as "./Dockerfile", runs "docker build", then deletes them. That's what I do for multiple projects, and it works perfectly. I have a "dockerfiles" folder containing files named things like database, base, tests, etc., which are all Dockerfiles.
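A minimal version of the wrapper described above (the "dockerfiles" directory and file names follow the comment; the repo name is invented, and `DOCKER=echo` makes it a daemon-free dry run):

```shell
#!/bin/sh
# Copy each named Dockerfile to ./Dockerfile, build, then clean up.
set -e
DOCKER=${DOCKER:-"echo docker"}   # dry run; set DOCKER=docker for real

mkdir -p dockerfiles                    # demo fixtures
echo 'FROM busybox' > dockerfiles/base
echo 'FROM busybox' > dockerfiles/tests

for f in dockerfiles/*; do
  name=$(basename "$f")
  cp "$f" ./Dockerfile    # docker build only looks for ./Dockerfile
  $DOCKER build -t "myrepo/$name" .
  rm ./Dockerfile
done
```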
@ShawnMilo Thanks, that seems like a very reasonable workaround.
+1
+1
+1 to a …
There is no problem in opinionated, UX-centered utilities, but some things like …
+1
+1 (or an official blog post with the recommended workaround)
+1
+1
@crosbymichael I believe this can be closed now due to #9707 being merged.
VICTORY. If you all want to try this new feature out, you can download the binaries for master at: master.dockerproject.com
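With #9707 merged, the feature works as originally requested. Usage looks like this (image names and Dockerfile paths are illustrative; `DOCKER=echo` keeps it a daemon-free dry run):

```shell
DOCKER=${DOCKER:-"echo docker"}   # dry run; set DOCKER=docker for real
# The Dockerfile may now live anywhere; the context is still the last arg.
$DOCKER build -t my/web -f dockerfiles/Dockerfile.web .
$DOCKER build -t my/worker -f dockerfiles/Dockerfile.worker .
```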
Thanks @duglin!
👏
Thank you @crosbymichael for the binary! 👍
Well done guys! 👏
So it's …
Would be nice to be able to specify

```
docker build -t my/thing -f my-dockerfile .
```

so I could ADD files, and also have multiple Dockerfiles.