"ADD <url> /" in Dockerfile copies the file instead of decompressing it #2369
Ran into the same issue. Dockerfile:
Result:
@crosbymichael IMO the docs don't make it clear that
takes precedence over
So it comes down to these two options:
Extracting remote files would be great; it's far better than adding a tarball to the repo.
My vote would be to update the docs. I don't think the cmd Ping @vieux @metalivedev @shykes. So far we have two votes and they both cancel each other out ;)
If we update the docs to not mention decompression, then we ought to also update the ADD command code to not decompress local archives, which will break our base image debootstrap Dockerfiles. The turtles at the bottom are biting at our heels. :)
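For context, the classic base-image pattern that depends on ADD's extraction of local archives can be sketched like this (the filename is illustrative, not from this thread):

```dockerfile
# Hypothetical base-image Dockerfile relying on ADD's automatic
# extraction of a *local* archive (e.g. one produced by debootstrap).
FROM scratch
# rootfs.tar.xz is an illustrative name; ADD unpacks it into /
ADD rootfs.tar.xz /
CMD ["/bin/bash"]
```

Removing local decompression from ADD would silently break every Dockerfile built on this pattern.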
@tianon I didn't say to remove the existing feature, just to not implement it for URLs. ;)
Ahh, so you just want to clarify the existing documentation. I'm definitely cool with that. :) 👍
For some strange reason I keep having to change my Dockerfile related to this tar.gz download:
So I have been commenting/uncommenting the last two lines. Seems like this morning I had to change it to be one way, and then this afternoon a different way. I don't know if docker is behaving differently on two different machines, or at different times, or if somehow the elasticsearch people keep uploading an archive structured differently, or if I am crazy.
I find this discrepancy in behavior between remote and local files quite annoying. I'm generating a Dockerfile from a configuration file, and I'd like for a resource to be specified as either a URL, or a local file that was manually downloaded. Right now it's quite difficult to do so, because there is no valid combination that works for both. ADD will extract from local files, but not from remote ones, meaning I don't know whether or not I should do the extraction manually. COPY does not support remote files. It'd be really nice if COPY were made to support remote files, or if disabling extraction on ADD were possible.
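The asymmetry described above can be illustrated like this (a sketch; the archive name and URL are made up):

```dockerfile
# Local source: the archive is auto-extracted into the destination.
ADD app.tar.gz /opt/
# Remote source: the file is downloaded as-is and NOT extracted;
# you end up with /opt/app.tar.gz, still compressed.
ADD https://example.com/app.tar.gz /opt/
```

The same instruction, with the same destination, produces different results depending only on whether the source is local or remote.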
+1 to danielkza's comment; inconsistent behavior between local tar.gz files and remote ones.
It would be nice to have the ability to untar a URL.
Definitely. Storing tarballs in git is rather obscene. |
++ |
+1 ! |
Although this has already been closed, I'm also in strong favor of extracting remote tar files instead of simply adding them.
How can I just add an archive to the container without decompressing it? EDIT: used the COPY command. What strange semantics...
@geerk if it's a local file, But please, the GitHub issue tracker is not a support forum. Use the #docker IRC channel on Freenode or docker-user for support questions.
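For reference, the COPY-based answer to the question above can be sketched like this (the archive name is illustrative):

```dockerfile
# COPY never extracts: the archive lands in the image verbatim.
COPY app.tar.gz /opt/app.tar.gz
# ADD with a local archive would instead unpack it:
# ADD app.tar.gz /opt/
```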
+1 |
I am seeing a different issue where the ADD is copying the
@doomsbuster is that file present in the same directory as your Dockerfile?
@thaJeztah I am downloading the file in the
@doomsbuster that's not what In your example, you're trying to copy a
Aah! I see, that clarifies it. While we talk, I did something like this to get it to work:
Yes; that's probably the best option; as can be seen in the discussion above,
2018 and this problem still annoys me >.< |
So why do you guys remove the
@HuKeping what do you mean; you mean removing the feature from
@thaJeztah it seems quite a lot of people want that feature (when downloading from a URL, also extract it automatically). Recently I ran into the same issue; my Dockerfile is somewhat like
For some security reasons they would prefer to ADD from a remote address that can be accessed publicly, so that they can generate the image wherever they want. AFAIK, one of the reasons this feature was dropped is that there is no guarantee how huge the package might be, and it might cause the Docker daemon to be killed, taking other running containers down. I'll buy that, but I still think that if people want to shoot themselves in the foot, you should just let them; it should not be blamed on the feature.
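The usual workaround for the missing remote extraction is a RUN step that streams the download straight into tar. Below is a minimal local simulation of that pipeline (the URL in the comment, the paths, and the scratch archive are all illustrative; no network is involved):

```shell
# In a Dockerfile this would be something like:
#   RUN wget -O - https://example.com/app.tar.gz | tar -xz -C /opt
# Simulate the same pipeline locally with a throwaway archive:
mkdir -p /tmp/add-demo/src /tmp/add-demo/dst
echo "hello" > /tmp/add-demo/src/file.txt
tar -czf /tmp/add-demo/app.tar.gz -C /tmp/add-demo/src .
# stream the archive into tar, extracting as it arrives
cat /tmp/add-demo/app.tar.gz | tar -xz -C /tmp/add-demo/dst
cat /tmp/add-demo/dst/file.txt   # prints "hello"
```

Because the download is piped directly into tar, the compressed archive never needs to be stored in an image layer.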
We can't change the default behavior as it would break many Dockerfiles, and there's definitely use cases for adding a remote tar without decompressing it. Would a multi-stage build solve your issue? i.e., decompress in an intermediate step, and copy to the final stage?
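The multi-stage suggestion can be sketched as follows (image names, URL, and paths are illustrative):

```dockerfile
# Stage 1: fetch the remote archive and decompress it manually.
FROM alpine AS unpack
ADD https://example.com/app.tar.gz /tmp/app.tar.gz
RUN mkdir -p /out && tar -xzf /tmp/app.tar.gz -C /out

# Stage 2: copy only the extracted files into the final image;
# the compressed archive never reaches the final stage.
FROM alpine
COPY --from=unpack /out/ /opt/app/
```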
It does not work when building from scratch.
Definitely agree; I don't think it's a good idea to change the default behavior. I was thinking of adding some option to
Also, if your use case is to create an image from a rootfs, you can use
Just wondering if there is a way to reproduce the environment (image) in one process.
Perhaps when using Update: this feature is no longer experimental, so it's no longer needed to use the Here's one based on my "silly experiments with":

```dockerfile
# syntax=docker/dockerfile:1
FROM scratch
ARG DOCKER_VERSION=18.09.1
RUN --mount=from=busybox:latest,src=/bin/,dst=/bin/ \
    wget -O - https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKER_VERSION}.tgz | tar zxf - docker
```

```console
docker build -t myimage .
docker run --rm myimage docker/docker --version
Docker version 18.09.1, build 4c52b90
```
Thanks @thaJeztah, it's helpful!
Embarrassing bug: you must add the tarball to the repository.