
CMD doesn't survive docker run, docker commit #4362

pda opened this issue Feb 26, 2014 · 22 comments

pda commented Feb 26, 2014

When an image that has a CMD is run, and the resulting container committed as
a new image, the new image has no CMD.

As a counterpoint, when an image that has a CMD is used as the FROM line in
a Dockerfile, the new image DOES have a CMD. (Perhaps subject to bug(s)
vaguely described in #3762).

I think it should be the default, or at least possible, for the resulting image
to retain the CMD / config.Cmd.


Here's a minimal reproduction:

set -e

# Create image with explicit CMD in Dockerfile
cat > Dockerfile <<END
FROM ubuntu:13.10
CMD ["true"]
END
docker build --rm --tag cmdtest:dockerfile . > /dev/null 2>&1

# Run and commit a container as a new image.
rm -f container_id
docker run --cidfile=container_id cmdtest:dockerfile
docker commit $(cat container_id) cmdtest:dockerrun > /dev/null

# Output: Inspect CMD
set -x
docker inspect --format='{{.config.Cmd}}' cmdtest:dockerfile
docker inspect --format='{{.config.Cmd}}' cmdtest:dockerrun
docker run cmdtest:dockerrun


+ docker inspect '--format={{.config.Cmd}}' cmdtest:dockerfile
+ docker inspect '--format={{.config.Cmd}}' cmdtest:dockerrun
<no value>
+ docker run cmdtest:dockerrun
2014/02/26 23:52:00 Error: create: No command specified

How it works in Dockerfile

The Dockerfile CMD line does this in buildfile.go (error handling trimmed):

func (b *buildFile) CmdCmd(args string) error {
    cmd := b.buildCmdFromJson(args)
    b.config.Cmd = cmd
    _ = b.commit("", b.config.Cmd, fmt.Sprintf("CMD %v", cmd))
    return nil
}

I can't find a docker command to manipulate an image config to reproduce
b.config.Cmd = cmd outside of a Dockerfile. Nothing else inside Docker
seems to write to config.Cmd.


My use-case is a build pipeline which takes a developer-friendly image
(codebase is mounted in as a volume), and builds a deployable image by adding
and building the codebase using something roughly like this:

docker run \
  --cidfile=build.cid \
  --volume=/codebase:/mnt/codebase \
  devimage \
  bash -c "cp -a /mnt/codebase /codebase && cd /codebase && make"

docker commit $(cat build.cid) release

I'd like the release image to retain the CMD from the original image.


$ docker -v
Docker version 0.8.1, build a1598d1

$ docker info
Containers: 25
Images: 68
Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Dirs: 118

$ uname -a
Linux ubuntu-13 3.11.0-17-generic #31-Ubuntu SMP Mon Feb 3 21:52:43 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

pda commented Feb 27, 2014

I've just discovered the --run parameter to docker commit, which might make this issue report moot.

It would still be nice to be able to opt-in to inheriting the parent image config, but docker commit --run at least gives a mechanism to achieve that result.
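For the record, here is a sketch of what that opt-in might look like. This is a dry run: the commit command is composed and printed rather than executed, since docker isn't assumed to be available, the container ID is a placeholder, and the JSON shape follows the container-config format of that era:

```shell
#!/bin/sh
# Compose a commit command that re-applies a CMD via the --run flag
# (dry run: printed, not executed; container ID is a placeholder).
container_id="<container-id>"
run_config='{"Cmd": ["true"]}'

commit_cmd="docker commit --run='$run_config' $container_id cmdtest:dockerrun"
echo "$commit_cmd"
```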


harto commented Mar 1, 2014

Looks like this might be addressed by #4000


pda commented Mar 1, 2014

Looks like this might be addressed by #4000

Hopefully this won't be a problem:

We made commit drop the config for a reason. I need to check the use cases and see if there are impossible scenarios with this PR.

Edit: sorry for the noise, I thought I was commenting in a related thread of a private repo!


@pda I think this is resolved by #4000 ( minus your other concerns in your recent PR ;) )


pda commented Mar 28, 2014

Yes — I'll close this as it's mostly resolved by #4000 and redundant with #4885. Cheers!

@pda pda closed this as completed Mar 28, 2014

@pda Is the CMD preserved after commit? I've got a problem running my container. Could you help me out?


vaab commented Dec 10, 2014

Hmm, neither #4000 nor this one seems to be working for me. Here's how to reproduce the loss of CMD (and ENTRYPOINT) after run/commit:

docker pull ubuntu:latest  ## just to take any base image
docker tag ubuntu:latest mytest ## we'll commit in this 'mytest' tag after
echo "BEFORE:"
docker inspect --format='CMD: {{json .Config.Cmd}}{{"\n"}}ENTRYPOINT: {{json .Config.Entrypoint}}' mytest
container_id=$(docker run -d --entrypoint /bin/bash mytest -c "touch a")
docker wait "$container_id"
docker commit "$container_id" mytest
echo "AFTER:"
docker inspect --format='CMD: {{json .Config.Cmd}}{{"\n"}}ENTRYPOINT: {{json .Config.Entrypoint}}' mytest

Running this sequence of commands prints:

 BEFORE:
 CMD: ["/bin/bash"]
 ENTRYPOINT: null
 AFTER:
 CMD: ["-c","touch a"]
 ENTRYPOINT: ["/bin/bash"]

I clearly wasn't expecting "commit" to change my image's CMD and ENTRYPOINT, only to store its filesystem.

So I don't understand why all the reported issues seem to be closed as fixed... What did I miss? Thanks for any insights.


@vaab From what I can see of your example, docker is doing exactly what you've asked it to do.

You are creating a container, container_id, which you've set to use an ENTRYPOINT of /bin/bash and a CMD of -c "touch a".

Then you docker commit container_id.

It's important to understand that docker commit makes an image based on a container which you've run (or created).


vaab commented Dec 16, 2014

I understand your perspective, and I think you are right. What's missing, then, is a way to commit the image's filesystem without touching the configuration. For instance, there seems to be no way to do an "apt-get update && apt-get upgrade" in a single commit on an image that has set its ENTRYPOINT. I have to:
1. save the configuration
2. use "docker run --entrypoint /bin/bash IMAGENAME -c 'apt-get update && apt-get upgrade'" to override any entrypoint the image may have
3. commit the container (which commits both the config and the filesystem)
4. reset the configuration to the saved values with another commit.
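The four steps above can be sketched as a dry-run script. The commands are composed and printed rather than executed, since docker isn't assumed to be available; the image name and the restored ENTRYPOINT value are placeholders, and the --change flag on commit only exists in later Docker releases:

```shell
#!/bin/sh
# Dry-run sketch of the save/override/commit/restore dance.
image="myimage"

# 1. Save the original config.
step1="docker inspect --format='{{json .Config.Entrypoint}}' $image"

# 2. Run the update with the image's entrypoint overridden.
step2="docker run --cidfile=update.cid --entrypoint /bin/bash $image -c 'apt-get update && apt-get -y upgrade'"

# 3. Commit the container (filesystem AND the changed config).
step3='docker commit $(cat update.cid) '"$image"

# 4. Restore the saved entrypoint; newer Docker can do this at commit
#    time with --change (placeholder value shown).
step4="docker commit --change='ENTRYPOINT [\"/saved/entrypoint\"]' \$(cat update.cid) $image"

printf '%s\n%s\n%s\n%s\n' "$step1" "$step2" "$step3" "$step4"
```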

The "apt-get update && apt-get upgrade" is just an example. I get a lot of use out of running a single command in a container and saving the result. The update is then easily shippable because it is small and quickly built, and you don't need to recalculate and send huge images. It's like adding a new "RUN" line live to manufacture a docker image.

What I understand is that I should probably move this concern somewhere else. Thanks!


msumme commented Jan 23, 2015

Is there a workaround to this issue?

I'm not sure how to run a sensible build process (I commented on #4885, not noticing it was a PR rather than an issue).


msumme commented Jan 23, 2015

@vaab Have you discovered a way to modify the filesystem for a container and commit that without affecting the rest of the configuration for the new image?


vaab commented Jan 26, 2015

@msumme No, I resorted to doing what I explained in my last post, which is a bulky workaround. But it's still a workaround. It creates 2 commits instead of one: the first is the image change plus the unwanted entrypoint change; the second re-sets the entrypoint to its previous value. I wrote a quick bash script around all this and moved on.
But having 2 commits for this has other consequences I must deal with.


msumme commented Jan 26, 2015

@vaab That's irritating. I built a script to create a temporary working directory with a new Dockerfile that runs the commands. This only works if you don't need something mounted and can run the commands non-interactively.

I had to package up the files I wanted to add into a tarball, and use an ADD command in the temporary working directory - and then inherit it from the image in question.

Not the cleanest solution either, but good enough for my use case. I wish they would add this feature somehow, or at least let you change the entrypoint/cmd when you commit the image.


vaab commented Jan 27, 2015

@msumme this is basically the script I use (warning, bad code) to bypass this limitation:

Look for the docker_update function. I use this script for all kinds of image updates.


msumme commented Jan 27, 2015

@vaab Ah that's interesting. Seems like a reasonable approach if you need to mount a volume to do the work.

If you don't though, is there any reason not to just put the command you need to run directly in a Dockerfile?

Something like

echo "FROM $image" > /tmp/working_dir/Dockerfile
echo "RUN $cmd" >> /tmp/working_dir/Dockerfile

cd /tmp/working_dir && docker build -t $new_img_name .
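A slightly fuller, runnable version of that idea, with the image name and command as placeholders; the docker build line is commented out so the sketch runs even where docker isn't installed:

```shell
#!/bin/sh
# Generate a throwaway Dockerfile that adds one RUN step on top of an
# existing image, then (optionally) build it.
set -e
image="ubuntu:13.10"                        # placeholder base image
cmd="apt-get update && apt-get -y upgrade"  # placeholder command

workdir=$(mktemp -d)
printf 'FROM %s\nRUN %s\n' "$image" "$cmd" > "$workdir/Dockerfile"

dockerfile=$(cat "$workdir/Dockerfile")
echo "$dockerfile"
# (cd "$workdir" && docker build -t myimage:updated .)
rm -rf "$workdir"
```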


pda commented Jan 27, 2015

@msumme In addition to using docker run for steps that require volumes and links, @vaab's Gist demonstrates a nice approach to a dynamic Dockerfile, without needing a temporary file (nor a build context, I think):

docker build -t $tag - <<END
FROM $image
RUN cd /somewhere && make
# …
END

… it just doesn't handle --volume, --volumes-from, --link etc that docker run does, and these are often necessary during build steps. For example I'm currently working on a third-party Rails app which unfortunately needs --link=redis:redis --link=postgresql:postgresql to be able to rake assets:precompile, so docker build isn't enough.


msumme commented Jan 27, 2015

@pda I noticed that, although in many cases you either need a build context or to use --volume. So it's kind of a trade-off in complexity.

My use case was that I needed to take the output of a build process and put that into a minimal runtime environment.

I used mktemp -d to create a temporary working directory in which to drop my app.tar - which was generated from a docker container that did need volumes mounted etc. I built a dockerfile in that directory using the image name I started with, and that's that. Once that was scripted out, I could package up a new production-ready image from any runtime container with any app, assuming a few conventions about the containers and packaged app.

@vaab's approach would work for that as well. I thought about going down that road, but it seemed more complicated to me since I was just adding files to an otherwise finished build.

@vaab's approach is a good one, I think. If you added --volume support in a generic way, it would let you do some very interesting things that my approach would not.


vaab commented Jan 28, 2015

@msumme Why not in a Dockerfile? Because I can't put my commands (often several lines) there: I'd have to comply with the Dockerfile's strict rules, and would specifically run into trouble with newlines and quotes. It would soon become a nightmare. I want to feed in direct, readable bash. Not to mention all the issues with volumes and so on.

@msumme Volume support? It's already there:

 docker-update MYIMAGE -v /srv/a:/mnt <<END

 ## do whatever you want in full bash and with /mnt files
 END

One last thing about the overall docker-update script: it tries to be clever (I know, that's often the beginning of hell). If you apply the same code to the same image ID, it won't execute it, but will reuse the previous result instead. The double-commit workaround makes this harder to implement correctly.

You can disable the cache by inserting # docker: ALWAYS in your code.
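A cache like that could be keyed on a hash of the base image ID plus the script text. Here is a minimal sketch; the variable names and the marker handling are illustrative, not the actual docker-update code:

```shell
#!/bin/sh
# Derive a cache key so the same script applied to the same image ID
# can reuse a previous commit instead of re-running.
image_id="sha256:0123456789abcdef"           # placeholder image ID
script='apt-get update && apt-get -y upgrade'

if printf '%s' "$script" | grep -q '# docker: ALWAYS'; then
  cache_key=""   # opt-out marker present: never reuse a previous result
else
  cache_key=$(printf '%s\n%s' "$image_id" "$script" | sha256sum | cut -d' ' -f1)
fi
echo "cache key: $cache_key"
```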

And yes, I do very interesting things with this.


vaab commented Jan 28, 2015

@msumme @pda I've published a quick blog post around all this to avoid polluting here too much:


pda commented Jan 28, 2015

Nice, thanks.


msumme commented Jan 28, 2015

@vaab That's very cool. Thanks for explaining how it works. I had missed the way args + stdin are passed to the docker run command when I first read the script. That's a very clever way of doing it. It would be nice if there were a way to make commit behave the way I initially expected, but this is a great workaround.


odigity commented Oct 11, 2015

I see no mention of a --run option to docker commit in the docs or built-in help.

I managed to work around this problem by doing this instead:

  1. start the app container (gitlab/gitlab-ce in my case) with no options other than --detach (no ports, no volumes, no /bin/bash command, etc)
  2. connect to the running container via docker exec -it <container id> /bin/bash
  3. make whatever changes I need (installing another gem in my case)
  4. exit
  5. use docker commit

Ta da! The original config is preserved (including CMD, which is what screwed me the first time I did this).
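Scripted non-interactively, those steps look roughly like this. It's a dry run: the commands are composed and printed rather than executed, and the container ID and gem name are placeholders:

```shell
#!/bin/sh
# Dry-run sketch of the exec-then-commit workaround that preserves the
# original image config (CMD, ENTRYPOINT, and so on).
image="gitlab/gitlab-ce"

start="docker run --detach $image"
change="docker exec <container-id> gem install some-gem"
commit="docker commit <container-id> $image:patched"

printf '%s\n%s\n%s\n' "$start" "$change" "$commit"
```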


9 participants