
docker exec does not exit when container process terminates #13052

Closed
bendenoz opened this issue May 7, 2015 · 34 comments

@bendenoz commented May 7, 2015

Description of problem:

docker exec does not exit when the container process terminates.
This only happens when running in interactive mode but with no tty.

e.g.:

docker exec -i <container> ps

hangs until Ctrl-C or Enter is pressed. It seems input from STDIN is necessary for it to check the process state...

It works fine with docker exec -it <container> ps and docker exec <container> ps.

This is an issue when using it to run rsync or scp commands through an SSH tunnel (key-based forced commands). The sync/copy works fine but never exits...

docker version:

Client version: 1.6.0
Client API version: 1.18
Go version (client): go1.4.2
Git commit (client): 4749651
OS/Arch (client): linux/amd64
Server version: 1.6.0
Server API version: 1.18
Go version (server): go1.4.2
Git commit (server): 4749651
OS/Arch (server): linux/amd64

tested on 1.5 too

docker info:

Containers: 6
Images: 100
Storage Driver: devicemapper
 Pool Name: docker-202:81-917511-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: extfs
 Data file:
 Metadata file:
 Data Space Used: 3.734 GB
 Data Space Total: 107.4 GB
 Data Space Available: 103.6 GB
 Metadata Space Used: 5.931 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.142 GB
 Udev Sync Supported: false
 Library Version: 1.02.82-git (2013-10-04)
Execution Driver: native-0.2
Kernel Version: 3.13.0-44-generic
Operating System: Ubuntu 14.04.1 LTS
CPUs: 1
Total Memory: 3.676 GiB
Name: docker1
ID: 645K:6725:O2Y7:Q5R4:5COY:353S:PX64:BUYO:6RTN:5YUS:R54G:QCDF
WARNING: No swap limit support

uname -a:

Linux docker1 3.13.0-44-generic #73-Ubuntu SMP Tue Dec 16 00:22:43 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Steps to Reproduce:

docker exec -i <container> ps

Actual Results:

process does not exit

Expected Results:

process exits (returns to prompt)

Additional info:

hangs until Ctrl-C or Enter is pressed. It seems input from STDIN is necessary for it to check the process state...

It works fine with docker exec -it <container> ps and docker exec <container> ps.

This is an issue when using it to run rsync or scp commands through an SSH tunnel (key-based forced commands). The sync/copy works fine but never exits...

@cpuguy83 (Member) commented May 7, 2015

How come you are using -i without any stdin?

@bendenoz (Author) commented May 7, 2015

I just updated the issue with the requested info.

Hi @cpuguy83. Actually, this seems to be required to get scp/sftp to work through ssh. My SCP client (WinSCP) does not work if -t is used. Using docker exec -i ... works fine with sftp (but hangs with rsync).

Here is my authorized_keys, if this helps:

no-port-forwarding,no-X11-forwarding,command="[[ -z $SSH_ORIGINAL_COMMAND ]] && SSH_ORIGINAL_COMMAND=bash; PARMS=i; [[ -n $SSH_TTY ]] && PARMS=it; exec sudo docker exec -$PARMS util $SSH_ORIGINAL_COMMAND" ssh-rsa AAA....
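
For illustration, this is roughly how that forced command dispatches different client invocations; the user and host names below are assumptions, while the util container comes from the entry above:

# Interactive login: sshd allocates a tty, so SSH_TTY is set and PARMS=it.
ssh user@docker1                    # runs: sudo docker exec -it util bash
# scp (and sftp/rsync) runs without a tty, so PARMS=i and stdio is piped.
scp user@docker1:/etc/hostname .    # runs: sudo docker exec -i util scp -f /etc/hostname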

@cpuguy83 (Member) commented May 7, 2015

I can confirm this is definitely a bug and we are leaking a goroutine here when stdin is not used.

However, the whole point of -i is to tell docker to open stdin; if you aren't using stdin, you shouldn't use -i.

@bendenoz (Author) commented May 7, 2015

Thanks. Actually, stdin is used by the scp client, with no tty (but after some testing I found this was an easier repro case to explain the problem).

@cpuguy83 (Member) commented May 7, 2015

@bendenoz But the stdin here is stdin to the docker client process.
If you actually send something to stdin, it does return correctly.
For example, with echo hello | docker exec -i <container> ps you will get a return (and of course if you use cat instead of ps you will see hello output).
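
As a minimal sketch of the behaviors being compared in this thread (the container name test is an assumption):

docker exec test ps                    # no stdin attached: returns promptly
docker exec -it test ps                # stdin with a tty: returns promptly
docker exec -i test ps                 # stdin, no tty: output appears, then it hangs
echo hello | docker exec -i test ps    # stdin activity lets it return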

@cpuguy83 (Member) commented May 7, 2015

And actually this appears to be fixed on master.
Can you test with a binary from master.dockerproject.com?

@bendenoz (Author) commented May 7, 2015

Yes, but the problem is not so much with ps as with rsync or scp. However, it was difficult for me to give simple repro steps.

From what I gathered, when using scp as a client, scp -f is the server process run in the container.
I'm not sure about the exact protocol details, but when retrieving a file, some commands are sent on STDIN only at the beginning of the exchange (presumably the file name), then the file is retrieved from STDOUT. The server process (in the container) then exits expecting nothing back from the client, but the client doesn't realize because docker exec waits for more input before exiting.
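
A rough stand-in for that exchange, assuming a container named demo: a process that reads a little stdin up front, writes its output, and then exits expecting nothing more from the client:

printf 'hello\n' | docker exec -i demo sh -c 'head -c 6 >/dev/null; echo done'
# The process in the container exits right after printing "done", but with
# this bug the docker client keeps waiting on stdin and never returns.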

I'm not sure if this is clear?
OK, I'm going to try with the master build and report back.
Thanks!!

@bendenoz (Author) commented May 7, 2015

Can't run the static build on my system apparently; I will try again when I can get a new setup running.
Using nsenter as a workaround for now.
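
For reference, a sketch of that nsenter workaround along the lines of jpetazzo/nsenter (the util container name is an assumption):

PID=$(docker inspect --format '{{.State.Pid}}' util)    # PID of the container's main process
sudo nsenter --target "$PID" --mount --uts --ipc --net --pid -- ps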

@phemmer (Contributor) commented May 7, 2015

I reported what appears to be this exact same issue in #9860, which was resolved back in January.
If this popped up again, then it sounds like there's no test for the issue. If so, shouldn't we add one?

Edit: the fix for #9860 did add a test. Is it working right?

@bendenoz (Author)

Sorry for the late update, but I can confirm this issue doesn't repro on today's docker-1.7.0-dev from master.dockerproject.com.
It is still present in 1.6.1, though.
Will update when 1.7 is released. Thanks!

@runcom (Member) commented May 23, 2015

I'm closing this since it's fixed on master. @bendenoz, feel free to comment here if this still happens and I'll reopen this issue.

@paralin commented Jul 29, 2015

I'm getting this issue again with Docker 1.7.1

@paralin commented Jul 29, 2015

With the following command:

docker exec -u root -i contid echo "hello world"

Hello world is printed and then nothing happens until I send some input.

Note, this is without a tty.

To reproduce:

ssh localhost docker exec -i testlog ps

@samatwork

I have tested this with both Docker Engine 1.7.0 and 1.7.1 -- same results as originally reported. I'm using this command:

rsync -e "docker -H OTHERSERVER:2376 --tls exec -i" -av CONTAINERNAME:/DIR .

rsync connects and transfers the files; it just never exits until I press Ctrl-C. If I try the "docker exec" command without "-i", rsync reports "connection unexpectedly closed". If I try it with "-it", I see "cannot enable tty mode on non tty input".

I'm looking forward to using rsync this way to copy files between running containers where there are potentially hundreds of gigabytes to sync. The "docker cp" command works fine; it just does a ton of extra work that rsync can skip. I've tried using "tar" to gather the files on the remote server and push them to the destination, but again, it's transferring far more data than needed.

TL;DR: This issue is not fixed, please reopen it. Thanks!

@paralin commented Aug 5, 2015

Yes, my last comment still stands. Reopen please.

@duglin (Contributor) commented Aug 5, 2015

Can you please try with the very very latest - I just tried this and it seems to work just fine.
https://master.dockerproject.org/

@samatwork

Sorry it took so long to get back to this issue, but I just retested it with the 1.8.1 RPM on CentOS 7. It still doesn't work -- same as before, rsync transfers the files but hangs forever and has to be killed. I also just tried the latest build of 1.9.0-dev from master.dockerproject.org (commit bba762b) with no success.

I'm curious what's different about your environment that lets it work for you but not for me. I'm using two different physical servers, both running CentOS 7.1. The Docker daemons communicate with TLS using a CA and certificates I created following the instructions on the docker site. There isn't really anything special I'm doing inside the containers -- my target container is based on the centos:7 image from the public registry and is running an Apache process.

If there's any other information I can provide to make it easier to track this down, please let me know!

@paralin commented Aug 17, 2015

Rsync still doesn't work for me.

Probably still has to do with stdin handling.


@icy commented Dec 10, 2015

I got the same issue with Docker 1.8.3 and Ubuntu Trusty.

@icy commented Dec 11, 2015

Solved with nsenter https://github.com/jpetazzo/nsenter/issues/67

@glyph commented Dec 16, 2015

This is definitely still an issue in docker 1.9.1.

@cpuguy83 (Member)

@glyph Please provide reproducible steps.

@glyph commented Dec 17, 2015

My mistake; this works locally but doesn't work against a swarm cluster; I was confused about which environment I was using when testing.

# Configure to use a docker-machine dev environment
$ eval "$(docker-machine env dev)"
$ echo test | docker run --rm -i debian bash -c 'echo start; echo "$(cat)"; echo end'
start
test
end
# configure to use a rackspace carina swarm environment
$ . ~/Downloads/cluster1/docker.env 
$ echo test | docker run --rm -i debian bash -c 'echo start; echo "$(cat)"; echo end'
start

Here it hangs, and I have to kill it with docker rm -f in another terminal.

@cpuguy83 (Member)

@glyph Correct, this was fixed in swarm 1.0... possibly 1.0.1 for TLS conns.

@glyph commented Dec 17, 2015

@cpuguy83 you wouldn't happen to have a link to the issue, would you?

@cpuguy83 (Member)

@glyph commented Dec 17, 2015

@cpuguy83 thanks a bunch!

@samatwork

I am still experiencing problems with this in Docker 1.9.1. My environment is very simple, just two hosts running Docker Engine on the same network. They're both CentOS 7, identical patch levels, same image on both hosts. Not using Compose, Swarm, Network or anything else, just manually-managed containers.

Both hosts use the same internal CA to listen on TLS. Docker Engine and rsync are installed inside the image, as is the CA cert, so the docker CLI works fine from inside the container. From server A, I start bash in container A and try to use rsync to copy files from container B on server B. For a very small amount of data, I see this:

[root@serverA:/home/samatwork]$ docker exec -it containerA /bin/bash
[root@containerA srv]# mkdir home/tmp2
[root@containerA srv]# cd home/tmp2
[root@containerA tmp2]# rsync -e "docker -H serverB:2376 --tls exec -i" -av containerB:/srv/home/logos .
receiving incremental file list
logos/
logos/myapp-logo-original.png
logos/myapp-logo-scaled.png

sent 53 bytes  received 2424 bytes  1651.33 bytes/sec
total size is 2241  speedup is 0.90
[root@containerA tmp2]#

Works fine. Total file size is about 2.5 KB.

But when I try to transfer more, it breaks every time:

[root@containerA tmp2]# rsync -e "docker -H serverB:2376 --tls exec -i" -av containerB:/srv/home .
receiving incremental file list
home/
home/.myapp-home.lock
home/dbconfig.xml
home/myapp-config.properties
home/analytics-logs/a60f9a3901276ffa4e5e71d9f996caee.foocorp-analytics.log
write /dev/stdout: resource temporarily unavailable
rsync: connection unexpectedly closed (1037682 bytes received so far) [receiver]
rsync error: error in rsync protocol data stream (code 12) at io.c(605) [receiver=3.0.9]
rsync: connection unexpectedly closed (362 bytes received so far) [generator]
rsync error: error in rsync protocol data stream (code 12) at io.c(605) [generator=3.0.9]
[root@containerA tmp2]#

I've repeated this test many times; it never works. The total amount of data I would like to transfer is about 125 GB, but it always stops after a few tens of KB.

Please reopen this issue.

@cpuguy83 (Member)

@samatwork I don't see what your problem has to do with the exec instances not closing down when the container is stopped.

@samatwork

@cpuguy83 I don't either, except that the high-level problem (using rsync between containers) is not fixed. It's true the exec instance no longer hangs forever, but now it seems to exit too quickly. rsync reports just under 1 MB of data transferred, which feels suspiciously like a buffer size limit to me.
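
One quick way to probe that hypothesis, sketched under the assumption that containerB is still reachable (the 2 MiB size is arbitrary):

docker -H serverB:2376 --tls exec -i containerB sh -c 'head -c 2097152 /dev/zero' | wc -c
# A count short of 2097152 would suggest output is being cut off around
# a fixed buffer size rather than at end-of-stream.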

When I first reported this problem back in August, I was directed to this ticket. Should I be reporting it somewhere else?

@cpuguy83 (Member)

@samatwork #13660

@haridsv commented Aug 10, 2016

I am noticing the same issue, but even without the -i option. I am running docker exec on a bash script which calls exit 2 at the end. Using the bash -x option, I can verify that exit was called, and from another terminal I confirmed that the corresponding process no longer exists in the container, but the docker exec doesn't finish on the host. This doesn't happen on every execution, so it is not consistent. The invocation is very straightforward: docker exec container_name bash -c /path/to/script.sh. Is there a way I can check what docker is still waiting for on the host? I am using Docker version 1.11.2, build b9f10c9.
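
A couple of things worth trying, offered as a sketch (exact process names vary by setup):

# Show which syscall the stuck docker CLI process is blocked in:
sudo strace -p "$(pgrep -of 'docker exec')"
# The daemon logs a dump of all goroutine stacks when sent SIGUSR1,
# which shows what the server side of the exec is waiting on:
sudo kill -USR1 "$(pidof docker)"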

@MatthiasLohr

Anything new on this topic? I may have run into the same problem...

@thaJeztah (Member)

@MatthiasLohr please open a new issue; the issue being discussed here was resolved two years ago. If you're running into this, it's most likely something different.
