docker exec does not exit when container process terminates #13052
Comments
How come you are using …
I just updated the issue with the requested info. Hi @cpuguy83. Actually, this seems to be required to get scp / sftp to work through ssh. My SCP client (WinSCP) does not work if … Here is my authorized_keys if this helps: …
I can confirm this is definitely a bug and we are leaking a goroutine here when stdin is not used. However, the whole point of …
Thanks. Actually stdin is used by the scp client, with no tty. (but after some testing I found this was an easier repro case to explain the problem).
@bendenoz But the stdin here is for stdin to the docker client process.
And actually this appears to be fixed on master.
Yes, but the problem is not so much with … From what I gathered, when using … I'm not sure this is clear?
Can't run the static build on my system apparently; I will try again when I can get a new setup running.
Sorry for the late update, but I can confirm this issue doesn't repro on today's …
I'm closing this since it's fixed on master. @bendenoz feel free to comment here if this still happens and I'll reopen this issue.
I'm getting this issue again with Docker 1.7.1.
With the following command:

docker exec -u root -i contid echo "hello world"

"hello world" is printed and then nothing happens until I send some input. Note, this is without a tty. To reproduce: …
I have tested this with both Docker Engine 1.7.0 and 1.7.1 -- same results as originally reported. I'm using this command:

rsync -e "docker -H OTHERSERVER:2376 --tls exec -i" -av CONTAINERNAME:/DIR .

rsync connects and transfers the files, it just never exits until I press CTRL-C. If I try the "docker exec" command without "-i", rsync reports "connection unexpectedly closed". If I try it with "-it", I see "cannot enable tty mode on non tty input".

I'm looking forward to using rsync this way to copy files between running containers where there are potentially hundreds of gigabytes to sync. The "docker cp" command works fine, it just does a ton of extra work that rsync can skip. I've tried using "tar" to gather the files on the remote server and push them to the destination but again, it's transferring far more data than needed.

TL;DR: This issue is not fixed, please reopen it. Thanks!
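For anyone trying to follow how the rsync usage above is wired up, here is a minimal sketch (the wrapper script name is made up for illustration): rsync invokes its `-e` program as `<program> <host> <remote-command...>`, pipes its protocol over the program's stdin/stdout, and only finishes once that program exits -- which is exactly why a `docker exec -i` that never exits makes rsync hang.

```shell
# Hypothetical wrapper playing the role ssh normally plays for rsync.
# rsync will call it as: ./docker-transport.sh CONTAINERNAME rsync --server ...
cat > docker-transport.sh <<'EOF'
#!/bin/sh
# $1 is the "host" (here: a container name); the rest is the remote command.
container="$1"
shift
# -i wires rsync's protocol stream to the remote rsync via stdin/stdout.
# This process must exit as soon as the exec'd command does,
# or rsync waits forever -- the bug discussed in this thread.
exec docker exec -i "$container" "$@"
EOF
chmod +x docker-transport.sh

# The original command then becomes (needs a running daemon, so commented out):
# rsync -e ./docker-transport.sh -av CONTAINERNAME:/DIR .
```

This is just the `-e` contract made explicit; it behaves the same as passing the quoted `docker ... exec -i` string directly.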
Yes, my last comment still stands. Reopen please.
Can you please try with the very, very latest - I just tried this and it seems to work just fine.
Sorry it took so long to get back to this issue, but I just retested it with the 1.8.1 RPM on CentOS 7. It still doesn't work -- same as before, rsync transfers the files but hangs forever and has to be killed. I also just tried the latest build of 1.9.0-dev from master.dockerproject.org (commit bba762b) with no success. I'm curious what's different about your environment that lets it work for you but not for me. I'm using two different physical servers, both running CentOS 7.1. The Docker daemons communicate with TLS using a CA and certificates I created following the instructions on the docker site. There isn't really anything special I'm doing inside the containers -- my target container is based on the centos:7 image from the public registry and is running an Apache process. If there's any other information I can provide to make it easier to track this down, please let me know!
Rsync still doesn't work for me. Probably still has to do with stdin handling.
I got the same issue with Docker 1.8.3 and Ubuntu Trusty.
Solved with …
This is definitely still an issue in Docker 1.9.1.
@glyph Please provide reproducible steps.
My mistake; this works locally but doesn't work against a swarm cluster; I was confused about which environment I was using when testing.

# Configure to use a docker-machine dev environment
$ eval "$(docker-machine env dev)"
$ echo test | docker run --rm -i debian bash -c 'echo start; echo "$(cat)"; echo end'
start
test
end
# configure to use a rackspace carina swarm environment
$ . ~/Downloads/cluster1/docker.env
$ echo test | docker run --rm -i debian bash -c 'echo start; echo "$(cat)"; echo end'
start

Here it hangs, and I have to kill it with …
@glyph Correct, this was fixed in swarm 1.0... possibly 1.0.1 for TLS conns. |
@cpuguy83 you wouldn't happen to have a link to the issue, would you? |
docker-archive/classicswarm#1305 (1.0.0) |
@cpuguy83 thanks a bunch! |
I am still experiencing problems with this in Docker 1.9.1. My environment is very simple, just two hosts running Docker Engine on the same network. They're both CentOS 7, identical patch levels, same image on both hosts. Not using Compose, Swarm, Network or anything else, just manually-managed containers. Both hosts use the same internal CA to listen on TLS. Docker Engine and rsync are installed inside the image, as is the CA cert, so the docker CLI works fine from inside the container. From server A, I start bash in container A and try to use rsync to copy files from container B on server B. For a very small amount of data, I see this:
Works fine. Total file size is about 2.5 KB. But when I try to transfer more, it breaks every time:
I've repeated this test many times, it never works. The total amount of data I would like to transfer is about 125 GB, but it always stops after a few tens of KB. Please reopen this issue.
@samatwork I don't see how your problem has to do with the exec instances not closing down when the container is stopped. |
@cpuguy83 I don't either, except that the high-level problem (using rsync between containers) is not fixed. It's true the exec instance no longer hangs forever but now it seems to exit too quickly. rsync reports just under 1 MB of data transferred, which feels suspiciously like a buffer size limit to me. When I first reported this problem back in August, I was directed to this ticket. Should I be reporting it somewhere else?
I am noticing the same issue, but even without using …
Anything new on this topic? I may have run into the same problem...
@MatthiasLohr please open a new issue; the issue being discussed here was resolved two years ago; if you're running into this, it's most likely a different issue.
Description of problem:

docker exec does not exit when the container process terminates. This only happens when running interactive, but with no tty, i.e.:

docker exec -i <container> ps

hangs until Ctrl-C or Enter is pressed. It seems input from STDIN is necessary for it to check the process state...

It works fine with

docker exec -it <container> ps

and

docker exec <container> ps

This is an issue when using it to run an rsync or scp command through an ssh tunnel (key-based forced commands). The sync / copy works fine but never exits...
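[Editor's note] The behavior described above can be mimicked without docker at all, assuming GNU coreutils `timeout`; the commands below are an illustrative stand-in, not Docker's actual code path. A client that keeps draining stdin after the "remote" command has finished blocks in exactly the same way:

```shell
# Stand-in for `docker exec -i <container> ps`: run a command, then keep
# reading stdin the way the docker client's attach loop does.
faulty_client='true; cat >/dev/null'

# An idle terminal sends no input; `sleep 5 |` emulates that, so the
# client hangs and `timeout` has to kill it after 1s (exit code 124):
sleep 5 | timeout 1 sh -c "$faulty_client"; echo "no-input exit=$?"

# Any stdin activity (here: an immediate EOF from echo) lets it return
# at once, matching the "press Enter and it exits" behavior reported:
echo | timeout 1 sh -c "$faulty_client"; echo "with-input exit=$?"
```

The first command prints exit=124 (killed while hung); the second prints exit=0 (returned promptly once stdin delivered something).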
docker version: (tested on 1.5 too)

docker info:

uname -a:

Steps to Reproduce:

docker exec -i <container> ps

Actual Results:

process does not exit

Expected Results:

process exits (returns to prompt)