
docker pull: always show me "Layer already being pulled by another client. Waiting." #15603

Closed
EasonYi opened this issue Aug 15, 2015 · 63 comments

@EasonYi

EasonYi commented Aug 15, 2015

Question

Yesterday I tried Docker 1.8 on my Mac Pro: I created two docker machines (default and dev) and then removed the dev machine.
Today, when I try to pull the redis image, I always get the "Layer already being pulled by another client. Waiting." message, even after rebooting my Mac. What should I do to solve it?

Environment

➜ ~ docker version
Client:
Version: 1.8.0
API version: 1.20
Go version: go1.4.2
Git commit: 0d03096
Built: Tue Aug 11 17:17:40 UTC 2015
OS/Arch: darwin/amd64

Server:
Version: 1.8.0
API version: 1.20
Go version: go1.4.2
Git commit: 0d03096
Built: Tue Aug 11 17:17:40 UTC 2015
OS/Arch: linux/amd64

➜ ~ docker info
Containers: 2
Images: 41
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 45
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.0.9-boot2docker
Operating System: Boot2Docker 1.8.0 (TCL 6.3); master : 7f12e95 - Tue Aug 11 17:55:16 UTC 2015
CPUs: 1
Total Memory: 1.956 GiB
Name: default
ID: FAS3:MCST:CB2E:IFIE:KTB6:EIDL:YGST:6I65:N64P:AQ6O:5J7N:OHJN
Debug mode (server): true
File Descriptors: 15
Goroutines: 25
System Time: 2015-08-15T02:03:44.917069573Z
EventsListeners: 0
Init SHA1:
Init Path: /usr/local/bin/docker
Docker Root Dir: /mnt/sda1/var/lib/docker
Username: easonyi
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox

➜ ~ uname -a
Darwin Eason 14.4.0 Darwin Kernel Version 14.4.0: Thu May 28 11:35:04 PDT 2015; root:xnu-2782.30.5~1/RELEASE_X86_64 x86_64

Step 1:

Last login: Fri Aug 14 23:35:21 on ttys006
------Welcome back!------ [2015-08-15 06:35:26]
➜ ~ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
default virtualbox Stopped
➜ ~ docker-machine start default
Starting VM...
Started machines may have new IP addresses. You may need to re-run the docker-machine env command.
➜ ~ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
default * virtualbox Running tcp://192.168.99.100:2376
➜ ~ eval $(docker-machine env default)
➜ ~ docker pull redis
Using default tag: latest
latest: Pulling from library/redis

a81e536d75d2: Pull complete
b5385b7b49ae: Pull complete
5eced7370894: Pull complete
9b5f6b31f2b8: Pull complete
68145731e6c3: Pull complete
e07b98fdf391: Pull complete
e53cade0786f: Pull complete
d41c1101f46b: Pulling fs layer
76ef21f24f06: Download complete
4736cd3389a7: Download complete
d794940dc881: Download complete
bfb2f87f9ad0: Download complete
1c03c2a5aa29: Pulling fs layer
2ff62b1c4295: Download complete
0ff407d5a7d9: Download complete
0ff407d5a7d9: Layer already being pulled by another client. Waiting.
60c52dbe9d91: Already exists
Pulling repository docker.io/library/redis

^C%
➜ ~ docker pull redis
Using default tag: latest
latest: Pulling from library/redis
d41c1101f46b: Pulling fs layer
76ef21f24f06: Pulling fs layer
d41c1101f46b: Verifying Checksum
d794940dc881: Layer already being pulled by another client. Waiting.
bfb2f87f9ad0: Layer already being pulled by another client. Waiting.
1c03c2a5aa29: Layer already being pulled by another client. Waiting.
2ff62b1c4295: Layer already being pulled by another client. Waiting.
0ff407d5a7d9: Layer already being pulled by another client. Waiting.
0ff407d5a7d9: Layer already being pulled by another client. Waiting.
60c52dbe9d91: Already exists
a81e536d75d2: Already exists
b5385b7b49ae: Already exists
5eced7370894: Already exists
9b5f6b31f2b8: Already exists
68145731e6c3: Already exists
e07b98fdf391: Already exists
e53cade0786f: Already exists
Pulling repository docker.io/library/redis

➜ ~ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
mysql               latest              c45e4ba02f47        2 days ago          283.8 MB
nginx               latest              6886fb5a9b8d        3 weeks ago         132.9 MB
<none>              <none>              e53cade0786f        3 weeks ago         100.1 MB

Step 2:

After rebooting my Mac Pro, I tried again and got the same error:

Last login: Sat Aug 15 07:58:54 on ttys000
------Welcome back!------ [2015-08-15 07:59:29]
➜ ~ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
default virtualbox Stopped
➜ ~ docker-machine start default
Starting VM...
Started machines may have new IP addresses. You may need to re-run the docker-machine env command.
➜ ~ eval $(docker-machine env default)
➜ ~ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
default * virtualbox Running tcp://192.168.99.100:2376
➜ ~ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
mysql               latest              c45e4ba02f47        2 days ago          283.8 MB
nginx               latest              6886fb5a9b8d        3 weeks ago         132.9 MB
<none>              <none>              e53cade0786f        3 weeks ago         100.1 MB
➜ ~ docker pull redis
Using default tag: latest
latest: Pulling from library/redis

d41c1101f46b: Pull complete
76ef21f24f06: Pull complete
4736cd3389a7: Pull complete
d794940dc881: Pull complete
bfb2f87f9ad0: Pull complete
1c03c2a5aa29: Pulling fs layer
2ff62b1c4295: Download complete
0ff407d5a7d9: Download complete
4c8cbfd2973e: Already exists
60c52dbe9d91: Already exists
a81e536d75d2: Already exists
b5385b7b49ae: Already exists
5eced7370894: Already exists
9b5f6b31f2b8: Already exists
68145731e6c3: Already exists
e07b98fdf391: Already exists
e53cade0786f: Already exists
Pulling repository docker.io/library/redis
Network timed out while trying to connect to https://index.docker.io/v1/repositories/library/redis/images. You may want to check your internet connection or if you are behind a proxy.
➜ ~ docker pull redis
Using default tag: latest
latest: Pulling from library/redis
1c03c2a5aa29: Pull complete
2ff62b1c4295: Layer already being pulled by another client. Waiting.
0ff407d5a7d9: Layer already being pulled by another client. Waiting.
4c8cbfd2973e: Already exists
60c52dbe9d91: Already exists
a81e536d75d2: Already exists
b5385b7b49ae: Already exists
5eced7370894: Already exists
9b5f6b31f2b8: Already exists
68145731e6c3: Already exists
e07b98fdf391: Already exists
e53cade0786f: Already exists
d41c1101f46b: Already exists
76ef21f24f06: Already exists
4736cd3389a7: Already exists
d794940dc881: Already exists
bfb2f87f9ad0: Already exists

^C%
➜ ~

@tonistiigi
Member

This looks very similar to what #15646 tries to solve. The only thing I don't understand is how this could appear on the first request after a restart, since we don't store the pulling pool on disk. Was that VM rebooted after the Mac restart, and not restored from a snapshot/saved state?

@powdahound

I ran into this problem as well and resolved it by stopping the VM, removing all my unused images, and then restarting the VM. Not sure if removing the images was necessary, but I'd wanted to clean them up anyway.

$ docker-machine stop default
$ docker images -q | xargs docker rmi
$ docker-machine start default

@EasonYi
Author

EasonYi commented Aug 23, 2015

Hi @tonistiigi,
Sorry for the late reply to your question "Was that VM rebooted after the Mac restart?"
The answer is yes, as you can see in my original post:

➜ ~ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
default virtualbox Stopped
➜ ~ docker-machine start default
Starting VM...
Started machines may have new IP addresses. You may need to re-run the docker-machine env command.

But this morning, after my Mac was upgraded and restarted, I pulled the redis image successfully.

Last login: Sun Aug 23 07:13:31 on console
------Welcome back!------ [2015-08-23 07:15:13]
➜  ~  docker-machine start default
Starting VM...
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
➜  ~  docker version
Client:
 Version:      1.8.0
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0d03096
 Built:        Tue Aug 11 17:17:40 UTC 2015
 OS/Arch:      darwin/amd64

Server:
 Version:      1.8.0
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0d03096
 Built:        Tue Aug 11 17:17:40 UTC 2015
 OS/Arch:      linux/amd64
➜  ~  docker info
Containers: 2
Images: 47
Storage Driver: aufs
 Root Dir: /mnt/sda1/var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 51
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.0.9-boot2docker
Operating System: Boot2Docker 1.8.0 (TCL 6.3); master : 7f12e95 - Tue Aug 11 17:55:16 UTC 2015
CPUs: 1
Total Memory: 1.956 GiB
Name: default
ID: 2J2O:SBU2:HKMV:6FQP:UHJS:7CEV:UKR5:IWR5:TO3J:5VS2:TABG:IJ7W
Debug mode (server): true
File Descriptors: 11
Goroutines: 16
System Time: 2015-08-22T23:56:05.020724404Z
EventsListeners: 0
Init SHA1:
Init Path: /usr/local/bin/docker
Docker Root Dir: /mnt/sda1/var/lib/docker
Username: easonyi
Registry: https://index.docker.io/v1/
Labels:
 provider=virtualbox
➜  ~  docker images
REPOSITORY                    TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
kalilinux/kali-linux-docker   latest              11cd69e79221        9 days ago          325.1 MB
mysql                         latest              c45e4ba02f47        10 days ago         283.8 MB
nginx                         latest              6886fb5a9b8d        5 weeks ago         132.9 MB
<none>                        <none>              1c03c2a5aa29        5 weeks ago         109.5 MB
➜  ~  docker pull redis
Using default tag: latest
latest: Pulling from library/redis
2ff62b1c4295: Pull complete
0ff407d5a7d9: Already exists
4c8cbfd2973e: Already exists
60c52dbe9d91: Already exists
a81e536d75d2: Already exists
b5385b7b49ae: Already exists
5eced7370894: Already exists
9b5f6b31f2b8: Already exists
68145731e6c3: Already exists
e07b98fdf391: Already exists
e53cade0786f: Already exists
d41c1101f46b: Already exists
76ef21f24f06: Already exists
4736cd3389a7: Already exists
d794940dc881: Already exists
bfb2f87f9ad0: Already exists
1c03c2a5aa29: Already exists
library/redis:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:7fd29d65ad8fe915de46ed3660b037bb7325a360fdd76bfdef3011094ebeb139
Status: Downloaded newer image for redis:latest
➜  ~  docker images
REPOSITORY                    TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
kalilinux/kali-linux-docker   latest              11cd69e79221        9 days ago          325.1 MB
mysql                         latest              c45e4ba02f47        10 days ago         283.8 MB
nginx                         latest              6886fb5a9b8d        5 weeks ago         132.9 MB
redis                         latest              0ff407d5a7d9        5 weeks ago         109.5 MB

@GordonTheTurtle

USER POLL

The best way to get notified when there are changes in this discussion is by clicking the Subscribe button in the top right.

The people listed below have appreciated your meaningful discussion with a random +1:

@zakyvr
@SaiOngole

@calavera calavera modified the milestones: 1.9.0, 1.8.2 Sep 10, 2015
@jacobsvante

+1 (Just testing the turtle!)

@arun-gupta
Contributor

I have only one Docker Machine, freshly booted. I tried to run the containers using https://github.com/javaee-samples/docker-java/blob/master/attendees/cicd/docker-compose.yml and got stuck at the message "Layer already being pulled by another client. Waiting.".

After restarting the Docker Machine, docker-compose up -d moved past that error but got stuck on a different image.

I restarted the Docker Machine again, and finally got it working.

Here are version details:

Containers: 10
Images: 127
Storage Driver: aufs
 Root Dir: /mnt/sda1/var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 147
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.0.9-boot2docker
Operating System: Boot2Docker 1.8.1 (TCL 6.3); master : 7f12e95 - Thu Aug 13 03:24:56 UTC 2015
CPUs: 1
Total Memory: 996.2 MiB
Name: docker-java
ID: EPX6:EEVU:T4KV:35OM:PZEK:SARX:UUNF:BNCN:IDLI:NI34:SGQE:UQPA
Debug mode (server): true
File Descriptors: 31
Goroutines: 48
System Time: 2015-09-16T14:58:20.615785119Z
EventsListeners: 0
Init SHA1: 
Init Path: /usr/local/bin/docker
Docker Root Dir: /mnt/sda1/var/lib/docker
Username: arungupta
Registry: https://index.docker.io/v1/
Labels:
 provider=virtualbox

@arun-gupta
Contributor

Building an image that inherits from jenkinsci/jenkins always causes the following issue:

Sending build context to Docker daemon 17.92 kB
Step 0 : FROM jenkinsci/jenkins
latest: Pulling from jenkinsci/jenkins
902b87aaaec9: Verifying Checksum 
9a61b6b1315e: Download complete 
1ff9f26f09fb: Downloading [>                                                  ] 199.6 kB/18.54 MB
607e965985c1: Pulling fs layer 
682b997ad926: Downloading [======================>                            ] 136.1 kB/303.1 kB
a594f78c2a03: Pulling fs layer 
8859a87b6160: Pulling fs layer 
9dd7ba0ee3fe: Pulling fs layer 
93934c1ae19e: Pulling fs layer 
2262501f7b5a: Pulling fs layer 
bfb63b0f4db1: Pulling fs layer 
49ebfec495e1: Pulling fs layer 
dffc38e078ec: Downloading [=============>                                     ] 135.8 kB/521.5 kB
85c4a7072f1d: Download complete 
8a65ac3e251d: Pulling fs layer 
35ca7575d3f1: Pulling fs layer 
3f88b332fe98: Pulling fs layer 
ccffd44f3697: Pulling fs layer 
ea6b780d316b: Pulling fs layer 
245f6094be5b: Pulling fs layer 
1ca6d57a59d5: Pulling fs layer 
1497cb6a4049: Pulling fs layer 
abc98d24647e: Pulling fs layer 
9560ae5b3e2d: Pulling fs layer 
e7f6b23ecf92: Pulling fs layer 
1f301f9e9e47: Download complete 
c7f666cbee49: Pulling fs layer 
ffab25bb53d2: Download complete 
514af8fa48be: Pulling fs layer 
34d222c4e2da: Pulling fs layer 
15719a5d4ea4: Pulling fs layer 
8f9e416ec514: Pulling fs layer 
24f3f3d22062: Pulling fs layer 
24f3f3d22062: Layer already being pulled by another client. Waiting. 

The Dockerfile is at https://github.com/javaee-samples/docker-java/blob/master/attendees/cicd/jenkins/Dockerfile. Just change the FROM line to jenkinsci/jenkins.
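For a quicker reproduction without the full project, building any image whose base is jenkinsci/jenkins should be enough (the tag name below is arbitrary):

$ printf 'FROM jenkinsci/jenkins\n' > Dockerfile
$ docker build -t jenkins-repro .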

@zhangchl007

I get the issue also. I tried restarting Docker, deleting images, and even rebooting the server, but the issue still exists:
af340544ed62: Pulling fs layer
af340544ed62: Layer already being pulled by another client. Waiting.

^Croot@Ubuntu-14-x8664:~# docker version
Client:
Version: 1.8.2
API version: 1.20
Go version: go1.4.2
Git commit: 0a8c2e3
Built: Thu Sep 10 19:19:00 UTC 2015
OS/Arch: linux/amd64

Server:
Version: 1.8.2
API version: 1.20
Go version: go1.4.2
Git commit: 0a8c2e3
Built: Thu Sep 10 19:19:00 UTC 2015
OS/Arch: linux/amd64
root@Ubuntu-14-x8664:~# uname -r
3.19.0-28-generic

@bahrmichael

Had the same issue pulling mysql.

root@drop:~# docker version
Client:
Version: 1.8.2
API version: 1.20
Go version: go1.4.2
Git commit: 0a8c2e3
Built: Thu Sep 10 19:19:00 UTC 2015
OS/Arch: linux/amd64

Server:
Version: 1.8.2
API version: 1.20
Go version: go1.4.2
Git commit: 0a8c2e3
Built: Thu Sep 10 19:19:00 UTC 2015
OS/Arch: linux/amd64

I killed the Docker process, removed all the images with docker images -q | xargs docker rmi, and then restarted the pull. It's working now.

@zhangchl007
Copy link

So did I!

@cybertk

cybertk commented Sep 22, 2015

Same issue here for 1.8.2

@george-videoamp

Seeing this issue as well when attempting to pull postgres:latest inside a Docker-in-Docker setup.

@njgraham

I've been evaluating Docker for our shop the past couple of days. I must say it's disheartening when the very first thing I try to do requires 13 restarts of the Docker daemon to complete successfully.

Windows + boot2docker VM and also LinuxMint (17.2):

docker@default:~$ docker version
Client:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0a8c2e3
 Built:        Thu Sep 10 19:10:10 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0a8c2e3
 Built:        Thu Sep 10 19:10:10 UTC 2015
 OS/Arch:      linux/amd64

@cpuguy83
Member

It's unfortunate that the patch which probably fixes this didn't make it into 1.8.2.
But I believe these issues are resolved on master. It's hard to tell, though, until there is widespread usage.

@pinnokio

This issue has become a real headache for me. I'm located in Ukraine. Two days ago I had this issue with only a few images; now I can't retrieve any images at all. I think it's somehow connected with Amazon CloudFront. I have a DigitalOcean droplet in the US, so I installed tinyproxy there and started the Docker daemon manually with the HTTP_PROXY environment variable pointing at it, and the image pulled successfully.
So I think this issue is on the Amazon side.
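Roughly what I did, in case it helps someone (the droplet address is a placeholder; depending on your setup you may also need HTTPS_PROXY, and you have to allow your host's IP in tinyproxy's config):

# on the DigitalOcean droplet (tinyproxy listens on port 8888 by default)
$ sudo apt-get install tinyproxy

# on the Docker host: stop the service and run the daemon by hand through the proxy
$ sudo service docker stop
$ sudo HTTP_PROXY=http://<droplet-ip>:8888 docker daemon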

@a7rk6s

a7rk6s commented Sep 24, 2015

On my slow/unreliable home connection I've had problems downloading images since the pre-1.0 days, and I still do. Large/complex images are hit or miss, and then I often run into this issue.

What I do is pull the image on a remote server, docker save it to a file, then rsync it over. This always works fine, and it's easy to cancel and restart. You would think that since Docker has knowledge of the layer structure it should be faster and more reliable than rsync, but it usually isn't.
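Roughly like this (host and image names are just examples):

# on the remote server with a good connection
$ docker pull redis
$ docker save -o redis.tar redis

# back on the local machine
$ rsync -avP remote-host:redis.tar .
$ docker load -i redis.tar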

Is there a way to serialize the download of layers? So it would start at the first layer I don't have and then continue from there, one at a time? That would probably help.

@CameronCarranza

Happened very frequently for me on OS X (with Docker Toolbox 1.8.1). It was definitely more apparent on a slower connection (I needed to reboot the VM roughly 5 times for a relatively small image), but it still happened pretty frequently on a 100 Mbit connection.

Since upgrading to 1.8.2 it seems to be less frequent (I also dumped all of my images and containers). I was able to download ubuntu:14.04 and tutum/wordpress without rebooting the VM once. For reference, yesterday I had to restart the VM about 20 times just to get the WordPress image.

It still happens occasionally on 1.8.2 on my MacBook (OS X 10.9.5), but now it usually only happens with very large images on a slower connection.

In terms of upgrading on OS X, I believe you have to download the whole Toolbox again and re-run the installer; then, when it's done, I believe you have to create a new machine to update the server version, so maybe give that a try. I believe you will lose your images in the transition using this method. You could also try docker-machine upgrade MACHINENAME; I just removed my machine and made a new one to get a fresh start.
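Removing and recreating the machine is roughly the following (the machine name and virtualbox driver are just what I happened to use):

$ docker-machine rm default
$ docker-machine create --driver virtualbox default
$ eval $(docker-machine env default)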

Version Info

Client:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0a8c2e3
 Built:        Thu Sep 10 19:10:10 UTC 2015
 OS/Arch:      darwin/amd64

Server:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0a8c2e3
 Built:        Thu Sep 10 19:10:10 UTC 2015
 OS/Arch:      linux/amd64
➜  ~  VBoxManage --version
5.0.2r102096

@voidengineer

I started the Docker daemon with debug logging enabled so I could see what happened right before the message "Layer already being pulled by another client. Waiting." appears.
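Something along these lines; the exact way to pass the flag depends on how the daemon is normally started on your distro:

$ sudo service docker stop
$ sudo docker daemon -D    # -D / --debug enables debug-level logging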

It seems Docker tries to restart the download of all layers (even those that are already pulled) on any error, without cleaning up the previous download.

For me this happened on two occasions:

  1. My private repository mirror timed out (because scanning for malware took too long). In this case Docker retries the download from the official repository (which is a bug of its own).
  2. I had no space left on the device where /var/lib/docker resides.

In both cases the message appears, and docker deadlocks itself.

@atabrizian

I am experiencing the same issue on Linux with version 1.8.2:

$ uname -a                                                                                                                                  [11:01:29]
Linux ighost 4.1.6-1-ARCH #1 SMP PREEMPT Mon Aug 17 08:52:28 CEST 2015 x86_64 GNU/Linux
$ docker version                                                                                                                            [11:01:39]
Client:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.5.1
 Git commit:   0a8c2e3-dirty
 Built:        Mon Sep 14 12:09:36 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.5.1
 Git commit:   0a8c2e3-dirty
 Built:        Mon Sep 14 12:09:36 UTC 2015
 OS/Arch:      linux/amd64

this is a sample of what happens:

$ docker pull centos                                                                                                                        [11:02:35]
Using default tag: latest
latest: Pulling from library/centos
47d44cb6f252: Pulling fs layer 
f6f39725d938: Layer already being pulled by another client. Waiting. 
f9a8cbc8dd13: Layer already being pulled by another client. Waiting. 
f37e6a610a37: Layer already being pulled by another client. Waiting. 
0f73ae75014f: Layer already being pulled by another client. Waiting. 
0f73ae75014f: Layer already being pulled by another client. Waiting. 
^C% 

What's the solution?!

@elyulka

elyulka commented Sep 26, 2015

I'm also having frequent trouble pulling images today.
As far as I can see, there are almost no free inodes left on the hard drive.
After running

$ docker rmi $(docker images --no-trunc -q)
$ docker-machine restart default

everything works like a charm.
It is a consistent problem in my experience: images eat too many inodes.

See issue:

$ docker version
Client:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0a8c2e3
 Built:        Thu Sep 10 19:10:10 UTC 2015
 OS/Arch:      darwin/amd64

Server:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0a8c2e3
 Built:        Thu Sep 10 19:10:10 UTC 2015
 OS/Arch:      linux/amd64

docker.log:

time="2015-09-26T19:48:01.944960324Z" level=error msg="error copying from layer download progress reader: write tcp 192.168.99.1:61335: broken pipe"
time="2015-09-26T19:48:05.940615137Z" level=debug msg="Calling POST /images/create"
time="2015-09-26T19:48:05.940649988Z" level=info msg="POST /v1.20/images/create?fromImage=php%3A5.4-cli"
time="2015-09-26T19:48:05.940713049Z" level=debug msg="Trying to pull php from https://registry-1.docker.io v2"
time="2015-09-26T19:48:05.989304109Z" level=debug msg="Fetched 1 base graphs at 2015-09-26 19:48:05.989288547 +0000 UTC"
time="2015-09-26T19:48:05.992594816Z" level=debug msg="Reloaded graph with 3 grants expiring at 2017-03-22 19:04:46.713978458 +0000 UTC"
time="2015-09-26T19:48:26.349295241Z" level=debug msg="Downloaded b4fb234b95b140b70f5a77d8d56739d85b34ad968013488ccb874a9990857625 to tempfile /mnt/sda1/var/lib/docker/tmp/GetImageBlob406045864"
time="2015-09-26T19:48:29.912252763Z" level=debug msg="Error trying v2 registry: ApplyLayer exit status 1 stdout:  stderr: open /usr/src/php/ext/spl/tests/recursiveIteratorIterator_endchildren_error.phpt: no space left on device"
docker@default:~$ sudo df -i
Filesystem              Inodes      Used Available Use% Mounted on
tmpfs                   127518      4466    123052   4% /
tmpfs                   127518         1    127517   0% /dev/shm
/dev/sda1              1218224   1203906     14318  99% /mnt/sda1
cgroup                  127518        11    127507   0% /sys/fs/cgroup
none                      1000         0      1000   0% /Users
overlay                1218224   1203906     14318  99% /mnt/sda1/var/lib/docker/overlay/d093cc262d0065032b7cff8b1f08adcff5ec5e2ab3a096e08c776153b9585b75/merged
overlay                1218224   1203906     14318  99% /mnt/sda1/var/lib/docker/overlay/ca72d1fea07d75501979c084dc324c9fd84a2c1a9536041a6ad38532d5ecc8f3/merged

@cpuguy83
Member

@elyulka excessive inode usage is a known issue with overlay fs.

@satakare

I am also facing the same issue: downloading any Docker image takes hours and the image never finishes downloading.

@tinco

tinco commented Sep 28, 2015

I get this issue even after rm -r /var/lib/docker when I run docker pull phusion/passenger-ruby22:0.9.16. It almost immediately says another client is already downloading the image, but there's nothing in ps aux | grep docker. Is there another place where Docker stores information about what it is downloading?
Edit: a reboot finally fixed the problem. There must have been some state somewhere else.

@tonistiigi
Member

@tinco The reason you get that message right away is that this image contains a duplicate layer. This should be unrelated to this issue and should not cause blocking on pulls. The issue that caused such layers to be created on push should be fixed by #14421, which is included in v1.8.


As for the issue that causes the blocking: it's caused by 1.8.2 not recovering from pull errors and should be fixed in master. Please let us know if you can still reproduce it on master or on the v1.9 RCs once they come out.

@d3vil-st

d3vil-st commented Oct 6, 2015

+1

Client:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0a8c2e3
 Built:        Thu Sep 10 19:19:00 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0a8c2e3
 Built:        Thu Sep 10 19:19:00 UTC 2015
 OS/Arch:      linux/amd64

@tallsam

tallsam commented Oct 6, 2015

+1

Client:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0a8c2e3
 Built:        Thu Sep 10 19:10:10 UTC 2015
 OS/Arch:      darwin/amd64

Server:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0a8c2e3
 Built:        Thu Sep 10 19:10:10 UTC 2015
 OS/Arch:      linux/amd64

@joegoggins

+1

$ docker version
Client:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0a8c2e3
 Built:        Thu Sep 10 19:10:10 UTC 2015
 OS/Arch:      darwin/amd64

Server:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0a8c2e3
 Built:        Thu Sep 10 19:10:10 UTC 2015
 OS/Arch:      linux/amd64

@michaeljs1990

For all the people +1'ing with client and server versions: can you follow up and say what resolved your issue, or whether it was just a matter of waiting some amount of time and trying again? Did you have to force-remove containers or reboot your VM? Information like that is much more helpful, since we can already see that the issue is occurring on the current client. If you just want to subscribe, use the button on the right. The times at which you hit the issue would also be helpful.

@antonienko

I haven't solved the issue, but what I do is restart the Docker service and try again. Eventually it gets done.

@dalbrekt

dalbrekt commented Oct 8, 2015

A workaround that works for me is to restart the docker daemon.
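Concretely, that is just something like the following (the exact command depends on the init system):

$ sudo service docker restart    # or: sudo systemctl restart docker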

@mallegrini

I haven't solved the issue either; even when I restart the Docker service and/or delete the VM, I have to try more than once, in a seemingly random pattern, until it eventually works.

@phlegx

phlegx commented Oct 8, 2015

I can confirm this, @mallegrini. Sometimes it is solved by restarting the Docker daemon and/or the VM, but it does not seem to be reproducible every time.

@mallegrini

Let me restate it: what I mean is that I need to try more than once in order to pull the image successfully. Sometimes (very, very rarely) it works on the first try. Most of the time I have to restart Docker, pull again (and fail), restart, pull, etc., until it finally succeeds.

I'm trying right now, but this is weird: on my Mac I also have a Linux VM with Docker 1.6.2 that pulled images flawlessly 3 days ago, while 1.8.2 couldn't. Today I'm getting a timeout error even when pulling images with Docker 1.6.2.
Here is what I get with Docker 1.6.2 (the command was docker pull wordpress):

Get https://index.docker.io/v1/repositories/library/wordpress/images: dial tcp 54.210.246.75:443: connection timed out

I ruled out server/network problems because Docker 1.6.2 used to work, but now I'm clueless...
On status.docker.com everything seems OK, and my network works like a charm too.

@TheSerapher

You may laugh, but I have been looking at this myself and on a whim started Docker under strace: strace -s 1024 -f docker daemon -D

Then I ran the client, trying to pull an image that I knew consistently failed unless the Docker daemon was restarted multiple times (up to 20 times until it worked). To my surprise, albeit slowly, the download finished successfully.

So maybe the slowdown introduced by strace helps the daemon avoid a race condition between the download/verification processes that causes the verification of an image to fail for no apparent reason.

I know this is a wild guess, but I figured I'd add it to this discussion.

EDIT: 4/4 successful downloads under strace now, versus 10/10 failed ones without strace.

EDIT 2: It seems this happens when two layers are in "Verifying Checksum" at the same time:

3c885c708a1d: Verifying Checksum
bfa41ba8ca44: Download complete
161a1b9923c6: Download complete
ce2c98c38047: Download complete
5ad0a66e80db: Download complete
feb6744c675a: Download complete
2f628a7fedae: Download complete
46d523682386: Download complete
fac32dbcdc13: Verifying Checksum
18b7a897b0d6: Download complete
20ea8020d723: Download complete
89b76ece1942: Download complete
b0ea693189dd: Download complete
6485202bf49b: Download complete
6a3c2bc61e0e: Download complete
0612e71ec82b: Download complete
b2fff81ebb6c: Download complete
be1ff97089b4: Download complete

Checking the debug log of the running daemon, I can confirm that these two failed the checksum check:

[...]
DEBU[0002] pulling blob "sha256:9e6c383e25b281ce4c3a9dfe9be4d280e6cacaa1b20aafcfe0d8d8645d033989" to 3c885c708a1d664dfcb62503d1af04c907048ccf142100761b8ee79755ca9c01
[...]
DEBU[0002] pulling blob "sha256:54b2c4728fc2566bec2963eac578b15d5f360d11c6111c3853003791bce638cc" to fac32dbcdc1319fcab46e7173791884ead6813f8b02f6a5bf8763e1ff916de85
[...]
ERRO[0004] filesystem layer verification failed for digest sha256:54b2c4728fc2566bec2963eac578b15d5f360d11c6111c3853003791bce638cc
ERRO[0004] filesystem layer verification failed for digest sha256:9e6c383e25b281ce4c3a9dfe9be4d280e6cacaa1b20aafcfe0d8d8645d033989
DEBU[0004] Error trying v2 registry: filesystem layer verification failed for digest sha256:9e6c383e25b281ce4c3a9dfe9be4d280e6cacaa1b20aafcfe0d8d8645d033989
DEBU[0004] Trying to pull [REDACTED] from https://[REDACTED] v1
DEBU[0004] hostDir: /etc/docker/certs.d/[REDACTED]
DEBU[0004] attempting v2 ping for registry endpoint https://[REDACTED]/v2/
ERRO[0004] error copying from layer download progress reader: download canceled
DEBU[0004] [registry] Calling GET https://[REDACTED]/v1/repositories/[REDACTED]/images
DEBU[0004] Not continuing with error: Error: image [REDACTED]:bc13ed81 not found

This in turn causes the system to fall back to the v1 protocol, which then errors out with an "image not found" error. If I am not mistaken, something in this parallel processing of checksums is failing and causing these issues?

@rodolfo42

For me, this issue only happens when I'm impatient and hit Ctrl+C before docker pull finishes. Afterwards, every pull of the same image results in "already being pulled by another client". Restarting boot2docker once solves it 100% of the time.
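i.e. nothing fancier than:

$ boot2docker restart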

@tallsam

tallsam commented Oct 8, 2015

I get this issue with dockie/dockie but not with the official ubuntu image. I'm running with docker machine. Looking into the debug log:

time="2015-10-08T11:29:38.314214375Z" level=debug msg="Calling POST /images/create"
time="2015-10-08T11:29:38.314257223Z" level=info msg="POST /v1.20/images/create?fromImage=dockie%2Fdockie%3Alatest"
time="2015-10-08T11:29:38.314316280Z" level=debug msg="Trying to pull dockie/dockie from https://registry-1.docker.io v2"
time="2015-10-08T11:29:40.467268096Z" level=debug msg="Pulling tag from V2 registry: \"latest\""
time="2015-10-08T11:29:46.360937842Z" level=debug msg="Verification failed for /dockie/dockie using key GSIK:IYVL:MVIH:WHHV:7WRA:QS6Y:ESZB:UKHL:X6EI:7YB5:7S64:TLZN"
time="2015-10-08T11:29:46.360976872Z" level=debug msg="Key check result: not verified"

@TheSerapher

@rodolfo42 I should have clarified: this happens even without aborting a download. Checksum validation simply fails; maybe I should have opened or checked for another ticket?

@icecrime
Contributor

icecrime commented Oct 9, 2015

Hello all! I believe this should be fixed on master by #15489 (and will ship in a few weeks as part of Docker 1.9.0).

Can you please try with the current master, or with the nightly builds available on this page?

I'm tentatively closing this, but keeping the tab open in case it breaks ;-)

@icecrime icecrime closed this as completed Oct 9, 2015
@dmitrym0

@icecrime Is there an associated version of boot2docker.iso? After getting docker from your link, I'm getting:

~ ❯❯❯ docker version                                                                                                                                                                                    ⏎
Client:
 Version:      1.9.0-dev
 API version:  1.21
 Go version:   go1.4.3
 Git commit:   6e12d9f
 Built:        Sun Oct 11 19:48:14 UTC 2015
 OS/Arch:      darwin/amd64

Error response from daemon: client is newer than server (client API version: 1.21, server API version: 1.20)

@dmitrym0

FWIW, I installed docker 1.9 and pulled boot2docker.iso from here and my issue went away.

@thaJeztah
Member

Thanks for testing @dmitrym0 !

@tallsam

tallsam commented Oct 12, 2015

Works for me too.

@icecrime
Contributor

👍

@xiaods
Contributor

xiaods commented Oct 14, 2015

thanks, can't wait to use it

@Terry-Weymouth

Just a note, which may help others. In my case, the error with the message 'Layer already being pulled by another client. Waiting.' appears to have been caused by using the boot2docker command instead of docker-machine (or perhaps by intermixing the two). I removed /usr/local/bin/boot2docker, deleted ~/.docker, deleted all docker-related VMs, reinstalled from the Docker Toolbox, built the 'default' docker-machine, and have not had this problem since. Docker version (client/server) 1.8.3 (built from commit f4bf5c7); docker-machine version 0.4.1 (e2c88d6).
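In shell terms the cleanup was roughly the following (paths and machine name as on my setup; the VM deletion itself was done through VirtualBox):

$ rm /usr/local/bin/boot2docker
$ rm -rf ~/.docker
# delete the old docker-related VMs, reinstall the Docker Toolbox, then:
$ docker-machine create --driver virtualbox default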

@michaeljs1990

@Terry-Weymouth likely a coincidence. I have only ever had docker-machine installed on my computer and I have run into this issue 4 or 5 times.

@elbow-jason

I have been having this issue with docker-machine. A simple stop and start seems to resolve the issue for now.

@phpguru

phpguru commented Dec 16, 2015

@powdahound I got an error trying to xargs docker rmi after the machine was stopped. This worked though, thanks for the tip!

$ docker-machine stop default
$ docker-machine start default
$ docker images -q | xargs docker rmi

@asheshambasta

This happens quite often on 1.8.2 when someone hits Ctrl+C on a docker pull command, and since this is our dev environment, with other devs running other services on Docker, restarting the Docker daemon is absolutely not an option.
