"Error response from daemon: layer does not exist" when performing "docker images" concurrently to image creation #21215

Open
trianglee opened this issue Mar 15, 2016 · 9 comments
Labels: kind/bug, version/1.10

@trianglee

Output of docker version:

Client:
 Version:      1.10.3
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   20f81dd
 Built:        Thu Mar 10 21:49:11 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.10.3
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   20f81dd
 Built:        Thu Mar 10 21:49:11 2016
 OS/Arch:      linux/amd64

Output of docker info:

Containers: 3
 Running: 0
 Paused: 0
 Stopped: 3
Images: 4
Server Version: 1.10.3
Storage Driver: aufs
 Root Dir: /tmp/docker1.graph/aufs
 Backing Filesystem: extfs
 Dirs: 13
 Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
 Volume: local
 Network: bridge null host
Kernel Version: 3.13.0-79-generic
Operating System: Ubuntu 14.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 125.8 GiB
Name: prov-dev
ID: N5JQ:J4OH:U2D2:EAHM:K2RK:L5TI:5RBL:IMCE:72AO:7NHA:ERK3:EKC7
WARNING: No swap limit support

Additional environment details (AWS, VirtualBox, physical, etc.):
Physical machine.

Steps to reproduce the issue:

  1. Create images, and concurrently run "docker images".

This can be reproduced by running the following two scripts at the same time -

#!/bin/bash

ITERATIONS=100
NAME=dummy

for i in `seq -s' ' 1 $ITERATIONS` ; do

        docker build -t $NAME --no-cache .
        if [ $? -ne 0 ] ; then
                echo "Error in \"docker build\"."
                exit 1
        fi

        docker rmi $NAME
        if [ $? -ne 0 ] ; then
                echo "Error in \"docker rmi\"."
                exit 1
        fi

done

And, at the same time -

#!/bin/bash

ITERATIONS=10000

for i in `seq -s' ' 1 $ITERATIONS` ; do
        docker images
        if [ $? -ne 0 ] ; then
                echo "Error in \"docker images\"."
                exit 1
        fi
done

With this simple Dockerfile -

FROM ubuntu

RUN echo Hello > hello1.txt
RUN echo Hello > hello2.txt
RUN echo Hello > hello3.txt
RUN echo Hello > hello4.txt
RUN echo Hello > hello5.txt
RUN echo Hello > hello6.txt
RUN echo Hello > hello7.txt
RUN echo Hello > hello8.txt
RUN echo Hello > hello9.txt
RUN echo Hello > hello10.txt
RUN echo Hello > hello11.txt

Describe the results you received:
After a few runs (within a minute, usually sooner), "docker images" returns -

Error response from daemon: layer does not exist

Describe the results you expected:
Such errors shouldn't happen even when using Docker concurrently.

Additional information you deem important (e.g. issue happens only occasionally):

The problem was originally hit in a much more complicated scenario; the above is a narrowed-down version to simplify debugging.
The problem also reproduces when using "docker-py", which doesn't use the "docker" binary directly, so this is probably a daemon issue rather than a client issue.

Docker daemon log shows -

ERRO[0530] Handler for GET /v1.22/images/json returned error: layer does not exist

The problem also reproduces with a fairly recent nightly build -

 Version:      1.11.0-dev
 API version:  1.23
 Go version:   go1.6
 Git commit:   ed6fb41
 Built:        Sat Mar 12 22:53:45 2016
 OS/Arch:      linux/amd64

The problem reproduces with both the aufs and devicemapper storage drivers.

@thaJeztah added the kind/bug label Mar 15, 2016
@thaJeztah
Member

Sounds like docker images is trying to get information about an image that was just deleted in the other shell. I wonder how we'd be able to prevent this without locking everything (and potentially causing more nasty side-effects).

/cc @cpuguy83 @tonistiigi any thoughts?

@vikstrous
Contributor

I ran into the same thing on 1.11.1 while doing docker images -q | xargs docker rmi -f in one terminal and docker images in another.

@thaJeztah
Member

@vikstrous can you try to reproduce on the 1.11.2 release candidate? We fixed some locking issues in that release, so possibly it's resolved.

@vikstrous
Contributor

It still happens, at least with my repro... On overlayfs, deletions take longer, so that might be a factor in my case.

docker version
Client:
 Version:      1.11.2-rc1
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   1179573
 Built:        Sat May 28 20:18:33 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.11.2-rc1
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   1179573
 Built:        Sat May 28 20:18:33 2016
 OS/Arch:      linux/amd64
docker info
Containers: 336
 Running: 16
 Paused: 0
 Stopped: 320
Images: 580
Server Version: 1.11.2-rc1
Storage Driver: overlay
 Backing Filesystem: extfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: bridge null host overlay
Kernel Version: 4.5.4-1-ARCH
Operating System: Arch Linux
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.709 GiB
Name: compooter
ID: GKSI:YHZC:YPNI:PAGO:EYPD:AYME:K6BY:BAIH:EBKQ:SOAK:WUFP:6MJM
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): true
 File Descriptors: 80
 Goroutines: 164
 System Time: 2016-05-30T15:39:45.002554195-07:00
 EventsListeners: 1
Username: viktorstanchev
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Cluster store: etcd://172.17.0.1:12379
Cluster advertise: 172.17.0.1:12376

Here are the steps:

  1. Get a bunch of images
  2. Run docker images -q | xargs docker rmi
  3. While that's running, in another terminal run docker images. Result:
    Error response from daemon: layer does not exist

I would say this particular behaviour is not a real problem because you can just wait until rmi is done, but it might have implications I don't know about.
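
If the failure really is just this transient race with a concurrent rmi, a client-side retry is enough to work around it. A minimal sketch in the style of the repro scripts above (the retry count and sleep interval are arbitrary, not values from this thread):

#!/bin/bash

# Workaround sketch: retry "docker images" a few times instead of treating
# the transient "layer does not exist" error as fatal.

MAX_RETRIES=5

for i in `seq -s' ' 1 $MAX_RETRIES` ; do

        docker images
        if [ $? -eq 0 ] ; then
                exit 0
        fi

        echo "\"docker images\" failed (attempt $i), retrying..."
        sleep 1

done

echo "\"docker images\" still failing after $MAX_RETRIES attempts."
exit 1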

@dhs-rec

dhs-rec commented Sep 5, 2016

Just stumbled across this myself. While cleaning up old images and containers, the system was not only unable to list available images, but also unable to create new containers based on images that were still completely available. So I'd say it has implications.

This is 1.11.2, running on Ubuntu Xenial.

@tonistiigi self-assigned this Sep 6, 2016
k8s-github-robot pushed a commit to kubernetes/kubernetes that referenced this issue Dec 16, 2016
Automatic merge from submit-queue (batch tested with PRs 38888, 38895)

InodeEviction Test failing because of docker race condition.

The inode eviction test was failing because of a bug in moby/moby#21215.
The inode eviction test triggers garbage collection of images, which causes an error if kubernetes tries to run "docker images" at the same time.
This is not relevant to the inode eviction test, so do not cause the test to fail if this race occurs.
@Random-Liu
@illegalnumbers

I had this same issue.

@hayderimran7

hayderimran7 commented Feb 13, 2018

Just had the same issue, but fixed it by doing the following:

  1. Kill any process doing docker rmi: kill -9 $(ps aux | grep -v grep | grep "rmi" | awk '{print $2}')
  2. Restart Docker: service docker restart
  3. Check that docker images works now: docker images
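
For reference, the same workaround as a single script. This is a rough sketch only: the ps/grep pattern is as fragile as any ps parsing, and service docker restart assumes a sysvinit/upstart host (on systemd hosts use systemctl restart docker instead).

#!/bin/bash

# Workaround sketch based on the steps above.

# 1. Kill any in-flight "docker rmi" processes.
kill -9 $(ps aux | grep -v grep | grep "rmi" | awk '{print $2}')

# 2. Restart the Docker daemon.
service docker restart

# 3. Check that "docker images" responds again.
docker images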

@anshupitlia

I faced the issue while using overlayfs (overlay2). The daemon was unable to rename one of the layers and gave this error at the end. Fixed it by restarting Docker.

@ghost

ghost commented Oct 30, 2020

Is this question connected to this issue?
