docker fails to mount the block device for the container on devicemapper #4036

Closed
unclejack opened this Issue Feb 10, 2014 · 391 comments

@unclejack
Contributor

unclejack commented Feb 10, 2014

When running something like for i in {0..100}; do docker run busybox echo test; done with Docker running on devicemapper, errors are thrown and containers fail to run:

2014/02/10 9:48:42 Error: start: Cannot start container 56bee8c4da5bd5641fc42405c742083b418ca14ddfb4a3e632955e236e23c284: Error getting container 56bee8c4da5bd5641fc42405c742083b418ca14ddfb4a3e632955e236e23c284 from driver devicemapper: Error mounting '/dev/mapper/docker-8:1-4980769-56bee8c4da5bd5641fc42405c742083b418ca14ddfb4a3e632955e236e23c284' on '/var/lib/docker/devicemapper/mnt/56bee8c4da5bd5641fc42405c742083b418ca14ddfb4a3e632955e236e23c284': no such file or directory
2014/02/10 9:48:42 Error: start: Cannot start container b90b4385778142aab5251846460008e5c4eb9fe1e7ec82f07d06f1de823bd914: Error getting container b90b4385778142aab5251846460008e5c4eb9fe1e7ec82f07d06f1de823bd914 from driver devicemapper: Error mounting '/dev/mapper/docker-8:1-4980769-b90b4385778142aab5251846460008e5c4eb9fe1e7ec82f07d06f1de823bd914' on '/var/lib/docker/devicemapper/mnt/b90b4385778142aab5251846460008e5c4eb9fe1e7ec82f07d06f1de823bd914': no such file or directory
2014/02/10 9:48:43 Error: start: Cannot start container ca53b3b21c92ffb17ad15c1088be293260ea240abdf25db7e5aadc11517cf93c: Error getting container ca53b3b21c92ffb17ad15c1088be293260ea240abdf25db7e5aadc11517cf93c from driver devicemapper: Error mounting '/dev/mapper/docker-8:1-4980769-ca53b3b21c92ffb17ad15c1088be293260ea240abdf25db7e5aadc11517cf93c' on '/var/lib/docker/devicemapper/mnt/ca53b3b21c92ffb17ad15c1088be293260ea240abdf25db7e5aadc11517cf93c': no such file or directory
test
2014/02/10 9:48:43 Error: start: Cannot start container 1e1e06044711e73cede8ede10547de7e270c33fac7ad5e60a8cb23246950adf3: Error getting container 1e1e06044711e73cede8ede10547de7e270c33fac7ad5e60a8cb23246950adf3 from driver devicemapper: Error mounting '/dev/mapper/docker-8:1-4980769-1e1e06044711e73cede8ede10547de7e270c33fac7ad5e60a8cb23246950adf3' on '/var/lib/docker/devicemapper/mnt/1e1e06044711e73cede8ede10547de7e270c33fac7ad5e60a8cb23246950adf3': no such file or directory

Fedora 20 with kernel 3.12.9 doesn't seem to be affected.

kernel version, distribution, docker info and docker version:

3.11.0-15-generic #25~precise1-Ubuntu SMP Thu Jan 30 17:39:31 UTC 2014 x86_64 x86_64
Ubuntu 12.04.4
 docker info
Containers: 101
Images: 44
Driver: devicemapper
 Pool Name: docker-8:1-4980769-pool
 Data file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
 Data Space Used: 3234.9 Mb
 Data Space Total: 102400.0 Mb
 Metadata Space Used: 6.9 Mb
 Metadata Space Total: 2048.0 Mb

Client version: 0.8.0-dev
Go version (client): go1.2
Git commit (client): 695719b
Server version: 0.8.0-dev
Git commit (server): 695719b
Go version (server): go1.2
Last stable version: 0.8.0

The Docker binary is actually master with PR #4017 merged.

/cc @alexlarsson

@discordianfish
Contributor

discordianfish commented Feb 11, 2014

Same here, running Ubuntu precise with the lts-raring kernel (3.8.0-35-generic).

@alexlarsson
Contributor

alexlarsson commented Feb 11, 2014

This is due to a race with udev. The problem is that starting a container creates a dm device, activates it, immediately deactivates it, and then activates it again. This races with udev's device-node creation, such that you end up in a state where the device node created by udev has been removed by docker.
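
To make the interleaving easier to picture, here is a toy Go sketch of the race described above: docker's quick activate/deactivate/activate cycle overlaps with udev's asynchronous event handling, so udev's late cleanup removes the node docker just re-created. Everything below (types, timing, output) is illustrative only, not real docker or udev code.

// Toy model of the udev race: a shared "device node" plus a late,
// asynchronous udev cleanup. Illustrative only.
package main

import (
    "fmt"
    "sync"
    "time"
)

type deviceNode struct {
    mu     sync.Mutex
    exists bool
}

func (n *deviceNode) create(who string) {
    n.mu.Lock()
    defer n.mu.Unlock()
    n.exists = true
    fmt.Println(who, "created the node")
}

func (n *deviceNode) remove(who string) {
    n.mu.Lock()
    defer n.mu.Unlock()
    n.exists = false
    fmt.Println(who, "removed the node")
}

func main() {
    node := &deviceNode{}

    // docker: activate, immediately deactivate, then activate again.
    node.create("docker (1st activation)")
    node.remove("docker (deactivation)")
    node.create("docker (2nd activation)")

    // udev processes the earlier events asynchronously; its cleanup for
    // the deactivation lands *after* docker's second activation.
    var wg sync.WaitGroup
    wg.Add(1)
    go func() {
        defer wg.Done()
        time.Sleep(10 * time.Millisecond)
        node.remove("udev (late rule processing)")
    }()
    wg.Wait()

    // The device is active, but its /dev/mapper node is gone, so the
    // subsequent mount fails with "no such file or directory".
    fmt.Println("node exists after the dust settles:", node.exists)
}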

alexlarsson added a commit to alexlarsson/docker that referenced this issue Feb 11, 2014

Avoid extra mount/unmount during container registration
Runtime.Register() called driver.Get()/Put() in order to read back the
basefs of the container. However, this is not needed, as the basefs
is read during container.Mount() anyway, and basefs is only valid
while mounted (and all current calls satisfy this).

This seems minor, but this is actually problematic, as the Get/Put
pair will create a spurious mount/unmount cycle that is not needed and
slows things down. Additionally it will create a spurious
devicemapper activate/deactivate cycle that causes races with udev as
seen in moby#4036.

With this change devicemapper is now race-free, and container startup
is slightly faster.

Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
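
For readers who don't know these code paths, a rough Go sketch of the idea in the commit message above: registration no longer mounts the container just to learn basefs; the field is filled in when the container is actually mounted. All names below are invented for illustration and are not the real moby/docker API.

// Hypothetical sketch of the registration change. Not the real moby code.
package main

import "fmt"

// driver stands in for a graph driver: Get activates + mounts a device,
// Put unmounts + deactivates it.
type driver struct{ cycles int }

func (d *driver) Get(id string) string { d.cycles++; return "/var/lib/docker/devicemapper/mnt/" + id }
func (d *driver) Put(id string)        {}

type container struct {
    ID     string
    basefs string
}

// Before: Register() performed a Get/Put pair only to learn basefs,
// which added a spurious activate/deactivate cycle per container.
func registerBefore(d *driver, c *container) {
    c.basefs = d.Get(c.ID)
    d.Put(c.ID)
}

// After: Register() does nothing driver-related; basefs is filled in by
// Mount(), and it is only considered valid while the container is mounted.
func (c *container) Mount(d *driver) {
    c.basefs = d.Get(c.ID)
}

func main() {
    d := &driver{}
    c := &container{ID: "abc123"}
    registerBefore(d, c) // the extra mount cycle the commit removes
    c.Mount(d)           // the mount that actually runs the container
    fmt.Println("activate/mount cycles with the old path:", d.cycles)
}
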
@unclejack
Contributor

unclejack commented Feb 11, 2014

@alexlarsson Could we work around it somehow? It looks like every non-Fedora system has this problem.

@alexlarsson
Contributor

alexlarsson commented Feb 11, 2014

@unclejack #4067 fixes it

@discordianfish
Contributor

discordianfish commented Feb 12, 2014

The problem seems to be fixed for runs, but not for builds. When building this Dockerfile:

FROM busybox
RUN  echo hello world

docker fails with the same error:

2014/02/12 03:30:52 build: Error getting container c9bfa29eaa875796895e6e1994b5437d983833da3249d4ea3337e3f560fab5af from driver devicemapper: Error mounting '/dev/mapper/docker-252:0-5767278-c9bfa29eaa875796895e6e1994b5437d983833da3249d4ea3337e3f560fab5af' on '/var/lib/docker/devicemapper/mnt/c9bfa29eaa875796895e6e1994b5437d983833da3249d4ea3337e3f560fab5af': no such file or directory
@alexlarsson
Contributor

alexlarsson commented Feb 12, 2014

@discordianfish I'll have a look at that.

alexlarsson added a commit to alexlarsson/docker that referenced this issue Feb 12, 2014

Avoid extra mount/unmount during build
CmdRun() calls first run() and then wait() to wait for it to exit,
then it runs commit(). The run command will mount the container and
the container exiting will unmount it. Then the commit will
immediately mount it again to do a diff.

This seems minor, but this is actually problematic, as the Get/Put
pair will create a spurious mount/unmount cycle that is not needed and
slows things down. Additionally it will create a spurious
devicemapper activate/deactivate cycle that causes races with udev as
seen in moby#4036.

To ensure that we only unmount once we split up run() into create()
and run() and reference the mount until after the commit().

With this change docker build on devicemapper is now race-free, and
slightly faster.

Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
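
As a hedged Go illustration of the build-path change described above (hypothetical names, not the actual moby build code): rather than run() mounting, the exit unmounting, and commit() mounting again, a single mount reference is taken up front and released only after the commit.

// Illustrative only: one mount reference spanning run + commit.
package main

import "fmt"

type buildContainer struct {
    id     string
    cycles int // how many mount/unmount (and dm activate/deactivate) cycles happened
}

func (c *buildContainer) mount()   { c.cycles++; fmt.Println("mount", c.id) }
func (c *buildContainer) unmount() { fmt.Println("unmount", c.id) }

// Before: run mounts, the exit unmounts, commit mounts again to diff.
func buildStepBefore(c *buildContainer) {
    c.mount()   // run()
    c.unmount() // container exits
    c.mount()   // commit() diffs the filesystem
    c.unmount()
}

// After: create() takes the reference, run/wait/commit reuse it, and the
// single unmount happens only after commit() is done.
func buildStepAfter(c *buildContainer) {
    c.mount() // create()
    defer c.unmount()
    fmt.Println("run, wait and commit all use the existing mount of", c.id)
}

func main() {
    before := &buildContainer{id: "step-before"}
    after := &buildContainer{id: "step-after"}
    buildStepBefore(before)
    buildStepAfter(after)
    fmt.Println("cycles:", before.cycles, "vs", after.cycles) // 2 vs 1
}
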
@alexlarsson
Contributor

alexlarsson commented Feb 12, 2014

#4096 fixes a similar spurious mount/unmount cycle during build.

@alexlarsson
Contributor

alexlarsson commented Feb 12, 2014

While the above fixes will fix this for most people, we should probably leave this open to track the actual problem of the udev race, as it would be good to have a fix for that too.

@discordianfish
Contributor

discordianfish commented Feb 12, 2014

The above fixes are a big improvement, but when restarting containers it sometimes still happens:

docker ps -q|xargs docker restart
Error: restart: Cannot restart container 3b96cdfee1eb: Error getting container 3b96cdfee1eb5e5b70d94d9a30e0c1cf9e9eba83901d63fd3e509df81cef8627 from driver devicemapper: Error mounting '/dev/mapper/docker-252:0-264246-3b96cdfee1eb5e5b70d94d9a30e0c1cf9e9eba83901d63fd3e509df81cef8627' on '/var/lib/docker/devicemapper/mnt/3b96cdfee1eb5e5b70d94d9a30e0c1cf9e9eba83901d63fd3e509df81cef8627': no such file or directory
Error: restart: Cannot restart container d968669da4ce: Error getting container d968669da4ce5632416e55fac56666c588e8ed236874394e9bbfa5e3561acb81 from driver devicemapper: Error mounting '/dev/mapper/docker-252:0-264246-d968669da4ce5632416e55fac56666c588e8ed236874394e9bbfa5e3561acb81' on '/var/lib/docker/devicemapper/mnt/d968669da4ce5632416e55fac56666c588e8ed236874394e9bbfa5e3561acb81': no such file or directory
96fd55c11524
6c0860bb723d

unclejack added a commit to unclejack/moby that referenced this issue Feb 14, 2014

Avoid extra mount/unmount during build
CmdRun() calls first run() and then wait() to wait for it to exit,
then it runs commit(). The run command will mount the container and
the container exiting will unmount it. Then the commit will
immediately mount it again to do a diff.

This seems minor, but this is actually problematic, as the Get/Put
pair will create a spurious mount/unmount cycle that is not needed and
slows things down. Additionally it will create a spurious
devicemapper activate/deactivate cycle that causes races with udev as
seen in moby#4036.

To ensure that we only unmount once we split up run() into create()
and run() and reference the mount until after the commit().

With this change docker build on devicemapper is now race-free, and
slightly faster.

Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)

unclejack added a commit to unclejack/moby that referenced this issue Feb 14, 2014

Avoid extra mount/unmount during container registration
Runtime.Register() called driver.Get()/Put() in order to read back the
basefs of the container. However, this is not needed, as the basefs
is read during container.Mount() anyway, and basefs is only valid
while mounted (and all current calls satisfy this).

This seems minor, but this is actually problematic, as the Get/Put
pair will create a spurious mount/unmount cycle that is not needed and
slows things down. Additionally it will create a spurious
devicemapper activate/deactivate cycle that causes races with udev as
seen in moby#4036.

With this change devicemapper is now race-free, and container startup
is slightly faster.

Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)

crosbymichael added a commit to crosbymichael/docker that referenced this issue Feb 15, 2014

Avoid extra mount/unmount during container registration
Runtime.Register() called driver.Get()/Put() in order to read back the
basefs of the container. However, this is not needed, as the basefs
is read during container.Mount() anyway, and basefs is only valid
while mounted (and all current calls satisfy this).

This seems minor, but this is actually problematic, as the Get/Put
pair will create a spurious mount/unmount cycle that is not needed and
slows things down. Additionally it will create a spurious
devicemapper activate/deactivate cycle that causes races with udev as
seen in moby#4036.

With this change devicemapper is now race-free, and container startup
is slightly faster.

Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)

alexlarsson added a commit to alexlarsson/docker that referenced this issue Feb 17, 2014

Avoid temporarily unmounting the container when restarting it
Stopping the container will typically cause it to unmount; to keep it mounted
over the stop/start cycle we acquire a temporary reference to it during this time.

This helps with moby#4036

Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
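
A hedged Go sketch of what the restart fix above amounts to: an extra reference on the container's mount is taken before stopping and released only after the container is running again, so the dm device is never deactivated in the middle of a restart. The refs map and function names are invented for illustration.

// Illustrative only: hold a temporary reference across stop + start.
package main

import "fmt"

type graphDriver struct{ refs map[string]int }

func (d *graphDriver) Get(id string) { d.refs[id]++ }

func (d *graphDriver) Put(id string) {
    d.refs[id]--
    if d.refs[id] == 0 {
        // This is the deactivate that used to race with udev mid-restart.
        fmt.Println("deactivate dm device for", id)
    }
}

func restart(d *graphDriver, id string, stop, start func()) {
    d.Get(id)       // temporary reference keeps the device active
    defer d.Put(id) // released only once the container is running again
    stop()
    start()
}

func main() {
    d := &graphDriver{refs: map[string]int{"c1": 1}} // c1 is running, holding one ref
    restart(d, "c1",
        func() { d.Put("c1") }, // stopping drops the container's own ref
        func() { d.Get("c1") }) // starting takes a fresh one
    fmt.Println("refs on c1 after restart:", d.refs["c1"]) // never dropped to 0 in between
}
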
@alexlarsson
Contributor

alexlarsson commented Feb 17, 2014

@discordianfish The restart issue should be fixed in #4180

@unclejack
Contributor

unclejack commented Feb 21, 2014

These issues aren't occurring any more on any of my test systems.

@unclejack unclejack closed this Feb 21, 2014

@alexlarsson
Contributor

alexlarsson commented Feb 21, 2014

I believe it is still possible to run into the race if you e.g. do:
docker wait $id
docker diff $id
or something like that, where the container exits and we immediately start an operation that mounts it.
However, the race window is pretty small, so maybe it's not possible to hit in practice...

@alexlarsson
Contributor

alexlarsson commented Feb 21, 2014

Short description of what I believe is happening:

fish_: its a lot more complex
fish_: both udev and docker are creating the nodes, and if this happens at the same time it gets into a weird situation where the device is activated but the device node doesn't exist
something like:
udev creates node
we see it, so don't create
we activate
udev activates
udev looks at the device, runs rules
udev removes node
=> we're in a state where we activated and created the node, but the node is now gone

@discordianfish
Contributor

discordianfish commented Mar 4, 2014

Reopening since the actual problem is still there.

@discordianfish discordianfish reopened this Mar 4, 2014

@shykes shykes added this to the 0.9.1 milestone Mar 10, 2014

@shykes
Collaborator

shykes commented Mar 10, 2014

Tentatively scheduling for 0.9.1

@unclejack
Contributor

unclejack commented Mar 10, 2014

There's still a problem with devicemapper.
Using the script:

for i in {1..50}
do
    CID=`docker run -d -P registry`
    for j in {1..20}
    do
        docker kill $CID 2> /dev/null
        docker restart $CID
    done

    docker kill $CID 2> /dev/null
    docker rm $CID 2> /dev/null &
done

Docker ends up throwing an error which looks like this:

Error: Cannot restart container 3bb6ce8fa5b76b6c6c8bd2adcb20e473ca9b07b78d425a4dper/docker-8:1-1052351-3bb6ce8fa5b76b6c6c8bd2adcb20e473ca9b07b78d425a4dafcbccd66
2014/03/10 20:04:28 Error: failed to restart one or more containers

The environments on which this occurs are both running Ubuntu precise:

Ubuntu 12.04.4 LTS
Linux ubu1204-3 3.8.0-36-generic #52~precise1-Ubuntu SMP Mon Feb 3 21:54:46 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
Linux ubu1204-2 3.11.0-17-generic #31~precise1-Ubuntu SMP Tue Feb 4 21:25:43 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

This is a problem on 0.8.1 and it's still a problem on 0.9.0.
@alexlarsson Could this be the same udev race condition?

@alexlarsson
Contributor

alexlarsson commented Mar 12, 2014

We could try a workaround where we gratuitously keep a ref on a recently created device (unless you're trying to delete it) for say 1 second.
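
Sketching what such a workaround could look like in Go (purely hypothetical, not code from any PR): a gratuitous reference is taken when the device is activated and dropped only after a short grace period, so an immediate deactivate cannot race with udev's node creation.

// Hypothetical grace-period reference on freshly created devices.
package main

import (
    "fmt"
    "sync"
    "time"
)

type devRefs struct {
    mu   sync.Mutex
    refs map[string]int
}

func (d *devRefs) hold(id string, grace time.Duration) {
    d.mu.Lock()
    d.refs[id]++
    d.mu.Unlock()
    // Drop the gratuitous reference only after the grace period.
    time.AfterFunc(grace, func() { d.release(id) })
}

func (d *devRefs) release(id string) {
    d.mu.Lock()
    defer d.mu.Unlock()
    d.refs[id]--
    if d.refs[id] == 0 {
        fmt.Println("deactivate", id) // now safely after udev has settled
    }
}

func main() {
    d := &devRefs{refs: map[string]int{"dm-1": 1}} // reference from the activation itself
    d.hold("dm-1", time.Second)                    // extra grace reference on the new device
    d.release("dm-1")                              // an immediate deactivate no longer hits zero
    time.Sleep(1100 * time.Millisecond)            // grace expires; the real deactivate happens
}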

@discordianfish
Contributor

discordianfish commented Mar 18, 2014

I ran into another error which might or might not be related:

$ docker rm 5da32d8cfd63
Error: container_delete: Cannot destroy container 5da32d8cfd63: Driver devicemapper failed to remove root filesystem 5da32d8cfd63e59ad6f2dfdc189bdc62f3e25e8b6d90ac5407e485ce610e9001: UnmountDevice: no such device 5da32d8cfd63e59ad6f2dfdc189bdc62f3e25e8b6d90ac5407e485ce610e9001
2014/03/18 15:04:14 Error: failed to remove one or more containers

(The container has been stopped for some weeks already; the system has been rebooted several times since then.)

@alexlarsson
Contributor

alexlarsson commented Mar 18, 2014

@alexlarsson
Contributor

alexlarsson commented Mar 18, 2014

@creack
Contributor

creack commented Mar 18, 2014

The timeout PR seems to have solved the issue. /cc @unclejack

@rws-github

rws-github commented Mar 24, 2014

I'm still getting the error even with the changes:

2014/03/21 17:05:37 Error mounting '/dev/mapper/docker-8:1-2138150-e85daf5656ab5bc43aa7153f21f9dd5b54473cada547f00763c9e89b28360e33-init' on '/var/lib/docker/devicemapper/mnt/e85daf5656ab5bc43aa7153f21f9dd5b54473cada547f00763c9e89b28360e33-init': invalid argument

@discordianfish
Contributor

discordianfish commented Mar 24, 2014

@rws-github This looks different. You get 'invalid argument' where I got 'no such file or directory'.

@PhilibertDugas

PhilibertDugas commented Nov 2, 2015

I tried as well with Docker 1.8.3, but I face the same issue.

@PhilibertDugas

PhilibertDugas commented Nov 3, 2015

Changing from Device Mapper to OverlayFS did the trick for me!

@MarcoLotz

MarcoLotz commented Nov 26, 2015

Having the same problem on Docker 1.9.1.
Not common though; it happens about once every 10 started containers. The containers are all nginx.

on /var/log/upstart/docker.log:

INFO[32197] POST /v1.21/containers/4d2190e5f5fa/start
DEBU[32198] Assigning addresses for endpoint evil_brattain's interface on network labs6
DEBU[32198] RequestAddress(LocalDefault/172.24.0.0/16, , map[])
DEBU[32198] /sbin/iptables, [--wait -t nat -A DOCKER -p tcp -d 0/0 --dport 36669 -j DNAT --to-destination 172.24.1.66:80 ! -i br-10febd436870]
DEBU[32199] /sbin/iptables, [--wait -t filter -A DOCKER ! -i br-10febd436870 -o br-10febd436870 -p tcp -d 172.24.1.66 --dport 80 -j ACCEPT]
DEBU[32201] /sbin/iptables, [--wait -t nat -A POSTROUTING -p tcp -s 172.24.1.66 -d 172.24.1.66 --dport 80 -j MASQUERADE]
DEBU[32203] Assigning addresses for endpoint evil_brattain's interface on network labs6
DEBU[32203] /sbin/iptables, [--wait -t nat -D DOCKER -p tcp -d 0/0 --dport 36669 -j DNAT --to-destination 172.24.1.66:80 ! -i br-10febd436870]
DEBU[32204] /sbin/iptables, [--wait -t filter -D DOCKER ! -i br-10febd436870 -o br-10febd436870 -p tcp -d 172.24.1.66 --dport 80 -j ACCEPT]
DEBU[32206] /sbin/iptables, [--wait -t nat -D POSTROUTING -p tcp -s 172.24.1.66 -d 172.24.1.66 --dport 80 -j MASQUERADE]
DEBU[32207] Releasing addresses for endpoint evil_brattain's interface on network labs6
DEBU[32207] ReleaseAddress(LocalDefault/172.24.0.0/16, 172.24.1.66)
ERRO[32210] Handler for POST /v1.21/containers/4d2190e5f5fa/start returned error: Cannot start container 4d2190e5f5fa: mounting mqueue mqueue : device or resource busy
ERRO[32210] HTTP Error err=Cannot start container 4d2190e5f5fa: mounting mqueue mqueue : device or resource busy statusCode=500

Didn't try with OverlayFS yet.

@thaJeztah
Member

thaJeztah commented Nov 26, 2015

@MarcoLotz can you open a new issue? I've seen those messages as well occasionally; perhaps it's already being worked on, but I think it's a different issue than the one being discussed here.

@slampenny

slampenny commented Nov 27, 2015

In case anyone gets a similar message... I got it when the script that brings docker up tried to mount a directory that wasn't there.

For instance:

docker run -d -v /Users/jordan/Code:/home/docker -p 8000:80 -p 2222:22 -p 4430:443 --name web --link mongo:mongo --link redis:redis MY_IMAGE

On the box to which I was deploying, /Users/jordan/Code didn't exist, because it was an Ubuntu box, not an OS X box.

@liusdu
Contributor

liusdu commented Dec 1, 2015

I fixed a devicemapper issue like the following in #18329:

$ docker rm -f fdca994bec21
Error response from daemon: Driver devicemapper failed to remove root filesystem fdca994bec2100335c912d56540acc6df271b36b2554270c606836277e159e15: Device is Busy
Error: failed to remove containers: [fdca994bec21]

If you have running containers on aufs/devicemapper and the daemon crashes, you will get this error when you restart the daemon and attempt to remove the remaining containers.

@slatkovic

slatkovic commented Dec 6, 2015

Switching from devicemapper to aufs solved the problem for me.

@AaronDMarasco-VSI

AaronDMarasco-VSI commented Jan 11, 2016

I hate to be participant 132 in a huge thread, but after skimming this ticket, I think I "should" be OK, but am not. Using CentOS7:

$ docker info | grep Storage
Storage Driver: devicemapper
$ docker info |grep -i "udev.*sync"
 Udev Sync Supported: true
$ docker info | grep -i loop
(no response)

This is an example of the error I receive:

docker run -d -v /opt/Xilinx/:/opt/Xilinx/:ro -v /data/jenkins_workspaces/dds:/build --name jenkins-dds-460 jenkins/build:v2-C6
5eadc30da8d1f60a13db3392ba751b197079c579a3dbba43f6a092269a475320
Error response from daemon: Cannot start container 5eadc30da8d1f60a13db3392ba751b197079c579a3dbba43f6a092269a475320: Error getting container 5eadc30da8d1f60a13db3392ba751b197079c579a3dbba43f6a092269a475320 from driver devicemapper: open /dev/mapper/docker-253:6-101737821-5eadc30da8d1f60a13db3392ba751b197079c579a3dbba43f6a092269a475320: no such file or directory

This happens 5-20% of the time it seems.

A few days ago, I had Docker totally trash all its containers. I researched some, and learned that on CentOS 7 hosts the preferred method is a "raw" LVM thin-provisioned pool, so that's what I did:

$ cat /etc/systemd/system/docker.service.d/0-move-library.conf
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon --exec-opt native.cgroupdriver=cgroupfs -H fd:// --storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/vg_ex-docker--pool --storage-opt dm.use_deferred_removal=true

# Needs newer systemd than CentOS uses: https://bugzilla.redhat.com/show_bug.cgi?id=1200946
# --storage-opt dm.use_deferred_deletion=true

I'm not listing here all the LVM stuff I did, but it was working for a few days...

docker info

$ cat /etc/redhat-release 
CentOS Linux release 7.1.1503 (Core) 

$ uname -a
Linux redacted.example.com 3.10.0-229.20.1.el7.x86_64 #1 SMP Tue Nov 3 19:10:07 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

$ docker --version
Docker version 1.9.1, build a34a1d5

$ docker info
Containers: 9
Images: 35
Server Version: 1.9.1
Storage Driver: devicemapper
 Pool Name: vg_ex-docker--pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 107.4 GB
 Backing Filesystem: xfs
 Data file: 
 Metadata file: 
 Data Space Used: 20.51 GB
 Data Space Total: 322.1 GB
 Data Space Available: 301.6 GB
 Metadata Space Used: 14.26 MB
 Metadata Space Total: 4.001 GB
 Metadata Space Available: 3.987 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Library Version: 1.02.107-RHEL7 (2015-10-14)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-229.20.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 56
Total Memory: 251.6 GiB

@AaronDMarasco-VSI

AaronDMarasco-VSI commented Jan 12, 2016

It definitely seems to be a race condition, when my Jenkins tries to spawn a handful of containers at once. I manually tried this on the server:

for i in $(seq 1 25); do
  docker run -d -v /opt/Xilinx/:/opt/Xilinx/:ro -v /data/jenkins_workspaces/examples:/build --name jenkins-examples-485 jenkins/ocpibuild:v2-C6; 
  echo $?; 
  docker rm -f jenkins-examples-485; 
done

And it ran flawlessly. However, if I background them instead, to force all 25 to try to create at once:

for i in $(seq 1 25); do 
  docker run -d -v /opt/Xilinx/:/opt/Xilinx/:ro -v /data/jenkins_workspaces/examples:/build --name jenkins-ctests-474-${i} jenkins/ocpibuild:v2-C6 & 
  echo $?; 
done
b66c5c2276b40854fc9f7b21e6bf153d861a2493638fb277195ad6a9376f8d32
621dc1b143f1a8728fb93a8227231a7b4488df0d96368e65d21956a6d6272004
6c6d1b417cd343469804c16f4d8479775f3aba905014d72cdb061f141f2ba816
127fdfa18b8d332da7a26e9cd9cb2f49ee4d4a000cf7ea2fe20858fffe1d7a09
66c0100da826fb6a94c084b3f4d9c53583b3e699e8a26076035c0f09bf61ec30
d16237d791da495d711483dc02c7021ab42781c001eb46128a064b3f1ddb245e
72176e1075e4a837ae7abeee9bfaee30b2f08f27cfa3542a66ebcd6a51115326
7d10de31239df16bca6c69d31d151d5ad7f5e44036f98e394e14fa9853408824
a582799ff489507fd13b02b3e2f2032a93c7f1bfbc88d443fee4526b8bddf84d
84de8f525c76ca1b64ea0c25f63a885a0e35ffa0cf2493361c56270635149023
0f104d795cb386542f0850bf964a5f426fe01476d8b56a414fdd98c6f145a973
1ad7c18178d54a53fa4fa4cda67de75923af1c965527a713542f73ac8a58cc52
ce153bb43b1fdb4ad193e72d36e8451baedf97c8c81bbe18f19818f7492eaf74
29ccbf74238de02c96eeee1eabd2f45e9163ab0633d96465b76d10a83e81f947
f178c90fb051303180bfcd0cde370526bd4a96f37ed94b3bcf56350d989e267f
46e91895d8c5c475bee7db2dc1a4c0592d4a715628dce807eff353739c7aeb4e
4c36736d89d06d62509a70203c66b74dd6ce37e7de6e6c5a2ffb3174525eed0d
86a913665de1522922726e901a54800414cc9858679168228fa626a255704d4c
c38da55732a1ff5df62238391b042a2129ad701ab2ddb826f99c81326bf26f74
57b6e0356ca25b3746c680bd6a535c514d64f4bcb50b900b9a93018207034b72
7651b72be21fb358b3fc01bd006b6e0c8fa8eb958e1cb1719e9ac223e214770e
521cc795b9e9f65cc5651a457593b540369188e17281657488490e36b62f370a
Error response from daemon: Cannot start container 7651b72be21fb358b3fc01bd006b6e0c8fa8eb958e1cb1719e9ac223e214770e: Error getting container 7651b72be21fb358b3fc01bd006b6e0c8fa8eb958e1cb1719e9ac223e214770e from driver devicemapper: open /dev/mapper/docker-253:6-101737821-7651b72be21fb358b3fc01bd006b6e0c8fa8eb958e1cb1719e9ac223e214770e: no such file or directory
Error response from daemon: Cannot start container 521cc795b9e9f65cc5651a457593b540369188e17281657488490e36b62f370a: Error getting container 521cc795b9e9f65cc5651a457593b540369188e17281657488490e36b62f370a from driver devicemapper: open /dev/mapper/docker-253:6-101737821-521cc795b9e9f65cc5651a457593b540369188e17281657488490e36b62f370a: no such file or directory
ac57455b5030d56f163b45e7d1f05856c0429823fc708b37aa21b5925f70da37

So 2 of 25 failed.

And they "stay dead":

$ docker ps -a | grep 521cc795b9
521cc795b9e9        jenkins/ocpibuild:v2-C6   "/bin/sleep 1h"     2 minutes ago       Created                                       jenkins-ctests-474-3
$ docker start jenkins-ctests-474-3
Error response from daemon: Cannot start container jenkins-ctests-474-3: Error getting container 521cc795b9e9f65cc5651a457593b540369188e17281657488490e36b62f370a from driver devicemapper: open /dev/mapper/docker-253:6-101737821-521cc795b9e9f65cc5651a457593b540369188e17281657488490e36b62f370a: no such file or directory
Error: failed to start containers: [jenkins-ctests-474-3]
@thaJeztah
Member

thaJeztah commented Jan 12, 2016

@AaronDMarasco-VSI @RoelVdP could you open a new issue instead? (but feel free to link to this issue). This discussion is already really long, and I think it's better to start a "fresh" one.

@EwanValentine

EwanValentine commented Jan 16, 2016

I'm getting...

Recreating 051010d58e_051010d58e_051010d58e_051010d58e_051010d58e_051010d58e_051010d58e_051010d58e_051010d58e_bandzest_core-api_1
ERROR: Cannot start container 40dcfe9579ccce5fceb7bff3d1e1c33325f6b1b64d0b532e1b0dc9fbf2958274: [8] System error: no such file or directory

For no apparent reason. I'm guessing this is related? I'm trying to run a golang app in Docker on Ubuntu 14.04.

@thaJeztah
Member

thaJeztah commented Jan 16, 2016

@EwanValentine that looks unrelated to this one; I think that's this one: #18098

@EwanValentine

EwanValentine commented Jan 16, 2016

I'm not sure how they're related, @thaJeztah; there doesn't seem to be a suggested fix or definitive cause in that reference :(

@thisiswangle

thisiswangle commented Jan 22, 2016

Similar error while trying to build with docker 1.9.0 and 1.9.1:

$ docker build -t dev/base:v1.0 .
Sending build context to Docker daemon 4.608 kB
Step 1 : FROM ubuntu:14.04
 ---> ca4d7b1b9a51
Step 2 : MAINTAINER Le Wang "thisiswangle@gmail.com"
 ---> Using cache
 ---> 1c99faca4ce3
Step 3 : RUN dpkg-divert --local --rename --add /sbin/initctl
 ---> Using cache
 ---> 61530fa5a55e
Step 4 : RUN ln -sf /bin/true /sbin/initctl
 ---> Using cache
 ---> 21bc3d4993b4
Step 5 : ENV DEBIAN_FRONTEND noninteractive
 ---> Using cache
 ---> 3b9f5744dc3c
Step 6 : COPY sources.list.trusty /etc/apt/sources.list
Error getting container 9901a9ff6b11246b08ff5be40848540a544b89ea4317467915b70005909bc26b from driver devicemapper: Error mounting '/dev/mapper/docker-8:1-138421-9901a9ff6b11246b08ff5be40848540a544b89ea4317467915b70005909bc26b' on '/var/lib/docker/devicemapper/mnt/9901a9ff6b11246b08ff5be40848540a544b89ea4317467915b70005909bc26b': no such file or directory

The build works fine after retrying a few times.

Here is my docker info:

$ docker info
Containers: 3
Images: 33
Server Version: 1.9.1
Storage Driver: devicemapper
 Pool Name: docker-8:1-138421-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 107.4 GB
 Backing Filesystem:
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 3.434 GB
 Data Space Total: 107.4 GB
 Data Space Available: 36.98 GB
 Metadata Space Used: 3.65 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.144 GB
 Udev Sync Supported: false
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.82-git (2013-10-04)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-66-generic
Operating System: Ubuntu 14.04.3 LTS
CPUs: 2
Total Memory: 1.955 GiB
Name: vagrant-ubuntu-trusty-64
ID: WI7O:J45C:QK3V:IYCB:TCVO:SWNY:SFRI:4C3V:INOB:QNGO:67E6:JLCK

My host OS is OS X El Capitan 10.11.2 + VirtualBox 5.10 + Vagrant 1.7.4.
My guest OS is Ubuntu 14.04 x86_64.

@andrecp

andrecp commented Feb 3, 2016

@thisiswangle I have the same problem too. My setup is:
VirtualBox 5.0.12
Vagrant 1.8.1
El Capitan 10.11.2

running on guest OS Ubuntu 14.04 x86_64, kernel 3.13.0-77-generic

@xarem

xarem commented Feb 4, 2016

I'm getting this error every 2-3 builds... I have to restart the command and then it works.

$ docker build -t .
Sending build context to Docker daemon 22.38 MB
Step 1 : FROM whatwedo/symfony3:latest
latest: Pulling from whatwedo/symfony3
Digest: sha256:45d451017fb77b24c5f8357563d8ae7562197c11c28fd63c3c356179ff9ebf72
Status: Downloaded newer image for whatwedo/symfony3:latest
 ---> 1d5f72b7a6d2
Step 2 : ADD . /var/www
 ---> cc138333ab5a
Removing intermediate container 893287b5028e
Step 3 : WORKDIR /var/www
 ---> Running in b66ce2b959ff
 ---> c2c230788706
Removing intermediate container b66ce2b959ff
Step 4 : RUN cp /var/www/app/config/parameters.yml.docker /var/www/app/config/parameters.yml
 ---> Running in afeaff032b7f
 ---> 0881dbc9d17a
Removing intermediate container afeaff032b7f
Step 5 : RUN cp /var/www/app/config/nginx/nginx.conf /etc/nginx
 ---> Running in 16f015be3f2c
 ---> 1398471886d0
Removing intermediate container 16f015be3f2c
Step 6 : RUN apt-get update && apt-get install xvfb xfonts-75dpi fontconfig libxrender1 pdftk -qq
Error getting container 1312109b05a8c73346f481e1b1533e3cf030cd0ab5d7914d811b168745a86105 from driver devicemapper: Error mounting '/dev/mapper/docker-253:1-1318774-1312109b05a8c73346f481e1b1533e3cf030cd0ab5d7914d811b168745a86105' on '/var/lib/docker/devicemapper/mnt/1312109b05a8c73346f481e1b1533e3cf030cd0ab5d7914d811b168745a86105': no such file or directory

ERROR: Build failed with: exit status 1
root@gci01:~# docker version
Client:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.3
 Git commit:   a34a1d5
 Built:        Fri Nov 20 17:56:04 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.3
 Git commit:   a34a1d5
 Built:        Fri Nov 20 17:56:04 UTC 2015
 OS/Arch:      linux/amd64
root@gci01:~# docker info
Containers: 6
Images: 124
Server Version: 1.9.1
Storage Driver: devicemapper
 Pool Name: docker-253:1-1318774-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 107.4 GB
 Backing Filesystem: 
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 5.76 GB
 Data Space Total: 107.4 GB
 Data Space Available: 19.54 GB
 Metadata Space Used: 9.286 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.138 GB
 Udev Sync Supported: false
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.82-git (2013-10-04)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-71-generic
Operating System: Ubuntu 14.04.3 LTS
CPUs: 1
Total Memory: 994 MiB
Name: gci01
ID: 6ARE:3PQL:I452:ADT2:TQXI:S6KG:6W5N:YDFH:NXUZ:OIOF:BPRA:DOV2
WARNING: No swap limit support
root@gci01:~# uname -r
3.13.0-71-generic

on DigitalOcean 1GB instance

@andrecp

andrecp commented Feb 4, 2016

What I did to fix it (and apparently it worked) was to install docker-engine instead of docker-lxc on the Ubuntu machine I have...

I think docker-lxc is probably legacy and shouldn't be used.

With the docker-engine installation I no longer get

Udev Sync Supported: false

@thaJeztah

Member

thaJeztah commented Feb 4, 2016

@xarem how did you install docker; did you use the apt repository, or install a static binary? I see you're using devicemapper without Udev sync support; running without Udev sync is strongly discouraged, as it can lead to data loss and strange behavior like this.

The default storage driver for Ubuntu is aufs, which will be used if you install using the installation procedure in https://docs.docker.com/engine/installation/ubuntulinux/

You will need to wipe your /var/lib/docker to do a fresh install though (otherwise the devicemapper dir will still be there)
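
If you're unsure whether old devicemapper state is still lingering after switching drivers, here is a minimal sketch (an illustration only, assuming the default /var/lib/docker root; adjust the path if you run the daemon with a custom graph directory) that simply reports which storage-driver directories exist:

import os

# Assumed default Docker root; not taken from this thread's configuration.
DOCKER_ROOT = '/var/lib/docker'

# If 'devicemapper' still shows up after you intended to switch to aufs,
# the old state was not wiped.
for name in ('aufs', 'devicemapper', 'overlay', 'btrfs'):
    path = os.path.join(DOCKER_ROOT, name)
    if os.path.isdir(path):
        print('found driver state: %s' % path)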

@grisha87

grisha87 commented Feb 4, 2016

++@andrecp, I was about to post the same answer. Installing the latest "docker-engine" from the repository actually fixes this issue. What's important here is that you need to have all the extra Linux kernel packages installed (the so-called "extras") prior to installing docker-engine. The other key to success is to get the dynamically linked docker binary:

$ ldd /usr/bin/docker 
        linux-vdso.so.1 =>  (0x00007ffd273fe000)
        libesets_pac.so => /usr/lib/libesets_pac.so (0x00007f557ff99000)
        libapparmor.so.1 => /usr/lib/x86_64-linux-gnu/libapparmor.so.1 (0x00007f557fd8d000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f557fb6f000)
        libdevmapper.so.1.02.1 => /lib/x86_64-linux-gnu/libdevmapper.so.1.02.1 (0x00007f557f936000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f557f571000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f557f36c000)
        /lib64/ld-linux-x86-64.so.2 (0x000055a1a2fb5000)
        libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007f557f149000)
        libudev.so.1 => /lib/x86_64-linux-gnu/libudev.so.1 (0x00007f557ef38000)
        libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f557ecf9000)
        libcgmanager.so.0 => /lib/x86_64-linux-gnu/libcgmanager.so.0 (0x00007f557eade000)
        libnih.so.1 => /lib/x86_64-linux-gnu/libnih.so.1 (0x00007f557e8c6000)
        libnih-dbus.so.1 => /lib/x86_64-linux-gnu/libnih-dbus.so.1 (0x00007f557e6bb000)
        libdbus-1.so.3 => /lib/x86_64-linux-gnu/libdbus-1.so.3 (0x00007f557e476000)
        librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f557e26e000)

The above results in udev sync support being enabled for devicemapper.
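
To automate the check above, here is a minimal sketch (assuming ldd is available and docker is on PATH; the fallback path is an assumption, not something reported in this thread) that runs ldd against the docker binary and reports whether libdevmapper and libudev are linked in:

import subprocess
from distutils.spawn import find_executable

# Locate the docker binary; /usr/bin/docker is only an assumed fallback.
docker = find_executable('docker') or '/usr/bin/docker'
try:
    out = subprocess.check_output(['ldd', docker], stderr=subprocess.STDOUT,
                                  universal_newlines=True)
except subprocess.CalledProcessError as err:
    out = err.output  # ldd exits non-zero for "not a dynamic executable"

if 'not a dynamic executable' in out:
    print('%s is statically linked; udev sync cannot be supported' % docker)
else:
    for lib in ('libdevmapper', 'libudev'):
        print('%s: %s' % (lib, 'linked' if lib in out else 'MISSING'))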

@ZYNCMA

ZYNCMA commented May 17, 2016

I ran for i in {0..100}; do docker run busybox echo test; done using docker 1.11 and got no error.
Can I say that devicemapper is race-free using docker 1.11?
If so, what difference does udev_sync_supported make?

docker info (sorry that I can't publish kernel and os)

# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 3
Server Version: 1.11.0
Storage Driver: devicemapper
 Pool Name: docker-8:4-1076834816-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 4.448 GB
 Data Space Total: 107.4 GB
 Data Space Available: 102.9 GB
 Metadata Space Used: 3.617 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.144 GB
 Udev Sync Supported: false
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /data/docker/devicemapper/devicemapper/data
 WARNING: Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
 Metadata loop file: /data/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.82-git (2013-10-04)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: host null
Kernel Version: 
Operating System: 
OSType: linux
Architecture: x86_64
CPUs: 24
Total Memory: 62.65 GiB
Name: 
ID: 2UZG:RFJS:7SVS:3RE5:H7XK:6Q4N:TVOO:KZND:SDEF:POSS:ZEWF:ESAO
Docker Root Dir: /data/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
@davehodg

davehodg commented May 19, 2016

Is this still a problem? I have:

docker --version

Docker version 1.11.1, build 5604cbe

Getting:

docker build -t mongodb .

Sending build context to Docker daemon 16.9 kB
Step 1 : FROM registry.access.redhat.com/rhel7.2
---> bf2034427837
Step 2 : MAINTAINER Joe Bloggs
devmapper: Unknown device b4904eb2bd71c7aa6cc50c8ac7c9695845641de531ed6d8b97742857258b418e

docker version

Client:
Version: 1.11.1
API version: 1.23
Go version: go1.5.4
Git commit: 5604cbe
Built: Wed Apr 27 00:34:42 2016
OS/Arch: linux/amd64

Server:
Version: 1.11.1
API version: 1.23
Go version: go1.5.4
Git commit: 5604cbe
Built: Wed Apr 27 00:34:42 2016
OS/Arch: linux/amd64

So pretty fresh I think.

@thaJeztah

Member

thaJeztah commented May 19, 2016

@davehodg can you show your docker info?

@davehodg

davehodg commented May 19, 2016

docker info

Containers: 32
Running: 0
Paused: 0
Stopped: 32
Images: 1
Server Version: 1.11.1
Storage Driver: devicemapper
Pool Name: docker-253:0-20487-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 11.8 MB
Data Space Total: 107.4 GB
Data Space Available: 11.93 GB
Metadata Space Used: 581.6 kB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.147 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Either use --storage-opt dm.thinpooldev or use --storage-opt dm.no_warn_on_loop_devices=true to suppress this warning.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.107-RHEL7 (2015-12-01)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: host bridge null
Kernel Version: 3.10.0-327.13.1.el7.x86_64
Operating System: Red Hat Enterprise Linux
OSType: linux
Architecture: x86_64
CPUs: 3
Total Memory: 5.518 GiB
Name: daves-macbook-air
ID: MIBZ:OB6L:J4Q2:OD6P:EVEZ:XEHS:JT3V:SLZG:ALFZ:NC47:IYXQ:72EE
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

@davehodg

davehodg commented May 19, 2016

If I wanted to get the freshest, most up-to-date Docker, where would I get it from?

@davehodg

davehodg commented May 19, 2016

OK, got the source from here. Tried make:

make

docker build -t "docker-dev:master" -f "Dockerfile" .
Sending build context to Docker daemon 130.2 MB
Step 1 : FROM debian:jessie
jessie: Pulling from library/debian
8b87079b7a06: Already exists
a3ed95caeb02: Already exists
Digest: sha256:32a225e412babcd54c0ea777846183c61003d125278882873fb2bc97f9057c51
Status: Downloaded newer image for debian:jessie
---> bb5d89f9b6cb
Step 2 : RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys E871F18B51E0147C77796AC81196BA81F6B0FC61 || apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys E871F18B51E0147C77796AC81196BA81F6B0FC61
devmapper: Unknown device 50a33b69e5edbf78d012da68ac249b1af5a9fddaeb316c662e6a65f763c974e2
make: *** [build] Error 1

I think I might make a new RHEL7 vmware image...

@thaJeztah

Member

thaJeztah commented May 19, 2016

@davehodg also see #22031

@davehodg

davehodg commented May 19, 2016

Not really telling me anything new :(

@thaJeztah

Member

thaJeztah commented May 19, 2016

@davehodg no, the problem is indeed that it's quite an unpredictable issue. It's known to be problematic on systems without udev sync, and running on "loop devices" doesn't help either.

@davehodg

davehodg commented May 19, 2016

Thanks. I'm so far down the rabbit hole at this point.

@davehodg

davehodg commented May 20, 2016

Register my whine, but I've moved to running Docker on my Mac. Let's see how that transpires.

@jizhilong

Contributor

jizhilong commented Jul 14, 2016

I also encountered this issue in our environment, and I worked out a way to reproduce it; I hope it helps resolve the issue.

docker info

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 2
Server Version: 1.11.1
Storage Driver: devicemapper
 Pool Name: docker-252:1-58724678-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 883.9 MB
 Data Space Total: 107.4 GB
 Data Space Available: 35.03 GB
 Metadata Space Used: 2.146 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.145 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.107-RHEL7 (2016-06-09)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: host bridge null
Kernel Version: 3.10.0-327.22.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.86 GiB
Name: jzl-dev0010
ID: U4ZX:62TO:YRGL:CNDH:6UZN:VNYX:3R2W:ZCWF:4TJS:54BS:O4AK:NLYY
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/

way to reproduce

make sure all containers are removed

# NOTE!!! this command is dangerous, don't execute it on any non-experimental node.
docker rm -f  `docker ps -aq`

get image ready

a busybox image is sufficient: docker pull busybox

get script ready

A Python script is needed to eagerly detect and open any docker-created non-init dm devices:

# Poll /dev/mapper and keep a file handle open on any docker thin device
# (long name, not an -init device); holding the device open triggers the race.
import os
import time

while True:
    try:
        for line in os.listdir('/dev/mapper'):
            if 'init' in line:
                continue
            # docker's per-container thin devices have very long names
            if len(line) >= 80:
                filename = os.path.join('/dev/mapper/', line)
                f = open(filename)  # deliberately left open
        # NameError here (no device seen yet) is swallowed below, so we retry
        print 'got it %s' % filename
        time.sleep(3 * 10 ** 7)  # then just hold the device open
    except Exception:
        pass

make sure all container- or image-related dm devices are removed

run dmsetup info to list all dm devices on your node, and run dmsetup remove <name> to remove all devices whose name starts with docker- and ends with a uuid or <uuid>-init, such as docker-252:1-58724678-3c96b9d15c41b49816eab1faaa0cb6eaa7cdffe5decd45f2bd48ae72f32e16f5

run the script and reproduce the issue

the following is a sample of my reproduction log:

# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
# dmsetup info
Name:              docker-252:1-58724678-pool
State:             ACTIVE
Read Ahead:        8192
Tables present:    LIVE
Open count:        0
Event number:      0
Major, minor:      253, 0
Number of targets: 1

# python opendm.py &
[1] 3757
# cid=`docker create -it busybox sh`
got it /dev/mapper/docker-252:1-58724678-097a341a6a5f17cc86f9d83ebdff35b8da86fd8ab6d614ac8445f3fabe1d6e25
# docker diff $cid
# docker start $cid
Error response from daemon: open /dev/mapper/docker-252:1-58724678-097a341a6a5f17cc86f9d83ebdff35b8da86fd8ab6d614ac8445f3fabe1d6e25: no such file or directory
Error: failed to start containers: d3cbd8f86704136511b5d4cfdafeb1398a46f962ff9af7389ec7368f8472433e

the essential commands are:

python opendm.py &
docker diff $cid
docker start $cid

get the corrupted container back to normal

# kill %1
# dmsetup remove docker-252:1-58724678-097a341a6a5f17cc86f9d83ebdff35b8da86fd8ab6d614ac8445f3fabe1d6e25
[1]+  Terminated              python opendm.py
# ls /dev/dm-*
/dev/dm-0
# ls /dev/mapper/
control  docker-252:1-58724678-pool
# docker start $cid
d3cbd8f86704136511b5d4cfdafeb1398a46f962ff9af7389ec7368f8472433e

the essential operations to recover the container include (see the sketch after this list):

  1. kill the background process python opendm.py
  2. remove the container's dm device.
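
As a small aid for the recovery steps above, here is a minimal sketch (assuming root privileges and that dmsetup is installed; it parses the same plain dmsetup info output shown earlier) that flags docker thin devices which something is still holding open, i.e. the candidates for dmsetup remove once the holder is gone:

import subprocess

out = subprocess.check_output(['dmsetup', 'info'], universal_newlines=True)

name = None
for line in out.splitlines():
    line = line.strip()
    if line.startswith('Name:'):
        name = line.split(None, 1)[1]
    elif line.startswith('Open count:') and name:
        open_count = int(line.split(':', 1)[1])
        # only docker's per-container thin devices, not the pool or -init ones
        if name.startswith('docker-') and not name.endswith(('-pool', '-init')):
            suffix = '  <-- still held open' if open_count > 0 else ''
            print('%s open=%d%s' % (name, open_count, suffix))
        name = None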
@thaJeztah

Member

thaJeztah commented Jul 14, 2016

@jizhilong can you please open a new issue? The original issue reported here was resolved; it was related to udev sync not being available, which caused corruption. Since your machine looks to have udev sync support, it's likely a different issue. This discussion is already very long, so it's better to open a new issue than to keep commenting on a closed one.

@thaJeztah

Member

thaJeztah commented Jul 14, 2016

I'm locking this issue, and providing a link to, and a quote of, the resolution for the issue that was originally reported: #4036 (comment). If you still encounter this, please open a new issue with as much information as possible and steps to reproduce.

(I modified some outdated information in the description below)

This issue is resolved

The key item to check before commenting on this issue is whether docker info | grep Udev reports Udev Sync Supported: true or not.

Udev sync

The devicemapper storage driver expects to be synchronized with udev. When docker info reports false, devicemapper and udev are not able to sync, and there is a race condition that many of you have experienced.

Causes

There are a couple of causes for Udev sync to not be supported:

  • The docker binary used on your host was statically linked when it was compiled, instead of dynamically linked.
  • If your docker binary is dynamically linked, the version of libdevmapper.so is too old to support Udev sync

Solutions

  1. Install the docker binary from our apt/yum repositories; these binaries are dynamically linked
  2. Build the docker binary yourself, dynamically linked
    • git clone git://github.com/docker/docker.git
    • cd docker
    • AUTO_GOPATH=1 ./hack/make.sh dynbinary
  3. If your docker binary is dynamic (test with file $(which docker) | grep dynamically) and Udev sync still reports false, you will likely need to update the distribution release you are using

Docker 1.11 and up will refuse to start the daemon if Udev sync is not supported (see #21097)
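
As a convenience, here is a minimal sketch (an illustration only, assuming docker and file are on PATH and the daemon is reachable) that performs both of the checks described above in one go:

import subprocess
from distutils.spawn import find_executable

def run(cmd):
    # capture stdout and stderr together so warnings don't get lost
    return subprocess.check_output(cmd, stderr=subprocess.STDOUT,
                                   universal_newlines=True)

# 1. Does the daemon report udev sync support?
for line in run(['docker', 'info']).splitlines():
    if 'Udev Sync Supported' in line:
        print(line.strip())

# 2. Is the docker binary dynamically linked?
docker_bin = find_executable('docker') or '/usr/bin/docker'
print(run(['file', docker_bin]).strip())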

@moby moby locked and limited conversation to collaborators Jul 14, 2016
