
Docker in Docker - failed to find the cgroup root #8791

Closed
STOIE opened this Issue Oct 27, 2014 · 22 comments

STOIE commented Oct 27, 2014

Hi all,

I am running Docker 1.3 on RHEL 7, with a privileged container running Ubuntu 14.04 and Docker 1.3 inside it.
Everything is running; however, in Jenkins (the container running Docker within it), I am getting the following:

Sending build context to Docker daemon 
Step 0 : FROM ubuntu:latest
 ---> 5506de2b643b
Step 1 : MAINTAINER Aaron Nicoli <aaronnicoli@gmail.com>
 ---> Running in 88aa05a47854
 ---> 23620994a830
Removing intermediate container 88aa05a47854
Step 2 : ENV DEBIAN_FRONTEND noninteractive
 ---> Running in 8462b83bec13
 ---> 6aa5870a3b2d
Removing intermediate container 8462b83bec13
Step 3 : RUN rm -f /etc/localtime
 ---> Running in 0f2bcc9197b8
Removing intermediate container 0f2bcc9197b8
2014/10/27 14:12:23 failed to find the cgroup root
Build step 'Execute shell' marked build as failure
Finished: FAILURE

Any thoughts?
I can't see any issues, but then again... I am no expert.

root@02eff186c45b:/# df -h
Filesystem                                                                                        Size  Used Avail Use% Mounted on
/dev/mapper/docker-8:49-1310721-02eff186c45bd6dd568a909b7fb7e86cd92a8599b2921c6e558ad21627a705cf  9.8G  735M  8.5G   8% /
tmpfs                                                                                             3.9G     0  3.9G   0% /dev
shm                                                                                                64M     0   64M   0% /dev/shm
/dev/sdd1                                                                                          40G  3.6G   34G  10% /etc/hosts
/dev/sdc1                                                                                          16G  491M   15G   4% /var/jenkins_home
cgroup                                                                                            3.9G     0  3.9G   0% /sys/fs/cgroup

I have /var/lib/docker listed as a volume in the Dockerfile (that the Jenkins container was built from).

Aaron.
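
A minimal way to check whether the per-subsystem cgroup hierarchies are actually mounted inside the container (a sketch, not part of the original report):

# List the subsystems the kernel knows about...
cat /proc/cgroups
# ...and the cgroup hierarchies that are actually mounted. "failed to find
# the cgroup root" suggests Docker found no usable cgroup entry among these,
# even though a tmpfs exists at /sys/fs/cgroup.
grep cgroup /proc/self/mountinfo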

STOIE commented Oct 28, 2014

Nothing, guys :(

I have this exact image working perfectly on a 14.04 host running 1.3... but no go on EL7 + 1.3 :(

Xe commented Oct 28, 2014

This is affecting me too.

STOIE commented Oct 28, 2014

YAY! I am not just going crazy... well actually I am while trying to fix this... arrgh!!!

Xe commented Oct 29, 2014

cgroupfs_mount() {
        # see also https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount
        if grep -v '^#' /etc/fstab | grep -q cgroup \
                || [ ! -e /proc/cgroups ] \
                || [ ! -d /sys/fs/cgroup ]; then
                return
        fi
        if ! mountpoint -q /sys/fs/cgroup; then
                mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
        fi
        (
                cd /sys/fs/cgroup
                for sys in $(awk '!/^#/ { if ($4 == 1) print $1 }' /proc/cgroups); do
                        mkdir -p $sys
                        if ! mountpoint -q $sys; then
                                if ! mount -n -t cgroup -o $sys cgroup $sys; then
                                        rmdir $sys || true
                                fi
                        fi
                done
        )
}

root@7737ff9e148f:~# source cgroup.sh 
root@7737ff9e148f:~# cgroup 
cgroupfs_mount  cgroups-mount   cgroups-umount  
root@7737ff9e148f:~# cgroupfs_mount 
mount: cgroup already mounted or cpu busy
mount: according to mtab, cgroup is mounted on /sys/fs/cgroup
mount: cgroup already mounted or cpuacct busy
mount: according to mtab, cgroup is mounted on /sys/fs/cgroup
mount: cgroup already mounted or net_cls busy
mount: according to mtab, cgroup is mounted on /sys/fs/cgroup
mount: cgroup already mounted or net_prio busy
mount: according to mtab, cgroup is mounted on /sys/fs/cgroup
root@7737ff9e148f:~# mount
/dev/sdb on / type btrfs (rw,relatime,space_cache)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,mode=755)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
/dev/sdb on /etc/resolv.conf type btrfs (rw,relatime,space_cache)
/dev/sdb on /etc/hostname type btrfs (rw,relatime,space_cache)
/dev/sdb on /etc/hosts type btrfs (rw,relatime,space_cache)
/dev/sdb on /var/lib/docker type btrfs (rw,relatime,space_cache)
cgroup on /sys/fs/cgroup type tmpfs (rw,relatime,mode=755)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,relatime,cpuset)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,relatime,freezer)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,relatime,perf_event)
/dev/sdb on /var/lib/docker/btrfs type btrfs (rw,relatime,space_cache)
root@7737ff9e148f:~# umount /sys/fs/cgroup/cpu
umount: /sys/fs/cgroup/cpu: not found
root@7737ff9e148f:~# cgroupfs_mount           
mount: cgroup already mounted or cpu busy
mount: according to mtab, cgroup is mounted on /sys/fs/cgroup
mount: cgroup already mounted or cpuacct busy
mount: according to mtab, cgroup is mounted on /sys/fs/cgroup
mount: cgroup already mounted or net_cls busy
mount: according to mtab, cgroup is mounted on /sys/fs/cgroup
mount: cgroup already mounted or net_prio busy
mount: according to mtab, cgroup is mounted on /sys/fs/cgroup

This may help in debugging. The host Docker is CoreOS 472 and the guest Docker is the Docker Hub image flitter/builder:master.

ianmiell commented Nov 6, 2014

Thanks for this - this made docker-in-docker work for me:

ianmiell/shutit@3d7238f#diff-469a2a5ebd309b7bea7966e84b092112

Sending you a virtual beer/coffee/the purest water.

ianmiell commented Nov 6, 2014

@jpetazzo you might be interested in the above for dind
Ian

sqawasmi commented Nov 10, 2014

I just faced this issue on AWS; I fixed it by installing libcgroup and starting the cgconfig init.d script.
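
As a sketch, on a RHEL-family host that fix amounts to the following (package and service names per this comment; restarting Docker afterwards is an assumption):

sudo yum install -y libcgroup   # provides the cgconfig init script
sudo service cgconfig start     # mounts the cgroup hierarchies
sudo service docker restart     # so the daemon starts after the mounts exist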

Contributor

SvenDowideit commented Nov 11, 2014

@tiborvass can we detect and error on, or fix, these situations?

Or do I need to start collecting 'fix weird stuff' FAQs?

Collaborator

tiborvass commented Nov 21, 2014

@SvenDowideit not sure, I will have to try the libcgroup install. This is weird indeed; I'll add help-wanted.

imotai commented Dec 11, 2014

@sqawasmi I'm facing the same problem. Can you share some details of the solution? Do we need to reboot the system when reinstalling libcgroup?
I use the binary Docker from docker.com with no configuration.

The error message is:

[debug] server.go:1181 Calling POST /containers/create
[info] POST /v1.15/containers/create
[e3fdf987] +job create()
[debug] deviceset.go:455 libdevmapper(3): ioctl/libdm-iface.c:1768 (-1) device-mapper: message ioctl on docker-8:6-131072003-pool failed: File exists
(the line above is repeated 38 times in the original log)
[debug] deviceset.go:255 registerDevice(38, 4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832-init)
[debug] deviceset.go:281 activateDeviceIfNeeded(4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832-init)
[debug] deviceset.go:255 registerDevice(39, 4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832)
[debug] deviceset.go:1005 [devmapper] UnmountDevice(hash=4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832-init)
[debug] deviceset.go:1028 [devmapper] Unmount(/home/docker-graph/devicemapper/mnt/4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832-init)
[debug] deviceset.go:1032 [devmapper] Unmount done
[debug] deviceset.go:786 [devmapper] deactivateDevice(4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832-init)
[debug] deviceset.go:880 Waiting for unmount of 4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832-init: opencount=0
[debug] devmapper.go:545 [devmapper] removeDevice START
[debug] devmapper.go:558 [devmapper] removeDevice END
[debug] deviceset.go:842 [deviceset docker-8:6-131072003] waitRemove(docker-8:6-131072003-4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832-init)
[debug] deviceset.go:853 Waiting for removal of docker-8:6-131072003-4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832-init: exists=0
[debug] deviceset.go:866 [deviceset docker-8:6-131072003] waitRemove(docker-8:6-131072003-4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832-init) END
[debug] deviceset.go:805 [devmapper] deactivateDevice END
[debug] deviceset.go:1040 [devmapper] UnmountDevice END
[e3fdf987] +job log(create, 4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832, golang:1.3)
[e3fdf987] -job log(create, 4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832, golang:1.3) = OK (0)
[e3fdf987] -job create() = OK (0)
[debug] server.go:1181 Calling POST /containers/{name:.*}/attach
[info] POST /v1.15/containers/4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832/attach?stderr=1&stdin=1&stdout=1&stream=1
[e3fdf987] +job container_inspect(4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832)
[e3fdf987] -job container_inspect(4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832) = OK (0)
[e3fdf987] +job attach(4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832)
[debug] attach.go:137 attach: stdin: begin
[debug] attach.go:176 attach: stdout: begin
[debug] attach.go:215 attach: stderr: begin
[debug] attach.go:263 attach: waiting for job 1/3
[debug] server.go:1181 Calling POST /containers/{name:.*}/start
[info] POST /v1.15/containers/4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832/start
[e3fdf987] +job start(4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832)
[debug] deviceset.go:281 activateDeviceIfNeeded(4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832)
[e3fdf987] +job allocate_interface(4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832)
[e3fdf987] -job allocate_interface(4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832) = OK (0)
[e3fdf987] +job log(start, 4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832, golang:1.3)
[e3fdf987] -job log(start, 4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832, golang:1.3) = OK (0)
[e3fdf987] +job release_interface(4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832)
[e3fdf987] -job release_interface(4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832) = OK (0)
[debug] deviceset.go:1005 [devmapper] UnmountDevice(hash=4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832)
[debug] deviceset.go:1028 [devmapper] Unmount(/home/docker-graph/devicemapper/mnt/4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832)
[debug] attach.go:193 attach: stdout: end
[debug] attach.go:233 attach: stderr: end
[debug] attach.go:268 attach: job 1 completed successfully
[debug] attach.go:263 attach: waiting for job 2/3
[debug] attach.go:268 attach: job 2 completed successfully
[debug] attach.go:263 attach: waiting for job 3/3
[debug] attach.go:95 Closing buffered stdin pipe
[debug] attach.go:165 attach: stdin: end
[debug] attach.go:268 attach: job 3 completed successfully
[debug] attach.go:270 attach: all jobs completed successfully
[e3fdf987] -job attach(4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832) = OK (0)
[debug] deviceset.go:1032 [devmapper] Unmount done
[debug] deviceset.go:786 [devmapper] deactivateDevice(4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832)
[debug] deviceset.go:880 Waiting for unmount of 4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832: opencount=0
[debug] devmapper.go:545 [devmapper] removeDevice START
[debug] devmapper.go:558 [devmapper] removeDevice END
[debug] deviceset.go:842 [deviceset docker-8:6-131072003] waitRemove(docker-8:6-131072003-4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832)
[debug] deviceset.go:853 Waiting for removal of docker-8:6-131072003-4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832: exists=0
[debug] deviceset.go:866 [deviceset docker-8:6-131072003] waitRemove(docker-8:6-131072003-4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832) END
[debug] deviceset.go:805 [devmapper] deactivateDevice END
[debug] deviceset.go:1040 [devmapper] UnmountDevice END
[e3fdf987] +job release_interface(4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832)
[e3fdf987] -job release_interface(4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832) = OK (0)
[debug] deviceset.go:1005 [devmapper] UnmountDevice(hash=4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832)
[debug] deviceset.go:1020 [devmapper] UnmountDevice END
[error] driver.go:145 Warning: error unmounting device 4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832: UnmountDevice: device not-mounted id 4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832

[e3fdf987] +job log(die, 4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832, golang:1.3)
[e3fdf987] -job log(die, 4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832, golang:1.3) = OK (0)
Cannot start container 4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832: failed to find the cgroup root
[e3fdf987] -job start(4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832) = ERR (1)
[error] server.go:1207 Handler for POST /containers/{name:.*}/start returned error: Cannot start container 4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832: failed to find the cgroup root
[error] server.go:110 HTTP Error: statusCode=500 Cannot start container 4c76e3fef111e88702041080c02e3ba82fe5fedc2cf3c24185a09b688a24c832: failed to find the cgroup root


jaybuff commented Jan 16, 2015

I had this issue with Docker 1.3.0.

I went through the code and I think I understand what is happening. At startup time the docker daemon reads /proc/self/mountinfo to determine where your cgroups are mounted:

[jaybuff@host ~]$  cat /proc/self/mountinfo |grep cpu$
24 21 0:18 / /cgroup/cpu rw,relatime - cgroup cgroup rw,cpu

Docker parses that line to get the cgroup root; in this case it would be "/cgroup".

cgconfig is what creates those mounts for you. In RHEL it's this file, which is part of the libcgroup package:

[jaybuff@host ~]$ rpm -qf /etc/rc.d/init.d/cgconfig
libcgroup-0.37-7.el6.x86_64

My issue was that I was starting cgconfig to create those mount points after I had started Docker.

This bug was introduced in libcontainer's commit 2531fba, when the code to read /proc/self/mountinfo was moved into an init() function, which runs at startup time. Prior to that commit it was in a regular function, so the mountinfo was read on each call.

IMHO, the fix for this is to fail to start the Docker daemon if you can't determine the cgroup root, rather than waiting until the getCgroupData() function is called.
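
For illustration, that lookup can be approximated in shell (a sketch; libcontainer does this in Go, and the field positions assume the mountinfo format shown above):

# The cpu hierarchy's mount point is field 5 of the matching mountinfo line;
# Docker takes its parent directory as the cgroup root.
cpu_mount=$(grep ' cgroup ' /proc/self/mountinfo | grep -w cpu | awk '{print $5}' | head -n1)
dirname "$cpu_mount"   # -> /cgroup in the example above, /sys/fs/cgroup on systemd hosts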

jaybuff referenced this issue Jan 16, 2015

Update libcontainer to 185328a42654f6dc9a41814e578
Mac address support to the netlink pkg.
Cgroup performance and memory issues.
Netlink refactoring.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
jaybuff commented Jan 16, 2015

Sorry, the real issue was caused by docker/libcontainer@3cbe3eb

jaybuff referenced this issue in docker/libcontainer Jan 16, 2015

Cache cgroup root mount location.
We calculate this on every cgroup call. It comprised of 30%+ of the CPU
usage in cAdvisor.

Docker-DCO-1.1-Signed-off-by: Victor Marmol <vmarmol@google.com> (github: vmarmol)
Contributor

mbentley commented Jan 17, 2015

For anyone who may be having or is still having this issue on a RHEL server, would you be able to verify whether the libcgroup package is installed, using sudo yum list installed libcgroup? If it isn't, try installing it (sudo yum install libcgroup).

Assuming it is already installed, first check the status:

$ sudo /etc/init.d/cgconfig status
Running

If it is stopped, start it:

$ sudo /etc/init.d/cgconfig start
Starting cgconfig service:                                 [  OK  ]

If it is already running, restart it:

$ sudo /etc/init.d/cgconfig restart
Stopping cgconfig service:                                 [  OK  ]
Starting cgconfig service:                                 [  OK  ]

The Docker init script /etc/init.d/docker requires that the cgconfig service be started, but it sounds like one (or more) of the cgroup filesystems becomes unconfigured for whatever reason after the service is started, so restarting it ought to resolve the issue.

If it doesn't, could someone post their /etc/cgconfig.conf and check to see if there are any files in /etc/cgconfig.d?
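
For comparison, a typical default /etc/cgconfig.conf from the RHEL 6 libcgroup package looks roughly like this (an illustrative sample, not taken from this thread):

mount {
        cpuset  = /cgroup/cpuset;
        cpu     = /cgroup/cpu;
        cpuacct = /cgroup/cpuacct;
        memory  = /cgroup/memory;
        devices = /cgroup/devices;
        freezer = /cgroup/freezer;
        net_cls = /cgroup/net_cls;
        blkio   = /cgroup/blkio;
}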

dmichel1 commented Jan 18, 2015

As a workaround on Ubuntu, install cgroup-lite and ensure it's started:

apt-get install cgroup-lite
service cgroup-lite start
jaybuff commented Jan 19, 2015

@dmichel1 you must start cgroup-lite to create the proper mounts before Docker starts.
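
Concretely, that ordering looks something like this (a sketch; service names follow the comments above):

sudo service docker stop        # the daemon caches the cgroup root at startup
sudo service cgroup-lite start  # mount the cgroup hierarchies first
sudo service docker start      # then start Docker so it sees the mounts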

jaybuff commented Feb 4, 2015

ping @vmarmol

Your commit docker/libcontainer@3cbe3eb is the root cause of this bug. Can you move the cgroup root check in cgroups/fs/apply_raw.go to run during startup rather than at runtime?

Contributor

vmarmol commented Feb 6, 2015

@jaybuff I would be happy to, but the proper response to an init failure is to panic. Is that the intended behavior?

jaybuff commented Feb 6, 2015

Before docker/libcontainer@3cbe3eb, the cgroup root was located on every "docker run" call. That meant you could mount /cgroup after starting dockerd. That check is expensive, so now it is made only at startup time. The problem is that it doesn't fail until docker run time, and mounting /cgroup after dockerd starts won't resolve the issue. Now we have to have startup-script code that refuses to start the Docker daemon until the cgroup root is mounted.

There are two ways to address this and not reintroduce the performance issues you resolved in 3cbe3eb.

If the cgroup root is not mounted:

  • dockerd should fail to start with an error message and a non-zero exit code.
  • if the startup-time check failed, rerun the check at docker run time.

I like the second option because it mirrors the Docker 1.2 behavior.
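
A minimal sketch of the first option, as a guard at the top of a startup script (illustrative, not the actual init script):

# Refuse to start the daemon until at least one cgroup hierarchy is mounted.
if ! grep -q ' - cgroup ' /proc/self/mountinfo; then
        echo "Error: no cgroup hierarchy mounted; start cgconfig/cgroup-lite first" >&2
        exit 1
fi
docker -d       # daemon invocation for Docker 1.x of this era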

Contributor

vmarmol commented Feb 6, 2015

The second option sounds good. I'll go ahead and send a PR. Thanks for reporting and debugging!

jaybuff referenced this issue in vmarmol/libcontainer Feb 6, 2015

Retry getting the cgroup root at apply time.
This will allow late-binding of the cgroup hierarchy.

Signed-off-by: Victor Marmol <vmarmol@google.com>

vmarmol added a commit to vmarmol/libcontainer that referenced this issue Feb 6, 2015

Retry getting the cgroup root at apply time.
This will allow late-binding of the cgroup hierarchy.

Fixes moby/moby#8791

Signed-off-by: Victor Marmol <vmarmol@google.com>

mahak added a commit to mahak/libcontainer that referenced this issue Feb 7, 2015

Retry getting the cgroup root at apply time.
This will allow late-binding of the cgroup hierarchy.

Fixes moby/moby#8791

Signed-off-by: Victor Marmol <vmarmol@google.com>
johanhaleby commented Mar 13, 2015

Should this be fixed in Docker 1.5.0? I'm getting the same error when using Docker in Docker; both the host and the container (also running Docker) are on 1.5.0.

bmwertman commented Apr 11, 2015

When I run sudo /etc/init.d/cgconfig status as @mbentley suggested, it says stopped.

Running sudo /etc/init.d/cgconfig start errors with:

Starting cgconfig service: Error: cannot mount hugetlb to /cgroup/hugetlb: No such file or directory /sbin/cgconfigparser;
error loading /etc/cgconfig.conf: Cgroup mounting failed Failed to parse /etc/cgconfig.conf

zhouyangchao commented Sep 6, 2017

Wow, it works for me.
