
[1.12.5] Docker leaking cgroups causing no space left on device? #29638

Open
BenHall opened this Issue Dec 21, 2016 · 67 comments

Comments

@BenHall (Contributor) commented Dec 21, 2016

I'm seeing errors relating to cgroup running out of disk space. When starting containers, I get this error:

"oci runtime error: process_linux.go:258: applying cgroup configuration for process caused "mkdir /sys/fs/cgroup/memory/docker/406cfca0c0a597091854c256a3bb2f09261ecbf86e98805414752150b11eb13a: no space left on device""

The servers have plenty of disk space and inodes. The container's cgroup is read-only, so no one should be filling that area of the disk.

Do cgroup limits exist? If so, what are they?

UPDATE:

$ docker info
Containers: 101
 Running: 60
 Paused: 0
 Stopped: 41
Images: 73
Server Version: 1.12.3
Storage Driver: overlay
 Backing Filesystem: extfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: host bridge null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor seccomp
Kernel Version: 4.6.0-040600-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 40
Total Memory: 251.8 GiB
Name: sd-87633
ID: YDD7:FC5T:DCP3:ZDZO:UWP4:ZR5V:SENB:GK6N:NJGF:FB3J:T5G4:OJPZ
Docker Root Dir: /home/docker/data
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
 127.0.0.0/8

$ uname -a
Linux sd-87633 4.6.0-040600-generic #201606100558 SMP Fri Jun 10 10:01:15 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

$ docker version
Client:
 Version:      1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:        Wed Oct 26 22:01:48 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:        Wed Oct 26 22:01:48 2016
 OS/Arch:      linux/amd64

@BenHall (Contributor, Author) commented Dec 21, 2016

Turns out, yes, cgroups do have limits: see cat /proc/cgroups.

If num_cgroups hits 65535 then it could produce the error.

Is there a potential leak? This is a server that has been running for a couple of days:

# cat /proc/cgroups | grep memory
memory	5	22563	1
# ls -l -F /sys/fs/cgroup/memory/docker/ | grep / | wc -l
61
# docker ps -a | wc -l
75
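
A quick way to see how close each hierarchy is to that 65535 cap (a sketch; the third column of /proc/cgroups is num_cgroups):

# awk 'NR > 1 { printf "%-12s %6d / 65535\n", $1, $3 }' /proc/cgroups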

Kernel version Linux 4.6.3-040603-generic #201606241434 and Linux 4.6.0-040600-generic #201606100558

@BenHall changed the title from "[1.12.5] Potential cgroups issue - no space left on device" to "[1.12.5] Docker leaking cgroups causing no space left on device?" on Dec 21, 2016

@cpuguy83 (Contributor) commented Dec 21, 2016

@mlaventure (Contributor) commented Dec 21, 2016

@BenHall Every directory under the mount point (including the mount point itself) is considered to be a cgroup. So to get the actual number of cgroups from the FS you would have to run find /sys/fs/cgroup/memory -type d | wc -l, and that should match the number found in /proc/cgroups.

Run find /sys/fs/cgroup/memory/docker -type d | wc -l to see exactly how many are under the docker hierarchy. Docker only creates one cgroup per container, so normally the number under the docker hierarchy should match the number of containers currently in a running state, unless those containers create new cgroups themselves, in which case these will appear under their own docker/<container-id>/ cgroup.
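
A minimal sketch of that comparison for the memory controller (assuming the mount point shown earlier in this thread):

$ find /sys/fs/cgroup/memory -type d | wc -l         # cgroups visible on the filesystem
$ awk '$1 == "memory" { print $3 }' /proc/cgroups    # num_cgroups the kernel reports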

@BenHall (Contributor, Author) commented Dec 21, 2016

Ahh, ok, thanks. Here is the updated view.

$ find /sys/fs/cgroup/memory/docker -type d | wc -l
74
$ docker ps -a | wc -l
74
$ cat /proc/cgroups
#subsys_name	hierarchy	num_cgroups	enabled
cpuset	9	79	1
cpu	10	309	1
cpuacct	10	309	1
blkio	11	309	1
memory	5	23008	1
devices	7	309	1
freezer	3	79	1
net_cls	4	79	1
perf_event	6	79	1
net_prio	4	79	1
hugetlb	2	79	1
pids	8	309	1

Where does 23008 come from?

@mlaventure (Contributor) commented Dec 21, 2016

@BenHall what else do you have under /sys/fs/cgroup/memory/ excluding the docker dir? (something like find /sys/fs/cgroup/memory -type d ! -path '/sys/fs/cgroup/memory/docker*' should work).

You can also check that you don't have the memory cgroup mounted somewhere else by checking the output of mount or cat /proc/self/mountinfo
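
For example, a one-liner along those lines (a sketch):

$ grep -w memory /proc/self/mountinfo    # every place the memory controller is mounted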

@BenHall (Contributor, Author) commented Dec 21, 2016

$ find /sys/fs/cgroup/memory -type d ! -path '/sys/fs/cgroup/memory/docker*' | wc -l
171

Looks like a number of them are entries such as /sys/fs/cgroup/memory/system.slice/home-docker-data-containers-624ea7b42c191b30f7b73b42f5525749569f14b9363408144b749cef984e2303-shm.mount and /sys/fs/cgroup/memory/system.slice/home-docker-data-overlay-b45562a1276300fa272fa5c23a3c4895cd83007520e7253f285857470a2eca6b-merged.mount.

@mlaventure (Contributor) commented Dec 21, 2016

It's still way under the 23k you get from /proc/cgroups.

Did you check if you had the memory cgroup filesystem mounted anywhere else?

@BenHall (Contributor, Author) commented Dec 21, 2016

Sorry, yes I did but didn't see anything unexpected.

$ mount | wc -l
250
$ mount | grep memory
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)

@mlaventure (Contributor) commented Dec 21, 2016

Sorry, my touchpad went crazy.

I was about to ask if you could provide the output of /proc/self/mountinfo instead

@BenHall (Contributor, Author) commented Dec 21, 2016

@Puneeth-n commented Dec 22, 2016

I just hit the same issue on my Jenkins server running:
Docker version 1.12.0, build 8eab29e
3.19.0-66-generic #74~14.04.1-Ubuntu

Restarting the system helped me.

@BenHall (Contributor, Author) commented Dec 22, 2016

Thanks, restarting helped us too; it resets the number and allows containers to start again.

@thaJeztah (Member) commented Dec 22, 2016

@BenHall could you add docker version and docker info output to your top description, in case there's a kernel/distro-specific issue?

@BenHall (Contributor, Author) commented Dec 22, 2016

@thaJeztah Updated

@mlaventure (Contributor) commented Dec 22, 2016

Nothing stands out in the mountinfo output, unfortunately. This may be an issue with that version of the kernel, but I haven't found any reference to a similar issue so far. I'm having a look at the cgroup_rmdir code just in case.

@BenHall (Contributor, Author) commented Jan 10, 2017

This is still climbing

$ cat /proc/cgroups | grep memory
memory	5	52044	1

@saiwl commented Jan 11, 2017

I had the same problem. Running echo 1 > /sys/fs/cgroup/memory/docker/memory.force_empty as root helps a little; the number decreases, but stays high. The command may also hang and prevent containers from stopping, so it is a little dangerous.
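
A sketch of applying that workaround per leftover container cgroup rather than to the whole docker hierarchy (a variation, not taken from this thread; it needs root, and the force_empty write can still hang):

for d in /sys/fs/cgroup/memory/docker/*/; do
  grep -q . "${d}tasks" && continue       # skip cgroups that still have tasks (running containers)
  echo 1 > "${d}memory.force_empty"       # ask the kernel to drop pages still charged to the cgroup
  rmdir "$d" 2>/dev/null                  # remove it; fails harmlessly if the cgroup is still busy
done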

@BenHall (Contributor, Author) commented Jan 14, 2017

Had to reboot the server. Results when it came back up:

$ cat /proc/cgroups | grep memory
memory	3	139	1

@dberzano commented Jan 15, 2017

We are having a similar (if not identical) problem on our production servers where we continuously run containers with a relatively short lifetime. At some point our deployment system complains with:

applying cgroup configuration for process caused "mkdir /sys/fs/cgroup/memory/docker/fd55cf1091bf6d6c75095240dc59550999ef8f5e62275e1d8652da544cc54104: no space left on device"

What we have noticed is the following, for instance:

ls -1 /sys/fs/cgroup/cpuset/docker | wc -l
119998

which is a pretty high number. The directory itself contains data for containers that are long gone (days or weeks).

I am not sure who/what is in charge of cleaning those cgroups up.
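
A rough way to list the leftovers is to diff the cgroup directory names against the container IDs Docker still knows about (a sketch, assuming the directories are named by full container ID as in the paths above):

docker ps -aq --no-trunc | sort > /tmp/known-containers
ls -1 /sys/fs/cgroup/cpuset/docker | grep -E '^[0-9a-f]{64}$' | sort > /tmp/cgroup-dirs
comm -23 /tmp/cgroup-dirs /tmp/known-containers    # cgroup directories with no matching container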

We are running Docker 1.12.3 on CentOS 7 (but the same is happening on 1.12.6):

#> docker version
Client:
 Version:      1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:
 OS/Arch:      linux/amd64
#> docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 1.12.3
Storage Driver: overlay
 Backing Filesystem: extfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge null host overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 3.10.0-327.36.3.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 24
Total Memory: 62.76 GiB
Name: cn05.internal
ID: EADF:J6RM:QH7B:XKW2:SGEW:WOVV:LYJ7:ES7Y:5JAI:EAJD:J4PZ:WALF
Docker Root Dir: /extra
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Insecure Registries:
 127.0.0.0/8
#> uname -r
3.10.0-327.36.3.el7.x86_64

After a power cycle of the affected nodes it seems that cgroups start to be cleaned up correctly as soon as containers exit. We are really not sure what is causing this trouble, which is affecting all our systems at random...

@thaJeztah (Member) commented Jan 15, 2017

ping @mlaventure PTAL

@mlaventure (Contributor) commented Jan 16, 2017

@dberzano do you have the same number of dead containers in /run/runc? Those cgroup directories are deleted when runc delete <id> is called upon container exit.

Is there any error in the daemon log after a container exits?
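
A quick way to compare the two (a sketch, assuming the entries under /run/runc are named by full container ID):

ls -1 /run/runc | sort > /tmp/runc-entries
docker ps -q --no-trunc | sort > /tmp/running-containers
comm -23 /tmp/runc-entries /tmp/running-containers    # runc state left behind with no running container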

@shashank0033 commented Jul 16, 2018

I have the same issue
#docker info
Containers: 461
Running: 3
Paused: 0
Stopped: 458
Images: 16
Server Version: 17.03.1-ce

None of the containers are running and there is enough space available, but when I try to create a container manually it throws "no space left".
OS version:
Linux 7.4 3.10.0-693.2.2.el7.x86_64

NicolasT added a commit to scality/kubernetes that referenced this issue Sep 2, 2018

ryarnyah added a commit to ryarnyah/runc that referenced this issue Sep 18, 2018

ryarnyah added a commit to ryarnyah/runc that referenced this issue Sep 25, 2018

ryarnyah added a commit to ryarnyah/runc that referenced this issue Sep 25, 2018

ryarnyah added a commit to ryarnyah/runc that referenced this issue Sep 25, 2018

ryarnyah added a commit to ryarnyah/runc that referenced this issue Sep 25, 2018

@trevex commented Oct 1, 2018

Having a similar or the same issue on CoreOS:

core@node-6 ~ $ docker run -it busybox sh
/run/torcx/bin/docker: Error response from daemon: mounting shm tmpfs: no space left on device.
core@node-6 ~ $ uname -a
Linux node-6.figo.systems 4.14.67-coreos #1 SMP Mon Sep 10 23:14:26 UTC 2018 x86_64 Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz GenuineIntel GNU/Linux
core@node-6 ~ $ docker info
Containers: 116
 Running: 56
 Paused: 0
 Stopped: 60
Images: 37
Server Version: 18.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: v0.13.2 (expected: fec3683b971d9c3ef73f284f176672c44b448662)
Security Options:
 seccomp
  Profile: default
 selinux
Kernel Version: 4.14.67-coreos
Operating System: Container Linux by CoreOS 1855.4.0 (Rhyolite)
OSType: linux
Architecture: x86_64
CPUs: 64
Total Memory: 125.9GiB
Name: node-6.figo.systems
ID: MBHE:DHJX:Q7I2:SMBS:OBGU:6UWQ:GA3S:AJPY:DTY7:JJKY:Q3MG:A2ZT
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
core@node-6 ~ $ cat /proc/self/mountinfo | wc -l
25185
core@node-6 ~ $ mount | wc -l
25185
core@node-6 ~ $ docker ps -a | wc -l
117
core@node-6 ~ $ cat /proc/cgroups
#subsys_name	hierarchy	num_cgroups	enabled
cpuset	11	91	1
cpu	2	145	1
cpuacct	2	145	1
blkio	10	145	1
memory	5	1389	1
devices	4	145	1
freezer	9	91	1
net_cls	3	91	1
perf_event	7	91	1
net_prio	3	91	1
hugetlb	8	91	1
pids	6	156	1
core@node-6 ~ $ find /sys/fs/cgroup/memory/docker -type d | wc -l
3
core@node-6 ~ $ find /sys/fs/cgroup/memory -type d ! -path '/sys/fs/cgroup/memory/docker*' | wc -l
152

Let me know if I can provide more details to track down the issue.

@chestack commented Oct 15, 2018

Same issue here:

docker v1.13
kubernetes v1.9.8

# cat /proc/cgroups | grep memory
memory	7	1133	1

kolyshkin added a commit to kolyshkin/runc that referenced this issue Oct 31, 2018

libcontainer/cgroups: do not enable kmem on broken kernels
Commit fe898e7 (PR opencontainers#1350) enables kernel memory accounting
for all cgroups created by libcontainer even if kmem limit is
not configured.

Kernel memory accounting is known to be broken in RHEL7 kernels,
including the latest RHEL 7.5 kernel. It does not support reclaim
and can lead to kernel oopses while removing cgroup (merging it
with its parent). Unconditionally enabling kmem acct on RHEL7
leads to bugs:

* opencontainers#1725
* kubernetes/kubernetes#61937
* moby/moby#29638

I am not aware of any good way to figure out whether the kernel
memory accounting in the given kernel is working or broken.
For the lack of a better way, let's check if the running kernel
is RHEL7, and disable initial setting of kmem.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
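
As a rough check for whether kernel memory accounting ended up active on a container's cgroup, a non-zero kmem usage counter is one indicator (a sketch, not part of the commit above):

$ head /sys/fs/cgroup/memory/docker/*/memory.kmem.usage_in_bytes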

kolyshkin added a commit to kolyshkin/runc that referenced this issue Nov 1, 2018

libcontainer: enable to compile without kmem
Commit fe898e7 (PR opencontainers#1350) enables kernel memory accounting
for all cgroups created by libcontainer -- even if kmem limit is
not configured.

Kernel memory accounting is known to be broken in some kernels,
specifically the ones from RHEL7 (including RHEL 7.5). Those
kernels do not support kernel memory reclaim, and are prone to
oopses. Unconditionally enabling kmem acct on such kernels lead
to bugs, such as

* opencontainers#1725
* kubernetes/kubernetes#61937
* moby/moby#29638

This commit gives a way to compile runc without kernel memory setting
support. To do so, use something like

	make BUILDTAGS="seccomp nokmem"

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>

kolyshkin added a commit to kolyshkin/runc that referenced this issue Nov 1, 2018

libcontainer/cgroups: do not enable kmem on broken kernels

kolyshkin added a commit to kolyshkin/runc that referenced this issue Nov 1, 2018

libcontainer: ability to compile without kmem

thaJeztah added a commit to thaJeztah/runc that referenced this issue Nov 13, 2018

libcontainer: ability to compile without kmem
(cherry picked from commit 6a2c155)

clnperez added a commit to clnperez/runc that referenced this issue Nov 13, 2018

libcontainer: ability to compile without kmem

@erulabs commented Nov 14, 2018

Same issue here: Ubuntu 18.04, Docker 17.03, kernel 4.15.0-1025-aws.

The memory cgroup count just grows and grows until a reboot is required to launch new containers. In my case, once near the limit I quickly reach 100% CPU utilization and the server becomes unresponsive.

Containers: 66
 Running: 64
 Paused: 2
 Stopped: 0
Images: 32
Server Version: 17.03.3-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6c463891b1ad274d505ae3bb738e530d1df2b3c7
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.15.0-1025-aws
Operating System: Ubuntu 18.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.1 GiB
Name: ip-172-31-17-38
ID: 5Y2E:3AWK:6JVH:Y3K2:CQDR:JN34:V6SR:VZXF:2WWO:R7F2:3GEY:6ECH
Docker Root Dir: /var/lib/evaldocker
Debug Mode (client): false
Debug Mode (server): false
Username: toniceval
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

@pascalandy commented Nov 21, 2018

After running Docker smoothly for over 2 years, I'm getting this issue as well.
Worse, I restarted and I still have 6 containers (out of 45) that are not starting for this reason!

Hmm.

I have been running this setup for over a year now. The 6 containers that are not starting are all caddy 0.10.14 containers. I have other Caddy containers that run normally.

All the commands I ran

uname -a; echo; echo;
docker info; echo; echo;
docker version; echo; echo;
docker ps -a | wc -l; echo; echo;
ls -l -F /sys/fs/cgroup/memory/docker/ | grep / | wc -l; echo; echo;
mount | wc -l; echo; echo;
cat /proc/cgroups | grep memory; echo; echo;
cat /proc/self/mountinfo | wc -l; echo; echo;
ls -1 /sys/fs/cgroup/cpuset/docker | wc -l; echo; echo;
find /sys/fs/cgroup/memory -type d ! -path '/sys/fs/cgroup/memory/docker*' | wc -l

Results

root@my-vps:~/deploy-setup# uname -a; echo; echo;
Linux my-vps 4.10.0-24-generic #28-Ubuntu SMP Wed Jun 14 08:14:34 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

root@my-vps:~/deploy-setup# docker info; echo; echo;

Containers: 45
 Running: 44
 Paused: 0
 Stopped: 1
Images: 49
Server Version: 18.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: fmbi1a5nn9sp5o4qy3eyazeq5
 Is Manager: true
 ClusterID: lzc3rrzjgu41053qywhym8jdg
 Managers: 1
 Nodes: 1
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: false
 Root Rotation In Progress: false
 Node Address: 123.123.123.23
 Manager Addresses:
  123.123.123.23:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.10.0-24-generic
Operating System: Ubuntu 16.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.782GiB
Name: my-vps
ID: X5WW:PFNN:WZU7:OMCH:EXFN:N6TL:KMS4:GEHQ:WJLZ:J7DS:IHWX:I5JZ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: devmtl
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

root@my-vps:~/deploy-setup# docker version; echo; echo;
Client:
 Version:           18.06.1-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        e68fc7a
 Built:             Tue Aug 21 17:24:56 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.1-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       e68fc7a
  Built:            Tue Aug 21 17:23:21 2018
  OS/Arch:          linux/amd64
  Experimental:     false

root@my-vps:~/deploy-setup# docker ps -a | wc -l; echo; echo;
46

root@my-vps:~/deploy-setup# ls -l -F /sys/fs/cgroup/memory/docker/ | grep / | wc -l; echo; echo;
44

root@my-vps:~/deploy-setup# mount | wc -l; echo; echo;
172

root@my-vps:~/deploy-setup# cat /proc/cgroups | grep memory; echo; echo;
memory	2	278	1

root@my-vps:~/deploy-setup# cat /proc/self/mountinfo | wc -l; echo; echo;
172

root@my-vps:~/deploy-setup# ls -1 /sys/fs/cgroup/cpuset/docker | wc -l; echo; echo;
61

root@my-vps:~/deploy-setup# find /sys/fs/cgroup/memory -type d ! -path '/sys/fs/cgroup/memory/docker*' | wc -l
201

@leckylao commented Feb 19, 2019

Same issue here: there is plenty of space in storage, but the cgroup shows 100% use, and I noticed its filesystem is none. Any idea how to resolve this?

df
Filesystem     1K-blocks     Used Available Use% Mounted on
udev              488328        4    488324   1% /dev
tmpfs             101604     1312    100292   2% /run
/dev/vda1       25745836 10700436  13718380  44% /
none                   4        4         0 100% /sys/fs/cgroup
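
For what it's worth, df here is reporting the small cgroup mount itself rather than the per-hierarchy cgroup count this issue is about; the checks from earlier in the thread are probably more telling:

$ cat /proc/cgroups | grep memory
$ find /sys/fs/cgroup/memory -type d | wc -l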