
Docker disk usage #12265

Closed
corradio opened this issue Apr 10, 2015 · 27 comments
Labels
area/storage kind/bug Bugs are bugs. The cause may or may not be known at triage time so debugging may be needed. status/more-info-needed

Comments

@corradio

The /var/lib/docker/aufs/mnt folder takes up a lot of disk space, which is not reclaimed when removing images, containers and volumes. Upon a Docker restart, the mnt folder is cleared.
How come there's such a leak? Maybe there should be a way to clean the mnt folder?
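One way to quantify the leak is to compare the directory names under the mnt folder against the IDs of containers Docker actually knows about. A minimal sketch (`find_orphans` is a hypothetical helper, and the paths assume the default aufs root; aufs also keeps `-init` suffixed layer dirs, so some mismatches are expected):

```shell
#!/bin/bash
# find_orphans: print mnt directory names that match no existing container ID.
# On a real host the two input files would be produced by:
#   ls /var/lib/docker/aufs/mnt    > mnt-dirs.txt
#   docker ps -aq --no-trunc       > container-ids.txt
find_orphans() {
    # $1 = file of mnt directory names, $2 = file of container IDs
    comm -23 <(sort "$1") <(sort "$2")
}

# Example (not run here):
#   find_orphans mnt-dirs.txt container-ids.txt
```

Anything printed by `comm -23` exists on disk but belongs to no container, which is the space that only comes back after a daemon restart.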

@catrixs

catrixs commented Apr 10, 2015

We've noticed this problem too.

We use device mapper, and this is our $DOCKER/devicemapper/mnt:

# ls /data0/docker-1.3.2-fs/devicemapper/mnt
00a44ea7bc58795e270434aab08c90f325f855d7da66a32050c6e805e14a154a       732e08194f78d944daa3fedc32a80158eb18c38e64252f4fb11b6d8ea2e09281
015fb409be0db516ea7b8b1d04baf15004b871101a8411ce2065b65bb8426920       738a3e1244858c91565b9de53df5bd521826f125957f315520b50f6f5f7ed3d5
033171c86f63eff1c4df0b1835bbb932c768bc3a329ac1bd040b9890a27e42c2       73db404d05060b8a86e593ec67f90333b8166d31625d363bfa99ba1ff58857a9
04898a5475ea3caeb93ad14691dfb317a365656584b65404b8963ee5827fcca9       745460dd6c1bca1610f33cf1b013a6d5ecec1383b1d77341f34269d8304a3210
0613392ec3032b2fff0ad766a9219414b7e72989d8a945c04efa44c00a253bec       7692870c7dd332551b2c43bb507f56f6192775ec26468b5b47b18a29543a33c8
06ba2b469f02095226dcefba6d1508bee09d7560c3577c65faf3f94a9a89649d       770dda57a9553c22ae520b8f2eb88eb4390e038ae453f33fa959eea125186186
074b04117ff3ad6d57c950ee4f8a98fa7801fdbc766bce284bdfc037c7a91030       7769a09e5deeb9b517170dc13a8d64a5eb71bc2dc30a48145242d0fc1dc45cef
08025ce30fa5565b33d96b8f00e1da916e60a93030092b3539a365a71919debe       7acf13620725110ef68530932bfd3ef505986d48b8c366c7f7733276005887fc
09f931dea66e0495318effc1a6f7cd883291dfe5269c3bb5252a8e8681441a24       7b5325ce0de5b0bef5a2d71cba7c126b37b00d80e9e5d28da67cd339538a91df
0bc8167367f58803ab4dc91470587525b88e996afc7bd87b66fea32886834332       7d022bdb0e39d7af5bf8a8594f0be46907e2029d8424cfc2b7d2160e661ea18f
0c8192c962eed605b591c0be82e765f654306e14a43b9ade6ecd7c20cbde96cc       7d3798f1cce99bdbcd77c14742773a523d3a8501e1cb5bf84d2ff043621e7a08
0d8d76d3ae3a7f9ba83e2a02a819a0ff2e4d02a0127e4776d3445df361cc2f06       7d87e87e97c05d45d417cc172e34ad02223a59a7901e517b0adebc285be08ee8
0dbb6d5790ea74875602f7716b0811da12dce2949cf8f40630c5031e9ad7a357       7db36ad5822692a0ae5249877ed59b3b2c66db8282fd2fef0fb24df15732594b
0de393304c112b32ceea01215409d64c3fd4c13516612720aa580cbeeb9888d9       7e36d1c80a8ae2b56e7546688008a34cd0f25ca64940f9e09e36bac1fe9bd9fe
0f5e9b986a868402a21f6380e3c81925001a4be65db4765d3ba5942005622f82       7e497c1516a0f48ed0de05f67dc00fedc72ec73adb66a92bcc3e562b5601d8c6
0fbccfdaf25897d518a97868205d3c6b4824df1d48a6a2f1dfae2b4b55d204b8       7ec824c30b53dfabd74e7912bf7de641e07f1f19c0978efc7ceb2719d2054722
102464121a6182a7fcc0c688ea68abfff7fd7d919e269f8bad98dbcff73ba7c6       7fff0c6f0b8d19f509f651068c2b058012685e64677d4b732bfecb03f60d7d6a
145ae39c7c01fc65162821c9c1b1b2df00cdd0434e3f21c81afffdac3e3076fb       817a529f3d4b458f7a5af9bdc3757464099d3258f914894f5cc136b5ed1a6475
152ef97aad75de8f58c50a4bf64b2c96157e93b1c5195a553d7a891fb05f729f       87f3fc67bd85ac93bfb6b285b21a13b964b00ca9696907073be2f1c7cf82fa70
15cc425bc4becac2c0c2c85bf005ec908e4c61ba46aa9940c7037299331bb17c       88657e158c91497d2b7c1aba17785dfdf0f8106e91f8df0e4a875820db1c1e94
174d91d34022dfe84dc64e8413f7981121c5b296c28b148ff835b6e4dda78030       89675049e2385f606eb2f3de44be2776e215becbe9b98ab8af9f9e40dab6a1be
18440429c378d5fe2268f34f2464f01c760ad93271279d8ffb78018634c15d4a       8969ec383412057cc9f840981c02d47b8da94209b18b479e912903547926bbb1
188aadb4530cee08d86e6831bd9912d8a4faa06219e7d8c0458d834d152bb5a0       8a2d636d14fd5e4c8b2ad7d4b49882577e4a026b7a257c1b66a4a47dc8ff8052
18af71afdb51121bf4f6d336da8d1ce5a4895ac988a74644220fe2c585a475bd       8b392c012eb931a5d9ba0eba4ff5a6289f538c11989295be95561feaf18ba72f

We only run two containers:

# docker ps -a
CONTAINER ID        IMAGE                                                                        COMMAND                CREATED             STATUS              PORTS               NAMES
6e9b714d68fd        registry.intra.weibo.com/weibo_rd_content/web_v4:WEB_V4_RELEASE_V2.3.23.37   "/docker_init.sh"      25 hours ago        Up 25 hours                             v4
3258ca115ef3        registry.intra.weibo.com/weibo_rd_common/cadvisor:0.7.1                      "/usr/bin/cadvisor -   9 weeks ago         Up 9 weeks                              cadvisor

Docker Info:

# docker info
Containers: 2
Images: 249
Storage Driver: devicemapper
 Pool Name: docker-8:6-3276803-pool
 Pool Blocksize: 65.54 kB
 Data file: /data0/docker1.3.2-fs/devicemapper/devicemapper/data
 Metadata file: /data0/docker1.3.2-fs/devicemapper/devicemapper/metadata
 Data Space Used: 4.89 GB
 Data Space Total: 107.4 GB
 Metadata Space Used: 12.36 MB
 Metadata Space Total: 2.147 GB
 Library Version: 1.02.89-RHEL6 (2014-09-01)
Execution Driver: native-0.2
Kernel Version: 2.6.32-431.11.2.el6.toa.2.x86_64
Operating System: <unknown>

Docker Version:

[root@77-109-144-bx-core ~]# docker version
Client version: 1.3.2
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): 39fa2fa/1.3.2
OS/Arch (client): linux/amd64
Server version: 1.3.2
Server API version: 1.15
Go version (server): go1.3.3
Git commit (server): 39fa2fa/1.3.2

@xiaods
Contributor

xiaods commented Apr 10, 2015

Have you tried a newer Docker version?

@thaJeztah
Member

I'm trying to determine if this is related to #11113 and #10991, or at least this part:

Right now we keep container rootfs mounted (in /var/lib/docker/devmapper/mnt/) after container launch and it is unmounted once container has exited.

But this creates problems of devices leaking into mount namespace of other containers. And that in turn does not allow container to exit or to be removed. We give up after 10 seconds in the process leaving behind active devices or unreclaimable space from thin pool.

@thaJeztah
Member

@corradio could you provide the output of uname -a, docker version and docker -D info?

Also (if you're not already doing so), could you try testing with the current release or release candidate of Docker?

@corradio
Author

uname -a:

Linux infra1-par 3.13.0-36-generic #63-Ubuntu SMP Wed Sep 3 21:30:07 UTC
2014 x86_64 x86_64 x86_64 GNU/Linux

docker -D info:

Containers: 22
Images: 565
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 609
Execution Driver: native-0.2
Kernel Version: 3.13.0-36-generic
Operating System: Ubuntu 14.04.1 LTS
CPUs: 8
Total Memory: 31.26 GiB
Name: ******
ID: OG4X:E7QY:ISBZ:I5XY:FXJG:BJ6O:TCMD:QAM2:ZB5X:RAU3:AHWK:COWS
Debug mode (server): false
Debug mode (client): true
Fds: 132
Goroutines: 103
EventsListeners: 1
Init Path: /usr/bin/docker
Docker Root Dir: /var/lib/docker
WARNING: No swap limit support


@thaJeztah
Member

Thanks, looks like you missed docker version though :)

@corradio
Author

Sorry, here's the output of docker version

Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.4.1
Git commit (client): a8a31ef
OS/Arch (client): linux/amd64
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.4.1
Git commit (server): a8a31ef


@thaJeztah thaJeztah added the kind/bug Bugs are bugs. The cause may or may not be known at triage time so debugging may be needed. label Apr 10, 2015
@thaJeztah
Member

Thanks, @corradio. I have labeled this as a bug, but there's a chance that it duplicates an existing issue; there are a number of issues related to Docker sometimes not properly unmounting container filesystems.

/ping @unclejack perhaps you know if this is already tracked by an existing issue (and which one)

@stevenschlansker

We have similar problems with the btrfs driver:

root@ip-10-70-7-137:~# docker version
Client version: 1.4.1
Client API version: 1.16
Go version (client): go1.3.3
Git commit (client): 5bc2ff8
OS/Arch (client): linux/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8
root@ip-10-70-7-137:~# docker info
Containers: 26
Images: 17
Storage Driver: btrfs
Execution Driver: native-0.2
Kernel Version: 3.19.0
Operating System: Ubuntu 14.04.2 LTS
CPUs: 8
Total Memory: 29.39 GiB
Name: ip-10-70-7-137
ID: TZ6Q:2QBI:ROKL:LJII:HTNO:FDHH:ZAFL:67IJ:4NM3:T4WA:JVXQ:AHYN

@poelzi

poelzi commented Jul 29, 2015

We use Docker as a build system. Currently I have to delete /var/lib/docker once a week or so because of this image leakage.

@poelzi

poelzi commented Jul 29, 2015

root@gitlabcirunner2:/var/lib# docker version
Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 786b29d
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d
OS/Arch (server): linux/amd64

This happens with aufs, and we had the problem with devicemapper as well.

@cpuguy83
Member

Is this related to docker-in-docker?
There are known issues with aufs over aufs, as well as with cleaning up subvolumes from btrfs.

Note that with devicemapper you cannot use a static Docker binary, as doing so causes corruption issues that lead to exactly this.

Is this still a problem?
Please make sure to include the output of docker info as well.
Thanks!

@thaJeztah
Member

Is this still an issue with the current release?

@thaJeztah
Member

i.e., is anybody here still having this issue with the current release?

@bfirsh
Contributor

bfirsh commented Dec 10, 2015

@poelzi @stevenschlansker @corradio Are you still seeing this issue?

@stevenschlansker

@thaJeztah @bfirsh We certainly still have disk space issues, although it's not entirely clear what the cause is or why it remains a problem. Since this issue was opened we've transitioned to the overlay driver.

Concerns we still have:

In short, this is still a pain point for us, although it's not clear that this issue in particular still tracks anything for us.

@bfirsh
Contributor

bfirsh commented Dec 11, 2015

@stevenschlansker Thanks. You might want to watch this to keep an eye on latest updates with your 3rd concern: #18601

@icecrime
Contributor

Cc @tonistiigi @mlaventure.

@thaJeztah
Member

Improved management is currently being worked on in #26108, and tracked through #22871

@thaJeztah
Member

Given that the issues mentioned in #12265 (comment) are already tracked through separate issues, let me close this issue to not duplicate things.

@kiwenlau

kiwenlau commented Jul 18, 2017

I'm facing the same problem as @corradio.

/var/lib/docker/aufs/mnt/ takes up a lot of disk space:

df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G  4.0K  3.9G   1% /dev
tmpfs           799M  748K  798M   1% /run
/dev/vda1       118G   93G   20G  83% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
none            5.0M     0  5.0M   0% /run/lock
none            3.9G  1.9M  3.9G   1% /run/shm
none            100M     0  100M   0% /run/user
none            118G   93G   20G  83% /var/lib/docker/aufs/mnt/d3a7f8d79b00a95f098e547ff6a553557c98f57486328e24ef24544ee0647ba2
shm              64M     0   64M   0% /var/lib/docker/containers/fd22b32d6bad8044123f829030d6260cb7c59423ddb7cb0422f829a2cc63a81c/shm
none            118G   93G   20G  83% /var/lib/docker/aufs/mnt/7448f3669686a3b2d825b3b544442b81c9a0c9915fe28be393496d675f94afdb
shm              64M     0   64M   0% /var/lib/docker/containers/22c9bd719dd7b14393cbba52dd75f666843a5d4bbcfa73230a849c6ce53cf330/shm
none            118G   93G   20G  83% /var/lib/docker/aufs/mnt/b6e2a9f5aaf3b96c010acc4f84daa713e1faa09862e0c1c523df2fa6e0af7add
shm              64M     0   64M   0% /var/lib/docker/containers/5042c59b6c21267ba7b57fccff47156a33408c6130fba720a37b4034eb6f9879/shm
none            118G   93G   20G  83% /var/lib/docker/aufs/mnt/082ccb0f01fa4acfa4eeb7e6e64d301ac189125ca1843560491447d9c962bbad
shm              64M     0   64M   0% /var/lib/docker/containers/2a40acc3ebca0878a7bd94ce7645cf3ce2d5cbe10d834c16c87a1e2da9e0630f/shm
none            118G   93G   20G  83% /var/lib/docker/aufs/mnt/7c07c1d6068f430e16f7e973b147a18e82c67689672fab770071f7c739c0ec49
shm              64M     0   64M   0% /var/lib/docker/containers/249df847183a684d9222aa0575234f018fc975559bbbc1162030afa903ccf30d/shm
none            118G   93G   20G  83% /var/lib/docker/aufs/mnt/58b136e4c6176ca4a45423ae4380b4f55e44deca8359619ee62179a6648c5f51
none            118G   93G   20G  83% /var/lib/docker/aufs/mnt/3de0b7f8f7b770f74ceb5575c0d1b2cc54464d9eee5da43d135344acc12cb2c8
shm              64M     0   64M   0% /var/lib/docker/containers/adddb00a1035312d11e868673078ac49a47173358827e544f795009567b4911e/shm
none            118G   93G   20G  83% /var/lib/docker/aufs/mnt/e8d106d34c22e7c8781b9a9c97fabee4004161bde1bbc21d4fab0cb0001a9328
shm              64M     0   64M   0% /var/lib/docker/containers/2caa3c43a54e5d38181465426837f02821f71b1f32204401a290ae62550f7efb/shm
shm              64M     0   64M   0% /var/lib/docker/containers/a376aa694b22ee497f6fc9f7d15d943de91c853284f8f105ff5ad

I only run 8 containers, but there are 80 directories in /var/lib/docker/aufs/mnt/; maybe this is the problem:

ls -l /var/lib/docker/aufs/mnt | wc -l
81

I removed stopped containers, untagged images and unused volumes, but nothing helped.

Do I have to restart docker to solve it? I have this problem in a production environment...

I also checked docker#12265 (comment); in fact, it doesn't seem to be exactly the same problem as ours.

docker version
Client:
 Version:      1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:        Wed Oct 26 21:44:32 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:        Wed Oct 26 21:44:32 2016
 OS/Arch:      linux/amd64
docker info
Containers: 8
 Running: 7
 Paused: 0
 Stopped: 1
Images: 8
Server Version: 1.12.3
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 80
 Dirperm1 Supported: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor
Kernel Version: 3.13.0-86-generic
Operating System: Ubuntu 14.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.798 GiB
Name: node03
ID: E7HQ:CW4F:HHIY:XB6C:MT5M:XJPA:X2GL:A3WD:SZW4:X2YX:MRSA:RVYH
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
 192.168.59.224:5000
 127.0.0.0/8

@kiwenlau

I reduced the disk usage from 83% to 19% just by restarting Docker:

sudo restart docker

Why does this happen?

In addition, restarting Docker is not a good idea in a production environment. Is there any better solution?

@mlaventure
Contributor

@kiwenlau it's quite likely that when those containers shut down, Docker couldn't remove the directories because the shm device was busy. This tends to happen often on the 3.13 kernel. You may want to update to the 4.4 kernel supported on trusty 14.04.5 LTS.

The reason the space disappeared after a restart is that the daemon probably retried and succeeded in cleaning up leftover data from stopped containers.
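One way to check for this condition without restarting the daemon is to look for docker-related entries left in the mount table. A sketch (`stale_docker_mounts` is a hypothetical helper; on a real host it would read `/proc/mounts`, and the paths assume the default Docker root):

```shell
#!/bin/bash
# stale_docker_mounts: print mount points under /var/lib/docker found in a
# mounts table (normally /proc/mounts). Entries whose container no longer
# exists are candidates for a manual `umount`, avoiding a daemon restart.
stale_docker_mounts() {
    awk '$2 ~ "^/var/lib/docker/(aufs/mnt|containers)/" { print $2 }' "$1"
}

# On a real host:
#   stale_docker_mounts /proc/mounts
#   # then compare against `docker ps -aq --no-trunc` before unmounting
```

Each aufs/shm mount printed here that has no corresponding container in `docker ps -a` is space the kernel is still holding busy.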

@kiwenlau

@mlaventure Thx!

I've checked the kernel version; it's 3.13:

uname -r
3.13.0-86-generic

I will probably update the kernel to completely solve the problem.

@warmchang

👍

@ghost

ghost commented Nov 7, 2018

docker system prune --volumes -f

@anshumansworld

docker system prune --volumes -f

Beware: this will delete all stopped containers, unused volumes, unused networks, and dangling images!
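On production hosts it may be safer to gate the prune on actual disk pressure rather than running it unconditionally. A sketch (`prune_if_full` is a hypothetical helper; the threshold is arbitrary, and `df --output=pcent` assumes GNU coreutils):

```shell
#!/bin/bash
# prune_if_full: run `docker system prune -f` only when the filesystem
# holding the given mount point is above a usage threshold (percent).
# Returns 0 if a prune was triggered, 1 otherwise.
prune_if_full() {
    threshold=$1
    mount=$2
    used=$(df --output=pcent "$mount" | tail -1 | tr -dc '0-9')
    if [ "$used" -gt "$threshold" ]; then
        # Removes stopped containers, unused networks and dangling images;
        # only add --volumes if you are sure no data volume is still needed.
        docker system prune -f
        return 0
    fi
    return 1
}

# Example: prune only when /var/lib/docker's filesystem is over 80% full:
#   prune_if_full 80 /var/lib/docker
```

Running this from cron keeps the prune from firing on healthy hosts while still catching the slow leak described above.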
