docker fills up disk #9786

Closed
dbabits opened this Issue Dec 23, 2014 · 23 comments

dbabits commented Dec 23, 2014

After using Docker for some time, my disk filled up.
I have removed all stopped containers.
I was also getting errors like this:
$ sudo docker rm 8ed38fdc9a37
Error response from daemon: Cannot destroy container 8ed38fdc9a37: Driver devicemapper failed to remove root filesystem 8ed38fdc9a37e663dd85ea77cd6fbf91e48ed67ec5c62f0be5b60fe682e0bb08: Device is Busy

At the moment, the disk space looks like this:
$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 6.3G 1.5G 82% /

$ sudo du -sh /var/lib/docker/*
5.1G /var/lib/docker/devicemapper

Can somebody please explain where the 5.1 GB went?
Looking at the sizes of the images, the numbers don't add up (they total 3.976 GB):

$ sudo docker info
Containers: 0
Images: 87
Storage Driver: devicemapper
Pool Name: docker-202:1-401521-pool
Pool Blocksize: 65.54 kB
Data file: /var/lib/docker/devicemapper/devicemapper/data
Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
Data Space Used: 5.389 GB
Data Space Total: 107.4 GB
Metadata Space Used: 7.725 MB
Metadata Space Total: 2.147 GB
Library Version: 1.02.89-RHEL6 (2014-09-01)
Execution Driver: native-0.2
Kernel Version: 3.14.23-22.44.amzn1.x86_64
Operating System: Amazon Linux AMI 2014.09
Username: dbabits
Registry: [https://index.docker.io/v1/]

$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
dbabits/docker1 latest 63fb7fb9d95e 43 hours ago 1.531 GB
centos centos7 34943839435d 2 weeks ago 224 MB
aws_beanstalk/current-app latest f7f806e2cbc0 3 weeks ago 995.7 MB
leodido/sphinxsearch latest a8c26fad253d 8 weeks ago 979.6 MB
training/webapp latest 31fa814ba25a 6 months ago 278.6 MB

$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

Environment:
AWS;
Linux ip-172-31-63-145 3.14.23-22.44.amzn1.x86_64 #1 SMP Tue Nov 11 23:07:48 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

$ sudo docker version
Client version: 1.3.2
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): c78088f/1.3.2
OS/Arch (client): linux/amd64
Server version: 1.3.2
Server API version: 1.15
Go version (server): go1.3.3
Git commit (server): c78088f/1.3.2

thaJeztah commented Dec 23, 2014

Looking at sizes of images, numbers don't add up, (they add up to 3.976G):

If I'm not mistaken, the VIRTUAL SIZE of images is not the size on disk. For example, if an image has 100MB on layer 1 and removes 25MB on layer 2, the virtual size of that image is 75MB, but the size on disk is still 100MB (or even slightly more).
You can try docker images -a to also see intermediate layers, and check if that adds up.

update I actually was remembering incorrectly. See my comment below: #9786 (comment)
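The cross-check thaJeztah suggests can be scripted. A rough sketch, using the VIRTUAL SIZE figures copied from the `docker images` listing earlier in this issue (the awk pipeline is just an illustration, not a Docker feature; in practice you would pipe real `docker images -a` output through the same arithmetic):

```shell
# Sum a VIRTUAL SIZE column that mixes MB and GB units; the sample values
# are taken from the `docker images` output above.
sample='1.531 GB
224 MB
995.7 MB
979.6 MB
278.6 MB'
echo "$sample" | awk '{ s = $1; if ($2 == "GB") s *= 1024; total += s }
                      END { printf "%.1f MB\n", total }'
# prints: 4045.6 MB
```

Comparing that total against `du -sh /var/lib/docker/devicemapper` shows whether the listed images account for the space.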

SvenDowideit commented Dec 27, 2014

@thaJeztah oh nice - on reading this, I'm not sure we make that clear in the docs - might be well worth making a PR :)

thaJeztah commented Dec 27, 2014

@SvenDowideit mm, you're right. I'll check if I can find anything in the doc that explains it. If not, will create a PR.

kdomanski commented Dec 28, 2014

Distros from the Red Hat family have no AUFS support, so devicemapper is used, as your docker info output shows. The devicemapper driver uses a sparse file to store data. The file's maximum size is set to 100 GB, as far as I remember.

By default, space occupied by removed layers is not reclaimed from the file (or: the file isn't re-sparsified) until the file actually reaches this maximum. This behavior improves performance, but eventually makes the file grow to 100 GB.

On my CentOS 6 deployments I force re-sparsification by adding --storage-opt dm.blkdiscard=true to the daemon parameters in /etc/sysconfig/docker. This should probably be advised for RHEL/CentOS 6 in Docker's install docs.
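On RHEL/CentOS 6 packaging, that daemon setting looks roughly like the fragment below (the file path and the `other_args` variable name follow the Red Hat sysconfig convention; adjust for your distro, and restart the daemon afterwards, e.g. `sudo service docker restart`):

```shell
# /etc/sysconfig/docker  (RHEL/CentOS 6 packaging; variable name may vary)
# Issue discards when layers are removed, so the sparse loopback data file
# is re-sparsified instead of growing toward its 100 GB maximum.
other_args="--storage-opt dm.blkdiscard=true"
```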

kdomanski commented Dec 28, 2014

Surprisingly, the Docker man page appears to say that blkdiscard is enabled by default. @miminar @vbatts

kdomanski commented Dec 31, 2014

btw. duplicate: #3182

thaJeztah commented Jan 3, 2015

Ok, coming back to my prior explanation: I had my wires crossed, and VIRTUAL SIZE should actually (roughly) match the size on disk, but keep in mind that layers shared by multiple images only take up disk space once. To see how an image's size is built up, docker history is probably more useful than docker images -a.

To answer @SvenDowideit's question, there's already an explanation in the docs in the CLI / Images section:

The VIRTUAL SIZE is the cumulative space taken up by the image and all
its parent images. This is also the disk space used by the contents of the
Tar file created when you docker save an image.

vbatts commented Jan 3, 2015

Also, there is presently a bug in the calculation of the virtual size, which fails to account for hard-linked files. Therefore the size displayed may be larger than the layer actually is.

vbatts commented Jan 7, 2015

@dbabits we will have to review the next steps on a sanity check for disk space, as it varies per graph driver.

@kdomanski this is not a duplicate of #3182: that one dealt with blocks not being recovered by the kernel, while this one is about the loopback file created by devicemapper being allowed to grow up to 100 GB, even though the disk holding that loopback file may not have space for that maximum size.

termie commented Mar 10, 2015

+1

rhvgoyal commented Jun 8, 2015

Is this still an issue? First of all, one should not be using loop devices. Even if you are, discards are issued by default for loop devices. So is this still a problem with the latest Docker?

rhvgoyal commented Jun 8, 2015

I can think of one corner case: if Docker did not shut down properly, the pool is not removed and is still there on the next start. Docker will not know about the loop devices, and any further removal will not issue discards.

rhvgoyal commented Jun 12, 2015

In the latest Docker, we enable discards on loop devices, so deleting images/containers cleans up this space.

Is this issue reproducible on the latest Docker?

If not, let's close the issue and reopen it whenever we can reproduce it with the latest Docker.

vbatts commented Jun 12, 2015

Is this a similar issue to #3182?

rhvgoyal commented Jun 16, 2015

So what's the issue here?

  • The original report lists two problems. The first is the "Device is Busy" error; a reboot should solve that. The second is that the user is asking what is consuming space in /var/lib/docker/, and the space consumed there seems to be more than it should be. Again, maybe the user is using ext3, where discards don't work, and this issue is similar to #3182.
  • W.r.t. the virtual size being larger than the actual physical space available on disk, I don't think that's a bug. That's how virtual sizes work: you can create a virtual disk as big as you want without the physical space being there. Physical space can be made available later by plugging in more disks.
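The thin-provisioning point is easy to demonstrate with an ordinary sparse file (a small illustration, not Docker-specific; devicemapper's loopback data file behaves the same way):

```shell
# Create a file with a 100 GB apparent size, like devicemapper's default
# data file. `ls -l` reports the apparent size; `du` reports the blocks
# actually allocated, which is what the disk really loses.
truncate -s 100G sparse.img
ls -l sparse.img   # size column: 107374182400 bytes
du -k sparse.img   # allocated blocks: ~0 KB, nothing written yet
rm sparse.img
```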

So I really don't think this is a bug anymore, and we should close it.

@dbabits are you still seeing the problem? If so, please provide more details, like what file system you are using and exactly what problem you are seeing. Otherwise, let's close this issue.

vbatts commented Jun 16, 2015

Closing per @rhvgoyal.
OP can reopen if this is still an issue on the latest upstream releases.

carn1x commented Jun 29, 2015

This was very useful to reclaim a ton of space: https://github.com/chadoe/docker-cleanup-volumes

Specifically, a single docker run command performs the task:

docker run -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker --rm martin/docker-cleanup-volumes:1.4.1 --dry-run

where 1.4.1 is your Docker version. Be very careful to pick the right version: running a pre-1.7.0 version of this script against a post-1.7.0 daemon may wipe everything.
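One way to avoid hand-picking the tag is to derive it from the running daemon. A hypothetical sketch: it assumes the image tags track Docker versions exactly (verify against the docker-cleanup-volumes README) and parses the "Server version:" line printed by Docker 1.x, as shown in `docker version` output earlier in this thread:

```shell
# Extract the daemon version from `docker version` (Docker 1.x output
# format) and use it as the cleanup image tag. Hypothetical; confirm the
# tag exists before running without --dry-run.
tag=$(docker version 2>/dev/null | awk -F': *' '/^Server version/ { print $2 }')
echo "martin/docker-cleanup-volumes:${tag:-unknown}"
```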

thaJeztah commented Jun 29, 2015

@carn1x that's about cleaning up orphaned volumes, and unrelated to this issue. By default, Docker does not remove a volume when you remove the container it belongs to. To remove the volume as well, add the -v option when deleting the container, i.e. docker rm -v <my-container>. An alternative to the tool you linked is https://github.com/cpuguy83/docker-volumes, a "PoC" implementation of what volume management in Docker itself might look like (which will be worked on).

edit: orphaned containers -> orphaned volumes

ernestm commented Feb 3, 2016

I am having this issue on Ubuntu 14.04 with Docker 1.9.1.

Giant data file:

root@ip-10-231-39-235:/mnt/docker/devicemapper/devicemapper# ls -l
total 2386892
-rw------- 1 root root 107374182400 Feb 3 17:20 data
-rw------- 1 root root 2147483648 Feb 3 17:20 metadata

Just two images, stock ubuntu and the one I was building:

root@ip-10-231-39-235:/mnt/docker/devicemapper/devicemapper# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> 2ed2c8373675 3 hours ago 187.9 MB
ubuntu 14.04 6cc0fc2a5ee3 2 weeks ago 187.9 MB

Don't see where it's going in docker history:

root@ip-10-231-39-235:/mnt/docker/devicemapper/devicemapper# docker history 6cc0fc2a5ee3
IMAGE CREATED CREATED BY SIZE COMMENT
6cc0fc2a5ee3 2 weeks ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0 B
f80999a1f330 2 weeks ago /bin/sh -c sed -i 's/^#\s*\(deb.*universe\)$/ 1.895 kB
2ef91804894a 2 weeks ago /bin/sh -c echo '#!/bin/sh' > /usr/sbin/polic 194.5 kB
92ec6d044cb3 2 weeks ago /bin/sh -c #(nop) ADD file:7ce20ce3daa6af21db 187.7 MB

root@ip-10-231-39-235:/mnt/docker/devicemapper/devicemapper# docker history 2ed2c8373675
IMAGE CREATED CREATED BY SIZE COMMENT
2ed2c8373675 3 hours ago /bin/sh -c #(nop) ENV OTX_SRVDATA=/srv/data 0 B
20baf676070d 3 hours ago /bin/sh -c #(nop) ENV OTX_SRVCONFIG=/srv/conf 0 B
c108605684ca 3 hours ago /bin/sh -c #(nop) ENV OTX_SRVSCRIPT=/srv/scri 0 B
475404e0932d 3 hours ago /bin/sh -c #(nop) ENV OTX_SRVPROJ=/srv/otx 0 B
3a1fe8e35cfa 3 hours ago /bin/sh -c #(nop) ENV OTX_SRVHOME=/srv 0 B
b223ab4b75c3 3 hours ago /bin/sh -c #(nop) ENV OTX_DATA=data 0 B
a8dca960521f 3 hours ago /bin/sh -c #(nop) ENV OTX_CONFIG=config 0 B
6c7c1d338ddd 3 hours ago /bin/sh -c #(nop) ENV OTX_SCRIPT=scripts 0 B
c0853c176c64 3 hours ago /bin/sh -c #(nop) ENV OTX_SRC=otx 0 B
d20584e82f9b 3 hours ago /bin/sh -c #(nop) MAINTAINER AlienVault 0 B
6cc0fc2a5ee3 2 weeks ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0 B
f80999a1f330 2 weeks ago /bin/sh -c sed -i 's/^#\s*\(deb.*universe\)$/ 1.895 kB
2ef91804894a 2 weeks ago /bin/sh -c echo '#!/bin/sh' > /usr/sbin/polic 194.5 kB
92ec6d044cb3 2 weeks ago /bin/sh -c #(nop) ADD file:7ce20ce3daa6af21db 187.7 MB

Just one container:

root@ip-10-231-39-235:/mnt/docker/devicemapper/devicemapper# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
775ef165c585 2ed2c8373675 "/bin/sh -c 'apt-get " 3 hours ago Exited (100) 3 hours ago compassionate_wescoff

Context is I was building a Docker image on a (brand new) Elastic Bamboo build server; it worked fine on my OS X laptop. Just Ubuntu plus some light stuff. When building on the build server, it did a bunch of apt-get updates, then hung when it filled up /mnt, courtesy of that data file.

thaJeztah commented Feb 4, 2016

@ernestm note that those files in 'devicemapper' are sparse files, so ls -l won't show the actual size they take up on disk; can you try ls -lsh? (also see #3182)

Wondering: any reason you're using devicemapper on Ubuntu? The default (and recommended for most situations) driver for Ubuntu is aufs, but it does require linux-image-extra to be installed. https://docs.docker.com/engine/installation/ubuntulinux/
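For reference, the `-s` in ls -lsh adds a leading column with the blocks actually allocated, which is the number that matters for a sparse data file; stat prints both figures explicitly. A quick sketch on a throwaway file:

```shell
# Apparent size vs. allocated blocks on a freshly truncated (sparse) file.
truncate -s 1G demo.img
stat -c 'apparent=%s bytes, allocated=%b blocks of %B bytes' demo.img
# allocated stays near zero until data is actually written
rm demo.img
```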

ernestm commented Feb 4, 2016

Sure, and it is using less than that, it turns out (I'm not in front of it right now to run ls -lsh, but after making a much larger EBS volume it got by with only about 5 GB; still a lot, but eh).

aufs isn't on the Atlassian-provided Elastic Bamboo Ubuntu images for whatever reason. We're looking into adding it and reconfiguring the existing Docker installation to use it, though I'm not sure how that'll interact with the Bamboo built-in Docker tasks. I'll log it with Atlassian too as an improvement request.

Thanks,
Ernest


ghost commented Feb 7, 2016

So, is there a workaround to recover from this without losing my image? Once my 16 GB volume on Amazon Linux filled up, I cannot even do a docker save to save my image:
Error response from daemon: Error mounting '/dev/mapper/docker-202:48-262145-974321c5cc57b89530836ba059cea51e0a0fa5f616eeba9a0e357d9fd6a7dd85' on '/docker/devicemapper/mnt/974321c5cc57b89530836ba059cea51e0a0fa5f616eeba9a0e357d9fd6a7dd85': input/output error

ghost commented Feb 7, 2016

Oh, and an ls -lsh data shows an apparent size of 100 GB, but my volume is only 16 GB.
