Data space full and cannot be freed #26015

Closed
osallou opened this Issue Aug 25, 2016 · 6 comments

@osallou
osallou commented Aug 25, 2016

Output of docker version:

Client:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   23cf638
 Built:        
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   23cf638
 Built:        
 OS/Arch:      linux/amd64

Output of docker info:

Containers: 8
 Running: 0
 Paused: 0
 Stopped: 8
Images: 41
Server Version: 1.12.1
Storage Driver: devicemapper
 Pool Name: docker-8:3-373157-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: ext4
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 107.4 GB
 Data Space Total: 107.4 GB
 Data Space Available: 0 B
 Metadata Space Used: 119.8 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.028 GB
 Thin Pool Minimum Free Space: 10.74 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.107-RHEL7 (2016-06-09)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: host bridge null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 3.10.0-229.4.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 31.26 GiB
Name: cl1n002
ID: S7PW:ZMJW:JKIZ:V7SW:PUTL:2VFY:B7WM:QGQ7:3RLE:Z7EQ:2S6D:HZLQ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Insecure Registries:
 127.0.0.0/8

Additional environment details (AWS, VirtualBox, physical, etc.):

physical, package installed from Docker repo

Steps to reproduce the issue:

I upgraded from Docker 1.6 to 1.12.

docker rm $(docker ps -a -q)
Error response from daemon: Driver devicemapper failed to remove root filesystem 4989b296d6014c7ce9c1b31cbc3fb49fe4929e0be4e119cbdcfe8bc708249bab: devicemapper: Error running DeleteDevice dm_task_run failed

docker pull fails because there is no space left.

Describe the results you received:

What is strange: docker info reports Images: 41, but listing the images shows only a handful, all untagged:

# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
<none>              <none>              6e62ef6c8f87        8 days ago          518.4 MB
<none>              <none>              d9106b29abce        8 days ago          1.429 GB
<none>              <none>              927c70a63573        8 days ago          500.3 MB
<none>              <none>              9b08c1a7e9d5        8 days ago          722.2 MB
<none>              <none>              054a19cbabc4        8 days ago          569.1 MB
<none>              <none>              d1692e256fa2        8 days ago          569.1 MB
<none>              <none>              93ec7c1b43d7        8 days ago          847.7 MB
<none>              <none>              be99b5872394        14 months ago       19.89 MB
<none>              <none>              6a4d87838fa5        2 years ago         355.3 MB

# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
4989b296d601        d9106b29abce        "/bin/sh -c /mnt/go-d"   8 days ago          Dead                                    mesos-1351ad65-0267-4a78-b45b-17807946756a
....

The images appear to be broken (no repository or tag) after the upgrade.
However, I cannot delete the images or the containers, due to the error shown above.

I see this issue on two servers (same configuration), both with Data Space full.

Describe the results you expected:

Data Space Used should not be this large: I only keep a few images and containers (a few GB against the 107 GB total).
I wouldn't mind a full cleanup of Docker, but a yum remove does not fix the problem.

Additional information you deem important (e.g. issue happens only occasionally):

After uninstalling and reinstalling docker-engine, docker info reported Images: 42, while it previously reported Images: 41.

@bcdonadio

When you run docker rm, you're removing containers, not images. Try this:

# docker rmi $(docker images -qf dangling=true)

Does it work?
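
For reference, you can preview what that would remove by listing the dangling images first (same filter, read-only):

# docker images -f dangling=true

This lists the untagged <none> images; the docker rmi command above then removes them by ID.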

@osallou
osallou commented Aug 25, 2016

I can't remove the images because containers reference them, and I can't remove the containers because of the error in my previous comment.

@bcdonadio

Oh, makes sense.

I'm interested in that "unable to delete" message. Could you look in your system journal and see if there's a more descriptive message there?

And just a wild guess: to make sure that there isn't a rogue process keeping the FD open, did you restart the system?
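
One quick way to check that (a sketch assuming the default devicemapper paths from the docker info output above):

# lsof /var/lib/docker/devicemapper/devicemapper/data
# lsof /var/lib/docker/devicemapper/devicemapper/metadata
# lsof /dev/loop0 /dev/loop1

If lsof prints nothing for these, no userspace process is holding an open FD on them.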

@osallou
osallou commented Aug 25, 2016

I rebooted the system. I even uninstalled docker-engine and reinstalled; same issue.
In journal I find:

Aug 25 22:58:13 cl1n002 kernel: Buffer I/O error on device dm-1, logical block 2621407
Aug 25 22:58:13 cl1n002 kernel: Buffer I/O error on device dm-1, logical block 2621407
Aug 25 22:58:13 cl1n002 systemd-udevd: error: /dev/dm-1: No such device or address
Aug 25 22:58:13 cl1n002 kernel: device-mapper: thin: 253:0: unable to service pool target messages in READ_ONLY or FAIL mode
Aug 25 22:58:13 cl1n002 kernel: device-mapper: thin: 253:0: unable to service pool target messages in READ_ONLY or FAIL mode
Aug 25 22:58:13 cl1n002 dockerd: time="2016-08-25T22:58:13.511550047+02:00" level=error msg="Error removing mounted layer 726011e710c80876b0d9b3058064d4eb512ef30a8a0692313485a991f5875157: devicemapper: Error running DeleteDevice dm_task_run failed"
Aug 25 22:58:13 cl1n002 dockerd: time="2016-08-25T22:58:13.511618056+02:00" level=error msg="Handler for DELETE /v1.24/containers/726011e710c8 returned error: Driver devicemapper failed to remove root filesystem 726011e710c80876b0d9b3058064d4eb512ef30a8a0692313485a991f5875157: devicemapper: Error running DeleteDevice dm_task_run failed"

On the other host with the same issue, I uninstalled Docker, deleted /var/lib/docker, and reinstalled. Everything was fine after that.
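
For anyone else in this state, the full cleanup amounts to roughly the following (destructive: it deletes every image, container, and volume; package and service names assume the docker-engine package on CentOS 7):

# systemctl stop docker
# yum remove -y docker-engine
# rm -rf /var/lib/docker
# yum install -y docker-engine
# systemctl start docker

The rm -rf step is what actually discards the corrupted loopback files; removing the package alone leaves /var/lib/docker in place, which is why yum remove by itself did not help.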

@osallou
osallou commented Aug 25, 2016

The thin pool seems to have switched to read-only:

# dmsetup status
docker-8:3-373157-pool: 0 209715200 thin-pool 4558 29247/524288 1638400/1638400 - ro discard_passdown queue_if_no_space
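
Decoding that status line against the kernel's thin-provisioning target format (my annotation, not part of the dmsetup output):

  4558              transaction id
  29247/524288      metadata blocks used/total
  1638400/1638400   data blocks used/total (completely full)
  ro                pool mode: read-only

1638400 data blocks at the 65.54 kB pool blocksize works out to exactly the 107.4 GB that docker info reports as Data Space Total.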

This looks like https://bugzilla.redhat.com/show_bug.cgi?id=1121736, though I'm using a loopback device rather than LVM.

@thaJeztah
Member

Once you have run out of space on the loopback device, I don't think there's a way to recover; see #20272. Docker 1.11 added an option to prevent getting into this situation by letting you specify a minimum amount of space to keep free (see #20786), but that won't help now that you're already in this state.
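
For others landing here before the pool fills up, the option referenced above is a storage option to the daemon; a minimal sketch (10% is also the documented default):

# dockerd --storage-opt dm.min_free_space=10%

With this set, the daemon refuses new allocations with an explicit error once free space in the thin pool drops below the threshold, instead of letting the pool fill completely and flip to read-only. Moving off loopback with --storage-opt dm.thinpooldev, as the warning in docker info suggests, is the longer-term fix.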

I don't think there's a real solution for this, so I'll close this issue for now, but feel free to continue the discussion.

@thaJeztah thaJeztah closed this Sep 27, 2016