Error pulling image (...) no space left on device #10613

Closed
MitchK opened this Issue Feb 6, 2015 · 59 comments

MitchK commented Feb 6, 2015

Hi, I am trying to pull an image from a private corporate registry onto a machine.

The image is a Node.js image with some custom environment variables, plus a Node.js application with npm dependencies.

The machine has 4GB of RAM.

user@machine:~$ docker pull my.private.registry.corp/org/image
Pulling repository my.private.registry.corp/org/image
a076dcf8de89: Pulling dependent layers 
511136ea3c5a: Download complete 
27d47432a69b: Download complete 
5f92234dcf1e: Download complete 
51a9c7c1f8bb: Download complete 
5ba9dab47459: Download complete 
1b5cb86bd8eb: Download complete 
e052bcc1b051: Download complete 
a076dcf8de89: Error pulling image (latest) from my.private.registry.corp/org/image, Untar exit status 1 open /tmp/app/node_modules/node-rest-client/node_modules/xml2js/node_modules/xmlbuilder/node_modules/lodash/utility/attempt.js: no space left on device
9a7129a697b6: Download complete 
5a4df78f03f1: Download complete 
91e17b8f0ad0: Download complete 
27fd9249b530: Download complete 
21d10d188d73: Download complete 
95c43c63c917: Download complete 
875aec76aa78: Download complete 
a8ed7d8cb50f: Download complete 
300671eaa3d4: Download complete 
be9f77f4e0cb: Download complete 
4f58aff69463: Download complete 
de93133d1b6e: Download complete 
1965d5845989: Download complete 
cae34eb68397: Download complete 
dfcf337450ef: Download complete 
bf8c96846c44: Download complete 
e76410f1b8c1: Download complete 
644792e8361d: Download complete 
8a9b2274fed7: Download complete 
3fc11a89092f: Error downloading dependent layers 
FATA[0007] Error pulling image (latest) from my.private.registry.corp/org/image, Untar exit status 1 open /tmp/app/node_modules/node-rest-client/node_modules/xml2js/node_modules/xmlbuilder/node_modules/lodash/utility/attempt.js: no space left on device 

The strange thing is that there is space left on the device.

$ df -h
Filesystem                                          Size  Used Avail Use% Mounted on
/dev/mapper/packer--UBUNTU1204--BUILD--54--vg-root   39G   23G   15G  61% /
udev                                                2.0G   12K  2.0G   1% /dev
tmpfs                                               395M  244K  395M   1% /run
none                                                5.0M     0  5.0M   0% /run/lock
none                                                2.0G  188K  2.0G   1% /run/shm
/dev/sda1                                           228M   28M  189M  13% /boot
cgroup                                              2.0G     0  2.0G   0% /sys/fs/cgroup

phemmer commented Feb 6, 2015

Can you please provide docker info and docker version?

morgante commented Feb 6, 2015

You might want to see if you have free inodes left with df -i.

We've been having major issues (#9755) with Docker exhausting inodes.

MitchK commented Feb 10, 2015

Hi,

we needed a quick fix, so throwing away all images and containers helped. But I'm not sure if this will fix the actual issue.

I'll keep you posted.
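For reference, the quick fix described above amounts to removing every container and image; a rough sketch of the commands (destructive, and flag behaviour may differ across Docker versions):

    # stop and remove all containers, then remove all images
    docker rm -f $(docker ps -aq)
    docker rmi -f $(docker images -q)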

lamielle commented Feb 12, 2015

I just ran into this same inode-exhaustion issue on a CoreOS instance running Docker 1.5 (alpha 591.0.0 showed up as a vagrant box today).

core@core-01 ~ $ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda9        16G  5.4G  9.7G  36% /
devtmpfs        991M     0  991M   0% /dev
tmpfs          1004M     0 1004M   0% /dev/shm
tmpfs          1004M  732K 1003M   1% /run
tmpfs          1004M     0 1004M   0% /sys/fs/cgroup
/dev/sda3       985M  308M  626M  33% /usr
tmpfs          1004M     0 1004M   0% /media
tmpfs          1004M     0 1004M   0% /tmp
/dev/sda6       108M   88K   99M   1% /usr/share/oem


core@core-01 ~ $ df -i
Filesystem      Inodes   IUsed  IFree IUse% Mounted on
/dev/sda9      1058720 1050693   8027  100% /
devtmpfs        253694     309 253385    1% /dev
tmpfs           256770       1 256769    1% /dev/shm
tmpfs           256770     474 256296    1% /run
tmpfs           256770      14 256756    1% /sys/fs/cgroup
/dev/sda3       260096    5250 254846    3% /usr
tmpfs           256770       1 256769    1% /media
tmpfs           256770      10 256760    1% /tmp
/dev/sda6        32768      19  32749    1% /usr/share/oem


core@core-01 ~ $ docker info
Containers: 22
Images: 281
Storage Driver: overlay
 Backing Filesystem: extfs
Execution Driver: native-0.2
Kernel Version: 3.18.6
Operating System: CoreOS 591.0.0
CPUs: 1
Total Memory: 1.959 GiB
Name: core-01
ID: DOHI:P4U4:3EW2:6KZX:KXBT:P2E4:E4Z6:MQZK:73JF:ZGHD:37PX:WBHL
Debug mode (server): true
Debug mode (client): false
Fds: 112
Goroutines: 87
EventsListeners: 1
Init SHA1: ce0bed698ca2c3b90d2ecb50a7baa451459d8c30
Init Path: /usr/libexec/docker/dockerinit
Docker Root Dir: /var/lib/docker

core@core-01 ~ $ docker version
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.3.3
Git commit (client): a8a31ef-dirty
OS/Arch (client): linux/amd64
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.3.3
Git commit (server): a8a31ef-dirty

Short of giving up and building this host from scratch, is there anything I can do to free up inodes on this filesystem?
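One way to claw back inodes without rebuilding the host is to delete dangling image layers and then check where the remaining inodes are going; a rough sketch, assuming the dangling filter is available in this Docker version and your du supports --inodes (GNU coreutils 8.22+):

    # remove dangling (untagged) image layers
    docker rmi $(docker images -q -f dangling=true)

    # then see which directories under the Docker root hold the most inodes
    sudo du --inodes -x -d 2 /var/lib/docker | sort -n | tail -20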

dmp42 commented Feb 23, 2015

@MitchK do you still have this issue?
Were you able to confirm whether this is an inode-exhaustion issue or purely a disk-space issue?

Thanks a lot!

allinwonder commented Mar 18, 2015

I hit the same problem; it was caused by running out of inodes on the root volume. I created a separate volume for the /var/lib/docker mount point to work around this issue.
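A rough sketch of that workaround, assuming a spare disk at /dev/sdb1 and a systemd-based host (the device name and inode count are examples only):

    sudo systemctl stop docker
    sudo mkfs.ext4 -N 20000000 /dev/sdb1          # -N sets the number of inodes explicitly
    sudo mv /var/lib/docker /var/lib/docker.old
    sudo mkdir /var/lib/docker
    echo '/dev/sdb1 /var/lib/docker ext4 defaults 0 2' | sudo tee -a /etc/fstab
    sudo mount /var/lib/docker
    sudo systemctl start docker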

bitliner commented Apr 13, 2015

It is happening on my laptop with kernel Linux bitliner-laptop 3.13.0-37-generic #64-Ubuntu SMP Mon Sep 22 21:28:38 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

ubuntu 14.04

docker info

Containers: 0
Images: 691
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 691
Execution Driver: native-0.2
Kernel Version: 3.13.0-37-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 4
Total Memory: 7.257 GiB
Name: bitliner-laptop
ID: VQZJ:63LB:7DUF:IF7Y:JEMF:VNKB:PXVJ:PA4A:6M5W:BUIH:633N:DHP7
Username: bitliner
Registry: [https://index.docker.io/v1/]
WARNING: No swap limit support
docker version

Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.4.1
Git commit (client): a8a31ef
OS/Arch (client): linux/amd64
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.4.1
Git commit (server): a8a31ef

dmp42 commented Apr 13, 2015

@bitliner is this an inode-exhaustion issue (see previous comments), or purely disk space?

bitliner commented Apr 13, 2015

I have 8.8 GB free, so I guess it is inode exhaustion; I'll check it out.

xueshanf commented Apr 14, 2015

I have a dedicated 100 GB Docker volume, but the system was running out of inodes (out of 6 million)! I got a "no space left on device" error.

I cleaned up some images, but not enough. Is it really using 6 million inodes?

core@ip-10-42-2-93 /var/lib/docker/overlay $ df -i
Filesystem       Inodes   IUsed    IFree IUse% Mounted on
/dev/xvda9     12444032   22778 12421254    1% /
devtmpfs         954863     328   954535    1% /dev
tmpfs            957948       1   957947    1% /dev/shm
tmpfs            957948    1302   956646    1% /run
tmpfs            957948      14   957934    1% /sys/fs/cgroup
/dev/xvda3       260096    5258   254838    3% /usr
tmpfs            957948     638   957310    1% /tmp
tmpfs            957948       1   957947    1% /media
/dev/xvda6        32768      12    32756    1% /usr/share/oem
/dev/xvdb       6553600 6050368   503232   93% /var/lib/docker
core@ip-10-42-2-93 /var/lib/docker/overlay $ df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/xvda9     ext4       47G  882M   44G   2% /
devtmpfs       devtmpfs  3.7G     0  3.7G   0% /dev
tmpfs          tmpfs     3.7G     0  3.7G   0% /dev/shm
tmpfs          tmpfs     3.7G  3.6M  3.7G   1% /run
tmpfs          tmpfs     3.7G     0  3.7G   0% /sys/fs/cgroup
/dev/xvda3     ext4      985M  309M  626M  34% /usr
tmpfs          tmpfs     3.7G   45M  3.7G   2% /tmp
tmpfs          tmpfs     3.7G     0  3.7G   0% /media
/dev/xvda6     ext4      108M   56K   99M   1% /usr/share/oem
/dev/xvdb      ext4       99G   22G   72G  24% /var/lib/docker
/dev/xvdc      ext4      158G  1.3G  149G   1% /opt/data
core@ip-10-42-2-93 /var/lib/docker/overlay $ docker info
Containers: 7
Images: 794
Storage Driver: overlay
 Backing Filesystem: extfs
Execution Driver: native-0.2
Kernel Version: 3.18.6
Operating System: CoreOS 607.0.0
CPUs: 2
Total Memory: 7.309 GiB
Name: ip-10-42-2-93.us-west-2.compute.internal
ID: UZW4:PCTF:C3KH:AZ62:3E4D:RBLN:66ZT:PTBQ:BNC2:7W2V:U4DZ:RGWR
core@ip-10-42-2-93 /var/lib/docker/overlay $ docker version
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.3.3
Git commit (client): a8a31ef-dirty
OS/Arch (client): linux/amd64
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.3.3
Git commit (server): a8a31ef-dirty
lamielle commented Apr 14, 2015

@xueshanf if you created this additional 100GB partition yourself, what parameters did you use when formatting the filesystem? There was a fix in CoreOS 591.0.0 which tweaked the formatting parameters to tune for high numbers of small files (vs. fewer but larger files). The workload for a volume that stores a registry is certainly the former. I would suggest looking into the filesystem parameters of your /dev/xvdb partition.

xueshanf commented Apr 14, 2015

I built the file system with the following unit. I will look for the inode-tuning tweak you mentioned. If you happen to have the reference, please post it.

[Unit]
Description=Formats the disk drive
[Service]
Type=oneshot
RemainAfterExit=yes
Environment="LABEL=var-lib-docker"
Environment="DEV=/dev/xvdb"
ExecStart=-/bin/bash -c "wipefs -a -f $DEV && mkfs.ext4 -F -L $LABEL $DEV && echo wiped"
xueshanf commented Apr 15, 2015

@lamielle I found the issue: coreos/bugs#264. Looks like I should use a 4096 bytes-per-inode ratio (mkfs.ext4 -T news) to increase the inode count from the current 6.5 million to 24 million on the 100 GB volume.
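A minimal sketch of that reformat (destructive; /dev/xvdb is the device from the unit above):

    sudo wipefs -a /dev/xvdb
    sudo mkfs.ext4 -T news -L var-lib-docker /dev/xvdb    # the "news" profile uses a 4096-byte inode ratio
    # equivalently, set the bytes-per-inode ratio explicitly:
    # sudo mkfs.ext4 -i 4096 -L var-lib-docker /dev/xvdb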

lamielle commented Apr 15, 2015

@xueshanf Yes! That was what I was remembering. Glad you found it. Hope it helps!

xueshanf commented Apr 15, 2015

@lamielle Yup! I am up to 25 million inodes now.

bitliner commented Apr 15, 2015

@dmp42 yes, I think it is inode exhaustion.

image

Could you remind me how to change the location Docker uses to store its data?
I'll move that folder to a bigger partition, given that other workarounds don't seem to exist...
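For the record, on this Docker version the data root is set with the daemon's -g/--graph flag (later releases call it --data-root); a rough sketch for Ubuntu 14.04, assuming the bigger partition is mounted at /data/docker:

    sudo service docker stop
    sudo rsync -a /var/lib/docker/ /data/docker/
    echo 'DOCKER_OPTS="-g /data/docker"' | sudo tee -a /etc/default/docker
    sudo service docker start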

xiaods commented May 5, 2015

I feel this issue can be closed.

xueshanf commented May 5, 2015

@xiaods yes please. thank you!

MitchK commented Jun 8, 2015

Is the inode issue something that can be changed, or is this intended behaviour? What is the best strategy to tackle it? A bigger partition? Can those inodes be freed?

zenwanger commented Jun 11, 2015

I have run into a similar error: "INFO[0718] Untar exit status 1 write /tools/file.v: no space left on device". I don't seem to have inode or disk-space issues, but I still get the 'no space left on device' error.

Any recommendation appreciated.

[root@vlsj-djap1 docker]# df -i
Filesystem                    Inodes IUsed   IFree IUse% Mounted on
/dev/mapper/docker-docker_lv 9830400 77100 9753300    1% /docker

[root@vlsj-djap1 docker]# docker -D info

Containers: 2
Images: 46
Storage Driver: devicemapper
 Pool Name: docker-253:1-1966081-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: extfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 11.93 GB
 Data Space Total: 107.4 GB
 Data Space Available: 95.45 GB
 Metadata Space Used: 8.253 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.139 GB
 Udev Sync Supported: false
 Data loop file: /docker/devicemapper/devicemapper/data
 Metadata loop file: /docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.82-git (2013-10-04)
Execution Driver: native-0.2
Kernel Version: 2.6.32-504.16.2.el6.x86_64
Operating System: <unknown>
CPUs: 1
Total Memory: 3.743 GiB
Name: vlsj-djap1
ID: 4Y6P:2CKS:PQA2:ZKUI:MUEL:XOVS:7DCT:TXKB:HSW5:IMSH:45KD:OGQK
Debug mode (server): false
Debug mode (client): true
Fds: 71
Goroutines: 58
System Time: Thu Jun 11 08:36:31 PDT 2015
EventsListeners: 0
Init SHA1: 7f9c6798b022e64f04d2aff8c75cbf38a2779493
Init Path: /usr/local/bin/docker
Docker Root Dir: /docker

[root@vlsj-djap1 Markdown_1.0.1]# docker -D images

REPOSITORY              TAG     IMAGE ID        CREATED         VIRTUAL SIZE
<none>                  <none>  0a655f60fd39    59 minutes ago  5.535 GB
vd:5000/sa              latest  aa8544f6dbfe    42 hours ago    5.535 GB
ubuntu                  latest  fa81ed084842    10 days ago     188.3 MB
vd:5000/ubuntu-local    latest  fa81ed084842    10 days ago     188.3 MB
registry                2.0     b39a445085a6    2 weeks ago     548.6 MB
centos                  latest  fd44297e2ddb    7 weeks ago     215.7 MB

[root@vlsj-djap1 Markdown_1.0.1]# docker -D ps -a

CONTAINER ID    IMAGE                                                                      COMMAND                CREATED             STATUS      PORTS                    NAMES
eef5931f1355    0a655f60fd39b931f16fea293730c6529b50111080a8c41067ee355f220a365e:latest   "/bin/sh -c '#(nop)    About an hour ago                                        serene_bartik
1d84604438cf    registry:2.0                                                               "registry cmd/regist   5 days ago          Up 5 days   0.0.0.0:5000->5000/tcp   elated_perlman
dmp42 commented Jun 11, 2015

@zenwanger

Your kernel is very old (2.6.32).
I'm not sure what Docker version you are running here, but it's likely unsupported, and I would advise you to try upgrading.

zenwanger commented Jun 11, 2015

@dmp42 Thanks. Will update to the latest.

liquid-sky commented Jul 8, 2015

Running the latest Docker with one of the latest kernels, and inodes are being consumed disproportionately:

This started to happen when I switched to OverlayFS.

$ docker -v
Docker version 1.7.0, build 0baf609

$ uname -svr
Linux 3.19.0-21-generic #21~14.04.1-Ubuntu SMP Sun Jun 14 18:45:42 UTC 2015

$ df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdb        30G  1.5G   27G   6% /mnt

$ df -i /mnt
Filesystem      Inodes  IUsed   IFree IUse% Mounted on
/dev/xvdb      1966080 308612 1657468   16% /mnt
cpuguy83 commented Jul 8, 2015

@liquid-sky This is a known issue with overlay.

saada commented Aug 29, 2016

Still a pain in Docker 1.12.
Shocked this is still an open issue.

cpuguy83 commented Aug 29, 2016

@saada Can you explain what the pain is?

dmcgowan commented Aug 29, 2016

@saada we added overlay2 to address this issue. The existing overlay driver cannot be updated to fix the inode overuse issue and maintain backwards compatibility since it was not designed to use the newer kernel feature for multiple lower directories. My preference is to keep this issue open until we are confident overlay2 is a stable alternative to overlay.
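For context, the kernel feature in question is overlayfs's support for multiple lower directories in a single mount; overlay2 stacks each image layer as its own lowerdir instead of reconstructing a single lower directory per layer, which is what drove up inode usage with the old driver. A rough illustration (paths are made up for the example):

    mount -t overlay overlay \
      -o lowerdir=/layers/l3:/layers/l2:/layers/l1,upperdir=/layers/upper,workdir=/layers/work \
      /mnt/merged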

saada commented Aug 30, 2016

@cpuguy83 @dmcgowan I know you guys are doing your best. Thanks for the effort. Looking forward to overlay2 resolving this issue.

zagorulkinde commented Sep 11, 2016

Same issue here.


λ Dmitry [/var/lib] → docker run --hostname=quickstart.cloudera --privileged=true -t -i -p 8888 cloudera/quickstart /usr/bin/docker-quickstar
Unable to find image 'cloudera/quickstart:latest' locally
latest: Pulling from cloudera/quickstart
1d00652ce734: Downloading 733.6 MB/4.444 GB
1d00652ce734: Downloading 4.444 GB/4.444 GB
docker: write /var/lib/docker/tmp/GetImageBlob699786905: no space left on device.
See 'docker run --help'.

λ Dmitry [/var/lib] →

λ Dmitry [/var/lib] → ls /var/lib/
postfix


λ Dmitry [~] → df -h /
Filesystem   Size   Used  Avail Capacity  iused   ifree %iused  Mounted on
/dev/disk1  233Gi  209Gi   23Gi    90% 54874760 6104054   90%   /

OS X 10.11.6

docker version
Client:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   23cf638
 Built:        Thu Aug 18 17:32:24 2016
 OS/Arch:      darwin/amd64
 Experimental: true

Server:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   23cf638
 Built:        Thu Aug 18 17:32:24 2016
 OS/Arch:      linux/amd64
 Experimental: true

docker info 

Containers: 57
 Running: 3
 Paused: 0
 Stopped: 54
Images: 21
Server Version: 1.12.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 237
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: null bridge host overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.4.19-moby
Operating System: Alpine Linux v3.4
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 1.953 GiB
Name: moby
ID: ZJSN:HTDT:TZZM:5ZAO:VCNI:J6GS:LD5A:QXJ5:54WO:22PD:WY4M:ZM4G
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 35
 Goroutines: 66
 System Time: 2016-09-11T17:14:01.448934038Z
 EventsListeners: 1
No Proxy: *.local, 169.254/16
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
 127.0.0.0/8
mbentley commented Sep 11, 2016

@zagorulkinde - looks like you are running Docker for Mac. The VM is out of space. There might be some useful suggestions here: https://forums.docker.com/t/consistently-out-of-disk-space-in-docker-beta/9438/9

justincormack commented Sep 19, 2016

overlay2 seems fine for me, and I would recommend that anyone using overlay and seeing this issue upgrade at this point, provided they are using a recent kernel.

whosthatknocking commented Sep 21, 2016

Please keep in mind its compatibility issues: overlay doesn't fully implement the POSIX standards, potentially impacting your application (the rename system call) or platform (the open system call, yum install on RHEL-like systems).

justincormack commented Sep 24, 2016

@whosthatknocking overlay and overlay2 are the same with respect to POSIX. The issue here is inode exhaustion, which is only a problem with overlay.

The fixes are:

  1. Use overlay2 if you have a recent kernel (4.x) that supports it.
  2. Reformat the drive you are using to have more inodes, if you are on an old kernel.

I am going to close this issue as overlay2 should be as stable as overlay now; feel free to continue to comment if there are still issues, or open a new issue if you have specific problems using overlay2.

Paxxi commented Feb 14, 2017

How come overlay2 is not the default when setting up Docker 1.12/1.13? (Ubuntu 16.04 on a DigitalOcean droplet with Docker from the official PPA.)

We recently set up a swarm cluster and it has seen light usage in dev/test for about 2 months; today we hit this issue on one of 3 nodes, and the others are at 95% and 78% inode usage.

antonin07130 commented Feb 24, 2017

+1 without overlay

thaJeztah commented Feb 24, 2017

Docker uses a priority list that automatically selects the first supported driver to get you going. The overlay2 and overlay drivers were given a higher priority in Docker 1.13; https://github.com/docker/docker/pull/27932/files#diff-8517a02806afb5571ad3c11a52e431e6R55

If your machine supports aufs, that's used; if not, overlay2, overlay, or devicemapper is used, depending on what your kernel supports.

I suspect aufs is installed on the 16.04 machine, so it's selected, but you can configure the daemon to use a different driver using a daemon.json configuration file.
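For example, forcing overlay2 via the daemon configuration looks roughly like this (note that existing images are not migrated between storage drivers, so they will need to be re-pulled):

    echo '{ "storage-driver": "overlay2" }' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker
    docker info | grep 'Storage Driver'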

hgontijo commented Mar 17, 2017

@thaJeztah et al, I followed all the steps to set up direct-lvm and I'm still getting "no space left on device", even though there's enough space on the root/docker volumes and on metadata/data.
Whenever I tried with overlay2 and everything running under the root volume (no extra device), it worked fine.

The Docker image size is 10 GiB.

I would appreciate any clue about this issue.

Issue

$ docker pull <image from ecr>
...
f0d0216c7ba3: Pull complete
73451a479100: Extracting [==================================================>] 741.1 MB/741.1 MB
80cf804cccf0: Download complete
344c7bbe11c9: Download complete
469d40f6eebd: Download complete
656132e1351f: Download complete
ef660b25f676: Download complete
failed to register layer: ApplyLayer exit status 1 stdout:  stderr: write /usr/oracle/oradata/EMERUNI/datafile/o1_mf_undotbs1_cztx6v2z_.dbf: no space left on device

Environment


$ docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 1.12.6
Storage Driver: devicemapper
 Pool Name: docker-thinpool
 Pool Blocksize: 524.3 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file:
 Metadata file:
 Data Space Used: 14.76 GB
 Data Space Total: 102 GB
 Data Space Available: 87.24 GB
 Metadata Space Used: 1.958 MB
 Metadata Space Total: 1.07 GB
 Metadata Space Available: 1.068 GB
 Thin Pool Minimum Free Space: 10.2 GB
  Udev Sync Supported: true
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Library Version: 1.02.93-RHEL7 (2015-01-28)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: overlay null host bridge
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options:
Kernel Version: 4.4.41-36.55.amzn1.x86_64
Operating System: Amazon Linux AMI 2016.09
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 31.42 GiB
Name: ip-10-60-1-188
ID: Q6O7:J7QA:AT2F:PA32:RCQO:HWPN:6T7A:CVEX:TEUJ:Z3D2:WGDW:CYFP
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Insecure Registries:
 127.0.0.0/8

$ df -i
Filesystem  Inodes IUsed   IFree IUse% Mounted on
devtmpfs       4116180   564 4115616    1% /dev
tmpfs          4118899     1 4118898    1% /dev/shm
/dev/xvda1     3276800 42235 3234565    2% /

$ df -h
Filesystem  Size  Used Avail Use% Mounted on
devtmpfs         16G  112K   16G   1% /dev
tmpfs            16G     0   16G   0% /dev/shm
/dev/xvda1   50G  2.8G   47G   6% /

lsblk --output NAME,TYPE,SIZE,FSTYPE,MOUNTPOINT,LABEL
NAME                    TYPE  SIZE FSTYPE      MOUNTPOINT LABEL
xvda                    disk   50G
└─xvda1                 part   50G ext4        /          /
xvdb                    disk  100G LVM2_member
├─docker-thinpool_tmeta lvm  1020M
│ └─docker-thinpool     lvm    95G
└─docker-thinpool_tdata lvm    95G
  └─docker-thinpool     lvm    95G
whosthatknocking commented Mar 17, 2017

@hgontijo Perhaps check xvdb not xvda1?

mbentley commented Mar 18, 2017

Check the dm.basesize option: https://docs.docker.com/engine/reference/commandline/dockerd/#storage-driver-options. The default is 10 GB, which is probably why you're hitting this while extracting your image that is 10 GB.

hgontijo commented Mar 18, 2017

@mbentley super, it worked! Just for reference, here's what my Docker daemon options look like:

OPTIONS="--default-ulimit nofile=1024:4096 -H tcp://0.0.0.0:4243 -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --storage-driver=devicemapper --storage-opt=dm.thinpooldev=/dev/mapper/docker-thinpool --storage-opt=dm.use_deferred_removal=true --storage-opt=dm.use_deferred_deletion=true --storage-opt dm.basesize=50G"

sivabudh commented Mar 31, 2017

Just execute this command:

docker images --no-trunc | grep '<none>' | awk '{ print $3 }' | xargs -r docker rmi

Ref: https://lebkowski.name/docker-volumes/
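On Docker 1.13 and later, the built-in prune commands do the same cleanup without the grep/awk pipeline:

    docker image prune        # remove dangling images only
    docker system prune -a    # remove all unused containers, networks and images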

RajdeepSardar commented Aug 23, 2017

We faced a similar problem, but inode usage seems fine (40% used). Disk space is about 81% used. It is pulling a Docker image of about 1.7 GB, so there should still be about 2 GB of space left, but the following error occurred, and I am not sure why. I am a beginner in this area and may not have much information on this topic. Any help is deeply appreciated.

+ /usr/local/bin/docker version
Client:
 Version:      17.05.0-ce

Server:
 Version:      17.05.0-ce
 API version:  1.29 (minimum version 1.12)
 Go version:   go1.7.5

+ /usr/local/bin/docker build --rm=true -f Dockerfile -t $VALUE . --pull=true
Sending build context to Docker daemon  1.677GB
.........
ee804876babe: Pull complete
225c66a863d8: Pull complete
2df5bb5034a3: Pull complete
96acbc28c73d: Pull complete
write /var/lib/docker/tmp/GetImageBlob194874586: no space left on device

$ df -h
Filesystem  Size  Used  Avail  Use%  Mounted on
udev        3.9G   12K   3.9G    1%  /dev
tmpfs       799M  520K   798M    1%  /run
/dev/vda1    20G   16G   3.8G   81%  /

$ df -i
Filesystem   Inodes   IUsed    IFree  IUse%  Mounted on
udev        1018557     415  1018142     1%  /dev
tmpfs       1022081    1346  1020735     1%  /run
/dev/vda1   1310720  523561   787159    40%  /

cpuguy83 commented Aug 23, 2017

@RajdeepSardar no space left on device comes directly from the kernel.
Note that you are both pulling an image and sending a massive build context.

orodbhen commented Mar 22, 2018

@mbentley and @hgontijo Note that changing dm.basesize won't work if the image already has layers cached on the system from before. You'll first need to prune all the existing layers, which may require removing some containers.
