docker rmi can not clean up as fast as new images are created #11053
I am running a continuous integration and build server using Docker for isolation. We are heavily leveraging Docker isolation for dependency checking and repeatable builds. However, due to the variety of packages we are building, we relatively quickly run out of disk space.
To prevent running out of disk space we have developed a script which cleans up old images and containers. There's discussion related to that in #928. However, having worked around those issues, we've discovered that on a loaded system Docker cannot remove images as fast as it creates them.
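For reference, here's a minimal sketch of the kind of cleanup the script does, assuming only the stock docker CLI on PATH; the age threshold and helper names here are made up for illustration, and the real script (linked further down) handles more cases:

```python
#!/usr/bin/env python
# Minimal sketch of the cleanup approach; the real script is linked below.
# Assumes the docker CLI is on PATH. MAX_AGE_DAYS is an illustrative knob.
import subprocess
from datetime import datetime, timedelta

MAX_AGE_DAYS = 7

def list_image_ids():
    """Return all image IDs known to the daemon."""
    out = subprocess.check_output(['docker', 'images', '-q'])
    return set(out.decode().split())

def image_created(image_id):
    """Creation time of an image, via docker inspect."""
    out = subprocess.check_output(
        ['docker', 'inspect', '--format', '{{.Created}}', image_id])
    # Docker prints RFC3339 timestamps with nanoseconds; trim to seconds.
    return datetime.strptime(out.decode().strip()[:19], '%Y-%m-%dT%H:%M:%S')

def main():
    cutoff = datetime.utcnow() - timedelta(days=MAX_AGE_DAYS)
    for image_id in list_image_ids():
        if image_created(image_id) < cutoff:
            # Under load, each of these calls can take tens of seconds
            # (see the timings below), so the queue of stale images grows
            # faster than this loop can drain it.
            subprocess.call(['docker', 'rmi', image_id])

if __name__ == '__main__':
    main()
```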
On one of the loaded systems I observed a single `docker rmi` take over twenty seconds (timing below).
This is a quad-core, 16 GB machine, specifically an EC2 xlarge with 200 GB of EBS-optimized storage. (Specifics below in the docker info.)
I understand that I'm heavily loading the system and that things are expected to slow down. But I believe the asymmetry of the slowdown between the build and rmi commands is undesirable, and I would expect rmi to operate faster, since it only needs to deindex data rather than create and insert new data.
To reproduce this, run 3-4 different builds generating similar but not identical containers simultaneously. As disk space starts getting full (~175/200 GB), try to rmi the outdated images. We're using this Python script: https://github.com/ros-infrastructure/buildfarm_deployment/blob/master/slave/slave_files/files/home/jenkins-slave/cleanup_docker_images.py but simply calling `docker rmi` on the old images by hand shows the same slowdown.
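In case it helps, here is a rough sketch of that reproduction as a standalone script. The build contexts and image tags (`./job_a`, `load-test-*`, `some-old-image`) are placeholders, not our actual jobs:

```python
#!/usr/bin/env python
# Sketch of the reproduction: keep a few builds running, then time rmi.
# The build directories and tag names below are placeholders.
import subprocess
import time

build_dirs = ['./job_a', './job_b', './job_c']  # hypothetical build contexts

# Start several similar-but-not-identical builds in parallel.
builds = [subprocess.Popen(['docker', 'build', '-t', 'load-test-%d' % i, d])
          for i, d in enumerate(build_dirs)]

# While the builds are churning, time the removal of an outdated image.
start = time.time()
subprocess.call(['docker', 'rmi', 'some-old-image:latest'])  # placeholder tag
print('docker rmi took %.1fs under load' % (time.time() - start))

for b in builds:
    b.wait()
```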
Timing for removing an image under load:

```
$ time docker rmi 916bac8f5800
Deleted: 916bac8f5800

real	0m22.520s
user	0m0.016s
sys	0m0.008s
```

It's not a large image (info on that image from `docker images`). Removing a container with `docker rm`, by contrast, is reasonably quick.

Listing images is also slower than I would expect:

```
$ time docker images
[[SNIP 6831 images]]

real	0m31.818s
user	0m0.061s
sys	0m0.039s
```

The standard docker information:
```
$ docker info
Containers: 4
Images: 6831
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 6860
 Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-48-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 4
Total Memory: 15.67 GiB
Name: ip-172-31-10-44
ID: VO2N:RR5Z:MOYG:SQDU:EX5F:R6UW:FXEQ:XWDT:W2EL:WX5K:UCBF:RZDK
WARNING: No swap limit support
```
```
$ docker version
Client version: 1.7.0
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 0baf609
OS/Arch (client): linux/amd64
Server version: 1.7.0
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 0baf609
OS/Arch (server): linux/amd64
```
Is there perhaps a recommended approach for handling and working with large numbers of images?