docker-compose run --rm does not remove volumes #2419

Closed
ajc161 opened this issue Nov 18, 2015 · 17 comments

Comments

@ajc161

ajc161 commented Nov 18, 2015

Running one-off commands with docker-compose does not delete volumes used by the container. This is different from docker run --rm, which does remove volumes after the container is deleted.

@yordis

yordis commented Dec 1, 2015

👍

@schmunk42

We also ran into this.
What's very strange is that we had a cronjob running for months without any problems, but now every docker-compose run --rm leaves behind a volume of ~200 MB.

I am not 100% sure if this is related to an upgrade from 1.2.0 to 1.5.2, but it's the only change we've made.

docker info
Containers: 14
Images: 198
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 506
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.16.0-38-generic
Operating System: Ubuntu 14.10
CPUs: 2
Total Memory: 7.798 GiB
Name: ro109
ID: 4I66:JXGX:AAGY:ABRV:X7FD:ADPZ:IPHK:42BS:EUA2:QDRD:CJI3:EX6U
WARNING: No swap limit support
docker version
Client:
 Version:      1.8.3
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   f4bf5c7
 Built:        Mon Oct 12 18:01:15 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.3
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   f4bf5c7
 Built:        Mon Oct 12 18:01:15 UTC 2015
 OS/Arch:      linux/amd64
docker-compose version
docker-compose version 1.5.2, build 7240ff3
docker-py version: 1.5.0
CPython version: 2.7.9
OpenSSL version: OpenSSL 1.0.1e 11 Feb 2013

CC: @Quexer69

@dnephin

dnephin commented Feb 11, 2016

That's strange, we've never set v=True for this rm, and I don't think it has ever defaulted to true, so I'm not sure how the volumes were being removed before.

@schmunk42

After digging through some code, I think it's very likely that someone also ran https://github.com/chadoe/docker-cleanup-volumes in a cronjob on the server.

Is there another way to remove the volumes from a docker-compose run?
Or would you recommend docker exec?

@dnephin

dnephin commented Feb 11, 2016

It seems a bit strange that a one-off container would use a volume and just remove it immediately. I would think it would be easier to use a host volume, or a named volume that stays around.

Is the volume just being created because the image has a volume in it?

docker exec would be one way around that. Implementing this feature would also solve it. I believe it's a small change to run_one_off_container() by passing v=True to project.client.remove_container().
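
For reference, the CLI counterpart of that API call is docker rm -v, which removes a container's anonymous volumes together with the container itself (a rough sketch; the image, command and container name below are just placeholders):

$ docker run --name oneoff myimage some-task   # anonymous volumes declared by the image get created
$ docker rm -v oneoff                          # -v also removes those anonymous volumes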

@mixja

mixja commented Feb 20, 2016

This is fundamental Docker behaviour: if the image declares volumes, they will be created whenever you run a container from that image.

It's common to run one-off tasks from an image that has volumes you may never use for that task.

The docker-compose run --rm behaviour should mirror the docker run --rm behaviour, which does correctly remove volumes.
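
A minimal illustration of that plain Docker behaviour (assuming Docker 1.9+ for the docker volume command; image-with-volume stands for any image that declares a VOLUME, and true just makes the container exit immediately):

$ docker run --rm image-with-volume true   # the anonymous volume from the image's VOLUME is removed with the container
$ docker volume ls                         # no dangling volume is left behind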

@schmunk42

@dnephin Our use case is a rather simple web app. We usually have web (nginx), php, and worker (same image as php).
We're sharing static files (assets) between web and php, but in other scenarios we also need to share files between php and worker.

The volume is defined in docker-compose.yml; sure, there could be optimizations in how many files are shared, but it still looks like a general problem to me.

docker exec could be a workaround, but I'd like to stay with docker-compose and I would rather not rely on the fact that my container is named project_php_1 because it may be named project_php_2 in some cases.
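
One way to avoid depending on the generated container name (a sketch, assuming a service called php and a Compose version that supports docker-compose ps -q) is to resolve the container ID instead of hard-coding project_php_1:

$ docker exec $(docker-compose ps -q php) php -v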

I also noticed that we ran a cleanup script before; I had to reactivate/fix that, but having an option to remove volumes after docker-compose run would still be great.

I think I need to look into named volumes a bit more; basically all the apps we run are running on a swarm, so I need to configure that properly.

@binary-data

I also have this problem. I am using Docker Compose 1.7.1

@ebr

ebr commented Jun 28, 2016

Same exact use case as @schmunk42. We use named volumes. docker-compose down does not remove the named volumes, which is preventing us from doing a complete cleanup of artifacts on build. We can use workarounds like docker volume ls | grep myassets, but that is unmaintainable in many ways.
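
A scripted form of that workaround might look like this (only a sketch; myassets is the example name from above, the grep pattern has to match your own volume names, and -r is GNU xargs and skips the rm when nothing matches):

$ docker volume ls -q | grep myassets | xargs -r docker volume rm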

@jmenga

jmenga commented Jun 30, 2016

You can work around this issue in Docker Compose 1.7 as follows:

docker-compose run xxx
docker-compose down -v

The key here is not to use the --rm flag on the run command. Because docker-compose down in Compose 1.7 also removes containers started with the run command, it cleans up everything correctly, including all volumes (as long as you use -v).

@jmenga

jmenga commented Jun 30, 2016

@schmunk42

docker-compose down -v always removes named local volumes (1.6+). The current issue relates to volumes defined in the image of the service you are running, which are created automatically and are not declared as explicit volumes in your docker-compose.yml.

For example given the following docker-compose.yml file:

version: '2'

volumes:
  mysql_run:
    driver: local

services:
  db:
    image: mysql:5.6
    volumes:
      - mysql_run:/var/run/mysqld

If we use docker-compose run --rm:

$ docker volume ls
DRIVER              VOLUME NAME
$ docker-compose run --rm db
Creating volume "tmp_mysql_run" with local driver
error: database is uninitialized and password option is not specified
  You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD
$ docker-compose down -v
Removing network tmp_default
Removing volume tmp_mysql_run
$ docker volume ls
DRIVER              VOLUME NAME
local               a79c78267ed6907afb3e6fc5d4877c160b3723551f499a3da15b13b685523c69

Notice that the volume tmp_mysql_run is created and destroyed correctly, but we get an orphaned volume, which is the /var/lib/mysql volume declared in the mysql image.

If we use docker-compose run instead:

$ docker volume ls
DRIVER              VOLUME NAME
$ docker-compose run db
Creating volume "tmp_mysql_run" with local driver
error: database is uninitialized and password option is not specified
  You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD
$ docker-compose down -v
Removing tmp_db_run_1 ... done
Removing network tmp_default
Removing volume tmp_mysql_run
$ docker volume ls
DRIVER              VOLUME NAME

Everything is cleaned up correctly...
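
Anonymous volumes that have already been orphaned, like the one left behind in the first run above, show up as dangling and can be listed and removed in bulk (assuming Docker 1.9+ for the docker volume commands):

$ docker volume ls -qf dangling=true
a79c78267ed6907afb3e6fc5d4877c160b3723551f499a3da15b13b685523c69
$ docker volume rm $(docker volume ls -qf dangling=true)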

nkovacs added a commit to nkovacs/compose that referenced this issue Jul 22, 2016
Named volumes will not be removed.
This is consistent with the behavior of docker run --rm.

Fixes docker#2419, docker#3611
nkovacs added a commit to nkovacs/compose that referenced this issue Jul 22, 2016
Named volumes will not be removed.
This is consistent with the behavior of docker run --rm.

Fixes docker#2419, docker#3611

Signed-off-by: Nikola Kovacs <nikola.kovacs@gmail.com>
nkovacs added a commit to nkovacs/compose that referenced this issue Jul 25, 2016
Named volumes will not be removed.
This is consistent with the behavior of docker run --rm.

Fixes docker#2419, docker#3611

Signed-off-by: Nikola Kovacs <nikola.kovacs@gmail.com>
@reiven

reiven commented Jan 16, 2017

Can this be closed now?

@nkovacs

nkovacs commented Jan 16, 2017

Is it fixed? My PR is still open.

@hholst80

I also don't understand this issue. Using docker-compose down is not an option if we are running a service stack in the same namespace as the "one-off" command we want to clean up, is it?

@nkovacs

nkovacs commented Jan 19, 2017

docker-compose down -v would bring down the service stack and delete named volumes. I want neither to happen.
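
An interim workaround that touches neither the running stack nor any named volumes might be to skip --rm, give the one-off container a fixed name, and remove it afterwards with docker rm -v, which deletes only anonymous volumes (a sketch; the service and command are placeholders, and --name needs a Compose version that supports it):

$ docker-compose run --name oneoff_task php ./some-task.sh
$ docker rm -v oneoff_task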

@ryneeverett

Pretty sure this issue was about run --rm not deleting unnamed volumes, which indeed now seems to be fixed.

@nkovacs

nkovacs commented Jan 20, 2017

It's not fixed in 1.10, I just tried it, and it doesn't look like it's fixed in master either (v=true is still missing): https://github.com/docker/compose/blob/master/compose/cli/main.py#L985

The issue is that --rm does not delete unnamed volumes. It should.
