
Docker Compose support in Task definitions #324

Open
oppegard opened this Issue Mar 25, 2016 · 61 comments


@oppegard (Contributor) commented Mar 25, 2016

External services (MySQL, redis, memcached) and the need for an "uber container" to run our test suites are a pain point for the Pivotal Tracker team. For example, our Dockerfile installs the same version of MySQL 5.6.x that we run on pivotaltracker.com to run tests. But, we also have a PCF tile that runs against p-mysql. There seem to be two options for testing both DBs:

  1. Have a second docker image that duplicates the uber image, substituting MariaDB 10.0.x for MySQL 5.6.x.

  2. Jump through hoops of installing both MySQL and MariaDB in the same docker image (using something like http://mysqlsandbox.net/)

We've gone with option 1 for the time being. While an external service such as RDS is an option, I'd rather not be in the business of managing pools of instances and incur the extra network latency.


In an ideal, future-Concourse world, I could mix and match my external services with Docker Compose. As an example, see how Logplex runs their suite on TravisCI:

If they want to run against redis 3.x, it's as simple as changing a line: https://github.com/heroku/logplex/blob/master/docker-compose.yml#L5

An added benefit of Docker Compose is how quickly you get an environment running locally on your workstation. I don't have any of the Logplex dependencies on my MacBook (erlang, redis, etc.). But, my onboarding experience was:

  1. git clone git@github.com:heroku/logplex.git
  2. cd logplex
  3. docker-compose build && docker-compose run test

In the span of 5 minutes I had an erlang test suite running against redis 2.8.

@concourse-bot commented Mar 25, 2016

Hi there!

We use Pivotal Tracker to provide visibility into what our team is working on. A story for this issue has been automatically created.

The current status is as follows:

  • #116368043 Docker Compose support in Task definitions

This comment, as well as the labels on the issue, will be automatically updated as the status in Tracker changes.

@arnisoph commented Mar 30, 2016

To run integration tests I need to bootstrap some containers using compose or similar. Is this somehow possible yet?

@ahelal (Contributor) commented Mar 30, 2016

Or something like a Travis build matrix or Test Kitchen suites, so you can run tests with different combinations of things, e.g. versions, OSes, ...

@rajatvig commented Apr 20, 2016

Is this support likely to land anytime soon?

@oppegard (Contributor, Author) commented Apr 20, 2016

It'd be nice to hear from @vito if something like this fits into the conceptual model of Concourse. I haven't thought through what supporting Docker Compose means, only that I want it. For example, I'm not sure if this makes sense as a Resource. To be more clear about what I'd like:

  • A task.yml can specify "service resources" as additional containers that are started along with the main build container. All the containers in a build are networked to each other.
  • Running a one-off build via fly execute still works.

It seems like a docker-compose.yml would be a good way to describe the desired containers and their networking. It lets teams use Concourse for CI and Docker Compose for local development.
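For illustration only, here is what such a task might look like if it delegated its service wiring to a compose file. The services_file key is invented syntax to sketch the idea, not anything Concourse actually supports:

```yaml
# Hypothetical task.yml -- the services_file key does not exist in Concourse.
platform: linux

image_resource:
  type: docker-image
  source: {repository: ruby, tag: "2.3"}

inputs:
  - name: tracker-src

# Invented: containers from the compose file would be started alongside the
# task container, networked to it, and torn down when the build finishes.
services_file: tracker-src/docker-compose.yml

run:
  path: ./tracker-src/ci/test.sh
```

The same compose file would then serve both CI and local development, as described above.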

@vito (Member) commented Apr 20, 2016

When I have more time I want to play with allowing resources to expose network addresses to builds. That way you could model throwaway databases and services as resources that go away when the build is done. This could be done either with a single Docker Compose resource or with individual postgres/mysql/etc. resources.

@Tankanow commented May 14, 2016

+1
I vote for as-native-as-possible docker-compose support. One simple-sounding (if perhaps not easily implementable) solution is to give tasks access to the worker OS.

@royseto commented May 28, 2016

Hi, I learned about Concourse this morning and like almost everything I've read about it. Currently we use Jenkins and have a build pipeline that first builds a Docker image, then runs a bunch of test suites against that image in downstream jobs.

However, I think not being able to link dependent containers into a single Concourse build job will block us from migrating to Concourse. Our app is a Python/Flask app with dependencies on postgres and redis. My build script needs to look something like this:

docker-compose up
docker-compose run runtests /tests.sh

where my docker-compose.yml looks something like this:

runtests:
  image: testbuild
  command: /bin/true
  volumes:
   - ./local.py:/app/config/local.py
   - ./runtests.sh:/runtests.sh
  links:
   - db
   - redis
db:
  image: my.docker.registry/postgres-dbimage
redis:
  image: redis

I want to containerize postgres and redis so that we can parallelize our tests and avoid having them step on each other.

Please let me know if there's a way to accomplish what I am trying to do in Concourse today. Unless I'm missing something, I think I need to make a shell script that does the above and keep calling it from Jenkins for now. Thanks!

@vito (Member) commented May 29, 2016

With 1.3 it'll be pretty trivial to run Docker within your task:

docker daemon &
docker-compose up
docker-compose run runtests /tests.sh

This can already be done today, with some workarounds that won't be necessary come 1.3 (e.g. mounting a tmpfs for the Docker graph). You'd also need to run the task with privileged: true, which may also be fixable now that cgroup namespacing is supported in Linux 4.6, though that will have to wait for Garden-runC to support it (/cc @julz).
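As a rough sketch of that workaround (the image name and script details are illustrative, not a tested recipe), the privileged task step could look something like this:

```yaml
# Illustrative only: a privileged task whose image ships docker and
# docker-compose, using the pre-1.3 tmpfs workaround for the Docker graph.
- task: integration
  privileged: true
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: {repository: docker, tag: dind}
    inputs:
      - name: src
    run:
      path: sh
      args:
        - -ec
        - |
          # pre-1.3 workaround: keep the Docker graph on a tmpfs
          mkdir -p /var/lib/docker
          mount -t tmpfs tmpfs /var/lib/docker
          # start the daemon in the background and wait for it to come up
          docker daemon >/tmp/docker.log 2>&1 &
          until docker info >/dev/null 2>&1; do sleep 1; done
          cd src
          docker-compose up -d
          docker-compose run runtests /tests.sh
```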

@royseto commented May 29, 2016

Thanks @vito. I'll try it out after 1.3 comes out. Do you have a sense for what the performance overhead would be of running Docker containers inside Garden instead of running just Docker or Garden inside the host machine?

@julz commented May 29, 2016

There shouldn't really be much of an impact: containers are just processes (there's no virtualisation going on), so the overhead is very small. The only real impact I can think of is that you don't get the shared image cache from the host.

@royseto commented May 29, 2016

OK, thanks for clarifying that @julz.

@seanknox commented Jun 2, 2016

> With 1.3 it'll be pretty trivial to run Docker within your task: [...]

@julz @vito, short of docker-compose or Concourse v1.3 availability, how does one run tasks that need multiple, linked Docker containers? E.g. a Rails app container that also needs additional service containers, like Redis and Postgres, running.

@drnic (Contributor) commented Jun 5, 2016

@Tankanow commented Jun 8, 2016

@drnic, can you explain this a bit more? How do I integrate this pool of VMs into concourse?

@concourse-bot concourse-bot added in-flight and removed unscheduled labels Jul 8, 2016

@vito (Member) commented Jul 8, 2016

@seanknox That sounds like something you'd just use docker-compose for. I'm not as familiar with it as I should be though - I personally prefer to have my test suites self-contained and spin up their own Postgres server (e.g. here) - but that's purely a stylistic thing.

@oppegard oppegard changed the title Docker Compose resource Docker Compose support in Task definitions Jul 21, 2016

@simonvanderveldt commented Jul 21, 2016

> With 1.3 it'll be pretty trivial to run Docker within your task: [...]

@vito This means you'd need an image that contains both the docker binary as well as the docker-compose binary to run these commands in, correct?
And because you're running docker in a container there won't be any image caching between runs, correct?

Would it be possible to pass in the docker binary as well as the docker socket of the host the worker is on? That way you could make use of the docker daemon on the host and its caching.
Or, alternatively, I guess garden(-runc) would need to support Docker Compose somehow?

@vito (Member) commented Aug 8, 2016

@simonvanderveldt There is no Docker binary or socket on the host - they're just running a Garden backend (probably Guardian). Concourse runs at an abstraction layer above Docker, so providing any sort of magic there doesn't really make sense.

The one thing missing post-1.3 is that Docker requires you to set up cgroups yourself. I forgot how annoying that is. I wish they did what Guardian does and auto-configure it, but what can ya do.

So, the full set of instructions is:

  1. Use or build an image with Docker in it, e.g. docker:dind.
  2. Run the following at the start of your task: https://github.com/concourse/docker-image-resource/blob/master/assets/common.sh#L1-L40
  3. Spin up Docker with docker daemon &.

Then you can run docker-compose and friends as normal.

The downside of this is that you'll be fetching the images every time. #230 will address that.

In the long run, #324 (comment) is the direction I want to go.

@mumoshu commented Aug 9, 2016

> an image that contains both the docker binary as well as the docker-compose binary to run these commands in

@simonvanderveldt FYI, I've created a dcind docker image for it https://github.com/mumoshu/dcind

It's working for me and my colleagues.

One caveat: to prevent docker-compose from fetching all the dependent Docker images from scratch on each task run, you have to provide appropriate docker-image-resource inputs to the task. E.g., if your compose.yml references the redis image, provide that image via an input and docker load it, as I do in https://github.com/mumoshu/concourse-aws/blob/master/ci/tasks/compose.yml#L25
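The shape of that caveat, sketched as a task fragment (names are illustrative, and the exact files an input provides depend on the docker-image-resource get params; here it's assumed a save-style get produced redis-image/image):

```yaml
# Illustrative fragment: feed the redis image in as a task input and load
# it into the inner Docker daemon so compose doesn't have to pull it.
# (Daemon startup elided; see the dind setup discussed above.)
inputs:
  - name: src
  - name: redis-image   # a docker-image resource input, assumed to contain an image tarball

run:
  path: sh
  args:
    - -ec
    - |
      docker load -i redis-image/image   # the filename is an assumption
      cd src
      docker-compose run runtests
```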

As @vito mentioned, once #230 lands, this docker load will be unnecessary.

@meAmidos commented Sep 15, 2016

@mumoshu Thank you for the dcind! It turned out to be a very useful solution, and I used it as a reference for a slightly simplified image: https://github.com/meAmidos/dcind

@martinsbalodis commented Oct 7, 2016

I am having the same issue as everyone else in here. I would suggest a solution similar to the Travis CI approach: the required services for the test could be spawned as Docker containers that are accessible over the network via DNS names. For example, a build.yml might look like this:

platform: linux

image_resource:
  type: docker-image
  source: {repository: busybox}

services:
  - name: sql        # accessible via DNS by resolving "sql"
    type: docker-image
    source: {repository: mysql}
  - name: rabbitmq   # accessible via DNS by resolving "rabbitmq"
    type: docker-image
    source: {repository: rabbitmq}

inputs:
- name: flight-school

run:
  path: ./flight-school/ci/test.sh

@concourse-bot concourse-bot added unscheduled and removed in-flight labels Oct 9, 2016

@devdavidkarlsson commented Mar 20, 2017

Some temporary down time, @thanizebra try now...

@dsyer commented Jun 7, 2017

I tried following the pattern in http://www.davidkarlsson.no/2016/12/26/integration-testing-in-concourse-ci/. It looks like a decent idea but isn't working for me yet. I can run everything I need on my local docker daemon, but the non-docker environment in concourse seems to behave differently, and I can't see how to fix it (or even if it's possible to fix it).

When I try to start dockerd in the concourse worker it fails like this:

# dockerd --storage-driver=vfs --config-file=/etc/docker/daemon.json -p /var/run/docker-bootstrap.pid
WARN[0000] could not change group /var/run/docker.sock to docker: group docker not found 
DEBU[0000] Listener created for HTTP on unix (/var/run/docker.sock) 
INFO[0000] libcontainerd: new containerd process, pid: 148 
DEBU[0000] containerd: grpc api on /var/run/docker/libcontainerd/docker-containerd.sock 
DEBU[0000] containerd: read past events                  count=0
DEBU[0000] containerd: supervisor running                cpus=4 memory=7983 runtime=docker-runc runtimeArgs=[] stateDir="/var/run/docker/libcontainerd/containerd"
DEBU[0000] libcontainerd: containerd health check returned error: rpc error: code = 14 desc = grpc: the connection is unavailable 
WARN[0001] failed to rename /var/lib/docker/tmp for background deletion: %!s(<nil>). Deleting synchronously 
DEBU[0001] Using default logging driver json-file       
DEBU[0001] Golang's threads limit set to 57240          
DEBU[0001] [graphdriver] trying provided driver: vfs    
DEBU[0001] Using graph driver vfs                       
DEBU[0001] Max Concurrent Downloads: 3                  
DEBU[0001] Max Concurrent Uploads: 5                    
INFO[0001] Graph migration to content-addressability took 0.00 seconds 
WARN[0001] Your kernel does not support cgroup memory limit 
WARN[0001] Unable to find cpu cgroup in mounts          
WARN[0001] Unable to find blkio cgroup in mounts        
WARN[0001] Unable to find cpuset cgroup in mounts       
WARN[0001] mountpoint for pids not found                
DEBU[0001] Cleaning up old mountid : start.             
Error starting daemon: Devices cgroup isn't mounted

I tried various options for --storage-driver, having noted that the blog link above used btrfs, but none of them worked, and all except vfs resulted in errors in the logs. It doesn't make any difference if I add privileged: true to the task.

When I run locally it looks like this:

# dockerd --config-file=/etc/docker/daemon.json -p /var/run/docker-bootstrap.pid
WARN[0000] could not change group /var/run/docker.sock to docker: group docker not found 
DEBU[0000] Listener created for HTTP on unix (/var/run/docker.sock) 
INFO[0000] libcontainerd: new containerd process, pid: 14 
DEBU[0000] containerd: grpc api on /var/run/docker/libcontainerd/docker-containerd.sock 
DEBU[0000] containerd: read past events                  count=0
DEBU[0000] containerd: supervisor running                cpus=4 memory=7863 runtime=docker-runc runtimeArgs=[] stateDir="/var/run/docker/libcontainerd/containerd"
DEBU[0000] libcontainerd: containerd health check returned error: rpc error: code = 14 desc = grpc: the connection is unavailable 
DEBU[0001] libcontainerd: containerd health check returned error: rpc error: code = 14 desc = grpc: the connection is unavailable 
DEBU[0001] Using default logging driver json-file       
DEBU[0001] Golang's threads limit set to 55800          
DEBU[0001] Using graph driver aufs                      
DEBU[0001] Max Concurrent Downloads: 3                  
DEBU[0001] Max Concurrent Uploads: 5                    
INFO[0001] Graph migration to content-addressability took 0.00 seconds 
WARN[0001] Your kernel does not support swap memory limit 
WARN[0001] Your kernel does not support cgroup rt period 
WARN[0001] Your kernel does not support cgroup rt runtime 
WARN[0001] mountpoint for pids not found                
INFO[0001] Loading containers: start.                   
DEBU[0001] Option Experimental: false                   
DEBU[0001] Option DefaultDriver: bridge                 
DEBU[0001] Option DefaultNetwork: bridge                
WARN[0001] Running modprobe bridge br_netfilter failed with message: modprobe: can't change directory to '/lib/modules': No such file or directory
, error: exit status 1 
WARN[0001] Running modprobe nf_nat failed with message: `modprobe: can't change directory to '/lib/modules': No such file or directory`, error: exit status 1 
WARN[0001] Running modprobe xt_conntrack failed with message: `modprobe: can't change directory to '/lib/modules': No such file or directory`, error: exit status 1 
DEBU[0001] Fail to initialize firewalld: Failed to connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory, using raw iptables instead 
DEBU[0001] /sbin/iptables, [--wait -t nat -D PREROUTING -m addrtype --dst-type LOCAL -j DOCKER] 
...
@devdavidkarlsson commented Jun 7, 2017

Sorry it is not working for you @dsyer. What docker image are you using where I specify image: docker-registry-dind?

@dsyer commented Jun 7, 2017

I was using docker:dind

@vito vito added this to Researching in Core Aug 12, 2017

@dajulia3 commented Sep 23, 2017

Hey @vito I just want to resurface this issue as a number of teams and clients here at Pivotal Chicago have been clamoring for this. I've also personally felt pain around this, and it is seen as a primary disadvantage compared to Travis or Gitlab when our clients are discussing which tool to use. If there's anything we can do to help the concourse team understand the use cases, let me know!

@josebarn commented Oct 19, 2017

@dajulia3 I updated an example from @meAmidos and is located here: https://github.com/josebarn/dcind

The example should work out of the box.

It is really clean, and it allows the images used by the docker-compose.yml file to be incoming resources (per the Concourse paradigm).

So it should be easy to use AWS images, for example, without having to jump through hoops in the compose file.

I have to really stretch the legs on this approach before I declare it completely solid, but check it out for yourself. :)

@EugenMayer commented Nov 12, 2017

Drone.io implements this, and since it's open source, maybe it's worth fishing for an implementation there. They use the services syntax exactly the same way as stated initially here, and they are written in Go, so at least we could reuse the business logic and fit it into the Concourse framework.

Usually I would tend to say that running the stack for smoke tests, integration tests, and acceptance tests is out of scope for a CI tool. It should be part of a deployment plan: be it the whole stack or just a microservice, deploy a docker-compose with all you need and then run the tests against it. You could even ship the images using save/load without pushing them to your private registry yet (even though that is impractical due to the time save/load takes).

Sure, it is a cool feature to have in a CI tool, since you can test the images right where you created them. On the other hand, CI agents tend not to have the right specs to run those applications, and they produce issues which staging/production nodes would not. It's probably wishing for something which will never be quite perfect.

A little hint for dind: when you are using it, be aware that in regular cases you cannot resolve containers running in the parent Docker process. That means, e.g., accessing concourse-web by the name concourse-web will not work, since it cannot be resolved in the dind container.

A hint for save/load:
Saving/loading an image usually takes about 2-3x as long as pushing it to a registry (on my laptop it's far more, but that's about the average on a DC node with proper hardware; I have seen people report numbers as high as 10x). In addition, save is a very blocking task: the node doing a save most probably cannot do anything else; it's like running SuperPi. In general it's more useful to run an extra Docker registry as a "snapshot artifact registry" and push there mindlessly instead of using save/load. Since a registry is version-safe and uses shasums, you can easily use a passed constraint to ensure the same image is used. What you get in addition is that you can build an image in job X and run your stuff in job Y, since it is now a proper resource, not a local artifact. Since you are going to push a lot of latest containers, and a lot of untagged layers will be created on the registry, be sure to have a registry with a garbage collector (the official distribution image / Portus do not have this). We use Sonatype Nexus for this; it works fine.
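That job X / job Y flow can be sketched as pipeline config (the registry host, repository URI, and job names are all made up):

```yaml
resources:
  - name: src
    type: git
    source: {uri: https://example.com/team/app.git}   # hypothetical repo

  - name: snapshot-image
    type: docker-image
    source:
      repository: registry.example.internal/team/app  # hypothetical snapshot registry
      tag: latest

jobs:
  - name: build      # job X: build the image and push it to the snapshot registry
    plan:
      - get: src
        trigger: true
      - put: snapshot-image
        params: {build: src}

  - name: test       # job Y: the passed constraint pins the exact version job X pushed
    plan:
      - get: snapshot-image
        passed: [build]
        trigger: true
```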

@josebarn you might want to derive your container from docker:stable-dind rather than replicating it, and just add docker-compose (it should be in docker:stable-dind in the first place; I suspect that is the core reason people don't use it directly all the time). This way you would not need to maintain the Docker version in use.

@vito vito removed the enhancement label Nov 28, 2017

@vito vito moved this from Define to Icebox in Core Nov 28, 2017

@gmile commented Dec 20, 2017

> When I have more time I want to play with allowing resources to expose network addresses to builds.

@vito do you see any significant technical obstacles to this?

@pecigonzalo commented Apr 6, 2018

Based on @josebarn's example and some others, here is one based on docker:dind, with autoloading of all images placed under images/.
This simplifies the loading process: just put all images in a specific path.

https://github.com/pecigonzalo/docker-concourse-dind

PS: All WIP commits, as it's just an example.

@samgurtman commented Apr 11, 2018

@pecigonzalo did you not need to mess with the cgroups?

@engrun commented Jul 9, 2018

+1 for support in Concourse for this.
This issue has been open for over 2 years; it would be nice if we could get an update on whether support for docker-compose is on the roadmap.

@hukl commented Jul 12, 2018

I was just looking for that ability. Right now we're using CircleCI for a project, which offers this feature as well. As this issue is still unresolved with Concourse, we'll stick with Circle for now :/

@willejs commented Jul 26, 2018

@vito is this on the roadmap? Doing dind with docker-compose isn't very nice at all. I am having issues on larger projects using this method.

@vito (Member) commented Aug 17, 2018

@willejs Sorry, this is currently not on the roadmap. We're focusing right now on spaces (#1707) and RBAC (#1317) and don't have the bandwidth to start this in parallel. I do want to see discussion/planning on this revived. Maybe people could submit RFCs (https://github.com/concourse/rfcs) with their own mock-ups and we can see where things go from there?

@jchesterpivotal (Contributor) commented Aug 21, 2018

As an aside, this feature might become simpler to implement as/when Kubernetes takes over as the container management layer: https://github.com/kubernetes/kompose

@edtan (Contributor) commented Nov 20, 2018

For those of you using docker-compose with dind, have you tried caching Docker's data-root directory (e.g. /var/lib/docker)? I'm trying to do this instead of docker load, but it seems like everything except the btrfs subvolumes directory is getting cached. Thus, I'm getting errors such as:

docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"ls\": executable file not found in $PATH": unknown.

Here's an example of what I'm trying to do. (Some of the functions are from docker-image-resource/assets/common.sh) When I run the job a second time, dockerdata's btrfs/subvolumes/<hash> directory exists, but is empty.

Docker version 18.09.0
docker-compose version 1.23.1

jobs:
- name: docker-cache-test
  plan:
  - task: run
    privileged: true
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: { repository: my-local-dind }
      caches:
        - path: dockerdata
      run:
        path: sh
        args:
          - -exc
          - |
            source /common.sh

            rm /scratch/docker
            ln -s dockerdata /scratch/docker

            start_docker
            # I have start_docker calling dockerd like this:
            #dockerd --data-root /scratch/docker >/tmp/docker.log 2>&1 &

            docker image ls
            docker run --rm busybox ls
@vito (Member) commented Nov 20, 2018

@edtan (Contributor) commented Nov 21, 2018

Interesting, thanks! In that case, I'll stick to docker loading docker-image-resources.
