
Docker Compose support in Task definitions #324

Open

oppegard opened this issue Mar 25, 2016 · 76 comments

@oppegard (Contributor) commented Mar 25, 2016

External services (MySQL, redis, memcached) and the need for an "uber container" to run our test suites are a pain point for the Pivotal Tracker team. For example, our Dockerfile installs the same version of MySQL 5.6.x that we run on pivotaltracker.com to run tests. But, we also have a PCF tile that runs against p-mysql. There seem to be two options for testing both DBs:

  1. Have a second docker image that duplicates the uber image, substituting MariaDB 10.0.x for MySQL 5.6.x.

  2. Jump through hoops of installing both MySQL and MariaDB in the same docker image (using something like http://mysqlsandbox.net/)

We've gone with option 1 for the time being. While an external service such as RDS is an option, I'd rather not be in the business of managing pools of instances and incur the extra network latency.


In an ideal, future-Concourse world, I could mix and match my external services with Docker Compose. As an example, see how Logplex runs their suite on TravisCI:

If they want to run against redis 3.x, it's as simple as changing a line: https://github.com/heroku/logplex/blob/master/docker-compose.yml#L5

An added benefit of Docker Compose is how quickly you get an environment running locally on your workstation. I don't have any of the Logplex dependencies on my MacBook (erlang, redis, etc.). But, my onboarding experience was:

  1. git clone git@github.com:heroku/logplex.git
  2. cd logplex
  3. docker-compose build && docker-compose run test

In the span of 5 minutes I had an erlang test suite running against redis 2.8.

@arnisoph commented Mar 30, 2016

To run integration tests I need to bootstrap some containers using compose or similar. Is this somehow possible yet?

@ahelal (Contributor) commented Mar 30, 2016

Or something like the Travis matrix or Test Kitchen suites, so you can run tests with different combinations of things, i.e. versions, OSes, ...

@rajatvig commented Apr 20, 2016

Is this support likely to land anytime soon?

@oppegard (Contributor, Author) commented Apr 20, 2016

It'd be nice to hear from @vito if something like this fits into the conceptual model of Concourse. I haven't thought through what supporting Docker Compose means, only that I want it. For example, I'm not sure if this makes sense as a Resource. To be more clear about what I'd like:

  • A task.yml can specify "service resources" as additional containers that are started along with the main build container. All the containers in a build are networked to each other.
  • Running a one-off build via fly execute still works.

It seems like a docker-compose.yml would be a good way to describe the desired containers and their networking. It lets teams use Concourse for CI and Docker Compose for local development.
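To make that concrete, here's a purely hypothetical mock-up of what such a task.yml could look like. The services: block and everything under it is invented for illustration and is not real Concourse syntax:

platform: linux

image_resource:
  type: docker-image
  source: {repository: ruby, tag: "2.3"}

# Hypothetical: extra containers started alongside the task container,
# reachable from it by name. None of this exists in Concourse today.
services:
- name: mysql
  image: {repository: mysql, tag: "5.6"}
- name: redis
  image: {repository: redis, tag: "3.0"}

inputs:
- name: tracker-src

run:
  path: tracker-src/ci/test.sh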

@vito (Member) commented Apr 20, 2016

When I have more time I want to play with allowing resources to expose network addresses to builds. That way you could model throwaway databases and services as resources that go away when the build is done. This could be done either with a single Docker Compose resource or with individual postgres/mysql/etc. resources.

@Tankanow commented May 14, 2016

+1
I vote for as-native-as-possible docker-compose support. One simple-sounding, if perhaps not easily implementable, solution is to give tasks access to the worker OS.

@royseto commented May 28, 2016

Hi, I learned about Concourse this morning and like almost everything I've read about it. Currently we use Jenkins and have a build pipeline that first builds a Docker image, then runs a bunch of test suites against that image in downstream jobs.

However, I think not being able to link dependent containers into a single Concourse build job will block us from migrating to Concourse. Our app is a Python/Flask app with dependencies on postgres and redis. My build script needs to look something like this:

docker-compose up
docker-compose run runtests /tests.sh

where my docker-compose.yml looks something like this:

runtests:
  image: testbuild
  command: /bin/true
  volumes:
   - ./local.py:/app/config/local.py
   - ./runtests.sh:/runtests.sh
  links:
   - db
   - redis
db:
  image: my.docker.registry/postgres-dbimage
redis:
  image: redis

I want to containerize postgres and redis so that we can parallelize our tests and avoid having them step on each other.

Please let me know if there's a way to accomplish what I am trying to do in Concourse today. Unless I'm missing something, I think I need to make a shell script that does the above and keep calling it from Jenkins for now. Thanks!

@vito (Member) commented May 29, 2016

With 1.3 it'll be pretty trivial to run Docker within your task:

docker daemon &
docker-compose up
docker-compose run runtests /tests.sh

This can already be done today, with some workarounds that won't be necessary come 1.3 (e.g. mounting a tmpfs for the Docker graph). You'd also need to run the task with privileged: true, which may also be fixable now that cgroup namespacing is supported in Linux 4.6, though that will have to wait for Garden-runC to support it (/cc @julz).
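For reference, a rough sketch of what that looks like inside a privileged task whose image already ships docker and docker-compose (the tmpfs size here is an arbitrary choice):

mkdir -p /var/lib/docker
mount -t tmpfs -o size=4g tmpfs /var/lib/docker   # keep the Docker graph on tmpfs (the workaround mentioned above)
docker daemon >/tmp/docker.log 2>&1 &             # the graph defaults to /var/lib/docker
docker-compose up -d
docker-compose run runtests /tests.sh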

@royseto commented May 29, 2016

Thanks @vito. I'll try it out after 1.3 comes out. Do you have a sense for what the performance overhead would be of running Docker containers inside Garden instead of running just Docker or Garden inside the host machine?

@julz commented May 29, 2016

There shouldn't really be much of an impact: containers are just processes (there's no virtualisation going on), so the overhead is very small. The only real impact I can think of is that you don't get the shared image cache from the host.

@royseto commented May 29, 2016

OK, thanks for clarifying that @julz.

@seanknox commented Jun 2, 2016

With 1.3 it'll be pretty trivial to run Docker within your task:

docker daemon &
docker-compose up
docker-compose run runtests /tests.sh

This can already be done today, with some workarounds that won't be necessary come 1.3 (e.g. mounting a tmpfs for the Docker graph). You'd also need to run the task with privileged: true, which may also be fixable now that cgroup namespacing is supported in Linux 4.6, though that will have to wait for Garden-runC to support it (/cc @julz).

@julz @vito, short of docker-compose or concourse v1.3 availability, how does one run tasks that need to run multiple, linked Docker containers? E.g. a Rails app container that also needs additional service containers running like Redis and Postgres.

@drnic (Contributor) commented Jun 5, 2016

@Tankanow commented Jun 8, 2016

@drnic, can you explain this a bit more? How do I integrate this pool of VMs into concourse?

@vito (Member) commented Jul 8, 2016

@seanknox That sounds like something you'd just use docker-compose for. I'm not as familiar with it as I should be though - I personally prefer to have my test suites self-contained and spin up their own Postgres server (e.g. here) - but that's purely a stylistic thing.

@oppegard changed the title from "Docker Compose resource" to "Docker Compose support in Task definitions" on Jul 21, 2016
@simonvanderveldt commented Jul 21, 2016

With 1.3 it'll be pretty trivial to run Docker within your task:

docker daemon &
docker-compose up
docker-compose run runtests /tests.sh
This can already be done today, with some workarounds that won't be necessary come 1.3 (e.g. mounting a tmpfs for the Docker graph). You'd also need to run the task with privileged: true, which may also be fixable now that cgroup namespacing is supported in Linux 4.6, though that will have to wait for Garden-runC to support it (/cc @julz).

@vito This means you'd need an image that contains both the docker binary as well as the docker-compose binary to run these commands in, correct?
And because you're running docker in a container there won't be any image caching between runs, correct?

Would it be possible to pass in the docker binary as well as the docker socket of the host the worker is on? That way you can make use of the docker daemon on the host and its caching.
Or alternatively I guess garden(-runc) would need to support Docker Compose somehow?

@vito (Member) commented Aug 8, 2016

@simonvanderveldt There is no Docker binary or socket on the host - they're just running a Garden backend (probably Guardian). Concourse runs at an abstraction layer above Docker, so providing any sort of magic there doesn't really make sense.

The one thing missing post-1.3 is that Docker requires you to set up cgroups yourself. I forgot how annoying that is. I wish they did what Guardian does and auto-configure it, but what can ya do.

So, the full set of instructions is:

  1. Use or build an image with docker in it, e.g. docker:dind.
  2. Run the following at the start of your task: https://github.com/concourse/docker-image-resource/blob/master/assets/common.sh#L1-L40
  3. Spin up Docker with docker daemon &.

Then you can run docker-compose and friends as normal.
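Put together, a minimal sketch of such a task config, assuming an image built on docker:dind that also bundles docker-compose and a copy of that common.sh at /docker-lib.sh (names and paths are illustrative), might look like:

platform: linux

image_resource:
  type: docker-image
  source: {repository: my-dcind-image}   # illustrative: docker:dind plus docker-compose and /docker-lib.sh

inputs:
- name: src

run:
  path: sh
  args:
  - -exc
  - |
    source /docker-lib.sh   # the cgroup setup functions from docker-image-resource's common.sh (step 2)
    start_docker            # configures cgroups and launches the daemon (step 3)
    cd src
    docker-compose up -d
    docker-compose run tests   # assumes src/docker-compose.yml defines a tests service

As noted, the task itself still has to be run with privileged: true.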

The downside of this is that you'll be fetching the images every time. #230 will address that.

In the long run, #324 (comment) is the direction I want to go.

@mumoshu commented Aug 9, 2016

an image that contains both the docker binary as well as the docker-compose binary to run these commands in

@simonvanderveldt FYI, I've created a dcind docker image for it https://github.com/mumoshu/dcind

It's working for me and my colleagues.

One caveat is that, to prevent docker-compose from fetching all the dependent docker images from scratch on each task run, you have to provide appropriate docker-image-resource inputs to the task. E.g. if your compose.yml references the redis image, provide that image via an input and docker load it, and so on, like I do in https://github.com/mumoshu/concourse-aws/blob/master/ci/tasks/compose.yml#L25
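A rough sketch of that pattern, with illustrative resource/input names and get params as I understand the docker-image-resource docs:

# pipeline.yml: pull the image as a resource so Concourse caches it between runs
resources:
- name: src
  type: git
  source: {uri: https://github.com/example/app}   # illustrative
- name: redis-image
  type: docker-image
  source: {repository: redis}

jobs:
- name: integration
  plan:
  - get: src
  - get: redis-image
    params: {save: true}   # emits redis-image/image (a docker save tarball) plus image-id/repository/tag files
  - task: compose-tests
    privileged: true
    file: src/ci/compose.yml   # the task config lists redis-image as an input

# and in the task script, after the Docker daemon is up but before docker-compose runs:
#   docker load -i redis-image/image
#   docker tag "$(cat redis-image/image-id)" "$(cat redis-image/repository):$(cat redis-image/tag)"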

As @vito mentioned, once #230 lands, this docker load will be unnecessary.

@meAmidos commented Sep 15, 2016

@mumoshu Thank you for the dcind! It turned out to be a very useful solution, and I used it as a reference for a slightly simplified image: https://github.com/meAmidos/dcind

@martinsbalodis commented Oct 7, 2016

I am having the same issue as everyone else in here. I would suggest a solution similar to the travis.ci approach. Basically, the required services for the test could be spawned as docker containers that are accessible over the network via DNS names. For example, a build.yml might look like this:

platform: linux

image_resource:
  type: docker-image
  source: {repository: busybox}

services:
  - name: sql (accessible via dns by resolving sql)
    type: docker-image
    source: {repository: mysql}
  - name: rabbitmq (accessible via dns by resolving rabbitmq)
    type: docker-image
    source: {repository: rabbitmq}

inputs:
- name: flight-school

run:
  path: ./flight-school/ci/test.sh

@pecigonzalo commented Apr 6, 2018

Based on @josebarn's example and some other examples, here is one based on docker:dind, with autoloading of all images placed under images/.
This simplifies the loading process: just put all the images in a specific path.

https://github.com/pecigonzalo/docker-concourse-dind

PS: All WIP commits, as it's just an example.

@samgurtman commented Apr 11, 2018

@pecigonzalo did you not need to mess with the cgroups?

@engrun commented Jul 9, 2018

+1 for support in Concourse for this.
This issue has been open for over 2 years; it would be nice if we could get an update on whether support for docker-compose is on the roadmap.

@hukl commented Jul 12, 2018

I was just looking for that ability. Right now we're using CircleCI for a project, which offers this feature as well. As this issue is still unresolved with Concourse, we'll stick with Circle for now :/

@willejs commented Jul 26, 2018

@vito is this on the roadmap? Doing dind with docker compose isn't very nice at all. I am having issues on larger projects using this method.

@vito (Member) commented Aug 17, 2018

@willejs Sorry, this is currently not on the roadmap. We're focusing right now on spaces (#1707) and RBAC (#1317) and don't have the bandwidth to start this in parallel. I do want to see discussion/planning on this revived. Maybe people could submit RFCs (https://github.com/concourse/rfcs) with their own mock-ups and we can see where things go from there?

@jchesterpivotal (Contributor) commented Aug 21, 2018

As an aside, this feature might become simpler to implement as/when Kubernetes takes over as the container management layer: https://github.com/kubernetes/kompose

@edtan (Contributor) commented Nov 20, 2018

For those of you using docker-compose with dind, have you tried caching Docker's data-root directory (e.g. /var/lib/docker)? I'm trying to do this instead of docker load, but it seems like everything except the btrfs subvolumes directory is getting cached. Thus, I'm getting errors such as:

docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"ls\": executable file not found in $PATH": unknown.

Here's an example of what I'm trying to do. (Some of the functions are from docker-image-resource/assets/common.sh) When I run the job a second time, dockerdata's btrfs/subvolumes/<hash> directory exists, but is empty.

Docker version 18.09.0
docker-compose version 1.23.1

jobs:
- name: docker-cache-test
  plan:
  - task: run
    privileged: true
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: { repository: my-local-dind }
      caches:
        - path: dockerdata
      run:
        path: sh
        args:
          - -exc
          - |
            source /common.sh

            rm /scratch/docker
            ln -s dockerdata /scratch/docker

            start_docker
            # I have start_docker calling dockerd like this:
            #dockerd --data-root /scratch/docker >/tmp/docker.log 2>&1 &

            docker image ls
            docker run --rm busybox ls

@vito (Member) commented Nov 20, 2018

@edtan (Contributor) commented Nov 21, 2018

Interesting, thanks! In that case, I'll stick to docker loading docker-image-resources.

@pswenson commented Apr 3, 2019

in my old CI systems (jenkins and drone) my workflow was the following:

  1. check out code
  2. run unit tests
  3. build docker image
  4. using docker-compose, run a smoke test against the docker image that was just built (hit the health check, send a few messages to the http endpoint, and validate the response)
  5. publish image

I'm finding that with concourse it appears the smoke test is not possible unless I use DinD, which has significant downsides. It's very unfortunate that I have to publish a possibly bad image to the registry.

just because unit tests pass doesn't mean that starting a container will work....

Is there an alternative that I don't know about?

thanks

@AnianZ (Contributor) commented Apr 3, 2019

@pswenson I've had good experiences with a version of the DinD image

My pipeline pulls in the git resource and does a set of unit tests and linting first; if those pass, it triggers a build step that builds a docker container and pushes it with a special tag to the registry. If the build is successful, that image triggers two more jobs that run some smoke and integration tests against multiple containers spun up with docker-compose, including an empty postgres db which just needed some small tweaks to run faster in that setup. An essential piece of this setup is wait-for-it.

My team is not especially big, but I've run more than 1000 of each of these jobs without major problems. If the jobs do fail for reasons other than a real problem with the build, it's the concourse worker running out of IOPS when too many jobs run on it in parallel, but even that is very rare with the database in docker-compose configured so it does not fsync.
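For anyone copying that database tweak, a minimal sketch of the relevant docker-compose pieces (image names and the wait-for-it path are illustrative):

version: "3"
services:
  db:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: test
    # throwaway test database: trade durability for speed to reduce IOPS pressure
    command: postgres -c fsync=off -c synchronous_commit=off -c full_page_writes=off
  tests:
    image: myapp-tests   # illustrative image containing the test suite and wait-for-it.sh
    depends_on:
      - db
    # block until postgres accepts connections, then run the suite
    command: ./wait-for-it.sh db:5432 -- ./run-tests.sh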

It's not optimal but very workable for my use case right now. But of course I would still appreciate official support for something like this.

@trulede commented Apr 8, 2019

...it appears the smoke test is not possible unless I use DinD, which has significant downsides. It's very unfortunate that I have to publish a possibly bad image to the registry.

just because unit tests pass doesn't mean that starting a container will work....

Is there an alternative that I don't know about?

Yes, you can do something like this:

jobs:
  - name: Big Foo
    plan:
      - aggregate:
        - get: foo_git
          trigger: true

      - task: Build Test
        privileged: true
        params:
          IMAGE_NAME: foo_image
        config:
          platform: linux
          image_resource:
            type: docker-image
            source:
              repository: foo:443/foo/alpine_dind
          inputs:
            - name: foo_git
          outputs:
            - name: docker-images
          run:
            path: /bin/bash
            args:
              - -exc
              - |
                source /docker-lib.sh
                start_docker # Starts Docker in Swarm Mode
                cd foo_git
                make build # Builds the Docker container ... docker blah/Dockerfile etc ...
                make start # starts a Docker Stack with the build container, and all others needed
                make test # Runs some tests
                docker save foo:latest > ../docker-images/foo.latest.tar

      - put: foo
        params:
          load_repository: foo
          load_file: docker-images/foo.latest.tar
          tag_as_latest: true
        get_params:
          skip_download: true

That starts an Alpine container, turns on DinD, builds an image, starts it, runs some tests, saves the image to a file ... and then, if everything is OK, puts the image (from the file) to the Docker repo.

Not every part is there, but the rest of the answers are not too far away.

@AnianZ (Contributor) commented Apr 17, 2019

This workflow seems to have stopped working with the release of Concourse 5.0.0. I can't get the docker-compose-in-a-container setup working any more. It fails with mount: can't find /sys/fs/cgroup in /proc/mounts

Related post in the forum by another person affected: https://discuss.concourse-ci.org/t/docker-compose-inside-concourse-5-no-longer-working/1232

@meAmidos commented Apr 17, 2019

@AnianZ This might help with the issue: meAmidos/dcind#12

@AnianZ (Contributor) commented Apr 17, 2019

@meAmidos Thank you! That was exactly what I was looking for. Apparently I was watching the wrong version of dcind.

@willejs commented Jan 9, 2020

@vito is there any future plan or ideas as to how to tackle this?

@vito (Member) commented Jan 9, 2020

@willejs The situation hasn't really changed from last time. We have too much on our plate and no one from the community has really tried to tackle this and come up with RFCs/proposals, so until that happens this likely won't go anywhere beyond the current solutions. 🙁

@Sispheor commented Feb 28, 2020

What about something like what's proposed for GitHub Actions?

services:
  nginx:
    image: nginx
    # Map port 8080 on the Docker host to port 80 on the nginx container
    ports:
      - 8080:80
  redis:
    image: redis
    # Map TCP port 6379 on Docker host to a random free port on the Redis container
    ports:
      - 6379/tcp

The only thing missing in Actions is the ability to use a private registry for the image, but otherwise the implementation seems correct.

@lucasmdrs commented Mar 20, 2020

Most CIs use a definition similar to the one above shown by @Sispheor: TravisCI, CircleCI, GitHub Actions, GitlabCI... it doesn't seem like a big feature request.

This makes it harder to run integration tests, which are currently only possible through some sort of workaround with docker-compose or scripts.

@jchesterpivotal (Contributor) commented Mar 20, 2020

it doesn't seem like a big feature request.

It doesn't appear to be a big request, no.

But I think the history of this issue shows that it's a non-trivial change in the current architecture of Concourse.

It might become easier in future as various architectural refactoring work takes place in order to support Kubernetes as a runtime.

@lucasmdrs commented Mar 21, 2020

But I think the history of this issue shows that it's a non-trivial change in the current architecture of Concourse.

By no means did I mean to say it's a trivial change; I apologize if I misspoke and it came across that way.

But the history of this issue, the date it was opened, and the broad adoption of this feature in other CI tools show that it shouldn't be diminished to a Core side-road ("Small features or bug fixes that aren't part of any epic on the roadmap"), much less to a Backburner priority lane.

I'm here to support the request and keep this moving, because it's something I would love to see given more attention, as it's something that prevents me from fully adopting the tool.

@vito (Member) commented Mar 22, 2020

But the history of this issue, the date of opening and the large adoption of this feature in other CI tools, shows that it shouldn't be diminished to a Core side-road ("Small features or bug fixes that aren't part of any epic on the roadmap") even less to a Backburner priority lane.

I fully agree with this, and it would be great if someone from the community could help out. As I said before, we have too much on our plate right now to take this on ourselves. :/

While this is clearly a highly-sought-after feature, there are other highly-sought-after features which are more important right now because the pain felt without them is much worse: namely, support for branches/pull-requests (see the v10 roadmap) and support for runtimes like Kubernetes. Compared to those feature gaps, this issue is a quality-of-life improvement which has existing workarounds and patterns (though they may not be perfect), which means it has lower priority.

Concourse is a small team and we have to choose our battles. Unfortunately with as many users as we have it's not a matter of simply saying things are high priority, it's a matter of choosing who to disappoint, and hoping the community can pick things up where we can't. 🙁

Once we get past those big features we can start to take a deeper look at this. But y'all don't have to wait. Progress in the form of technical proposals and proof-of-concepts would really help get the ball rolling. If you want to support this request, that's really the way.

@rucciva commented Jul 17, 2020

Hi all, I'm new to concourse and I have a question: does concourse have some kind of detachable/background task, or maybe a persistent resource? Regarding this issue, I'm thinking of running something like docker-compose run --no-deps --use-aliases <service-name> as a concourse (detachable) task, or maybe as a (persistent) resource for each of the dependency services, before running the final test service. As for the docker daemon, I'm thinking of mapping /var/run/docker.sock to a tcp port using something like mutagen so that the concourse worker can have access to it.

@danihodovic commented Oct 14, 2020

A Concourse CI resource that executes docker-compose against a remote host:

https://github.com/troykinsella/concourse-docker-compose-resource

@aoldershaw (Contributor) commented Nov 18, 2020

I put together an RFC with a proposal for a new services: field for tasks that looks like:

task: integration-tests
file: ci/tasks/test.yml
params:
  POSTGRES_HOST: ((.svc:postgres.host))
  POSTGRES_PORT: ((.svc:postgres.port))
services:
- name: postgres
  file: ci/services/postgres.yml

Would love to hear anyone's thoughts! concourse/rfcs#84
