
Proposal: stack composition #9459

aanand opened this issue Dec 2, 2014 · 54 comments


@aanand aanand commented Dec 2, 2014

NOTE: this proposal has been replaced by #9694.

This proposal, replacing #9175, is to bring straightforward, Fig-inspired stack composition to the Docker client. The ultimate goal is to provide an out-of-the-box Docker development experience that is:

  1. delightful
  2. eventually suitable for production orchestration in the “80% case”
  3. compatible with Docker clustering

(The previous proposal built on the docker groups proposal, #8637. This proposal does not, as I've determined that - since groups aren't necessary for implementing composition - it's preferable to avoid making changes or additions to the Docker API.)

I’ve already implemented an alpha of the required functionality on my composition branch - though it’s not ready for prime time, everyone is very much encouraged to try it out, especially Fig users. Scroll down for test builds!

The basic idea of stack composition is that with a very simple configuration file which describes what containers you want your application to consist of, you can type a single command and Docker will do everything necessary to get it running.

Configuration file

A group.yml is used to describe your application. It looks a lot like fig.yml, except that what Fig calls the “project name” is specified explicitly:

name: rails_example

db:
  image: postgres:latest

web:
  build: .
  command: bundle exec rackup -p 3000
  volumes:
    - .:/myapp
  ports:
    - "3000:3000"
  links:
    - db

A container entry must specify either an image to be pulled or a build directory (but not both). Other configuration options mostly map to their docker run counterparts.
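The image-XOR-build rule is easy to enforce at parse time. Here's a minimal sketch (the function name and dict-based config are illustrative, not the actual implementation on the branch):

```python
def validate_container(name, config):
    """Enforce the rule that a container entry names either an
    'image' to pull or a 'build' directory, but not both."""
    has_image = "image" in config
    has_build = "build" in config
    if has_image == has_build:  # both present, or both missing
        raise ValueError(
            "container %r must specify exactly one of 'image' or 'build'" % name
        )

# Valid entries pass silently; invalid ones raise.
validate_container("db", {"image": "postgres:latest"})
validate_container("web", {"build": ".", "command": "bundle exec rackup"})
```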

docker up

There’s a new docker up command. It performs the following steps:

  1. Parse group.yml
  2. For each defined container, check whether we have an image for it. If not, either build or pull it as needed.
  3. Create/recreate the defined containers. (For now, containers are always recreated if they already exist - this is the simplest way to ensure changes to group.yml are picked up.)
  4. Start the containers in dependency order (based on links and volumes_from declarations in group.yml).
  5. Unless docker up was invoked with -d, attach to all containers and stream their aggregated log output until the user sends Ctrl-C, at which point attempt to stop all containers with SIGTERM. Subsequent Ctrl-Cs result in a SIGKILL.
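Step 4's dependency ordering amounts to a topological sort over links and volumes_from declarations. A sketch of how that could look (illustrative only, not the code on the composition branch):

```python
def start_order(containers):
    """Return container names sorted so that each container's links and
    volumes_from dependencies come before it (simple DFS topological sort)."""
    order, visiting, done = [], set(), set()

    def deps(name):
        cfg = containers[name]
        # links may be written "service" or "service:alias"
        links = [l.split(":")[0] for l in cfg.get("links", [])]
        return links + cfg.get("volumes_from", [])

    def visit(name):
        if name in done:
            return
        if name in visiting:
            raise ValueError("dependency cycle involving %r" % name)
        visiting.add(name)
        for dep in deps(name):
            if dep in containers:  # skip externally managed containers
                visit(dep)
        visiting.discard(name)
        done.add(name)
        order.append(name)

    for name in sorted(containers):
        visit(name)
    return order

# db has no dependencies and web links to db, so db starts first.
print(start_order({
    "web": {"build": ".", "links": ["db"]},
    "db": {"image": "postgres:latest"},
}))  # → ['db', 'web']
```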

Enhancements to existing CLI commands

An optional NAME_PREFIX argument is added to docker ps to allow filtering of containers based on name prefix (on the client side, initially).

A new syntax is introduced as a shorthand for referring to containers, images and build directories:

  • :web designates the container named web in group.yml.
  • : designates all containers defined in group.yml. Depending on the exact command being invoked, this is restricted to containers which currently exist on the host, or those whose image has been built.
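Resolving the shorthand could be as simple as the following sketch (names are assumptions, not the branch's actual code; the "currently exist on the host" restriction is omitted):

```python
def expand_shorthand(arg, project, services):
    """Expand the proposed ':' shorthand into full container names.
    ':web' -> '<project>_web'; a bare ':' -> every defined container."""
    if arg == ":":
        return ["%s_%s" % (project, s) for s in services]
    if arg.startswith(":"):
        name = arg[1:]
        if name not in services:
            raise ValueError("no container %r in group.yml" % name)
        return ["%s_%s" % (project, name)]
    return [arg]  # not shorthand; pass through untouched

print(expand_shorthand(":web", "rails_example", ["web", "db"]))
# → ['rails_example_web']
print(expand_shorthand(":", "rails_example", ["web", "db"]))
# → ['rails_example_web', 'rails_example_db']
```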

Here are some example commands with their long-winded/non-portable equivalents.

List our containers:

$ docker ps rails_example_
$ docker ps :

Rebuild the web image:

$ docker build -t rails_example_web .
$ docker build :web

Re-pull the db image:

$ docker pull postgres:latest
$ docker pull :db

Kill the web container:

$ docker kill rails_example_web
$ docker kill :web

Kill all containers:

$ docker kill rails_example_web rails_example_db
$ docker kill :

Kill and remove all containers:

$ docker rm -f rails_example_web rails_example_db
$ docker rm -f :

Delete the web image:

$ docker rmi rails_example_web
$ docker rmi :web

Open a bash shell in the web container:

$ docker exec -ti rails_example_web bash
$ docker exec -ti :web bash

Run a one-off container using web’s image and configuration:

$ docker build -t rails_example_web . && docker run -ti -v `pwd`:/myapp --link rails_example_db:db rails_example_web bash
$ docker run -ti :web bash

Topics for discussion: an inexhaustive list

Including the app name in the file. I’m unsure about making this the default - lots of Fig users want to be able to do it, but I’m worried that it’ll hurt portability (we don’t do it with Dockerfile, and in my opinion it’s better off for it). Alternate approaches include using the basename of the current directory (like Fig does), or generating a name and storing it in a separate, unversioned file.
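For reference, the alternatives described above boil down to a small precedence chain. A sketch (the DOCKER_PROJECT_NAME variable is hypothetical, just to illustrate an environment override):

```python
import os

def project_name(config, env=os.environ, cwd=None):
    """Pick the project name used to prefix container names.
    Order tried here (an assumption, reflecting the alternatives in the
    proposal): explicit 'name' in group.yml, then an environment
    override, then the basename of the current directory (Fig's rule)."""
    if config.get("name"):
        return config["name"]
    if env.get("DOCKER_PROJECT_NAME"):  # hypothetical variable name
        return env["DOCKER_PROJECT_NAME"]
    return os.path.basename(cwd or os.getcwd())

print(project_name({"name": "rails_example"}))  # → rails_example
print(project_name({}, env={}, cwd="/home/me/myapp"))  # → myapp
```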

Clustering and production. People are already deploying single-host production sites with fig up -d, validating this general approach in simple scenarios, but we need to be sure that it’ll port well to a clustered Docker instance.

Scaling. I don't think an equivalent to fig scale is necessary on day one, but it will eventually be needed as Docker becomes a multi-host platform, so there shouldn't be anything in the design that'll make that difficult to implement later.

Test builds

Here's how to test it out if you're running boot2docker. First, replace the binary in your VM:

$ boot2docker ssh
docker@boot2docker:~$ sudo -i
root@boot2docker:~# curl -LO
root@boot2docker:~# /etc/init.d/docker stop
root@boot2docker:~# mv /usr/local/bin/docker ./docker-stable
root@boot2docker:~# mv ./docker-1.3.2-dev-linux /usr/local/bin/docker
root@boot2docker:~# chmod +x /usr/local/bin/docker
root@boot2docker:~# /etc/init.d/docker start
root@boot2docker:~# docker version   # both "Git commit"s should be c6bf574
root@boot2docker:~# exit
docker@boot2docker:~$ exit

Next, replace your client binary:

$ curl -LO
$ mv /usr/local/bin/docker ./docker-stable
$ mv ./docker-1.3.2-dev-darwin-amd64 /usr/local/bin/docker
$ chmod +x /usr/local/bin/docker
$ docker version                     # both "Git commit"s should be c6bf574

Not yet implemented

There are a few things left to implement:

  • Supporting volumes_from
  • Validation of the YAML file
  • Code cleanup
  • Test coverage

Example app 1: Python/Redis counter

Here’s a sample app you can try:

from flask import Flask
from redis import Redis

app = Flask(__name__)
redis = Redis(host="redis", port=6379)

@app.route('/')
def hello():
    redis.incr('hits')
    return 'Hello World! I have been seen %s times.' % redis.get('hits')

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)




FROM python:2.7
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt


name: counter

web:
  build: .
  command: python app.py
  ports:
    - "5000:5000"
  volumes:
    - .:/code
  links:
    - redis

redis:
  image: redis:latest
  command: redis-server --appendonly yes

If you put those four files in a directory and type docker up, you should see everything start up:

[animated demo of docker up]

It'll build the web image, pull the redis image, start both containers and stream their aggregated output. If you Ctrl-C, it'll shut them down.

Example app 2: Fresh Rails app

I’ve ported the Rails example from Fig. See


To get hacking, check out the composition branch on my fork:

# if you don’t already have Docker cloned
$ git clone
$ cd docker

$ git remote add aanand
$ git fetch --all
$ git checkout -b composition aanand/composition

@thaJeztah thaJeztah commented Dec 2, 2014

Third time's a charm? :)

I think my comments on the previous proposal still apply:

  1. Since we're defining a "stack" not a "group", perhaps the file should be called Dockerstack.yml? (#9175 (comment))
  2. For naming and possibly multiple instances of a stack (#9175 (comment))

Thanks (again), will give these builds a try soon.


@andrewmichaelsmith andrewmichaelsmith commented Dec 4, 2014

As discussed in fig issue #159, are there plans to allow for sharing (in this example) redis between YAML files? So I can have 2 applications that both get the same redis when I docker up?


@aanand aanand commented Dec 4, 2014

@andrewmichaelsmith I agree that we should support links to existing containers. This should actually work in the test build (though I haven't stress-tested it much) - try prepending a link with a slash, e.g.

  - /external-redis:redis

Zooming out, ideally there'd be a way to do "dependency injection" of containers, so I could e.g. specify a stock redis container in group.yml and then override it in production so it points to my already-running, separately-managed Redis instance.
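The slash convention above suggests a small parsing rule. A sketch of how one links entry might be resolved (illustrative, not the actual implementation):

```python
def resolve_link(spec, project):
    """Turn one 'links' entry into (container_name, alias).
    A leading slash marks an existing, externally managed container
    (e.g. '/external-redis:redis'); otherwise the name is scoped to
    the stack by prefixing the project name."""
    name, _, alias = spec.partition(":")
    if name.startswith("/"):
        return name.lstrip("/"), alias or name.lstrip("/")
    return "%s_%s" % (project, name), alias or name

print(resolve_link("/external-redis:redis", "counter"))
# → ('external-redis', 'redis')
print(resolve_link("db", "rails_example"))
# → ('rails_example_db', 'db')
```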


@sbuss sbuss commented Dec 4, 2014

Including the app name in the file.
I’m unsure about making this the default - lots of Fig users want to be able to do it, but I’m worried that it’ll hurt portability (we don’t do it with Dockerfile, and in my opinion it’s better off for it). Alternate approaches include using the basename of the current directory (like Fig does), or generating a name and storing it in a separate, unversioned file.


Are there use cases for referencing the app name besides building test or dev containers? If not, I think you should keep the current fig behavior, but also support referencing other containers in a FROM :web style. If you don't add support for FROM :web, then I do support adding an app_name key to the yaml file, since that removes broken behavior when working in differently-named directories.

I'd love to do FROM :web to build a tests-only container, rather than have to fig build web && FROM myappdir_web.

Using fig for building testing containers

I'm currently using fig to build a container for running tests, which requires knowing the basename of the current directory. This is problematic if devs check out the code into differently-named directories. Let me show you an example, using the sample python app you provided in the ticket description.

In addition to the Dockerfile and fig.yml you provide above, I also have a dockerfiles/test directory which contains the following Dockerfile:

FROM myapp_web
ADD requirements-dev.txt /srv/myapp
RUN pip install -r requirements-dev.txt



And I add these lines to fig.yml:

test:
  build: dockerfiles/test
  environment:
    DEBUG: True

This lets me do: fig run test nosetests, and lets me avoid polluting the production container with my development dependencies. However, it also means that before running the tests I have to do fig build web so the myapp_web tag will get created.

The detail to note is FROM myapp_web, which is of the format <myapp_basedir>_<fig_base_container_name>. I'd love to do something like FROM :web. Or, if support for that won't be added, then I'd at least like to be independent of the directory name. A nice bonus would be allowing me to run fig run test without first building the base (I'd like docker composition to just figure that out).

I haven't found a good pattern for building containers for testing, so please let me know if you know of a better pattern that sidesteps these issues without adding new features.


@ahawkins ahawkins commented Dec 4, 2014

I haven't found a good pattern for building containers for testing, so please let me know if you know of a better pattern that sidesteps these issues without adding new features.

@sbuss Fig previously supported --projectname (or --project-name; can't remember which off the top of my head). I codified this in the project Makefile with something like FIG:=fig --project-name foo, then used $(FIG) everywhere. If you're invoking fig through the shell itself, there is an environment variable you can use as well. All in all I've found the directory scoping very annoying in general. I'd like to opt out of it completely, since I run everything inside a VM anyway, so no collisions can happen regardless.

@aanand I previously used fig for development environments for my team. I switched away because it did not expose docker's full functionality, or made it awkward in places. The previous paragraph also speaks to this. Does your implementation also postfix everything with _N, e.g. some_container_1? And does it also remove underscores from container/group names? This may be a small gripe, but I found it very annoying whenever I wanted to do something outside fig's context. There were times when I wanted to start a one-off docker container via docker run but had to link to something created with fig. That requires you to know two things: the namespace may change (if not using --project-name as mentioned above), and that the container name is not what's listed in the config files. Of course you learn these things, but it's a strange hurdle to jump and introduces odd dependencies between different tools.

Does your proposal include referencing containers for use with --link? E.g. --link :postgres:db. If not, I think you should certainly consider it. That abstraction should stretch out to all aspects of the docker CLI.

Also, does this mean docker groups will be built into docker officially and use the same internal APIs as the CLI? I also ran into problems with fig where the API it communicated with did not use the same configuration as the daemon (--insecure-registry). Will such concerns go away in your implementation?


@deniszgonjanin deniszgonjanin commented Dec 4, 2014

This is awesome!

I would say keep the name: optional and use directory as default, like fig does. But it's definitely a needed feature. One thing I don't like about fig is that I am not able to choose what my images are named aside from renaming directories.

And I don't want to be a killjoy here, but this really is just fig, re-implemented in Go and looking to find its way into the core. So my preference, as a heavy fig & Docker user, would be to keep this stuff separate.

  • It's mostly syntactic sugar.
  • It already exists as a healthy and active open source project
  • Fig is already governed by Docker, so its direction is not an issue. Any desired improvements can find their way into fig instead.
  • The direction we've been hearing about is towards a tight and modular core.

@tomfotherby tomfotherby commented Dec 4, 2014

In the argument as to whether to have compose included in the docker binary or separate, I vote included. I've been using fig and found myself wishing I could do docker up instead of fig up.

Other thoughts:

  • It would be great to support the --env-file option in compose from the beginning. Having a file for environment variables is a nice way to tweak the container config instead of having to tweak the fig.yml/group.yml files directly, especially for teams where some settings are individual (e.g. a catchall email address for the dev container). (fig issue: docker/compose#479)
  • I find fig adding _1 to the container name really annoying, especially as I don't need the scale option. It would be great if compose didn't do the same, and when a scale option is implemented, subsequent copies could start from _2 onwards.
  • As well as having a file (group.yml) which docker up magically knows to use, it would be great to also be able to provide the file directly, e.g. docker up myconfig.yml . I have a folder with several fig.yml files and they each have to be in their own sub-folder to be usable, which is a little annoying. (Not valid - thanks for the tip @itorres )

@olimart olimart commented Dec 4, 2014

@aanand 👍


@itorres itorres commented Dec 4, 2014

@tomfotherby: about the fig.yml-per-directory point, I think you're missing this fig option:

$ fig -h
-f, --file FILE           Specify an alternate fig file (default: fig.yml)

I'm not sure that I really like the idea of including composition in the main docker binary. I see composition/fig as (very) convenient rather than core functionality (very as in "I don't really use docker without fig nowadays").


@potto007 potto007 commented Dec 4, 2014

I vote to keep Compose separate. It will be much easier to add the hooks necessary to do cluster management between Compose and Swarm without risking regressions to Docker. Commands like docker up could be added when Swarm is installed onto a system where Docker resides, i.e. via shell aliases.


@weihanwang weihanwang commented Dec 5, 2014

I vote to keep Compose separate, since its functions are largely independent from the core and bear quite different responsibilities. Modularity would make maintenance (of both the core & Compose) and third-party integration easier. So far I'm not aware of strong technical reasons against separation.


@legdba legdba commented Dec 5, 2014

Please keep it separate.
It's best to have different tools, each with a clear and simple goal.
This will make users' lives better by keeping each tool's purpose simple to understand and letting them choose which tool to use for each purpose (several projects are working on composition already). It will make coders' lives easier by reducing the test/scope surface and dependencies.


@chenzhiwei chenzhiwei commented Dec 5, 2014

I agree with keeping it separate. Just like git-review: if you've installed it, you can use either git review or git-review.


@phaygoweb phaygoweb commented Dec 5, 2014

As composing seems to be becoming the norm, I'd like to see it included in the Docker binary.


@dnephin dnephin commented Dec 5, 2014

it's preferable to avoid making changes or additions to the Docker API.

This is unfortunate. I thought at least adding support to query for a list of images or containers by prefix would make the client a LOT better. Is it possible this could include some minor API additions, even if it doesn't include all of the docker groups stuff? This is what I was looking forward to the most.

As far as incremental releases, by adding the API support first, current users of fig would start to gain some of the benefits right away. Otherwise we are probably waiting for this to be feature complete with, and as polished as, fig.

docker up
2. ... either build or pull it as needed.

I think this is one of the parts of fig that I'm less than thrilled with. I've tried to outline my concerns here: docker/compose#693. I'd like to see this behaviour change from what fig does.

  1. Don't build/pull by default; error if things are missing. If a flag is specified (say, --build-and-pull) then all containers are force-rebuilt and all images pulled, even if a container/image already exists. Otherwise it seems like you have to build/pull anyway, and there is no way to skip those steps.
  2. Don't accept any command-line args for build/pull, otherwise the list of arguments will just be ridiculous (all of docker build, plus docker pull, plus docker up-specific ones).
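The behaviour proposed in point 1 could be sketched like this (function name and flag handling are illustrative only, not what fig or the branch actually does):

```python
def plan_up(containers, existing_images, build_and_pull=False):
    """Return the per-container action for 'docker up' under the
    behaviour proposed above: with --build-and-pull everything is
    rebuilt/re-pulled; without it, a missing image is an error."""
    plan = {}
    for name, cfg in containers.items():
        # build containers are assumed tagged under their own name here
        image = cfg.get("image", name)
        if build_and_pull:
            plan[name] = "build" if "build" in cfg else "pull"
        elif image in existing_images:
            plan[name] = "use-existing"
        else:
            raise RuntimeError(
                "image %r for container %r is missing; "
                "re-run with --build-and-pull" % (image, name))
    return plan

print(plan_up({"redis": {"image": "redis:latest"}}, {"redis:latest"}))
# → {'redis': 'use-existing'}
```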

A new syntax is introduced as a shorthand for referring to containers

I like this

Including the app name in the file

I would like to see this as an option at the very least, it doesn't have to be required. I can say that every single fig setup I have forces an explicit project name because it's necessary for jenkins hosts. Being able to override it locally is still nice for hosts with multiple users, or being able to run multiple environments on the same host (one for interactive testing, one for someone else to preview, one for automated testing, etc). Override from environment variable I think would be my preference.


Scaling

I've never used it personally, and it feels like you could get away with just adding multiple entries to the group.yml in some cases.

Overall, I am still a fan of this being separate repo. I agree with @chenzhiwei and the comparison to git commands, this would be nice.

Docker can still distribute packages that contain multiple clients, which makes it feel like a single binary. But for users that are interested in custom features, or experimenting with new features before they get accepted upstream, being able to build and run a separate client is much easier than trying to rebuild the entire docker client.

Separate repositories also makes it possible to have separate release schedules, which is always nice.


@mirath mirath commented Dec 5, 2014

It would be nice to have Python bindings to programmatically create the files/containers.


@jokeyrhyme jokeyrhyme commented Dec 5, 2014

Any chance TOML and/or JSON can be supported in addition to (or instead of) YAML? I've had some issues in the past with ambiguous YAML structures.


@jakajancar jakajancar commented Dec 5, 2014

Don't get deterred by the "monolithic blob fiasco" and become too defensive: this belongs in core docker.

Once you do "docker up", it might be cool if there's a way to launch multiple instances of a "role" (like "heroku ps:scale web=3 bg=4"), for testing.


@jakajancar jakajancar commented Dec 5, 2014

One other thought:

  • With Dockerfile, I can give my repo to Docker Hub, etc., and they know exactly how to build my image.
  • With Dockerstack.yml/group.yml, I can (hopefully) give my repo to Heroku and they know what tiers to launch and how to link them.
  • For running tests, I still have to write a proprietary config (e.g. circle.yml for CircleCI), although a very basic one.

This is probably out of scope, but just mentioning it, since for tests we also need a sort of a stack/group.


@goloroden goloroden commented Dec 5, 2014

@jokeyrhyme +1 for JSON support in addition to (or instead of) YAML.


@deepflame deepflame commented Dec 5, 2014

Maybe this is not well thought through, but the simplest thing for me would be to just integrate it into fig.

Setting the app name would be nice in fig as well. Maybe we could just deprecate the old file format while supporting both in parallel for a while, until "everyone" has migrated?

At the least, I would support having a separate tool for doing the orchestration, so people familiar with fig can then use the same tool to run their containers in production.
People already use fig in production, btw...


@ifraixedes ifraixedes commented Dec 5, 2014

@jokeyrhyme +1 for JSON support in addition to (or instead of) YAML


@Hokutosei Hokutosei commented Dec 5, 2014

@jokeyrhyme +1 for JSON support


@softprops softprops commented Dec 5, 2014

-1 for bundling with docker releases

I think there are a few implementations out there now that do this task. Fig happens to be the one that pops up most often on the radar, as it's the (only?) one promoted by Docker, Inc. Since there are a few implementations out there, the other 20% of the 80% solution needs accounting for. Bundling the 80% with docker seems to force additional complexity, responsibility, opinions, and features into the docker CLI.

I think this kind of idea was the motivation for one of the heaviest docker users, CoreOS, to go their own route and try to envision how a container platform could be structured with a unix philosophy in mind.

I get the argument for release syncing, but there's a flip side to that. You don't always get everything right in a software release. The more complexity and responsibility you take on, the likelier it is that something breaks. Decoupled software allows for incremental releases; in other words, the component that has the bug can be released independently, with faster turnaround than re-releasing every component even if no other component has changed.

+1 for providing a separate library

I'm not at all against docker being in this game. I believe it's definitely a use case. I think that's why Docker bought the company that makes fig in the first place. Since Docker, Inc. already owns that company, why not just make fig do what you're proposing here? If it's Go you're getting at, why not just make a Go version of fig? If that's difficult, another area to focus on would be improved (remote) APIs. In that case every tool wins!


@ifraixedes ifraixedes commented Dec 5, 2014

I agree with @softprops so +1 for providing a separate library


@r4j4h r4j4h commented Dec 5, 2014

Also agree with previous replies. +1 for separate library. Stays modular and contained and leaves people open to other approaches to orchestration.


@kcmerrill kcmerrill commented Dec 5, 2014

Sounds like I'm in the minority ...

+1 for having it in the main binary.

I understand the reasoning behind keeping the docker binary smaller and modularizing it all, but simple container dependency management is something I would've expected to come out of the box with docker.

  • Ease of development
  • Ease of distributing open source projects (docker up, instead of "run this container, then this, then this", etc.)
  • Really, how much larger is the binary with essentially fig tacked on to it?
  • When docker updates with new functionality, I don't have to worry about do I have the right version of compose to match with the right version of docker.
  • When docker updates, compose updates with all of the new functionality(piggybacking off my last point)

I suppose I see quite a few upsides to having it contained in the main binary.


@gabrielgrant gabrielgrant commented Dec 5, 2014

All this hand-wringing over whether or not to bundle the functionality into the docker binary seems like a good reason to look at moving towards git's model of allowing 3rd-party commands to be added via extensions.


@vladfr vladfr commented Dec 5, 2014

@gabrielgrant allowing for an extension model like git would be great. I still see fig-like templating as part of the core binary though.


@thaJeztah thaJeztah commented Dec 5, 2014

I actually like the "git" approach as it might give the best of both worlds; a separate binary, but a "single" endpoint from a user perspective. If only to stop the negative hype (founded or not) that Docker is over-reaching.

What I'm worried about when using this approach is the vast amount of duplicated code between the compose binary and the docker client. If I understand correctly, compose (in its current state) basically is the docker client with only a limited number of additions (parsing the group.yml and converting that to API calls, filtering containers based on the "stack prefix", extracting image/container names from the information in group.yml). If I'm right about this, creating a separate binary would almost be possible via a build flag to enable/disable those parts of the client.

On the other hand, creating a separate repo does make it easier to add more functionality in the future, without affecting the standard client, so I'm not sure what's best here.


@Peeja Peeja commented Dec 6, 2014

To @thaJeztah's point, it's worth noting that in git everything is an external command. Even the "internals" are implemented as git-* commands. That's how git avoids the code duplication @thaJeztah is worried about here.


@ndeloof ndeloof commented Dec 6, 2014

I used to have two Dockerfiles for my project:

  • one for development mode, with source code mounted in the container and hot-reloaded by my dev framework, for efficiency (i.e. play run)
  • one to compile and package my app, ready for production (i.e. play dist)

(so I miss #7284, but that's unrelated here)

In both cases I need the container to be configured with third-party middleware, and I like the fig/compose approach, but I'm missing some "profile" option. So I propose allowing some switch-like statement to define the target environment and avoid duplicating configuration just because I don't run the development container the same way I deploy the production app.

name: counter

web:
  profiles:
    - production:
        image: my_cool_app:latest
        command: /target/universal/bin/run
    - development:
        build: .
        command: play run
  ports:
    - "5000:5000"
  volumes:
    - .:/code
  links:
    - redis

redis:
  image: redis:latest
  command: redis-server --appendonly yes

@x4lldux x4lldux commented Dec 6, 2014

+1 for having profiles in Dockerstack.yml file.


@thaJeztah thaJeztah commented Dec 6, 2014

So I propose to allow some switch-like statement to define the target environment and avoid duplicating configuration

Personally, I'm more in favor of separate files for that. Easier to compare and less clutter in the groups.yml. I share your concern wrt duplication, so perhaps an #include group.base.yml option could solve this?
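An #include like that could boil down to a recursive merge of the parsed YAML. A sketch (assuming configs are plain nested dicts; the merge semantics here are my assumption, not a spec):

```python
def merge_configs(base, override):
    """Sketch of the '#include group.base.yml' idea: values in the
    environment-specific file win; nested container settings are
    merged key by key rather than replaced wholesale."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_configs(merged[key], value)
        else:
            merged[key] = value
    return merged

base = {"web": {"build": ".", "ports": ["5000:5000"]}}
prod = {"web": {"image": "my_cool_app:latest"}}
print(merge_configs(base, prod))
# → {'web': {'build': '.', 'ports': ['5000:5000'], 'image': 'my_cool_app:latest'}}
```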

However, I wonder if this should be part of the initial implementation. Yes, I want this (functionally), but there are a lot of related issues that will need to be taken care of as well. For example, multiple Dockerfiles, sharing the same build context (#9198, #7284, #2112), .dockerignore files depending on environment, and perhaps templating? (#8446).

Because those haven't really materialised yet, I think this should be put on the roadmap, but not for the initial implementation, otherwise this may take a long time before it gets implemented... baby steps.

Again, just my thoughts, always open to other opinions.


@dnephin dnephin commented Dec 6, 2014

What I'm worried about when using [the separate repo] approach, is the vast amount of duplicated code between the compose binary and docker-client.

@thaJeztah Ideally this would be solved by having a docker-go client library like there is with docker-py, docker-java, etc. That way clients don't have to re-implement anything. Docker is (primarily?) a developer tool, and developers are always going to prefer an API. I think these client libraries are an important part of the docker ecosystem. This is related to an argument that I made in an earlier proposal. Moving fig into docker doesn't really solve some of the perceived problems with having a separate client, it just shifts them onto other developers.

If I understand correctly, compose (in its current state) basically is the docker-client with only a limited number of additions

That is my understanding as well. Unfortunately the client is in the same codebase as the daemon, so this is actually a lot of code already. I think there are still a lot of interesting (and useful) features missing from fig. I would hate to see these features rejected because of the existing complexity in the code repo.

On the other hand, creating a separate repo does make it easier to add more functionality in the future

Yes it does!

I think there is another issue in this debate that has been overlooked so far. Right now this repo has 850+ open issues. It's very difficult to track down existing issues related to the area you care about (it requires a lot of work, and often luck, to hit the right search query and filter through dozens of issues). If compose is yet another feature in this repo, it's going to make the situation worse. With a lot of management, labels could be used to improve this situation, but as it is now most things are unlabeled, and I don't see any reason to expect that to change. This shifts more of the maintenance/bug-triage burden onto the user.


@thaJeztah thaJeztah commented Dec 6, 2014

@dnephin I'll give a response, but I have a feeling that we're going into too many related (but important) issues here to keep this discussion readable. Perhaps there's some way to split up "topics"?

Ideally this would be solved by having a docker-go client library

Sounds good, similar to how libcontainer is used right now. It will take time before that is realized, so if that's the desired (and possible) approach, either compose must be put "on hold" until then, or a roadmap/migration path laid out. Apart from being very interested in the project, I'm not familiar enough with the code to give a reasonable comment on whether it's possible.

Unfortunately the client is in the same codebase as the daemon, so this is actually a lot of code already.

I also wonder how that will work out (in the current situation) wrt maintainers; both compose and the "regular" client commands will cover the same parts of the code, but with different priorities / interests that may conflict. Something that may need to be solved.

If compose is yet another feature in this repo it's going to just make the situation worse.

Fully agree on that. I keep track of the Docker issues on a daily basis and it's a lot of issues. Unfortunately, a lot of issues regarding (for example) boot2docker also end up in the Docker issue tracker, so I fear that the same will happen with Compose, Swarm and Machine. Docker will have to find a way to handle that; close such issues quickly and redirect people to the right repo. Maybe this is part of the "operators" responsibility (#9137), but I'm not sure.

@smyrman smyrman commented Dec 7, 2014

First of all, I very much welcome a Go implementation of Compose. After all, a Go binary is much easier to distribute than a Python tool with several external dependencies (fig).

I don't mind Compose eventually becoming part of the main docker client; batteries included, as mentioned by @twirkman, is important for increased ease of use and adoption of Docker itself.

Initially it might feel safer for users if we could download and use this tool without replacing the system Docker daemon (and client). If there was a patch available for the latest stable Docker client(s), as well as patched binaries, that might do the trick, right?

Otherwise, having the Docker client and Docker daemon part ways, and having a docker-go client library, would be very nice long-term goals. As @thaJeztah mentions though, this may take a long time, and I would hate to see "Compose" being put on hold until that is fixed. Ripping out part of the docker tool's code, and putting it into a very crude, incomplete and unstable first version of a docker-go client library, might be a faster approach.

In the end, whatever gets Compose out sooner is a good approach :-)

@fazy fazy commented Dec 7, 2014

Don't let Docker get monolithic. The core binary should contain the bare minimum to build, start, stop, view containers etc.

Orchestration is a distinct layer above the basic functionality of managing a container, and while the current proposal seems like a Fig rewrite, it could potentially grow beyond anything we imagine for now. A separate orchestration component would have far more freedom to grow and evolve without worrying about bloating the core.

On a political level, I would say this: Provide a separate application (like Fig) and those who thought it ought to be bundled will likely continue to use Docker anyway, but bundle another functionality layer into the core and you might lose the people who believe in a separate tool for each job.

@pkieltyka pkieltyka commented Dec 7, 2014

👍 this sounds awesome. I prefer Dockerstack for the name of the compose file as well, even dropping the .yml.

@isymbo isymbo commented Dec 7, 2014

+1 for separate application to orchestrate docker containers, or at least a docker-go client library

@pkieltyka pkieltyka commented Dec 8, 2014

I also agree it would be preferable to have this tool as a separate application from the core Docker container engine: docker-compose, docker-stack, stacker...

@ifraixedes ifraixedes commented Dec 8, 2014

+1 for separate application

@simonvanderveldt simonvanderveldt commented Dec 8, 2014

Agree with @softprops (#9459 (comment))
Docker itself should only provide the interface to control containers, not implement an orchestration layer on top of this.

Also agree with @dnephin (#9459 (comment)) there should be a Docker Go binding.
This would support splitting the Docker daemon from the Docker CLI as well, which is a good idea in general. That way the Docker daemon provides the interface to control containers and tools like the Docker CLI and Compose talk to this interface using the Docker Go bindings.
Maybe we can start with Compose simply importing from the main Docker repo?

An orchestration tool should not control the containers itself but should only communicate with the Docker interface the Daemon exposes.

Also +1 for the git-* approach: it makes it possible to develop separate parts independently but still present them to the end user in a coherent way.
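(For readers unfamiliar with it, the git-* model works by subcommand discovery: any executable named `git-<name>` on `$PATH` can be invoked as `git <name>`, so plugins ship as separate binaries. A minimal sketch — the `git-hello` script is made up for illustration; a hypothetical docker-* convention could work the same way:)

```shell
# Git resolves unknown subcommands by searching $PATH for an
# executable named git-<subcommand>.
mkdir -p /tmp/git-plugin-demo
cat > /tmp/git-plugin-demo/git-hello <<'EOF'
#!/bin/sh
echo "hello from an external subcommand"
EOF
chmod +x /tmp/git-plugin-demo/git-hello

# `git hello` now dispatches to our script:
PATH="/tmp/git-plugin-demo:$PATH" git hello
# prints "hello from an external subcommand"
```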

@jstoja jstoja commented Dec 10, 2014

This is very good news! I'm really looking forward to seeing this in daily use.
For the debate, I'm a big fan of the Unix philosophy and the KISS principle, so I would prefer a set of little binaries that compose beautifully.

@jhoffner jhoffner commented Dec 13, 2014

+1 for integrating into core, if for no other reason than that the test suites are integrated. Docker 1.4 has broken Fig's ability to properly mount volumes, and that really should never happen.

@smyrman smyrman commented Dec 13, 2014

Crane doesn't look too bad...
It's Go, it supports both JSON and YAML, it exposes most docker run options (without renaming them!), and it still seems to work with volumes after the Docker 1.4 upgrade.
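(For context, Crane describes containers and their docker run options in a single config file. Roughly along these lines — the field names below are illustrative and not checked against Crane's actual schema:)

```yaml
# Hypothetical sketch of a Crane-style config: each container maps
# docker run flags directly, without renaming them.
containers:
  web:
    image: myapp
    run:
      publish: ["3000:3000"]
      link: ["db:db"]
      detach: true
  db:
    image: postgres
    run:
      detach: true
```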

@jhoffner jhoffner commented Dec 14, 2014

Thank you @smyrman - Crane is actually even better than Fig for my needs; I'm now back up and running better than ever.

@pkieltyka pkieltyka commented Dec 14, 2014

Indeed, Crane does look nice at first glance.

@abonas abonas commented Dec 15, 2014

+1 for JSON support.
Why doesn't `docker up` accept a specific file as a parameter? Doesn't that limit the way users can arrange their configuration files?

@craftgear craftgear commented Dec 16, 2014

I don't get what's happening here.
Docker (the company) has already acquired Fig.
So why do we need the same functionality as Fig in the core?

Is this proposal an evolution of Fig, or a totally different thing?

Contributor Author

@aanand aanand commented Dec 16, 2014

Deprecated in favour of #9694. Onwards!

@aanand aanand closed this Dec 16, 2014
@thaJeztah thaJeztah commented Dec 16, 2014

Oh no, not again? 😄

@bfirsh bfirsh commented Feb 26, 2015

Compose has now been released and is available here:
