Proposal: stack composition #9459
NOTE: this proposal has been replaced by #9694.
This proposal, replacing #9175, is to bring straightforward, Fig-inspired stack composition to the Docker client. The ultimate goal is to provide an out-of-the-box Docker development experience that is:
(The previous proposal built on the
I’ve already implemented an alpha of the required functionality on my composition branch - though it’s not ready for prime time, everyone is very much encouraged to try it out, especially Fig users. Scroll down for test builds!
The basic idea of stack composition is that with a very simple configuration file which describes what containers you want your application to consist of, you can type a single command and Docker will do everything necessary to get it running.
```yaml
name: rails_example
containers:
  db:
    image: postgres:latest
  web:
    build: .
    command: bundle exec rackup -p 3000
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    links:
      - db
```
A container entry must specify either an image to be pulled or a build directory (but not both). Other configuration options mostly map to their
There’s a new
Enhancements to existing CLI commands
A new syntax is introduced as a shorthand for referring to containers, images and build directories:
Here are some example commands with their longwinded/non-portable equivalents.
List our containers:
Rebuild the web image:
Re-pull the db image:
Kill the web container:
Kill all containers:
Kill and remove all containers:
Delete the web image:
Open a bash shell in the web container:
Run a one-off container using
Topics for discussion: an inexhaustive list
Including the app name in the file. I’m unsure about making this the default - lots of Fig users want to be able to do it, but I’m worried that it’ll hurt portability (we don’t do it with Dockerfile, and in my opinion it’s better off for it). Alternate approaches include using the basename of the current directory (like Fig does), or generating a name and storing it in a separate, unversioned file.
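As a concrete sketch of the directory-basename option, the derivation might look roughly like Fig's: take the basename of the current directory, lowercase it, and strip anything outside `[a-z0-9]`. The exact normalization rules here are an assumption for illustration.

```shell
# Hypothetical sketch of deriving an app name from the checkout directory,
# roughly the way Fig derives its project name. The normalization rules
# (lowercase, strip non-alphanumerics) are assumed, not confirmed.
dir="/home/alice/Rails-Example"   # stand-in for $PWD
app_name=$(basename "$dir" | tr '[:upper:]' '[:lower:]' | tr -cd 'a-z0-9')
echo "$app_name"                  # prints: railsexample
```

This also makes the portability worry visible: the derived name changes whenever the checkout directory does.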
Clustering and production. People are already deploying single-host production sites with
Scaling. I don't think an equivalent to
Here's how to test it out if you're running boot2docker. First, replace the binary in your VM:
Next, replace your client binary:
Not yet implemented
There are a few things left to implement:
Example app 1: Python/Redis counter
Here’s a sample app you can try:
```python
from flask import Flask
from redis import Redis

app = Flask(__name__)
redis = Redis(host="redis", port=6379)

@app.route('/')
def hello():
    redis.incr('hits')
    return 'Hello World! I have been seen %s times.' % redis.get('hits')

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
```
```yaml
name: counter
containers:
  web:
    build: .
    command: python app.py
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    links:
      - redis
    environment:
      - PYTHONUNBUFFERED=1
  redis:
    image: redis:latest
    command: redis-server --appendonly yes
```
If you put those four files in a directory and type
It'll build the web image, pull the redis image, start both containers and stream their aggregated output. If you Ctrl-C, it'll shut them down.
Example app 2: Fresh Rails app
To get hacking, check out the
Third time's a charm? :)
I think my comments in the previous proposal still apply;
Thanks (again), will give these builds a try soon.
@andrewmichaelsmith I agree that we should support links to existing containers. This should actually work in the test build (though I haven't stress-tested it much) - try prepending a link with a slash, e.g.
```yaml
links:
  - /external-redis:redis
```
Zooming out, ideally there'd be a way to do "dependency injection" of containers, so I could e.g. specify a stock redis container in
Are there use cases for referencing the app name besides building test or dev containers? If not, I think you should keep the current fig behavior, but also support referencing other containers in a
I'd love to do
Using fig for building testing containers
I'm currently using fig to build a container for running tests, which requires knowing the basename of the current directory. This is problematic if devs check out the code into differently-named directories. Let me show you an example, using the sample python app you provided in the ticket description.
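To make the fragility concrete, here is a small sketch (the normalization rules are assumed for illustration) showing how the same project checked out into two differently-named directories yields two different image names, breaking any `FROM myapp_web` line in a test Dockerfile:

```shell
# Demonstrates the problem: basename-derived project names mean the image
# a test Dockerfile must reference varies with the checkout directory.
# The normalization (lowercase, strip non-alphanumerics) is an assumption.
name_for() { basename "$1" | tr '[:upper:]' '[:lower:]' | tr -cd 'a-z0-9'; }

echo "$(name_for /home/alice/myapp)_web"        # prints: myapp_web
echo "$(name_for /home/bob/myapp-checkout)_web" # prints: myappcheckout_web
```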
In addition to the Dockerfile and fig.yml you provide above, I also have a
```dockerfile
FROM myapp_web
ADD requirements-dev.txt /srv/myapp
RUN pip install -r requirements-dev.txt
```
And I add these lines to
```yaml
test:
  build: dockerfiles/test
  environment:
    DEBUG: True
```
This lets me do:
The detail to note is
I haven't found a good pattern for building containers for testing, so please let me know if you know of a better pattern that sidesteps these issues without adding new features.
@sbuss Fig previously supported
@aanand I previously used fig for development environments for my team. I switched away because it did not expose Docker's full functionality, or made it awkward to use in places. The previous paragraph also speaks to this. Does your implementation also postfix everything with
This is awesome!
I would say keep the
And I don't want to be a killjoy here but, this really is just fig. Re-implemented in Go and looking to find its way into the core. So my preference, as a heavy fig & Docker user, would be to keep this stuff separate.
In the argument as to whether to have compose included in the docker binary or separate, I vote included. I've been using fig and found myself wishing I could do
@tomfotherby: about the fig.yml-per-directory issue, I think you're missing this fig option:
I'm not sure that I really like the idea of including composition in the main docker binary. I see composition/fig as a (very) convenient rather than core functionality (very as in "I don't really use docker without fig nowadays").
I vote to keep Compose separate. It will be much easier to add the hooks necessary to do cluster management between Compose and Swarm without risking regressions to Docker. Commands like 'docker up' could be added when Swarm is installed onto a system where Docker resides - ie: shell aliases.
I vote to keep Compose separate, since its functions are largely independent from the core and bear quite different responsibilities. Modularity would make maintenance (of both the core & Compose) and third-party integration easier. So far I'm not aware of strong technical reasons against separation.
Please keep it separate.
This is unfortunate. I thought at least adding support to query for a list of images or containers by prefix would make the client a LOT better. Is it possible this could include some minor API additions, even if it doesn't include all of the docker groups stuff? This is what I was looking forward to the most.
As far as incremental releases, by adding the API support first, current users of fig would start to gain some of the benefits right away. Otherwise we are probably waiting for this to be feature complete with, and as polished as, fig.
I think this is one of the parts of fig that I'm less than thrilled with. I've tried to outline my concerns here: docker/compose#693. I'd like to see this behaviour change from what fig does.
I like this
I would like to see this as an option at the very least, it doesn't have to be required. I can say that every single
I've never used it personally, and it feels like you could get away with just adding multiple entries to the
Overall, I am still a fan of this being separate repo. I agree with @chenzhiwei and the comparison to
Docker can still distribute packages that contain multiple clients, which makes it feel like a single binary. But for users that are interested in custom features, or experimenting with new features before they get accepted upstream, being able to build and run a separate client is much easier than trying to rebuild the entire docker client.
Separate repositories also makes it possible to have separate release schedules, which is always nice.
One other thought:
This is probably out of scope, but just mentioning it, since for tests we also need a sort of a stack/group.
Maybe this is not well thought through but the simplest for me would be to just integrate it into
Setting the app name would also be nice in
At least I would support having a separate tool for doing the orchestration and people familiar with
-1 for bundling with docker releases
I think there are a few implementations out there now that do this task. Fig happens to be the one that pops up most often on the radar, as it's the (only?) one promoted by Docker, Inc. Since there are a few implementations out there, it means the other 20% of the 80% solution needs accounting for. Bundling the 80% with Docker seems to force additional complexity, responsibility, opinions, and features into the Docker CLI.
I think this kind of idea was the motivation for one of the heaviest Docker users, CoreOS, to go their own route and try to envision how a container platform could be structured with a Unix philosophy in mind.
I get the argument for release syncing, but there's a flip side to that. You don't always get everything right in a software release. The more complexity and responsibility you take on, the higher the likelihood that something breaks. Decoupled software allows for incremental releases; in other words, the component that has the bug can be released independently, with faster turnaround than re-releasing every component even when nothing else has changed.
+1 for providing a separate library
I'm not at all against Docker being in this game. I believe it's definitely a use case. I think that's why Docker bought the company that makes Fig in the first place. Since Docker, Inc. already owns that company, why not just make Fig do what you're proposing here? If it's Go you're after, why not just write a Go version of Fig? If that's difficult, another area to focus on would be improved (remote) APIs. In that case every tool wins!
Sounds like I'm in the minority ...
+1 for having it in the main binary.
I understand the reasoning behind keeping the docker binary smaller and modularizing it all, but simple container dependency management is something I would've expected to come out of the box with Docker.
I suppose I see quite a few upsides to having it contained in the main binary.
I do think 'Compose' works fine as a separate binary, however, so long as it's included with boot2docker. That way, devs get the batteries included on their laptops and can start experimenting with real applications before diving into the full client. Seems like a reasonable balance of UX and combating some of the misinformation floating around.
disclaimer: I’m a co-founder of Modit, but the only bias I’m aware of is wanting Docker adopted as broadly and quickly as possible.
I actually like the "git" approach as it might give the best of both worlds; a separate binary, but a "single" endpoint from a user perspective. If only to stop the negative hype (founded or not) that Docker is over-reaching.
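For illustration, the git-style approach amounts to subcommand dispatch: an unrecognized `docker foo` would exec an external `docker-foo` binary found on `PATH`. A minimal, self-contained sketch (the `docker-up` name and its output are hypothetical stand-ins):

```shell
# Minimal sketch of git-style subcommand dispatch. We fake an external
# `docker-up` binary in a temp dir, then dispatch to it by name.
bindir=$(mktemp -d)
printf '#!/bin/sh\necho "composing stack..."\n' > "$bindir/docker-up"
chmod +x "$bindir/docker-up"
PATH="$bindir:$PATH"

sub="up"                       # as if the user typed: docker up
external="docker-$sub"
if command -v "$external" >/dev/null 2>&1; then
  "$external"                  # prints: composing stack...
else
  echo "docker: '$sub' is not a docker command" >&2
fi
```

The appeal is that Compose ships as a separate binary on a separate release schedule, while the user still types a single `docker` entry point.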
What I'm worried about with this approach is the vast amount of duplicated code between the compose binary and the docker client. If I understand correctly, compose (in its current state) basically is the docker client with only a limited number of additions (parsing the
On the other hand, creating a separate repo does make it easier to add more functionality in the future, without affecting the standard client, so I'm not sure what's best here.
I used to have two Dockerfiles for my project:
(so I miss #7284, but that's unrelated here)
In both cases I need the container to be configured with third-party middleware, and I like the fig/compose approach, but I'm missing some "profile" option. So I propose allowing a switch-like statement to define the target environment, to avoid duplicating configuration just because I don't run the development container the same way I deploy the production app.
Personally, I'm more in favor of separate files for that. Easier to compare and less clutter in the
However, I wonder if this should be part of the initial implementation. Yes, I want this (functionally), but there are a lot of related issues that will need to be taken care of as well. For example, multiple
Because those haven't really materialised yet, I think this should be put on the roadmap, but not for the initial implementation, otherwise this may take a long time before it gets implemented... baby steps.
Again, just my thoughts, always open to other opinions.
@thaJeztah Ideally this would be solved by having a
That is my understanding as well. Unfortunately the client is in the same codebase as the daemon, so this is actually a lot of code already. I think there are still a lot of interesting (and useful) features missing from fig. I would hate to see these features rejected because of the existing complexity in the code repo.
Yes it does!
I think there is another issue in this debate that has been overlooked so far. Right now https://github.com/docker/docker/issues has 850+ open issues. It's very difficult to track down existing issues related to the area you care about (it requires a lot of work, and often luck, to hit the right search query and filter through dozens of issues). If compose is yet another feature in this repo, it's going to just make the situation worse. With a lot of management effort, labels could be used to improve this situation, but as it is now, most things are unlabeled, and I don't see any reason to expect that to change. This shifts more of the maintenance/bug-triage burden onto the user.
@dnephin I'll give a response, but I have a feeling that we're going into too many related (but important) issues here to keep this discussion readable. Perhaps there's some way to split up "topics"?
Sounds good, similar to
I also wonder how that will work out (in the current situation) wrt maintainers; both
Fully agree on that. I keep track of the docker issues on a daily basis and it's a lot of issues. Unfortunately, a lot of issues regarding (for example) boot2docker also end up in the docker issue tracker, so I fear that the same will happen with
First of all, I very much welcome a Go implementation of Compose. After all, a Go binary is much easier to distribute than a Python tool with several external dependencies (fig).
I don't mind Compose eventually becoming part of the main docker client; batteries included, as mentioned by @twirkman, is important for increased ease of use and adoption of Docker itself.
Initially it might feel safer for users if they could download and use this tool without replacing the system docker daemon (and client). If there was a patch available for the latest stable docker client(s), as well as patched binaries, that might do the trick, right?
Otherwise, having the docker client and docker daemon part ways, and having a docker-go client library, would be very nice long-term goals. As @thaJeztah mentions though, this may take a long time, and I would hate to see "Compose" being put on hold until that is fixed. Ripping out part of the docker tool's code and putting it into a very crude, incomplete, and unstable first version of a docker-go client library might be a faster approach?
In the end, whatever gets Compose out sooner is a good approach :-)
Don't let Docker get monolithic. The core binary should contain the bare minimum to build, start, stop, view containers etc.
Orchestration is a distinct layer above the basic functionality of managing a container, and while the current proposal seems like a Fig rewrite, it could potentially grow beyond anything we imagine for now. A separate orchestration component would have far more freedom to grow and evolve without worrying about bloating the core.
On a political level, I would say this: Provide a separate application (like Fig) and those who thought it ought to be bundled will likely continue to use Docker anyway, but bundle another functionality layer into the core and you might lose the people who believe in a separate tool for each job.
Also agree with @dnephin (#9459 (comment)) there should be a Docker Go binding.
An orchestration tool should not control the containers itself but should only communicate with the Docker interface the Daemon exposes.
Also +1 for the git-* approach, this makes it possible to develop separate parts but still present them to the enduser in a coherent way.