Allow for existing containers (allow sharing containers across services) #2075

Open
andyburke opened this Issue Sep 18, 2015 · 33 comments

@andyburke

andyburke commented Sep 18, 2015

I think it would be useful if compose could allow you to specify containers that may already exist, e.g.:

shareddata:
    container_name: shareddata
    allow_existing: true
    image: some/data_only_container

If the container 'shareddata' did not exist, it would be created as usual.

If, however, that container already existed, the allow_existing: true setting would suppress the duplicate-container error and just skip creation (and perhaps Compose would try to bring the container up if it were stopped?).

I haven't python-ed in a long time, but I might be able to create a PR for this feature if someone wanted to give me a little guidance on where the best place to start looking into the code would be.

@dnephin

Contributor

dnephin commented Sep 21, 2015

Compose already does this if it created the container. Why would you need to create the container outside of compose?

@andyburke

andyburke commented Sep 21, 2015

@dnephin I'm trying to share some containers between microservices. So I have multiple docker-compose.yml files in multiple projects. I would love for them to use existing containers that matched up (name, image) but they don't. There's a failure if I try to do that. (Unless something has changed very recently and I missed it.)

@dnephin

Contributor

dnephin commented Sep 21, 2015

http://docs.docker.com/compose/yml/#external-links was added to support that idea.

I think either a container is managed by compose (part of a service) and is linked with links, or it's external and linked to with external_links, but it should never be somewhere in the middle.

There is also #318

@andyburke

andyburke commented Sep 21, 2015

@dnephin I can use external_links to link to the containers, but I currently have to set those containers up in a shell script that I run before compose because I want the containers shared across services.

I suppose #318 could solve the problem, but it requires a shared config file to live somewhere that all the projects know about.

Allowing for existing containers (and erroring if anything is different or conflicting) would actually be more straightforward for my use case.

If there are technical reasons that the distinction between links and external_links must be very clear, I think another way to implement this would be something like:

shareddata:
    container_name: shareddata
    external: true
    image: some/data_only_container

Any 'external' container, if it does not exist, will be started with the given settings. Anyone referring to that container would need to use external_links. That would also solve my use case in a readable way.

@dnephin

Contributor

dnephin commented Feb 3, 2016

I can see an argument for supporting external: true as a way to replace external_links (it would be more consistent with how we handle volumes and networks in compose format v2).

However, I think it would be a mistake to have compose attempt to start any containers that are marked external. A service should be one or the other: if it's external, it must be started externally; if it's internal, compose will start or recreate it.

@JeanMertz

JeanMertz commented Mar 1, 2016

I have to say that I agree with @andyburke, and with the current knowledge that I have, I would be in favour of this feature.

We have multiple projects at our company, and one of those projects is shared between all other projects. It only has to run once to work for all projects, but ideally it would be included in the docker-compose.yml of each individual project, and started if it's not running already.

This way, each project is self-contained, but can use an already running instance of the shared project.

@dnephin

Contributor

dnephin commented Mar 1, 2016

I don't understand why you'd want to share a single instance of a project with all other projects. Why wouldn't you want to start a new instance of it for each (like #318)?

Having a service be "external unless it's not running" just doesn't make sense semantically. A service is either externally managed, or it's locally managed, it can't be both. I really think the missing feature here is what's described in #318. A way to include projects from the local one. So you are sharing configuration, but not running containers.

@JeanMertz

JeanMertz commented Mar 2, 2016

@dnephin for example, I'd like to run https://traefik.io (only one instance of this router, obviously), but I don't want a separate start script to launch it if it isn't running already. I want to add it as a dependency in all our projects' docker-compose files, and only start the service if one isn't already running (with the given configuration).

Does that make sense?

@dnephin

Contributor

dnephin commented Mar 2, 2016

If you're using it for a dev environment, I would run a single copy for every project, so it would just be a regular service in the Compose file. What's wrong with that setup?

@JeanMertz

JeanMertz commented Mar 2, 2016

Traefik is used to provide cross-service/project routing.

So we can do things like company.dev/project1 and company.dev/project2. For that to work only one has to be running, and it has to be cross-service.

That's one of the use-cases that would be nice to solve if Docker Compose had the option to start a service unless it's already started some other way, as described above.

@thaeli

thaeli commented Mar 15, 2016

I have the same use case - the shared service is a load balancer. I'm using https://github.com/jwilder/nginx-proxy which is very similar to Traefik - there is one instance of the LB running on the host, and it automagically connects to all the containers, with container-level config (labels) specifying the hostnames. This works great and the only sticking point is that there isn't a good, semantic way to do it within Compose.

Running one instance of the load balancer per project is not a viable option because the load balancer needs to bind to port 80/443 of the host.

@srwareham

srwareham commented Jun 6, 2016

Activity on this issue seems to have stalled out, but as the issue is still listed as open I thought it appropriate to continue the discussion here.

I, too, would very much like this feature. My use case, similar to @thaeli's, involves nginx. I am using a Docker container running nginx to reverse-proxy connections made to a host with one IP address. This host runs multiple websites which are themselves powered by docker-compose or just docker. This is done so that multiple websites can all be accessible via port 80 at a single IP address.

My current workflow is to start the single nginx container and then docker-compose up all of the relevant services that use it. Most of the benefit of docker-compose is that everything can be spun up at once with clear inter-service relationships defined. Needing to start a required service separately runs counter to this advantageous design pattern.

It is not possible for each service to have its own instance of the external service because only one can be bound to port 80.

Is there an existing pattern to accomplish this using only docker-compose and no external scripting? If not, does this feature have a possibility of being added to the roadmap?

Thanks

@midnightconman

midnightconman commented Jun 15, 2016

As we start to move to a more distributed pattern (i.e. swarm), wouldn't a flag like create_if_missing be beneficial? You could use it to create the container if it is not running, or just use the existing container if it is. This would also allow you to build or pull if you would like. Another thing to consider is that any down, stop, or rm commands should leave these services alone, which could result in dangling services.

Does this go against any predefined standards or processes?
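
For illustration, such a flag might look like this in a compose file (create_if_missing is hypothetical and not an existing Compose option; the service name and image are placeholders):

```yaml
sharedcache:
    container_name: sharedcache
    image: redis
    # hypothetical flag: reuse the container if it already exists,
    # create it otherwise; down/stop/rm would leave it untouched
    create_if_missing: true
```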

@mobiuscog

mobiuscog commented Oct 30, 2016

A possible use-case for this, is to ensure any shared docker networks are created.

Let's say I have 3 different compose files, A, B & C, which each have a few services within them. A&B are both referencing mynet1 and B&C are both referencing mynet2.

I need mynet1 to be created and running for the services, but I also need to be able to compose-up or down each of A, B & C separately.

Ideally, compose up of A or B would ensure that mynet1 was running and bring it up if not, etc.

@davidbnk

davidbnk commented Jan 26, 2017

I have the same use case with load balancers as @JeanMertz @thaeli @srwareham
Did you guys find any solution to this?

@davidbnk

davidbnk commented Jan 27, 2017

@ajbisoft

ajbisoft commented Feb 7, 2017

This would be very useful when, e.g., you want a single shared cache container across all of your apps. Or you have one Postgres DB container with a couple of schemas for your apps...

@atedja

atedja commented Feb 19, 2017

Echoing what others are saying. As @ajbisoft says, this is very useful for cache and database containers, where you normally just want to have one running. At this point, we have to put all the container settings in one file, but docker-compose will destroy and spin up new containers when rerun, which is not what you want for a cache or database.

@diginc

diginc commented Mar 5, 2017

Ideally, compose up of A or B would ensure that mynet1 was running and bring it up if not, etc.

This is what I'm envisioning for networks shared by things like jwilder's proxy too. I opted to create the network manually so docker-compose treats it as external and won't down the network. I feel it is cleaner than having any sort of strange dependencies between unrelated stacks. Outside of docker-compose I did docker network create discovery and then inside each compose stack added:

networks:
  # Discovery is manually created to avoid forcing any order of docker-compose stack creation
  discovery:
    external: true

# each proxied app also gets 
  networks:
    - discovery

# jwilder/nginx-proxy gets default since I have other apps in its stack
  networks:
    - default
    - discovery

Currently the docs say this about external:

If set to true, specifies that this network has been created outside of Compose. docker-compose up will not attempt to create it, and will raise an error if it doesn’t exist.

So the intention of external is obviously to not manage them. However, I think what @mobiuscog and I are both thinking is that there'd be some sort of external_create_only: true option in docker-compose, which creates them but doesn't attempt to manage/destroy them after creation. A simple create-and-forget function, so we don't have to worry about docker-compose down throwing errors about an 'in use network' all the time (which I used to see before switching to this outside-docker-compose network style).
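
For illustration, the hypothetical option might look like this (external_create_only is not a real Compose key; this is only a sketch of the idea):

```yaml
networks:
  discovery:
    # hypothetical: create the network on first `docker-compose up` if it
    # is missing, but never destroy it on `down` and never error if it
    # already exists
    external_create_only: true
```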

The alternative is: it isn't terribly hard to write a little up.sh shell script that runs the docker network create command before upping a stack, but I really prefer to teach anyone else using my stacks the native docker-compose commands rather than obscure the awesome features they could be using.

Led here by jwilder/nginx-proxy#552

@jbguerraz

jbguerraz commented Mar 14, 2017

Having the exact same need as @JeanMertz (traefik usage in multiple projects)

@vschoener

vschoener commented Mar 28, 2017

I need something like this to use mailcatcher with all of my docker projects. I don't want to boot it manually; I want docker-compose to create it, or use it directly if the service is already running, because as a developer I consider it part of the project.

Or maybe my approach is bad and you have some better practices for this case? :)

Any ideas?

@diginc

diginc commented Mar 28, 2017

I don't want to boot it manually

@vschoener, if you're OK with some inelegance in boot order / error messages you can do this... You need one master project that defines the network as non-external (and thus creates it), and then all your other projects can use that network as external.

If that master project isn't up yet, the other ones will not be able to find the external network, creating an inter-project dependency which a simple docker network create shared_mailcatcher could replace, like I suggested above (my favored practice). Keeping the network fully external from all docker-compose projects also prevents docker-compose from trying to destroy the in-use network every time you docker-compose down, so there won't be an error.

A good README.md would say in the requirements to just run docker network create shared_servicename, or if you have bootstrap scripts in each project using the shared network, those would do it for you.
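
Such a bootstrap can be a one-line idempotent check. A minimal sketch (assumes the docker CLI is installed; the network name is a placeholder):

```shell
#!/bin/sh
# Sketch: create the shared network only if it does not exist yet.
# `docker network inspect` exits non-zero when the network is missing.
ensure_network() {
  docker network inspect "$1" >/dev/null 2>&1 || docker network create "$1"
}

# Usage, before bringing a stack up:
#   ensure_network shared_servicename
#   docker-compose up -d
```

Because the create only runs when the inspect fails, the script is safe to run repeatedly from every project's bootstrap.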

@vschoener

vschoener commented Mar 28, 2017

@diginc Thanks for your advice :) I will take a look and try something according to your example :)

@jonneroelofs

jonneroelofs commented Apr 9, 2017

We had this problem with the nginx-proxy. The workaround I use for our Development Environment is putting the shared container in every service/docker-compose.yml file where it is needed with the same service and container name.

To avoid the container name conflict error this would normally give you, I created a folder structure in every project where the docker-compose.yml is in a folder named "container", because the name of the folder containing the docker-compose.yml file is used (indirectly) to determine naming conflicts. I think I read somewhere that the folder name is used as a prefix for the container name in some way.

Anyhow, it results in docker recreating the nginx-proxy container, which is fine for our purposes.

@smxsm

smxsm commented Jun 12, 2017

I am using a script to check for the network and proxy container before I start any docker-compose project: http://www.rent-a-hero.de/wp/2017/06/09/use-j-wilders-nginx-proxy-for-multiple-docker-compose-projects/

@bgetsug

bgetsug commented Jun 20, 2017

Here's how I've tackled the problem of sharing containers (compose configurations) across projects: https://github.com/wheniwork/harpoon

@burtonrodman

burtonrodman commented Jul 11, 2017

My use case is that I have 20+ different "apps", all of which could use any of a set of shared services/databases. I want my testers to be able to "docker-compose up" the set of apps they care about at the moment. I would like each compose file to fully specify all dependencies of each app. The shared services must be single-instance because my testers need to be able to complete a workflow across 4 to 5 apps while maintaining the state from the previous steps.

@spaquet

spaquet commented Sep 12, 2017

@burtonrodman the same for me. Some services, especially database and monitoring, are shared across apps and we should have more flexibility to manage them within or outside of compose.

@denzel-morris

denzel-morris commented Oct 6, 2017

In agreement with @burtonrodman @spaquet @ajbisoft @atedja @mobiuscog @davidbnk @JeanMertz @andyburke @thaeli and @midnightconman re: the need for this with a similar use case as described above.

My company has many microservices. We want developers to be able to stand up sets of microservices separately without spinning up the entire cluster on their development machine. Furthermore, we'd prefer not to spin up a separate database container for each microservice... we'd prefer to share a database instance between microservices.

With that said, our current solution is less than ideal when it comes to user experience. We essentially write a wrapper script around docker-compose and duplicate it across all of our repositories. Something to the effect of:

create_docker_volume_if_not_exists $SHARED_VOLUME
create_docker_container_if_not_exists $SHARED_CONTAINER
docker-compose "$@"

Which is used by our developers with: ./scripts/docker-compose.sh up [...].

This is unfortunate. And sadly, this is just one of many minor issues with docker-compose that are all adding up to make it very frustrating to work with. It feels as if we're fighting docker-compose every step of the way. Maybe that means we're using it improperly and it's not meant for our use case or maybe it means there's room for improvement. I feel it's the latter.

If any core devs are willing to merge this, then please let me know what needs to be taken into consideration. With that information I will try to create a PR satisfying this use-case.
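
For illustration, the helper functions in the wrapper above might be implemented roughly like this (a sketch; only the function names come from the script, and the bodies and example image tag are assumptions):

```shell
#!/bin/sh
# Sketch implementations of the wrapper's hypothetical helpers.
create_docker_volume_if_not_exists() {
  docker volume inspect "$1" >/dev/null 2>&1 || docker volume create "$1"
}

create_docker_container_if_not_exists() {
  if docker container inspect "$1" >/dev/null 2>&1; then
    docker start "$1" >/dev/null   # reuse the container, starting it if stopped
  else
    docker run -d --name "$1" "$2" # create it from the given image
  fi
}

# Usage, mirroring the wrapper above:
#   create_docker_volume_if_not_exists "$SHARED_VOLUME"
#   create_docker_container_if_not_exists "$SHARED_CONTAINER" postgres:9.6
#   docker-compose "$@"
```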

@bgetsug

bgetsug commented Oct 6, 2017

@denzel-morris, check out https://github.com/wheniwork/harpoon. It’s designed to assist with your particular use case.

@Axxxx0n

Axxxx0n commented Mar 28, 2018

I have the same problem. If I use only one nginx reverse proxy on the host, it can't be defined in the individual docker-compose files. That way I don't have explicitly defined services for the individual apps and can't reproduce the exact stack in development (only manually).

@joshlrogers

joshlrogers commented Apr 19, 2018

+1. The lack of this feature has caused a significant amount of extra work for us. Would really like to see this.

@FabianFrank

FabianFrank commented Apr 19, 2018

You can actually get this behavior as long as you ensure the following:

  • the project name (docker-compose -p) is the same
  • the container_name of the service(s) in your docker-compose.yaml file(s) is the same

Then docker-compose will detect that the container already exists and reuse it, starting it if needed. For example:

$ docker-compose -p SOME_PROJECT -f shared-services.yaml up -d
shared_vault is up-to-date
shared_postgres is up-to-date
shared_dynamodb is up-to-date
shared_redis is up-to-date

You might want to make sure that your containers are on the same network using the networks directive. Then they will be able to reach each other via their container_name.
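
A minimal shared-services.yaml along those lines might look like this (a sketch; the service name mirrors the output above, while the image tag and network name are assumptions):

```yaml
version: "2"
services:
  postgres:
    container_name: shared_postgres   # fixed name, so every project reuses it
    image: postgres:9.6
    networks:
      - shared
networks:
  shared:
    # created once (e.g. `docker network create shared`) so other projects
    # can also join it as an external network
    external: true
```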
