
Proposal: docker-compose events #1510

Closed

dnephin opened this issue Jun 4, 2015 · 68 comments

@dnephin commented Jun 4, 2015

There have been a few requests for supporting some form of "hooks" system within compose (#74, #1341).

A feature which runs commands on the host would add a lot of complexity to compose and the compose configuration. Another option is to support these use cases by providing a way for external tools to run commands triggered by events.

docker events provides a stream of all Docker events, but consumers would still need to filter it.

docker-compose events could provide a similar interface by filtering the events returned by /events, returning a stream of only the events related to the active compose project. The event stream could also include a new field, "service": <service_name>, in addition to the fields already provided by /events.
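To sketch what a single event in that stream might look like (the fields other than "service" follow the /events JSON of today; this is illustrative, not a settled format):

{"status": "start", "id": "abc123...", "from": "myapp_web", "time": 1433440000, "service": "web"}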

@thaJeztah (Member)

Hm fancy!

If accompanied by some clear examples, this would be a great feature.

Given that compose doesn't have a service/daemon, how would this work? (Just wondering.) Also: will subscribers listen to events for all projects, or receive only events for a specific project?

@dnephin (Author) commented Jun 4, 2015

I think it should be just the specific project; otherwise it's not really more useful than the existing /events endpoint from the Docker Engine.

Given that compose doesn't have a service/daemon. How would this work?

I think it would work similarly to docker-compose logs: stream from the docker daemon and filter out events that aren't related to the project. To consume the events, you would pipe stdout to another application.
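For example, a consumer could be a simple shell pipeline (hypothetical usage; handle-event.sh stands in for whatever tool you pipe into):

docker-compose events | while read -r event; do
    ./handle-event.sh "$event"   # react to each event line, e.g. trigger a hook
done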

@thaJeztah (Member)

Ah, yes. Makes sense. Count me interested :)

@bfirsh bfirsh added the backlog label Jul 7, 2015
@arioch commented Jul 15, 2015

+1

@Ozzyboshi

+1

@hodonsky

Yeah, actually, I prefer this concept to the plain "allow running scripts before/after" nonsense.
+1

@Harrison-Uhl

Example usage: on exit (of my web server in a Docker container), run a script to close port 80 on the firewall.

@dnephin (Author) commented Aug 31, 2015

I started to look into this, but I think to do it properly the Docker remote API needs to support filtering by labels.

@thaJeztah (Member)

@dnephin with filtering, you mean filtering events based on labels?

@dnephin (Author) commented Aug 31, 2015

@thaJeztah Exactly. https://docs.docker.com/reference/commandline/events/ only supports filtering by image ID, container ID, or event type.
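For reference, those filters look like this (the flags are the ones documented at the time; the container name is illustrative):

docker events --filter 'container=myproject_web_1' --filter 'event=start'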

@thaJeztah (Member)

@dnephin feature requests are welcome 👍 Sounds like a nice feature for contributors to work on as well (clear goal).

@LiberQuack

Really in need of this feature
+1

@mgcrea commented Nov 1, 2015

👍

@raphaelbrugier

@alikor @manuelkiessling I've coded something similar in the JHipster project.
See the scripts here and the documentation.

The scripts run in a second container:

  • the auto-migrate.sh script pings the Cassandra container until it is ready;
  • then the execute-cql.sh script runs all the CQL scripts (create schema, tables, etc.) that have not already been executed.

That's similar to what tools like FlywayDB/Liquibase offer for a SQL database.
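Stripped down, the pattern looks roughly like this (a sketch with assumed host and path names, omitting the bookkeeping for already-executed scripts; see the actual JHipster scripts for the full version):

#!/bin/sh
# Ping Cassandra until it accepts CQL connections.
until cqlsh cassandra -e 'DESCRIBE KEYSPACES' > /dev/null 2>&1; do
    echo "waiting for cassandra..."
    sleep 5
done

# Then apply the CQL scripts (schema, tables, etc.).
for f in /cql/*.cql; do
    cqlsh cassandra -f "$f"
done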

@systemmonkey42

I have to put in my +1 and a donation of 2c...

I find that with many containers in docker-compose on Ubuntu, Linux connection tracking can get in the way.
After restarting containers with down then up, the IP addresses may not be exactly the same, and the connection-tracking tables in the kernel get confused. (This isn't a problem with TCP port forwards, only UDP.)

Thus, an on-start or pre-start hook to execute 'conntrack -F' is a must for me.

For now, to ensure ops get this right, I have to provide a start script and ask them to avoid running docker-compose directly.
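The wrapper amounts to something like this (a sketch; conntrack -F needs root, and the compose invocation is whatever your project uses):

#!/bin/bash
set -e
docker-compose down
# Flush the kernel's connection-tracking tables so stale UDP entries
# don't point at the old container IPs.
conntrack -F
docker-compose up -d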

@omeid commented Aug 7, 2017

The biggest value of compose is that it is self-contained; if I need to run other tools to set up a deployment, I might as well use a tool that covers everything.

@ghost commented Dec 5, 2017

I agree that a "post-run" mechanism for running provisioning steps would be amazing and would solve a great many deployment issues. While it's nice to say "just build it into your Dockerfile," what if I didn't write the Dockerfile? What if I'm using a provided container with a set entrypoint, and I don't want to edit or wrap the upstream Dockerfile? The ability to fire off arbitrary commands post-entrypoint seems like a basic piece of functionality to me.

I must admit, I get a little tired of seeing threads like this on GitHub where a whole host of users are telling the developers how useful a basic feature would be, only to be met with "do it this way instead." We know our use cases, we understand our needs, and we're pleading with you to provide a simple and highly sought-after piece of functionality to simplify our deployments.

Why is there so much resistance to this? I get that just because a feature can be considered simple, its implementation may not be, but come on, man. This is something that a great many docker-compose users obviously have a real need for.

@hodonsky commented Dec 6, 2017

This has already been solved and closed years ago. There isn't resistance; there's always a more clever, better way to do what you're thinking of doing.

Also, please don't tell the maintainers of a project or repo that you're sick of seeing requests met with a simple solution. If it were so simple, you should be able to do it yourself. To extend that note: not every suggestion or feature fits every project; it may be a singular need that can and should be solved other ways, or that breaks configuration or conformity, or any of a myriad other reasons that aren't even specifically technical in nature.

You could also just write a bash script with spawn and expect if it's that big of a deal, but I still feel like you'd be doing something wrong.

Remember, containers are not your VM...

@omeid commented Dec 6, 2017

@relicmelex You need to go through #1809 to get a better idea of what is going on.

@ghost commented Dec 6, 2017

@relicmelex I understand all of that, and I get that a feature that seems simple may, in fact, be very complicated to implement, and may not fit a project, but I commonly see developers arguing against something that dozens and dozens of users are requesting, for nebulous reasons. I apologize if I came off as demanding; it is not my intention to make demands of busy developers, though I did intend to express my frustration about a trend I see among some of the tools I consume.

What is the solution? Because I'm still looking for it. If you could point me in the direction of a best-practices way to do this, it would be greatly appreciated; maybe it's someplace obvious, but I haven't come across it yet. I have a whole bunch of stuff to build, and I expect the tools I consume to be able to handle some of these features without my taking the time to implement them myself, since they often have whole development teams committed to them while I'm here on my own, just doing the best I can with what little time I have.

If using spawn and expect is doing something wrong, what is the right way to run an arbitrary command on a container after it's running? I'm absolutely amenable to using whatever the correct solution is, if it already exists; it may be that my frustration is simply a lack of google-fu skills (Google searches led me to issue #1809, which in turn led me here), or that I'm not reading some section of important documentation somewhere. I'd definitely appreciate any help you can provide, since you seem to be aware of the solution. As I gain a better understanding of these tools, I'm thinking I just need to wrap the source docker container in a Dockerfile that includes the final provisioning steps at build time; does that sound correct? If so, I may have been silly to get so frustrated in the first place.

@hodonsky commented Dec 6, 2017

@TalosThoren Can you try to lay out what you're trying to accomplish as an end objective, and then the steps you're currently taking? Because usually you can just write a script to execute as a step in the container. Maybe as part of the independent Dockerfile(s), or a bash script to run after build... maybe mount the volume on start-up and have it run a script as the CMD option? Let's explore.

@omeid I've been through all of that, and I stick by what I said... Notice it's been over two years since my original post here; this issue came up for me again in a different annoying way. Instead of breaking pattern, I started using docker-compose in a more structured way and linked some containers to achieve what I was trying to do. It (whatever it is) can be done without that feature, I'm sure of it.

@hodonsky commented Dec 6, 2017

Side note... @systemmonkey42, you may want to use env vars in docker-compose; the hostname of a linked container is its service name from the docker-compose file. Maybe that will solve your cross-container issues?

@omeid commented Dec 7, 2017

@relicmelex Every feature that compose has can be done without it. The argument that you can hack your way around any missing feature is pointless. I still think #1809 was closed unreasonably; @dnephin really wants to promote his tool, dopey or whatever it is.

And on the original issue, I will just reiterate my question; feel free to answer it, @relicmelex.

@dnephin Do you think running init scripts is outside the scope of container-based application deployments?
After all, compose is about "define and run multi-container applications with Docker".

@hodonsky commented Dec 7, 2017

@omeid You're correct: compose can be done without, and docker can be done without; even computers can be done without... I think you missed my point. I never suggested any hack of any kind; I'm suggesting you use the correct tool for the job.

Instead of antagonizing and talking about the problem, try to find a solution. This is just pointless banter now.

@hodonsky commented Dec 7, 2017

#1809 (comment)

@ghost commented Dec 7, 2017

@relicmelex Thanks for following up. My use case, in this instance, is simply to create a table in a CrateDB database upon initial deployment, for use with the crate_adapter for persisting Prometheus metrics. The cratedb service needs to be running already, and I'm pretty sure the nature of CrateDB means I only need to do it on the first container to stand up in the cluster. The intention is to write a script that, after allowing some time for the container to join the cluster using its built-in service discovery, checks whether a table exists and creates it if it does not.

I may be able to check whether the container has been elected master as the sentinel for table creation as well, but I haven't got that far yet; I'm mainly doing manual lab work to make sure I understand the deployment steps. I will have to write a Dockerfile for the crate_adapter, as they don't presently supply a docker image, but that will be simple. I actually wonder if it would be appropriate to install the crash command-line tool on the crate_adapter container and have it handle creation of the table upon connecting to the db, but that seems like it might introduce some dumb problems.

I've run into many situations where running an init script of some kind after deployment of a container would be desirable as well. I think I agree with @omeid that this clearly falls within the scope of container deployment and orchestration, but I also see your point that there are probably best-practice ways to implement this kind of thing without incorporating a "run-after" or some such capacity into docker-compose.

I think I see both sides of this argument, and I know which one I lean towards, though I may begin to feel differently once I've learned more about implementing this kind of build.

@hodonsky commented Dec 7, 2017

@TalosThoren Thanks for being so polite; you make me want to help you.

I imagine you also want to check whether that table is already available, so you don't accidentally destroy data or just have a failed step? Then create the table, then maybe even seed some data? (Say it's a credentials table and you need a 'system' type credential so you can always log into the platform.)

I'm doing this right now with dynamodb-local & Elasticsearch, then hooking services to them in a docker-compose environment, so I'm certain it can be done.

My approach is to create multiple docker containers and point to those in my docker-compose file instead of just the default docker container. It takes a little more work, but it really allows you to customize your environment and its ability to communicate across containers.

docker-compose

  • elastic_search
  • dynamodb-local
  • auth_service ( custom Dockerfile )
    • link: dynamodb-local
  • resources_service ( custom Dockerfile )
    • link: elastic_search
  • gateway ( custom Dockerfile )
    • link: auth_service & resources_service

In the custom Dockerfiles, I use the normal Dockerfile parameters to run the commands to build the environment I'm looking for, then insert/build the db as one of the steps.

If this gets unruly, I turn it into a bash script, or many bash scripts with dedicated purposes, so that the build can use caching when you want to make smaller changes.
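As a rough sketch, that layout translates to a compose file like this (the images and build paths here are illustrative assumptions, not my exact setup):

version: "2"
services:
  elastic_search:
    image: elasticsearch:5
  dynamodb-local:
    image: amazon/dynamodb-local
  auth_service:
    build: ./auth_service        # custom Dockerfile
    links:
      - dynamodb-local
  resources_service:
    build: ./resources_service   # custom Dockerfile
    links:
      - elastic_search
  gateway:
    build: ./gateway             # custom Dockerfile
    links:
      - auth_service
      - resources_service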

@ghost commented Dec 7, 2017

Thanks again, @relicmelex. I'll have to think through that, and I may come back with some questions, but that approach gives me a lot to think about. I really appreciate you sharing your expertise.

@ghost commented Dec 17, 2017

@relicmelex I wanted to follow up and let you, and anyone else who stumbles their way here from a Google search, know my results.

Using your method, it proved trivial to create a short-lived container that simply runs a bash script to perform the necessary bootstrapping operations.

I simply wrote a script that awaits availability of the containerized service (which happens to be a database) that I need to run bootstrapping operations against, before querying the database for the table in question. It logs what it finds, creates what it needs to if it's missing, and exits gracefully.
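The script boils down to something like this (a sketch; the hostname, port, and table schema are placeholders rather than my exact setup, though CrateDB's HTTP /_sql endpoint does work like this):

#!/bin/sh
set -e

# Wait until the database answers over HTTP.
until curl -sf http://crate:4200/ > /dev/null; do
    echo "waiting for cratedb..."
    sleep 2
done

# Create the table only if it is missing, then exit gracefully.
curl -sf -H 'Content-Type: application/json' -X POST http://crate:4200/_sql \
    -d '{"stmt": "CREATE TABLE IF NOT EXISTS metrics (ts TIMESTAMP, value DOUBLE)"}'
echo "bootstrap complete"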

Thanks again for assisting in a long-closed issue; it took some outside perspective to get a better grasp on how I should be thinking about containerized code execution.

@omeid commented Dec 18, 2017

@TalosThoren You could come up with a hundred kinds of hacks to implement this feature, but a hack is still a hack: you have to explain it to people who use your project, instead of expecting it as part of understanding Docker Compose. That is the major difference.

When I use docker-compose, I expect my colleagues and collaborators to know or learn Compose, and docker-compose is well documented. This means I don't have to document my hack on every project, nor use some promoware like @dnephin's dopy or whatever it is, which may or may not be documented properly and could be gone at any moment, without much of a community to keep track of it.

You could argue against every single feature, up to the entirety of docker compose, with "use a bash script," and that is about as meaningful to the conversation as mentioning the colour of my socks: not much at all.

@hodonsky

@omeid Just because you don't understand it doesn't make it a hack... end of conversation.

@omeid commented Dec 18, 2017

@relicmelex That is a childish reply. I have deployed a very similar hack multiple times, and that is exactly why I need the ON START feature.

@ghost commented Dec 19, 2017

@omeid, hey man, I'm on your side. I think this needs to be a feature in docker-compose files, but @relicmelex gave me a solution that I think is quite robust and that will serve me well into the future as I implement work I need done today. I can't wait around for the development team to decide to implement something I'd be happy about; I've got stuff to build.

I'm not convinced this closed thread is the right place to get the development team's attention regarding this feature request, so I don't think it's very productive to continue to argue for it here, even though I agree that post-service-launch provisioning should probably be something docker-compose supports. I'm less convinced it's critical to prioritize it than I was at the beginning of this conversation, but I still think it's a long-overdue feature that has been summarily dismissed for poorly argued reasons.

I absolutely agree with your sentiment that "use a bash script" is a bit of a cop-out argument. The fact of the matter is that should we see support for post-service-launch provisioning find its way into docker-compose, we'll be supplying bash scripts as the provisioners anyway. It could be said that we're simply asking for a more built-in way to deliver and execute those bash scripts. I definitely consider what I ended up implementing a workaround for missing functionality, but it works well, and it's a solid standard for the time being.

@jufis commented Sep 10, 2018

+1

@fabiomolinar

+1

@analogrithems commented Jan 11, 2019

What about taking advantage of an alias? Still hackish, but it solves the issue now.

Add an alias like this:
alias docker-compose='docker-compose-hooked'

Place this script somewhere in your PATH and make it executable (chmod 755 docker-compose-hooked):
docker-compose-hooked

#!/bin/bash

# Run a project-local pre-hook, if present, before delegating to the
# real docker-compose.
if [ -f .docker-compose-pre ]
then
    # Example command
    sh .docker-compose-pre
fi

docker-compose "$@"

You can then do a normal docker-compose build and it will copy your SSH key first.
This checks whether you have a file called .docker-compose-pre in the same directory as your docker-compose.yml file (really, just the current directory) and runs it before calling the real docker-compose.

@davi5e commented Jan 14, 2019

> What about taking advantage of an alias? Still hackish, but it solves the issue now. […]

In my particular case, something like this would work.

But I must point out that, at least for me, the whole idea of having a hook inside the Docker Compose file is precisely to avoid another step that every developer on my team would need to take.

Let's assume I create this alias and my problem is solved. Then a developer doesn't follow along, and I'm back to square one.

If I were able to add a hook inside docker-compose.override.yml and commit it to my Git repository, that would pretty much solve the issue, and I'd never have to second-guess whether my team complied with a step-by-step "set up your development environment" guide...

Anyhow, that is my motivation for adding a plea for this feature. I also need to run stuff on the host machine before/after docker-compose runs.

@hanoii commented Jun 25, 2019

From @TalosThoren #1510 (comment) above:

Using your method it proved trivial to create a short-lived container that simply runs a bash script to perform the necessary bootstrapping operations.

I found this good enough for my use case; just leaving it here, as I didn't find an immediate example. I needed to set up an initial Solr directory with a specific config schema for an older Solr image that needed a mount, so this is what I ended up doing:

version: "3"

services:
  setup:
    image: alpine:latest
    volumes:
      - ./:/mnt/setup
    command: >
      ash -c "mkdir -p /mnt/setup/.local/solr/data &&
               cp -R /mnt/setup/sites/all/modules/search_api_solr/solr-conf/3.x /mnt/setup/.local/solr/conf"
  solr:
    image: geerlingguy/solr:3.6.2
    depends_on:
      - setup
    ports:
      - "8900:8983"
    restart: always
    volumes:
      - ./.local/solr:/opt/solr/example/solr:cached
    command: >
      bash -c "cd /opt/solr/example &&
               java -jar start.jar"

@TheDSCPL

@hanoii Uuuhhh!!! I love that. Thank you :)
