
Is there a way to delay container startup to support dependent services with a longer startup time? #374

Closed
dancrumb opened this Issue Aug 4, 2014 · 313 comments

@dancrumb

dancrumb commented Aug 4, 2014

I have a MySQL container that takes a little time to start up as it needs to import data.

I have an Alfresco container that depends upon the MySQL container.

At the moment, when I use fig, the Alfresco service inside the Alfresco container fails when it attempts to connect to the MySQL container... ostensibly because the MySQL service is not yet listening.

Is there a way to handle this kind of issue in Fig?

@d11wtq


d11wtq Aug 4, 2014

Contributor

At work we wrap our dependent services in a script that checks if the link is up yet. I know one of my colleagues would be interested in this too! Personally I feel it's a container-level concern to wait for services to be available, but I may be wrong :)

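For illustration, the wrapper idea can be sketched as a small POSIX-sh helper. This is an assumption of the shape, not the actual script d11wtq describes; the `nc -z db 3306` probe and the service name are hypothetical:

```shell
#!/bin/sh
# wait_for ATTEMPTS CMD...: run CMD until it succeeds, retrying once per
# second, and give up after ATTEMPTS tries. A wrapper entrypoint would
# call this with a probe such as `nc -z db 3306` before exec'ing the
# real service.
wait_for() {
  attempts="$1"; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$attempts" ]; then
      echo "gave up after $attempts attempts" >&2
      return 1
    fi
    sleep 1
  done
}
```

A wrapper entrypoint would then do something like `wait_for 30 nc -z db 3306 && exec "$@"`.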

@nubs


nubs Aug 4, 2014

Contributor

We do the same thing with wrapping. You can see an example here: https://github.com/dominionenterprises/tol-api-php/blob/master/tests/provisioning/set-env.sh


@bfirsh


bfirsh Aug 4, 2014

Contributor

It'd be handy to have an entrypoint script that loops over all of the links and waits until they're working before starting the command passed to it.

This should be built into Docker itself, but the solution is a way off. A container shouldn't be considered started until the link it exposes has opened.

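A sketch of that idea: docker's link mechanism injects `<ALIAS>_PORT_<n>_TCP_ADDR` / `<ALIAS>_PORT_<n>_TCP_PORT` variables into the container, so a generic entrypoint could discover every link from the environment. The helper name below is made up for illustration:

```shell
#!/bin/sh
# list_links: print "addr port" for every *_TCP_ADDR/_TCP_PORT pair that
# docker's link mechanism exported into the environment.
list_links() {
  env | sed -n 's/^\([A-Z0-9_]*_TCP_ADDR\)=.*/\1/p' | while read -r addr_var; do
    port_var="${addr_var%_ADDR}_PORT"
    eval "echo \"\$$addr_var \$$port_var\""
  done
}

# A generic entrypoint could then wait on each pair before starting the
# command passed to it, e.g.:
#   list_links | while read -r addr port; do
#     until nc -z "$addr" "$port"; do sleep 1; done
#   done
#   exec "$@"
```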

@dancrumb


dancrumb Aug 4, 2014

@bfirsh that's more than I was imagining, but would be excellent.

A container shouldn't be considered started until the link it exposes has opened.

I think that's exactly what people need.

For now, I'll be using a variation on https://github.com/aanand/docker-wait


@silarsis


silarsis Aug 4, 2014

Yeah, I'd be interested in something like this - meant to post about it earlier.

The smallest-impact pattern I can think of that would fix this use case for us would be the following:

Add "wait" as a new key in fig.yml, with similar value semantics to link. Docker would treat this as a prerequisite and wait until this container has exited prior to carrying on.

So, my docker file would look something like:

db:
  image: tutum/mysql:5.6

initdb:
  build: /path/to/db
  link:
    - db:db
  command: /usr/local/bin/init_db

app:
  link:
    - db:db
  wait:
    - initdb

On running app, it will start up all the link containers, then run the wait container and only progress to the actual app container once the wait container (initdb) has exited. initdb would run a script that waits for the database to be available, then runs any initialisations/migrations/whatever, then exits.

That's my thoughts, anyway.

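For illustration, the `init_db` command in this proposal might look like the sketch below. This is hypothetical: the env var names follow docker's link convention, and the initialisation step is a placeholder:

```shell
#!/bin/sh
# Hypothetical /usr/local/bin/init_db for the proposal above: block until
# the linked db accepts connections, run initialisations, then exit so
# anything "wait"-ing on this container can start.
init_db() {
  until nc -z "$DB_PORT_3306_TCP_ADDR" "$DB_PORT_3306_TCP_PORT"; do
    sleep 1
  done
  echo "db is up, running initialisations"
  # ... migrations / fixture loading would go here ...
}
```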

@dnephin


dnephin Aug 5, 2014

Contributor

(revised, see below)


@dsyer


dsyer Aug 14, 2014

+1 here too. It's not very appealing to have to do this in the commands themselves.


@jcalazan


jcalazan Aug 15, 2014

+1 as well. Just ran into this issue. Great tool btw, makes my life so much easier!


@arruda


arruda Aug 16, 2014

+1 would be great to have this.


@prologic


prologic Aug 19, 2014

+1 also. Recently ran into the same set of problems.


@chymian


chymian Aug 19, 2014

+1 also. Any statement from the Docker guys?


@codeitagile


codeitagile Aug 22, 2014

I am writing wrapper scripts as entrypoints to synchronise at the moment. I'm not sure having a mechanism in fig is wise if you have other targets for your containers that perform orchestration a different way. It seems very application-specific to me, and as such the responsibility of the containers doing the work.


@prologic


prologic Aug 22, 2014

After some thought and experimentation I do kind of agree with this.

As such, an application I'm building basically has a synchronous
waitfor(host, port) function that lets me wait for services the application
depends on (either detected via the environment or explicitly
configured via CLI options).

cheers
James

James Mills / prologic

E: prologic@shortcircuit.net.au
W: prologic.shortcircuit.net.au

On Fri, Aug 22, 2014 at 6:34 PM, Mark Stuart notifications@github.com
wrote:

I am writing wrapper scripts as entrypoints to synchronise at the moment,
not sure if having a mechanism in fig is wise if you have other targets for
your containers that perform orchestration a different way. Seems very
application specific to me as such the responsibility of the containers
doing the work.


Reply to this email directly or view it on GitHub
#374 (comment).

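A minimal sketch of such a waitfor(host, port) in shell — an assumption of the shape, not prologic's actual code. It uses bash's /dev/tcp redirection, so it needs bash rather than plain sh:

```shell
#!/bin/bash
# waitfor HOST PORT [ATTEMPTS]: poll until a TCP connection to HOST:PORT
# succeeds, retrying once per second, giving up after ATTEMPTS tries
# (default 30). The probe opens fd 3 in a subshell, which closes it again
# on exit.
waitfor() {
  host="$1"; port="$2"; attempts="${3:-30}"
  i=0
  until (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; do
    i=$((i + 1))
    if [ "$i" -ge "$attempts" ]; then
      return 1
    fi
    sleep 1
  done
}
```

The application would call something like `waitfor db 3306 60 || exit 1` before connecting.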

@shuron


shuron Aug 31, 2014

Contributor

Yes, some basic "depends on" is needed here...
so if you have 20 containers, you just want to run fig up and have everything start in the correct order...
It should also have some timeout option or other failure-catching mechanisms.


@ahknight


ahknight Oct 23, 2014

Another +1 here. I have Postgres taking longer than Django to start so the DB isn't there for the migration command without hackery.


@dnephin


dnephin Oct 23, 2014

Contributor

@ahknight interesting, why is migration running during run?

Don't you want to actually run migrate during the build phase? That way you can start up fresh images much faster.


@ahknight


ahknight Oct 23, 2014

There's a larger startup script for the application in question, alas. For now, we're doing non-DB work first, using nc -w 1 in a loop to wait for the DB, then doing DB actions. It works, but it makes me feel dirty(er).

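The pattern described — non-DB work first, then an `nc -w 1` loop before any DB actions — looks roughly like this sketch (the host `db` and port 5432 are illustrative, not the actual service):

```shell
#!/bin/sh
# Sketch of the "nc in a loop" wait: poll the database port with a
# one-second connect timeout until it answers, then proceed with DB work.
wait_for_db() {
  until nc -w 1 -z db 5432; do
    sleep 1
  done
  echo "db reachable"
  # ... DB actions (migrations etc.) follow here ...
}
```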

@dnephin


dnephin Oct 23, 2014

Contributor

I've had a lot of success doing this work during the fig build phase. I have one example of this with a django project (still a work in progress, though): https://github.com/dnephin/readthedocs.org/blob/fig-demo/dockerfiles/database/Dockerfile#L21

No need to poll for startup. Although I've done something similar with mysql, where I did have to poll for startup because the mysqld init script wasn't doing it already. This postgres init script seems to be much better.


@arruda


arruda Oct 24, 2014

Here is what I was thinking:

Using the idea of moby/moby#7445, we could implement this "wait_for_health_check" attribute in fig.
So it would be a fig issue, not a Docker one?

Is there any way of making fig check the TCP status on the linked container? If so, then I think this is the way to go. =)


@docteurklein


docteurklein Nov 10, 2014

@dnephin can you explain a bit more what you're doing in Dockerfiles to help with this?
Isn't the build phase unable to influence the runtime?


@dnephin


dnephin Nov 10, 2014

Contributor

@docteurklein I can. I fixed the link from above (https://github.com/dnephin/readthedocs.org/blob/fig-demo/dockerfiles/database/Dockerfile#L21)

The idea is that you do all the slower "setup" operations during the build, so you don't have to wait for anything during container startup. In the case of a database or search index, you would:

  1. start the service
  2. create the users, databases, tables, and fixture data
  3. shutdown the service

all as a single build step. Later when you fig up the database container it's ready to go basically immediately, and you also get to take advantage of the docker build cache for these slower operations.

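The three steps above might look roughly like this in a Dockerfile. This is a hedged, hypothetical sketch, not the linked file's contents: the base image, package, paths, and start/stop commands are assumptions and depend on how the image installs the database; the data directory must also not be declared as a VOLUME, or the baked-in data won't survive in the image.

```dockerfile
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y mysql-server
COPY fixtures.sql /fixtures.sql

# Start the service, create users/databases/fixture data, and shut it
# down again -- all in a single RUN instruction, so the populated data
# directory is baked into one image layer and cached by `docker build`.
RUN service mysql start \
 && mysql -uroot < /fixtures.sql \
 && service mysql stop
```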

@docteurklein


docteurklein Nov 10, 2014

nice! thanks :)


@arruda


arruda Nov 11, 2014

@dnephin nice, hadn't thought of that .


@oskarhane


oskarhane Dec 5, 2014

+1 This is definitely needed.
An ugly time delay hack would be enough in most cases, but a real solution would be welcome.


@dnephin


dnephin Dec 5, 2014

Contributor

Could you give an example of why/when it's needed?


@dacort


dacort Dec 5, 2014

In the use case I have, I have an Elasticsearch server and then an application server that's connecting to Elasticsearch. Elasticsearch takes a few seconds to spin up, so I can't simply do a fig up -d because the application server will fail immediately when connecting to the Elasticsearch server.


@ddossot


ddossot Dec 5, 2014

Say one container starts MySQL and the other starts an app that needs MySQL and it turns out the other app starts faster. We have transient fig up failures because of that.


@oskarhane


oskarhane Dec 5, 2014

crane has a way around this by letting you create groups that can be started individually. So you can start the MySQL group, wait 5 secs and then start the other stuff that depends on it.
Works on a small scale, but not a real solution.


@arruda


arruda Dec 6, 2014

@oskarhane not sure this "wait 5 secs" helps; in some cases it might need to wait longer (or you just can't be sure it won't go over the 5 secs)... it isn't very safe to rely on a fixed wait time.
Also, you would have to do this waiting and loading of the other group manually, and that's kind of lame; fig should do that for you =/


@djui


djui Dec 1, 2016

@mixja Thanks for the detailed explanation. I think

I see more value in leveraging the native capabilities of the platform

is a good point, probably the main one. I'm just waiting for Docker Compose to leverage the healthchecks natively, either in depends_on or a new key, await. I just wonder if it should/will go even a step further than that and basically bring down linked containers if e.g. --abort-on-container-exit is set and a health check at runtime sets the healthcheck label to unhealthy.


@desprit


desprit Dec 9, 2016

A possible temporary workaround for those of you who are looking for delay functionality to run tests:

I have two docker-compose yml files: one for testing and another for development. The difference is just in having the sut container in docker-compose.test.yml. The sut container runs pytest. My goal was to run the test docker-compose and, if the pytest command in the sut container fails, not run the development docker-compose. Here is what I came up with:

# launch test docker-compose; note: I'm starting it with -p argument
docker-compose -f docker-compose.test.yml -p ci up --build -d
# simply get ID of sut container
tests_container_id=$(docker-compose -f docker-compose.test.yml -p ci ps -q sut)
# wait for sut container to finish (pytest will return 0 if all tests passed)
docker wait $tests_container_id
# get exit code of sut container
tests_status=$(docker-compose -f docker-compose.test.yml -p ci ps -q sut | xargs docker inspect -f '{{ .State.ExitCode  }}' | grep -v 0 | wc -l | tr -d ' ')
# print logs if tests didn't pass and return exit code
if [ $tests_status = "1" ] ; then
    docker-compose -f docker-compose.test.yml -p ci logs sut
    return 1
else
    return 0
fi

Now you can use the code above in any function of your choice (mine is called test) and do something like this:

test
test_result=$?
if [[ $test_result -eq 0 ]] ; then
    docker-compose -f docker-compose.yml up --build -d
fi

Works well for me, but I'm still looking forward to seeing docker-compose support that kind of stuff natively :)


@blockjon


blockjon commented Dec 18, 2016

+1

@electrofelix


electrofelix Dec 19, 2016

Perhaps things that are considered outside the core of docker-compose could be supported by allowing plugins? Similar to request #1341, it seems there is additional functionality that some would find useful but that doesn't necessarily fully align with the current vision. Supporting a plugin system such as the one proposed in #3905 would let compose focus on a core set of capabilities; if this isn't one of them, those who want it for their particular use case could write a plugin to handle performing up differently.

It would be nice to have docker-compose act as the entrypoint to all the projects we have locally around docker env setup, rather than needing a script sitting in front of it as the default entrypoint and people having to remember to run that script for the odd cases.


xulike666 pushed a commit to xulike666/compose that referenced this issue Jan 19, 2017

@Silex


Silex Mar 7, 2017

Here's a way to do it with healthcheck and docker-compose 2.1+:

version: "2.1"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    healthcheck:
      test: mysqladmin -uroot -ppassword ping
      interval: 2s
      timeout: 5s
      retries: 30
  web:
    image: nginx:latest # your image
    depends_on:
      db:
        condition: service_healthy

Here docker-compose up will start the web container only after the db container is considered healthy.

Sorry if it was mentioned already, but I don't think a full solution was posted.


@raccoonyy


raccoonyy Mar 8, 2017

Here's a way for PostgreSQL.

Thanks @Silex 👍

version: '2.1'
services:
  db:
    image: postgres:9.6.1
    healthcheck:
      test: "pg_isready -h localhost -p 5432 -q -U postgres"
      interval: 3s
      timeout: 5s
      retries: 5

@vladikoff


vladikoff Mar 8, 2017

@Silex sadly with version "3" and this format:

    image: nginx:latest # your image
    depends_on:
      db:
        condition: service_healthy

I get ERROR: The Compose file './docker-compose.yml' is invalid because: depends_on contains an invalid type, it should be an array


@mbdas


mbdas Mar 8, 2017


@vladikoff


vladikoff Mar 8, 2017

2.1 continues to support it and will not be deprecated. 3.x is mainly for swarm services mode (non local).

Thanks!


@Silex


Silex Mar 8, 2017

@vladikoff: more info about version 3 at #4305

Basically, it won't be supported, you have to make your containers fault-tolerant instead of relying on docker-compose.


@shin-


shin- Mar 21, 2017

Member

I believe this can be closed now.


@slava-nikulin


slava-nikulin May 11, 2017

Unfortunately, condition is not supported anymore in v3. Here is a workaround that I've found:

website:
  depends_on:
    - 'postgres'
  build: .
  ports:
    - '3000'
  volumes:
    - '.:/news_app'
    - 'bundle_data:/bundle'
  entrypoint: ./wait-for-postgres.sh postgres 5432

postgres:
  image: 'postgres:9.6.2'
  ports:
    - '5432'

wait-for-postgres.sh:

#!/bin/sh

postgres_host=$1
postgres_port=$2
shift 2
cmd="$@"

# wait for the postgres docker to be running
while ! pg_isready -h $postgres_host -p $postgres_port -q -U postgres; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"

# run the command
exec $cmd

@riuvshin


riuvshin May 11, 2017

@slava-nikulin a custom entrypoint is common practice; it is almost the only (docker-native) way you can define and check all the conditions you need before starting your app in a container.


@mbdas


mbdas May 11, 2017


@patrickml


patrickml Jun 22, 2017

I was able to do something like this
// start.sh

#!/bin/sh
set -eu

docker volume create --name=gql-sync
echo "Building docker containers"
docker-compose build
echo "Running tests inside docker container"
docker-compose up -d pubsub
docker-compose up -d mongo
docker-compose up -d botms
docker-compose up -d events
docker-compose up -d identity
docker-compose up -d importer
docker-compose run status
docker-compose run testing

exit $?

// status.sh

#!/bin/sh

set -eu

echo "Attempting to connect to bots"
until nc -zv botms 3000; do
    printf '.'
    sleep 5
done
echo "Attempting to connect to events"
until nc -zv events 3000; do
    printf '.'
    sleep 5
done
echo "Attempting to connect to identity"
until nc -zv identity 3000; do
    printf '.'
    sleep 5
done
echo "Attempting to connect to importer"
until nc -zv importer 8080; do
    printf '.'
    sleep 5
done
echo "Was able to connect to all"

exit 0

// in my docker compose file

  status:
    image: yikaus/alpine-bash
    volumes:
      - "./internals/scripts:/scripts"
    command: "sh /scripts/status.sh"
    depends_on:
      - "mongo"
      - "importer"
      - "events"
      - "identity"
      - "botms"

patrickml commented Jun 22, 2017

I was able to do something like this
// start.sh

#!/bin/sh
set -eu

docker volume create --name=gql-sync
echo "Building docker containers"
docker-compose build
echo "Running tests inside docker container"
docker-compose up -d pubsub
docker-compose up -d mongo
docker-compose up -d botms
docker-compose up -d events
docker-compose up -d identity
docker-compose up -d importer
docker-compose run status
docker-compose run testing

exit $?

// status.sh

#!/bin/sh

set -eu

echo "Attempting to connect to bots"
until nc -zv botms 3000; do
    printf '.'
    sleep 5
done
echo "Attempting to connect to events"
until nc -zv events 3000; do
    printf '.'
    sleep 5
done
echo "Attempting to connect to identity"
until nc -zv identity 3000; do
    printf '.'
    sleep 5
done
echo "Attempting to connect to importer"
until nc -zv importer 8080; do
    printf '.'
    sleep 5
done
echo "Was able to connect to all"

exit 0

// in my docker compose file

  status:
    image: yikaus/alpine-bash
    volumes:
      - "./internals/scripts:/scripts"
    command: "sh /scripts/status.sh"
    depends_on:
      - "mongo"
      - "importer"
      - "events"
      - "identity"
      - "botms"
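The four near-identical until loops in a script like status.sh can be collapsed into one loop over host:port pairs. A sketch, with the connectivity check factored into a probe function so it can be swapped for pg_isready, curl, or anything else:

```shell
# probe: single place for the connectivity check (nc here; swap as needed).
probe() { nc -z "$1" "$2"; }

# wait_for_all: poll each "host:port" argument until probe succeeds.
wait_for_all() {
  for target in "$@"; do
    host=${target%%:*}
    port=${target##*:}
    echo "Attempting to connect to $host:$port"
    until probe "$host" "$port"; do
      printf '.'
      sleep 5
    done
  done
  echo "Was able to connect to all"
}

# Usage, with the service names from the compose file above:
#   wait_for_all botms:3000 events:3000 identity:3000 importer:8080
```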
usamaB commented Oct 30, 2017

I have a similar problem, but a bit different. I have to wait for MongoDB to start and initialize a replica set.
I'm doing the whole procedure in Docker, i.e. creating and authenticating the replica set. But I have another Python script in which I have to connect to the primary node of the replica set, and I'm getting an error there.

docker-compose.txt
Dockerfile.txt

In the Python script I'm trying to do something like this:

for x in range(1, 4):
    client = MongoClient(host='node' + str(x), port=27017, username='admin', password='password')
    if client.is_primary:
        print('the client.address is: ' + str(client.address))
        print(dbName)
        print(collectionName)
        break

I'm having difficulty doing so; does anyone have any idea?
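For cases like this, where the dependency is "up" but not yet in the state you need (a replica set with an elected primary), a port check isn't enough; you have to retry until the service reports the right state. A sketch of that pattern (the commented mongo invocation is an assumption about the legacy mongo shell, adjust it to your image):

```shell
# wait_for_output: retry a command until its stdout contains a pattern.
# Usage: wait_for_output <grep-pattern> <command> [args...]
wait_for_output() {
  _pattern="$1"; shift
  until "$@" 2>/dev/null | grep -q "$_pattern"; do
    sleep 2
  done
}

# Sketch: block until node1 reports itself primary before running the
# Python script (the mongo invocation is an assumption, not verified here):
#   wait_for_output true mongo --host node1 --quiet --eval 'db.isMaster().ismaster'
```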

chaicode88 commented Apr 26, 2018

@patrickml If I don't use Docker Compose, how would you do it with a Dockerfile?
I need cqlsh to execute my build_all.cql. However, cqlsh is not ready; I have to wait about 60 seconds for it to be ready.

cat Dockerfile

FROM store/datastax/dse-server:5.1.8

USER root

RUN apt-get update
RUN apt-get install -y vim

ADD db-scripts-2.1.33.2-RFT-01.tar /docker/cms/
COPY entrypoint.sh /entrypoint.sh

WORKDIR /docker/cms/db-scripts-2.1.33.2/
RUN cqlsh -f build_all.cql

USER dse

=============

Step 8/9 : RUN cqlsh -f build_all.cql
---> Running in 08c8a854ebf4
Connection error: ('Unable to connect to any servers', {'127.0.0.1': error(111, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused")})
The command '/bin/sh -c cqlsh -f build_all.cql' returned a non-zero code: 1
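The root cause here is that RUN executes at image build time, when no Cassandra server is listening, hence "Connection refused"; the cqlsh call has to move to run time, e.g. into the entrypoint. A sketch of a wait-then-run helper for that (the dse launch command in the comments is an assumption for this image):

```shell
# run_when_ready: poll a probe command until it succeeds, then run a
# payload. Meant for a container entrypoint: RUN in a Dockerfile executes
# at *build* time, when no server is listening yet.
run_when_ready() {
  _probe="$1"
  _payload="$2"
  until sh -c "$_probe" >/dev/null 2>&1; do
    sleep 5
  done
  sh -c "$_payload"
}

# Sketch for an entrypoint.sh replacing the failing RUN step above
# (the dse launch command is an assumption for this image):
#   dse cassandra &
#   run_when_ready "cqlsh -e 'describe cluster'" \
#                  "cqlsh -f /docker/cms/db-scripts-2.1.33.2/build_all.cql"
#   wait
```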
