
Step 6: Update ECS Service times out #4

Closed

niiamon opened this issue Oct 7, 2015 · 6 comments

niiamon commented Oct 7, 2015

First got this:

Command timed out after no response

Then this, after I increased the no-response-timeout:

Step 6: Update ECS Service

✖ Waiter ServicesStable failed: Max attempts exceeded

TomFrost (Contributor) commented Oct 8, 2015

Hey @niiamon, I ran into this same thing. For me, the issue turned out to be that the EC2 node running the ECS agent didn't have the appropriate Docker Hub authentication set up to pull my image. If you SSH into your EC2 node(s) and cat /etc/ecs/ecs.config doesn't show the credentials for your Docker registry, you're likely hitting the same problem.

Here's Amazon's page on how to correct that: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/private-auth.html

(Though I agree, error reporting on this one could be far better)
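
For anyone hitting this, a minimal /etc/ecs/ecs.config registry-auth entry looks roughly like the following -- the cluster name and credentials are placeholders, and the exact keys are documented on the page linked above:

ECS_CLUSTER=my-cluster
ECS_ENGINE_AUTH_TYPE=docker
ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"username":"my_user","password":"my_password","email":"me@example.com"}}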

niiamon (Author) commented Oct 8, 2015

@TomFrost Fortunately, I have the required setup and my ecs.config is pulled in from an S3 bucket. I've been tailing the ecs-agent logs at /var/log/ecs and I've spotted this:

unable to place a task because the resources could not be found.

The question is which resource it's talking about. A port? I'm debugging some more and will report back here with what I find.

TomFrost (Contributor) commented Oct 8, 2015

Unfortunately, ECS can't place two copies of the same task on the same machine if they expose the same host port -- nor, for that matter, two different tasks that expose the same port. You'll also see that message if no machine in the cluster has enough RAM or CPU to satisfy your task definition.
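
For example, with a task definition that pins a fixed host port like the snippet below (values are illustrative), only one such task can run per instance at a time:

"portMappings": [
  { "containerPort": 8080, "hostPort": 8080, "protocol": "tcp" }
]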

niiamon (Author) commented Oct 8, 2015

@TomFrost You're indeed right. The error wasn't consistent but rather sporadic. I had some previous tasks that had failed to boot, the agent kept trying to bring them back up, and since all the tasks were using the same port, that's what caused it.

Got that sorted. I am now deciding how best to expose environment variables from wercker to the docker image that gets pushed to Docker Hub. Any ideas?

TomFrost (Contributor) commented Oct 8, 2015

@niiamon Ideally the internal/docker-push step would let you specify environment variables, but in lieu of that there are two options that I see:

  1. If your entrypoint is a shell interpreter like sh -c, you can prefix your command with either a chain of VAR_NAME="$WERCKER_VAR" assignments or export VAR_NAME="$WERCKER_VAR" && clauses.
  2. You can add an array of environment variables to the task definition json that you use in the aws-ecs deployment step. Normally these would be static, such as:
"environment": [
  { "name": "APP_ENVIRONMENT", "value": "production" },
  { "name": "LOG_LEVEL", "value": "warn" }
]

but you could replace those values with placeholders like this:

"environment": [
  { "name": "APP_ENVIRONMENT", "value": "%APP_ENVIRONMENT%" },
  { "name": "LOG_LEVEL", "value": "%LOG_LEVEL%" }
]

and then, before your ECS deployment, add a script step to your deployment pipeline that uses sed to write a new JSON file, replacing those placeholders with the environment variables set in Wercker.
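
Something along these lines in your wercker.yml, for example (the file and variable names here are purely illustrative):

- script:
    name: render task definition
    code: |
      # Substitute the Wercker env vars for the %PLACEHOLDER% tokens
      sed -e "s/%APP_ENVIRONMENT%/$APP_ENVIRONMENT/g" \
          -e "s/%LOG_LEVEL%/$LOG_LEVEL/g" \
          task-definition.tmpl.json > task-definition.json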

The latter is more complex, but certainly cleaner, since it lets you use the same container image from your Docker repository in different environments.

@niiamon
Copy link
Author

niiamon commented Oct 8, 2015

I chose the second approach and it works quite well. Many thanks for the help @TomFrost 😄

niiamon closed this as completed Oct 8, 2015