Launching specified container per instance. #159

Closed
hirokiky opened this Issue Aug 7, 2015 · 21 comments

hirokiky commented Aug 7, 2015

Is there some way to launch a specified container on each ECS instance?

My case:

  • One nginx-proxy container per ECS instance.
  • Web application containers on the ECS instances.
           |                       +----- [Web app]  # Tasks including one container.
elb --- 80 | 80 --- [nginx-proxy] -+----- [Web app]
           |                       +----- [Web app]
           +-> ECS instance

If the Web application containers consume all of the CPU/memory resources of an ECS instance before nginx-proxy starts up, the nginx-proxy container won't come up, and of course we then can't access any applications on that instance.

Include both containers in one TaskDefinition?

You may think I should create a TaskDefinition containing both the nginx-proxy and Web application containers, but there are some problems:

  • nginx-proxy grabs port 80, so we can't upgrade it gracefully.
  • We can't add more Web application containers without restarting the Task.

Feature Proposal

How about adding a new property to specify a Task that should be launched on each instance?
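
Purely as an illustration, such a property might look like a flag on the service. The flag below is made up and does not exist in the ECS API, and the cluster/service/task names are placeholders.

# Hypothetical sketch only: "--run-on-every-instance" is an invented flag
# illustrating the proposal; it is not a real ECS API option.
aws ecs create-service \
  --cluster my-cluster \
  --service-name nginx-proxy \
  --task-definition nginx-proxy:1 \
  --run-on-every-instance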

kyangmove commented Nov 4, 2015

I have a similar issue. In my case, I want to have one logstash forwarder container on each instance to monitor log files.

mthenw commented Nov 5, 2015

+1 for that. I need that badly for registrator.

aldarund commented Jan 7, 2016

+1

ghaering commented Feb 26, 2016

This would be the equivalent of fleet's Global=true scheduling (https://coreos.com/fleet/docs/latest/unit-files-and-scheduling.html). This is useful for all kinds of things like your own service registry agent, log shippers, etc.

One current workaround is to launch the required containers on instance start, i.e. do the required work outside of ECS, but then there's no easy way to upgrade these. Alternatively, deploy an ECS service with a count of 1000 and bind it to a host port, even if none is needed. This will ensure the service is deployed on each node of the cluster.
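
A minimal sketch of the second workaround, assuming a service named log-shipper on a cluster called my-cluster; the host port reservation itself lives in the task definition:

# Sketch of the "oversized desired count + host port" workaround (names assumed).
# Because the task definition reserves a host port, at most one copy of the
# task can be placed on each instance; the excess count simply never starts.
aws ecs create-service \
  --cluster my-cluster \
  --service-name log-shipper \
  --task-definition log-shipper:1 \
  --desired-count 1000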

aripringle commented Mar 15, 2016

One thing to be aware of if running containers on instance start: be sure to put this in something that will happen on every system boot (not just in userdata, which is processed only on first boot). We had an ECS instance mysteriously reboot once, and containers that we had been running from userdata did not restart on their own. To mitigate this, our userdata now writes the docker run commands to /etc/rc.local instead of running them directly.
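
For example, a minimal userdata sketch along those lines; the image name and run options here are placeholders, not our exact setup:

#!/bin/bash
# Sketch: append the docker run command to /etc/rc.local so it runs on every
# boot, not just the first one. Image name and options are examples only.
cat >> /etc/rc.local <<'EOF'
docker run -d --restart=always --name nginx-proxy -p 80:80 my-org/nginx-proxy
EOF
chmod +x /etc/rc.local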

Anyway, this feature would be very helpful for us for nginx-proxy, the Splunk forwarder, registrator, and New Relic. One addendum: to make use of nginx-proxy in this situation, other services would have to be able to forward their linked ELB to one of these "global" containers (in our configuration, we need multiple ELBs forwarding to a single nginx-proxy container instance).

egoldblum commented Mar 23, 2016

Would like this too for stale image cleanup (or a fix for #118), monitoring agent, and log shipping.

The technique presented in https://aws.amazon.com/blogs/compute/running-an-amazon-ecs-task-on-every-instance/ has the no-easy-upgrade shortcoming described by @ghaering. Would love a better way to schedule these types of tasks instead of adding fake requirements (host port, etc.) to force the ECS scheduler to run them on every instance in the cluster.
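
For reference, the approach from that blog post boils down to roughly the following userdata sketch; the task definition name, region, and package installation are assumptions, and in practice you may need to wait or retry until the ECS agent is up:

#!/bin/bash
# Rough sketch: ask the local ECS agent for this instance's identity, then
# start the task on this instance only. Names and region are placeholders.
yum install -y jq aws-cli
metadata=$(curl -s http://localhost:51678/v1/metadata)
cluster=$(echo "$metadata" | jq -r .Cluster)
instance_arn=$(echo "$metadata" | jq -r .ContainerInstanceArn)
aws ecs start-task \
  --cluster "$cluster" \
  --task-definition log-shipper:1 \
  --container-instances "$instance_arn" \
  --region us-east-1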

bfolkens commented Apr 17, 2016

+1, needed for consul and registrator

perrefe commented May 3, 2016

+1

vkhatri commented May 28, 2016

+1

jedi4ever commented Jul 31, 2016

pretty please 👍

cch0 commented Aug 19, 2016

+1

crania commented Sep 15, 2016

+1, needed for consul and registrator

nasskach commented Sep 24, 2016

+1 for that feature which is already available in docker swarm-mode and fleet.

nasskach commented Oct 11, 2016

@reinhrst this is not what I personally expect as a solution. We should have that option in the "ECS Service" settings (and be able to use the same deployment mechanism).

adambiggs commented Nov 7, 2016

"Alternatively deploy an ECS service with a count of 1000 and bind it to a host port, even if none is needed. This will ensure the service is deployed on each node of the cluster."

I tried this approach mentioned by @ghaering, but it's not possible to achieve this with CloudFormation because the service will never stabilize, causing the CF update to always roll back :(

miketheman commented Jan 1, 2017

Happy New Year! Before it came around, ECS released a feature set that allows for one-per-host deployment strategies.

https://aws.amazon.com/blogs/compute/introducing-amazon-ecs-task-placement-policies/
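
For example, the distinctInstance placement constraint from that post, combined with a desired count at least as large as the cluster, gets close to one-task-per-instance; the cluster, service, and count below are placeholders:

# Sketch using a placement constraint from the linked post (names assumed).
# distinctInstance stops two copies of the task from sharing an instance, so
# a sufficiently high desired count yields at most one task per instance.
aws ecs create-service \
  --cluster my-cluster \
  --service-name nginx-proxy \
  --task-definition nginx-proxy:1 \
  --desired-count 50 \
  --placement-constraints type=distinctInstance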

hirokiky commented Jan 25, 2017

This seems nice and can be used for both Services and Tasks.
Thanks a lot!
If anybody thinks this isn't solved by #159 (comment), please re-open this issue.

hirokiky closed this Jan 25, 2017

miketheman commented Jan 25, 2017

@hirokiky I think the placement strategies are almost good enough, but as I explain in my blog post here, there's still key functionality missing. I managed to work around it with my own solution, but the ECS platform should provide a built-in mechanism to accomplish this.

simonvanderveldt commented Apr 19, 2018

@hirokiky IMHO this issue isn't solved by the current implementation of placement policies, since a policy to schedule a service's tasks on all instances of a cluster is missing (say, an "allInstances" placement policy).
Can you reopen this issue? I am unable to do so.

jamessoubry commented May 11, 2018

Hi all, I have created a Docker container that updates a service's desired count to match the cluster's instance count. So far it is working well: https://github.com/jamessoubry/ecs-service-count
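
The underlying idea can be sketched with plain AWS CLI calls; the cluster and service names are placeholders, and the real container loops or reacts to scaling events rather than running once:

# Sketch of the idea behind ecs-service-count (names assumed): keep the
# service's desired count equal to the number of registered instances.
count=$(aws ecs list-container-instances \
  --cluster my-cluster \
  --query 'length(containerInstanceArns)' \
  --output text)
aws ecs update-service \
  --cluster my-cluster \
  --service nginx-proxy \
  --desired-count "$count"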
