AWS ECS Container management services and Coral Apps

Eugene de Fikh edited this page Mar 19, 2016 · 1 revision

We are using the AWS ECS container management service to run our Staging cluster. Here is a good overview of the ECS services Amazon offers:

We wanted the applications that run in our Docker containers to be highly available and to utilize Auto Scaling groups. We also wanted the ability to version our releases and to spin up a test application quickly in the ECS staging cluster. We decided to use Jenkins to integrate with our Git repo: commits to various branches on GitHub trigger automated builds that are pushed to Docker Hub, and the ECS cluster pulls the Docker Hub images as they are updated.

The result was a Stage cluster on ECS that lives in a VPC and auto scales from 2 to 4 instances as CloudWatch metrics trigger the Auto Scaling group to scale up or down.

Here’s a quick overview of terminology from the AWS Documentation. Amazon ECS contains the following components:

- Cluster: a logical grouping of container instances that you can place tasks on.
- Container Instance: an Amazon EC2 instance that is running the Amazon ECS agent and has been registered into a cluster.
- Task Definition: a description of an application that contains one or more container definitions.
- Task: an instantiation of a task definition that is running on a container instance.
- Container: a Linux container that was created as part of a task.

First, create an ECS cluster and call it Stage. Creating a cluster does not add any machines to it. Additionally, you'll want an IAM role, a security group, and an instance profile for the cluster instances.

Next, create tasks and add machines to the cluster. A good overview of the process is described here:

We end up with four task definitions:

- Xenia
- Cay
- Pillar
- Atoll

Those definitions have links to the Docker Hub image for each app, with parameters defined as environment variables, like so:

| Host Port | Container Port | Protocol | External Link |
| --- | --- | --- | --- |
| 8080 | 8080 | tcp | x.x.x.x:8080 |

Environment variables:

| Key | Value |
| --- | --- |
| PILLAR_ADDRESS | :8080 |
| PILLAR_HOME | /opt/pillar |
| MONGODB_URL | mongodb://db:password@x.x.x.x:27017/coral |
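As a rough sketch (not the exact task definition we run), the port mapping and environment variables above would appear inside an ECS container definition roughly like this; the container name and image path are placeholders:

```python
import json

# Hypothetical sketch of the Pillar container definition; the image name
# and the MongoDB credentials shown are placeholders, not real values.
pillar_container = {
    "name": "pillar",
    "image": "coralproject/pillar:latest",  # placeholder Docker Hub image
    "portMappings": [
        {"hostPort": 8080, "containerPort": 8080, "protocol": "tcp"}
    ],
    "environment": [
        {"name": "PILLAR_ADDRESS", "value": ":8080"},
        {"name": "PILLAR_HOME", "value": "/opt/pillar"},
        {"name": "MONGODB_URL", "value": "mongodb://db:password@x.x.x.x:27017/coral"},
    ],
}

# The JSON form is what you would paste into the task-definition editor.
print(json.dumps(pillar_container, indent=2))
```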

Each of these tasks is managed by a corresponding Jenkins job that watches for commits to the master branch, triggers a Docker Hub pull to get the freshly updated image, and then updates the cluster with the new image automatically.
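The Jenkins update step boils down to producing a new task definition whose container points at the freshly pushed image, then registering it as a new revision. A minimal sketch of that substitution in pure Python, with no AWS calls (`update_image` is a hypothetical helper, not part of any SDK):

```python
import copy

def update_image(task_def, container_name, new_image):
    """Return a copy of a task-definition dict with one container's
    image replaced, ready to be registered as a new revision."""
    updated = copy.deepcopy(task_def)
    for container in updated["containerDefinitions"]:
        if container["name"] == container_name:
            container["image"] = new_image
    return updated

# Placeholder task definition and image tags for illustration.
task_def = {"family": "pillar", "containerDefinitions": [
    {"name": "pillar", "image": "coralproject/pillar:abc123"}]}
new_def = update_image(task_def, "pillar", "coralproject/pillar:def456")
print(new_def["containerDefinitions"][0]["image"])
# → coralproject/pillar:def456
```

Because the helper copies the dict, the previous revision is left untouched, mirroring how ECS keeps old task-definition revisions around.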

In front of the applications there is a load balancer that exposes the Cay application on port 443 using an SSL certificate. At this point we can point a DNS A record at the ELB to get the application working.

Each task definition can be versioned, so we can have multiple versions of the same app, but only one can be active on the ECS cluster at a given time. We can, for example, set up a QA cluster that handles commits to a different branch and run testing against that cluster.
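The versioning behavior can be pictured as a family of numbered revisions with a single active one. A toy model of that bookkeeping (an illustration of the concept, not the ECS API):

```python
class TaskDefinitionFamily:
    """Toy model of ECS task-definition revisions: registering a new
    revision bumps the revision number, and only the latest revision
    is the one the cluster runs."""

    def __init__(self, family):
        self.family = family
        self.revisions = []  # list of (revision_number, image)

    def register(self, image):
        revision = len(self.revisions) + 1
        self.revisions.append((revision, image))
        return revision

    def active(self):
        # Only one revision is active at a time: the latest.
        return self.revisions[-1] if self.revisions else None

# Placeholder image tags for illustration.
pillar = TaskDefinitionFamily("pillar")
pillar.register("coralproject/pillar:v1")
pillar.register("coralproject/pillar:v2")
print(pillar.active())
# → (2, 'coralproject/pillar:v2')
```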

Each cluster starts off with a t2.small EC2 instance, which gets 1 CPU and 2 GB of RAM. As we get close to 75% CPU usage, 75% memory usage, or 80% disk space filled, we spin up a new instance to handle cluster load. None of the instances is committed to one app; they all run all four apps as Docker images.
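The scaling rule above amounts to a simple predicate; here is a sketch of the decision the CloudWatch alarms encode (thresholds taken from the text, function name hypothetical):

```python
def should_scale_out(cpu_pct, mem_pct, disk_pct):
    """Mirror the alarm thresholds described above: scale out when
    CPU or memory usage reaches 75%, or disk usage reaches 80%."""
    return cpu_pct >= 75 or mem_pct >= 75 or disk_pct >= 80

print(should_scale_out(70, 70, 50))  # False: all metrics under threshold
print(should_scale_out(76, 40, 50))  # True: CPU at/over 75%
```

In practice these thresholds live in CloudWatch alarms attached to the Auto Scaling group, which adds or removes instances between the 2-instance minimum and 4-instance maximum.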

Since we utilize CloudWatch, we can have alarms sent to the sysadmin email as the cluster sizes up and down.
