
[ECS] [EC2]: better integration between service and instance autoscaling #76

Open
matthewcummings opened this issue Dec 20, 2018 · 47 comments

Comments

@matthewcummings commented Dec 20, 2018

Tell us about your request
Blog posts like these exist because it is difficult to coordinate service autoscaling with instance autoscaling:
https://engineering.depop.com/ahead-of-time-scheduling-on-ecs-ec2-d4ef124b1d9e
https://garbe.io/blog/2017/04/12/a-better-solution-to-ecs-autoscaling/
https://www.unicon.net/about/blogs/aws-ecs-auto-scaling

Which service(s) is this request for?
ECS and EC2

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
I would love for ECS to provide a simple/easy way to tell a supporting EC2 ASG to scale up when a task cannot be placed on its cluster. I'd also love to see this concern addressed: #42

Are you currently working around this issue?
I'm doing something similar to this: https://garbe.io/blog/2017/04/12/a-better-solution-to-ecs-autoscaling/

Additional context
Yes, please note that I love Lambda and Fargate, but sometimes regular old ECS is a better fit. FWIW, Google Cloud has had cluster autoscaling for a long time now: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler. Also, I haven't tried EKS yet, but cluster autoscaling would be super helpful there too.
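
For readers looking for the shape of these workarounds, here is a minimal sketch of the garbe.io-style approach linked above: a scheduled Lambda that estimates how many more copies of the largest task definition still fit on the cluster's registered instances and publishes that headroom as a custom CloudWatch metric for an ASG scaling policy to act on. The cluster name, task size, namespace, and metric name are illustrative assumptions, not anything AWS provides out of the box.

```python
import boto3

ecs = boto3.client("ecs")
cloudwatch = boto3.client("cloudwatch")

CLUSTER = "my-cluster"            # assumption: cluster name
TASK_CPU, TASK_MEM = 1024, 2048   # assumption: size of the largest task (CPU units / MiB)

def lambda_handler(event, context):
    # How many more copies of the largest task could the cluster still place?
    schedulable = 0
    instance_arns = ecs.list_container_instances(cluster=CLUSTER)["containerInstanceArns"]
    if instance_arns:
        instances = ecs.describe_container_instances(
            cluster=CLUSTER, containerInstances=instance_arns
        )["containerInstances"]
        for inst in instances:
            remaining = {r["name"]: r.get("integerValue", 0) for r in inst["remainingResources"]}
            schedulable += min(
                remaining.get("CPU", 0) // TASK_CPU,
                remaining.get("MEMORY", 0) // TASK_MEM,
            )

    # Publish the headroom so an ASG scaling policy can keep it above a threshold.
    cloudwatch.put_metric_data(
        Namespace="Custom/ECS",                       # assumption
        MetricData=[{
            "MetricName": "SchedulableContainers",    # assumption
            "Dimensions": [{"Name": "ClusterName", "Value": CLUSTER}],
            "Value": schedulable,
            "Unit": "Count",
        }],
    )
    return schedulable
```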

@idubinskiy commented Dec 21, 2018

We're doing the same thing, with a periodic Lambda that combines a strategy similar to the garbe.io blog post with detection of pending tasks. We've continued to fine-tune the logic to strike a good balance between availability and cost, but it would be very convenient if ECS provided this functionality, or at least published metrics that allow scaling the cluster out (and especially in) based on actual service/task capacity.

@matthewcummings (Author) commented Dec 21, 2018

Actually, it would be great if the cluster supported an "n + 1" configuration, always keeping at least one instance running for new tasks to be placed when no other instances have enough resources.

@jespersoderlund commented Dec 27, 2018

@matthewcummings I would like to extend this with a requirement for a stand-by instance per AZ that the cluster is active in. The current behavior of ECS scheduling is quite dangerous, in my mind, in the case where only a single node has plenty of space: even with a placement strategy of spread across AZs, it will put ALL the tasks on the single instance with available space.

@abby-fuller added this to Researching in containers-roadmap Jan 10, 2019

@jamiegs commented Jan 10, 2019

I just finished implementing this available container count scaling for our ECS clusters and would be happy to chat with someone from AWS if they've got questions. I was just now working on a public repo + blog post with my implementation.

UPDATE: Since AWS is working on a solution for this, I'll probably just abandon the blog post. Here are some brief notes I had taken on the solution I've implemented: https://gist.github.com/jamiegs/296943b1b6ab4bdcd2a9d28e54bc3de0

@pgarbe commented Jan 11, 2019

It's good to see that this topic is getting awareness. Actually, I was thinking of changing the metric I described in my blog post so that when its value increases, the cluster size also increases (like a ContainerBufferFillRate). That would make it possible to use target tracking and would simplify the configuration.

@zbintliff commented Jan 30, 2019

We currently scale out and in based on reservation. We are starting to run into scenarios where very large tasks (16 GB of memory) are no longer placed after a scale-in. There is enough total space in the cluster to fit the task, and we're below our 90% reservation threshold, but there is not enough space on any single node to place it.

Events are published, but the only way to know whether a task is pending because of a lack of space versus a bad task definition is by parsing the service events for each service.

@tabern added the ECS label Jan 30, 2019

@hlarsen commented Feb 6, 2019

UPDATE: Since AWS is working on a solution for this

@jamiegs are they?

i'm planning/testing an ec2 cluster implementation that i would like to eventually autoscale, however everything i'm reading still suggests the type of workarounds described in posts linked from this issue - i can't find anything official.

@jamiegs commented Feb 6, 2019

@jamiegs are they?

@hlarsen well, I guess I assume they are since they have this ticket to improve autoscaling under researching on their roadmap.

@hlarsen commented Feb 6, 2019

ahh sorry, i missed it - just checking if you had any inside info =)

for anyone else who missed it, this is currently in the Research phase on the roadmap, so if you're trying to do this now it appears lambda-based cluster scaling is the way to go.

@matthewcummings (Author) commented Feb 19, 2019

I just ran across #121 which is similar to my request if not a duplicate. At the end of the day we all want a reliable way to ensure that there are always enough instances running to add additional tasks when they are needed.

@gabegorelick commented Apr 1, 2019

You can work around this by using DAEMON tasks (instead of REPLICA) and doing all scaling at the ASG level (instead of application auto scaling). Works OK if you only have one service per cluster, but it is kind of an abuse of daemonsets.
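
For anyone considering that route, a hedged sketch of the daemon-based workaround (placeholder names throughout): the service uses the DAEMON scheduling strategy so ECS runs one copy per container instance, and capacity is then controlled purely by the ASG.

```python
import boto3

ecs = boto3.client("ecs")
autoscaling = boto3.client("autoscaling")

# One task per instance: ECS places a copy on every container instance,
# so the task count follows the instance count and no desiredCount is set.
ecs.create_service(
    cluster="my-cluster",                 # placeholder
    serviceName="my-daemon-service",      # placeholder
    taskDefinition="my-task:1",           # placeholder
    schedulingStrategy="DAEMON",
)

# All scaling then happens at the ASG level instead of application auto scaling.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="my-ecs-asg",    # placeholder
    DesiredCapacity=4,
)
```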

@coultn commented Apr 19, 2019

Hi everyone, we are actively researching this and have a proposed solution in mind. This solution would work as follows:

  • ECS will compute a new CloudWatch metric for each ECS cluster. The new metric is a measure of how "full" your cluster is relative to the tasks you are running (and want to run). The metric will only be less than 100% if you are guaranteed to have space for at least 1 more task of each service and RunTask group already running in your cluster. It will be greater than or equal to 100% only if you have at least one service or RunTask group that can NOT place any additional tasks in the cluster. The metric accounts not only for tasks already running, but new tasks that have not been placed yet. This means that, for example, if a service is trying to scale out and needs 8 times as many instances as the cluster currently has, the metric will be 800%.
  • ECS will automatically set up a target tracking scaling policy on your ECS cluster using this new metric. You can set a target value for the metric less than or equal to 100%. A target value of 100% means the cluster will only scale out if there is no more space in your cluster for at least one service or RunTask group. A target value of less than 100% means that the cluster will keep some space in reserve for additional tasks to run. This will give you faster task scaling performance, with some additional cost due to unused capacity. The target tracking policy will scale your cluster out and in with the goal of maintaining the metric at or near your target value. When scaling out, the target tracking scaling policy can scale to the correct cluster size in one step, because the metric reflects all of the tasks that you want to run but aren't yet running. It can even scale out from zero running instances.
  • When scaling in, automated instance protection will ensure that ECS is more intelligent about which instances get terminated, and automated instance draining (also for Spot) will ensure that your tasks have the opportunity to shut down cleanly.

Thoughts on this proposal? Please let us know!
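
To make the shape of the proposal concrete, here is a rough sketch of how a target tracking policy against such a cluster-fullness metric could be wired up today, with a custom metric standing in for the proposed one; the namespace, metric name, ASG name, and target value are assumptions, not the final AWS names or defaults.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking against a cluster "fullness" metric: scale out when the
# value rises above the target, scale in when it falls below.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-ecs-asg",               # placeholder
    PolicyName="ecs-cluster-fullness-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ClusterFullness",         # assumption: stand-in for the proposed metric
            "Namespace": "Custom/ECS",               # assumption
            "Dimensions": [{"Name": "ClusterName", "Value": "my-cluster"}],
            "Statistic": "Average",
        },
        "TargetValue": 90.0,                         # keep ~10% headroom for new tasks
        "DisableScaleIn": False,
    },
)
```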

@kalpik commented Apr 20, 2019

This would be awesome!

@samdammers commented Apr 20, 2019

Sign me up

@matthewcummings (Author) commented Apr 20, 2019

I love it.

@geethaRam commented Apr 21, 2019

This is a much-needed feature for ECS. Right now, users have to over-provision their cluster instances or implement custom engineering solutions using Lambdas/CloudWatch for scale-out and scale-in scenarios. Cluster autoscaling that is aware of services/tasks is absolutely necessary. While this may not be applicable to Fargate, it is still needed for ECS-on-EC2 use cases. I hope this gets prioritized and delivered; we have been waiting for this.

@masneyb commented Apr 21, 2019

@coultn: I think your proposal will work just fine for clusters that start their tasks using ECS services. I have a few thoughts to keep in mind:

  • Maybe this is out of scope, but since you brought up the automated EC2 instance protection bit, I think you should also take into consideration changes to the EC2 launch configuration (e.g. new AMI, instance type, etc.) to help make management of ECS clusters easier. I link at the bottom of my comment to a CloudFormation template that does this for a cluster that runs batch jobs. For the clusters that run web applications, we wouldn't want the instance protection bit to get in the way when the fleet is rolled by autoscaling.

  • We have some QA clusters that run ECS services with development git branches of work that is currently in progress. These environments usually stick around for 8 hours after the last commit. Most of these environments hardly receive any traffic unless automated performance testing is in progress. Let's assume that we currently have X ECS services, and that all X of them have the same requirements from ECS (memory/CPU) for simplicity. Will the new CloudWatch metric tell us that we can start one copy of a task on just one of those services? So if the metric says we can start one, and if two ECS services try to scale out at the same time, then we'll encounter a stall scaling out the second service? Or, will the new metric tell us if we can start one copy of every ECS service that is currently configured? Hopefully it is the former since scaling policies can be configured to handle the latter case if needed.

  • This proposal won't work for ECS scheduled tasks. We have a cluster that runs over 200 cron-style jobs as ECS scheduled tasks for a legacy application. It's a mix of small and large jobs, and our ECS cluster typically doubles its number of EC2 instances during parts of the day when more of the larger jobs are running. These jobs aren't set up as ECS services. Initially we used CloudWatch event rules to start an ECS task directly; however, a large number of jobs wouldn't start during some parts of the day because the run-task API call failed due to insufficient capacity in the cluster. To fix this, we still use CloudWatch event rules, but they send a message to an SQS queue with a Lambda function subscribed to it. The function tries to start the task, and if it fails due to insufficient capacity, it increases the desired count of the autoscaling group and tries again later (a sketch of this retry loop follows this list). The tasks are bin-packed to help make scaling in easier. The jobs have a finite duration, so scaling in involves looking for empty instances, draining them, and then terminating them. I have a CloudFormation template that implements this use case at https://github.com/MoveInc/ecs-cloudformation-templates/blob/master/ECS-Batch-Cluster.template, and it's fairly well commented at the top with more details, including how we handle launch configuration changes (for AMI updates, new EC2 instance types, etc).
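
A condensed sketch of that SQS-driven retry pattern, assuming the queue message body carries the task definition name and that failed placements surface RESOURCE:* failure reasons; cluster and ASG names are placeholders, and this is not the linked CloudFormation template's exact logic.

```python
import boto3

ecs = boto3.client("ecs")
autoscaling = boto3.client("autoscaling")

CLUSTER = "batch-cluster"        # placeholder
ASG_NAME = "batch-cluster-asg"   # placeholder

def lambda_handler(event, context):
    # Each SQS record names the task definition to run (assumption about message shape).
    for record in event["Records"]:
        task_def = record["body"]
        resp = ecs.run_task(cluster=CLUSTER, taskDefinition=task_def, count=1)

        failures = resp.get("failures", [])
        if any(f.get("reason", "").startswith("RESOURCE") for f in failures):
            # Not enough room on any instance: grow the ASG and let SQS redeliver
            # the message later (raising makes Lambda return it to the queue).
            asg = autoscaling.describe_auto_scaling_groups(
                AutoScalingGroupNames=[ASG_NAME]
            )["AutoScalingGroups"][0]
            autoscaling.set_desired_capacity(
                AutoScalingGroupName=ASG_NAME,
                DesiredCapacity=asg["DesiredCapacity"] + 1,
            )
            raise RuntimeError(f"No capacity for {task_def}; scaled out and will retry")
```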

I can beta test some of your proposed changes at my work if you're interested.

@waffleshop commented Apr 22, 2019

@coultn Looks good. My organization implemented a system very similar to this. Do you have any more details regarding how your solution computes the new scaling metric? We currently base it on the largest container (CPU and memory) in the ECS cluster -- similar to this solution.

@zbintliff commented Apr 22, 2019

Can you please clarify:

The metric will only be less than 100% if you are guaranteed to have space for at least 1 more task of each service and RunTask group already running in your cluster.

Does this mean that if I have a cluster with 10 services, the new metric will be over 100% if it can't fit 1 task for each service combined (additive), for a total overhead of the combined requirements of the 10 tasks? Or is it a "shared" overhead that essentially guarantees the service/task with the largest deployment can add one more?

@rothgar commented Apr 22, 2019

Is the "full" metric CPU, Memory, connections, disk, something else? I feel like this type of metric makes sense for

  • Single or similar constraint services. (e.g. all CPU bound or all Memory bound)
  • Similar workload types (services vs batch)
  • Many small clusters vs large (mixed workload) clusters
  • Single ASG/instance type in the cluster

Can someone explain how the metric would work for large, mixed workload, multi-ASG clusters? If that's an anti-pattern for ECS it would also be good to know where the product roadmap is headed.

@sithmein commented Apr 23, 2019

I second @masneyb's third point. We use ECS in combination with the Jenkins ECS plug-in to start containers (tasks) for every Jenkins job. The ECS plug-in is smart enough to retry tasks that failed due to insufficient resources. But I don't see how this new metric could be of much help in this case, since it still only looks at the current resource usage and not the required resources; setting a threshold < 100% is only a heuristic.
Ideally - and I get that this is a more fundamental change - ECS would have a queue of pending tasks (like any other "traditional" queueing system) instead of immediately rejecting them. The length of the queue and its items' resource requirements could then easily be used to scale in and out.

@vimmis commented Apr 23, 2019

This sounds good. Will the scale-out policy also take care of scaling with respect to AZ spread? That is, will the scaling activity start a new instance in the AZ that the task needs in order to satisfy its spread, or will it be random?

@talawahtech commented Apr 23, 2019

@coultn sounds good overall, with the exception of one thing (which I may be misunderstanding).

  • ECS will automatically set up a target tracking scaling policy on your ECS cluster using this new metric. You can set a target value for the metric less than or equal to 100%. A target value of 100% means the cluster will only scale out if there is no more space in your cluster for at least one service or RunTask group. A target value of less than 100% means that the cluster will keep some space in reserve for additional tasks to run.

To me the statement in bold implies that when the cluster "fullness" metric is at 100% then there is still space for at least one more task, which is not what I would expect, especially since you are not allowed to set a target tracking metric of greater than 100%. What do you do if you actually want your cluster to be fully (efficiently) allocated?

As an example, let's say my cluster consists of 5 nodes, each with 2 vCPUs, running a single service where each task requires 1 vCPU of capacity.

My understanding of the current proposal is

  • 9 tasks -> 100%
  • 10 tasks -> more than 100%

My expectation of what the metric would be:

  • 9 tasks -> 90%
  • 10 tasks -> 100%

So ideally for me, at 10 tasks with 100% target tracking the ASG would be at steady state. If the ECS service tries to allocate an 11th task then the metric would go to 110% and target tracking would cause the ASG to start a 6th node. Now if I decide instead that I do want hot spare behavior, then I would set my target fullness to 90%.

To expound further on my use case, my intention would be to set target tracking at the ASG level to 100% allocation and then separately set target tracking at the ECS service level to a specific CPU utilization (30% for example). So rather than having a spare node not doing anything, I would have all nodes active, but with sufficient CPU capacity to handle temporary spikes. If traffic gradually starts to climb and average CPU usage goes above 30%, then ECS would attempt to start more tasks and the ASG would start more nodes, and while the new nodes are starting up, there is still sufficient CPU headroom.
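
The service-level half of that setup can already be expressed with Application Auto Scaling target tracking; a minimal sketch with placeholder cluster/service names and the 30% CPU target from the example above:

```python
import boto3

appscaling = boto3.client("application-autoscaling")

resource_id = "service/my-cluster/my-service"   # placeholder cluster/service

# Let the service's desired count move between 2 and 50 tasks.
appscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=50,
)

# Track average service CPU at 30%; ECS adds or removes tasks to hold that target.
appscaling.put_scaling_policy(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyName="cpu-30-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 30.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```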

I definitely think you guys should make it easy for end users to determine the appropriate percentage for having one, two or three hot spares, since the math won't always be as simple as my example. But I think 100% utilization should be an option, even if you don't think it should be the default. Perhaps in the console you could auto-calculate and pre-fill the "1 hot spare" percentage for users, or at least pre-calculate some examples.

@coultn commented Apr 23, 2019

Thanks for the comments/questions everyone! Some clarifications and additional details:

  • The metric will account for both service tasks, and tasks run directly via RunTask. Right now, tasks started with RunTask either start or they don't. So, for example, if you try to run 10 copies of a task and there is only capacity for 5, then 5 will start and 5 will not. The remaining 5 will not be retried unless you call RunTask again. Would it be helpful to have an option for RunTask where it would keep trying until capacity is available? If this option were available with RunTask, then the new metric would scale appropriately to all tasks, both service tasks and RunTask tasks, and scheduled tasks.
  • The metric DOES account for not only already-running tasks, but tasks that are not yet running (via the desired count of each service, and the count of the RunTask invocation, assuming the retry logic mentioned above). It is a precise measurement of what your cluster can run relative to what you want to run.
  • Equal to 100% means 'exactly full' for at least one service or RunTask invocation, and less than full for the rest. In other words, there is at least one service, or one set of tasks started with RunTask, for which the cluster has no additional capacity, but is exactly at capacity. If you set your target value to 100%, then the cluster might not scale until it completely runs out of resources for a service, and there are additional tasks that cannot be run.
  • Greater than 100% means that there is at least one service or RunTask invocation that wants to run more tasks than there is capacity in the cluster. This will always trigger a scale-out event regardless of the target value used in the target tracking scaling policy (assuming the target value is between 0-100).
  • Less than 100% means that each service or RunTask invocation has room for at least one more task. This does not mean that they all could add one more task at the same time. If your services tend to scale up or down at the same time, then you would want to account for that when configuring the target value for the metric. If you set the target value to less than 100%, then you always have spare capacity; however, depending on how quickly your services scale (or how quickly you call RunTask) you may still fill up the cluster. You are less likely to do so if you use a smaller target value. (Because target tracking scaling effectively rounds up to the next largest integer value, any target value less than 100% means you have capacity for at least 1 extra task, regardless of the task sizes).
  • The metric will accommodate both single-service and multi-service clusters. It looks at the capacity across all services (and RunTask invocations) and computes the maximum value. The services and tasks do not need to have the same resource requirements or constraints.
  • The metric is not explicitly aiming to solve the rebalancing problem. That is a separate feature.

@lattwood commented May 8, 2019

@coultn Would it be helpful to have an option for RunTask where it would keep trying until capacity is available? If this option were available with RunTask, then the new metric would scale appropriately to all tasks, both service tasks and RunTask tasks, and scheduled tasks

YES. Currently we're investigating K8S because of this and other reasons.

@coultn moved this from Researching to We're Working On It in containers-roadmap May 21, 2019

@pahud commented May 21, 2019

@coultn

Less than 100% means that each service or RunTask invocation has room for at least one more task.

What if I have 5 nodes and each node has 300 MB of memory remaining, which sums up to 1.5 GB, but any extra service task will reserve 1.0 GB of memory, which can't be fulfilled on any existing node in the cluster? Will the metric show less than 100% or greater than 100%? Obviously, we need to scale out the cluster by an extra node to make sure there's always enough room on a single node to run an extra service task.

@coultn commented May 21, 2019

@coultn

Less than 100% means that each service or RunTask invocation has room for at least one more task.

What if I have 5 nodes and each node has 300 MB of memory remaining, which sums up to 1.5 GB, but any extra service task will reserve 1.0 GB of memory, which can't be fulfilled on any existing node in the cluster? Will the metric show less than 100% or greater than 100%? Obviously, we need to scale out the cluster by an extra node to make sure there's always enough room on a single node to run an extra service task.

If there is not sufficient space on any instance for at least one additional service task, and the desired count is greater than the running count for that service, then the metric will be greater than 100%.

If there is not sufficient space on any instance for at least one additional service task, and the desired count is equal to the running count for that service, then the metric will be equal to 100%.

@simonvanderveldt commented May 27, 2019

Would it be helpful to have an option for RunTask where it would keep trying until capacity is available? If this option were available with RunTask, then the new metric would scale appropriately to all tasks, both service tasks and RunTask tasks, and scheduled tasks.

This would indeed be very helpful!

@pahud commented May 29, 2019

If there is not sufficient space on any instance for at least one additional service task, and the desired count is greater than the running count for that service, then the metric will be greater than 100%.

If there is not sufficient space on any instance for at least one additional service task, and the desired count is equal to the running count for that service, then the metric will be equal to 100%.

Thank you for clarifying this. That would be super helpful!

@dsouzajude commented Jun 6, 2019

  • Equal to 100% means 'exactly full' for at least one service or RunTask invocation, and less than full for the rest. In other words, there is at least one service, or one set of tasks started with RunTask, for which the cluster has no additional capacity, but is exactly at capacity. If you set your target value to 100%, then the cluster might not scale until it completely runs out of resources for a service, and there are additional tasks that cannot be run.
  • The metric will accommodate both single-service and multi-service clusters. It looks at the capacity across all services (and RunTask invocations) and computes the maximum value. The services and tasks do not need to have the same resource requirements or constraints.

I need some confirmation of my understanding. Is this metric defined per service or per cluster? For example, if I have 5 services, each of which is out of capacity and needs to scale out, and I set the metric target to less than 100%, would it scale out 5 more EC2 instances in the ECS cluster (one for each of the 5 services that need scaling), or intelligently scale out just enough EC2 instances to allow all 5 services to scale out completely? Thanks!

@coultn commented Jun 6, 2019

  • Equal to 100% means 'exactly full' for at least one service or RunTask invocation, and less than full for the rest. In other words, there is at least one service, or one set of tasks started with RunTask, for which the cluster has no additional capacity, but is exactly at capacity. If you set your target value to 100%, then the cluster might not scale until it completely runs out of resources for a service, and there are additional tasks that cannot be run.
  • The metric will accommodate both single-service and multi-service clusters. It looks at the capacity across all services (and RunTask invocations) and computes the maximum value. The services and tasks do not need to have the same resource requirements or constraints.

I need some confirmation of my understanding. Is this metric defined per service or per cluster? For example, if I have 5 services, each of which is out of capacity and needs to scale out, and I set the metric target to less than 100%, would it scale out 5 more EC2 instances in the ECS cluster (one for each of the 5 services that need scaling), or intelligently scale out just enough EC2 instances to allow all 5 services to scale out completely? Thanks!

There is a single metric for each EC2 auto scaling group in a cluster. It is computed for each service and standalone RunTask invocation currently active in that auto scaling group; the actual metric is then taken as the maximum value across all of the computed values. So, for each auto scaling group you have a single metric that accounts for all services and tasks. In your example, with 5 services running, as long as at least one of those services needs more capacity, the cluster will scale out.

@dsouzajude commented Jun 6, 2019

There is a single metric for each EC2 auto scaling group in a cluster. It is computed for each service and standalone RunTask invocation currently active in that auto scaling group; the actual metric is then taken as the maximum value across all of the computed values. So, for each auto scaling group you have a single metric that accounts for all services and tasks. In your example, with 5 services running, as long as at least one of those services needs more capacity, the cluster will scale out.

Thanks for the confirmation; it makes sense. This would mean that I would have the freedom to decide how many EC2 instances I want to scale up/down by using a ScalingPolicy? So my question is, would it be compatible with this ScalingPolicy feature, and hence could I use it automatically in CloudFormation as well?

@coultn commented Jun 7, 2019

There is a single metric for each EC2 auto scaling group in a cluster. It is computed for each service and standalone RunTask invocation currently active in that auto scaling group; the actual metric is then taken as the maximum value across all of the computed values. So, for each auto scaling group you have a single metric that accounts for all services and tasks. In your example, with 5 services running, as long as at least one of those services needs more capacity, the cluster will scale out.

Thanks for the confirmation; it makes sense. This would mean that I would have the freedom to decide how many EC2 instances I want to scale up/down by using a ScalingPolicy? So my question is, would it be compatible with this ScalingPolicy feature, and hence could I use it automatically in CloudFormation as well?

Along with the new metric, ECS will actually automatically set up a Target Tracking scaling policy on the auto scaling group on your behalf. You will be able to set the target value for the scaling policy. You will also be able to add other scaling policies in addition to the ECS-managed scaling policy.

@rdawemsys commented Jun 10, 2019

@coultn , I have a couple of follow-on questions (thanks in advance):

  1. How often would the new metric be published? Every minute?

  2. Will it be possible to use the new metric without any target tracking being set up automatically?

My understanding is that target tracking needs the metric to be in breach for 5 minutes before it will take any action, and that's not configurable. (This is based on what I've seen of target tracking on ALB RequestCountPerTarget.) 5 minutes is too long for the use case I'm working on, where we want to scale out our latency-sensitive service ASAP; the trade-off of ending up with slightly too much capacity is currently acceptable. We're currently using a modified version of https://garbe.io/blog/2017/04/12/a-better-solution-to-ecs-autoscaling/ where the "tasks that fit" metric can go negative when the desired count of tasks won't fit in the available space, combined with step scaling.
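
For comparison, a rough sketch of pairing such a headroom metric with a step scaling policy and alarm on the ASG; the metric name, namespace, threshold, and step sizes are illustrative assumptions rather than the poster's exact setup.

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Step scaling: add capacity faster the further the headroom metric drops.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-ecs-asg",            # placeholder
    PolicyName="scale-out-on-low-headroom",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    StepAdjustments=[
        {"MetricIntervalLowerBound": -2.0, "MetricIntervalUpperBound": 0.0, "ScalingAdjustment": 1},
        {"MetricIntervalUpperBound": -2.0, "ScalingAdjustment": 2},
    ],
)

# Alarm on the custom "tasks that fit" metric; values at or below zero mean the
# desired tasks no longer fit, which should trigger the larger step.
cloudwatch.put_metric_alarm(
    AlarmName="ecs-low-schedulable-tasks",
    Namespace="Custom/ECS",                        # assumption
    MetricName="SchedulableContainers",            # assumption
    Dimensions=[{"Name": "ClusterName", "Value": "my-cluster"}],
    Statistic="Minimum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```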

@zbintliff commented Jun 10, 2019

Thanks so much for the information @coultn. The metric makes sense and will definitely make our lives easier. Earlier you mentioned that the metric will live on each ASG in the cluster. Can you clarify how that will work?

For example, let's say you have two ASGs that are exactly the same (AMI, instance size, available ENIs, etc.), deployed with 10 nodes each for a total of twenty nodes. Can you help me understand what the metric will be for each ASG in the following scenarios:

  1. Enough tasks are deployed to completely fill 10 nodes, spread evenly across both ASGs. I would expect the metric to be 50% for both ASGs.
  2. Enough tasks are deployed to completely fill 10 nodes, but they are bin packed on nodes in the first ASG. With the previous discussions I'm not sure if each ASG will still be 50% since the cluster is only half full or if ASG one will be 100% (all 10 nodes taken) and ASG two will be 0% (no space taken)

Obviously, the above gets more complicated when bringing in placement constraints. You could argue the metric should be per Container Instance Attribute per Cluster since you will then be able to scale the required resources better (can explain this more if needed but don't want this to distract from above question)

@coultn commented Jun 10, 2019

@coultn , I have a couple of follow-on questions (thanks in advance):

1. How often would the new metric be published? Every minute?

2. Will it be possible to use the new metric without any target tracking being set up automatically?

My understanding is that target tracking needs the metric to be in breach for 5 minutes before it will take any action, and that's not configurable.

@rdawemsys Thanks for your feedback. The metric will likely be published once per minute (the same as other existing ECS metrics). In our current design, ECS will either publish the metric AND configure target tracking scaling, or not publish the metric. Keep in mind that you can configure additional scaling policies that work with target tracking.

Regarding the five minute delay you mentioned, that is not correct in general. The timing and latency of target tracking scaling alarms depends on the frequency of the metric being published. What is the maximum scaling latency that you would find acceptable?

@coultn commented Jun 10, 2019

Thanks so much for the information @coultn. The metric makes sense and will definitely make our lives easier. Earlier you mentioned that the metric will live on each ASG in the cluster. Can you clarify how that will work?

For example, let's say you have two ASGs that are exactly the same (AMI, instance size, available ENIs, etc.), deployed with 10 nodes each for a total of twenty nodes. Can you help me understand what the metric will be for each ASG in the following scenarios:

1. Enough tasks are deployed to completely fill 10 nodes, spread evenly across both ASGs. I would expect the metric to be 50% for both ASGs.

2. Enough tasks are deployed to completely fill 10 nodes, but they are bin packed on nodes in the first ASG. With the previous discussions I'm not sure if each ASG will still be 50% since the cluster is only half full or if ASG one will be 100% (all 10 nodes taken) and ASG two will be 0% (no space taken)

Obviously, the above gets more complicated when bringing in placement constraints. You could argue the metric should be per Container Instance Attribute per Cluster since you will then be able to scale the required resources better (can explain this more if needed but don't want this to distract from above question)

@zbintliff There are some additional details that are probably too involved to get into here. I'll do my best to describe how things will work in a general sense, without writing a 50-page treatise on scaling and placement :) The core concept is that each ASG in a cluster has its own value for the metric, based on the tasks and services that the ECS control plane wants to run in that ASG. So, to address your specific examples:

  1. The metric will be 50% in both ASGs, because each ASG is half full relative to what it could run.
  2. The metric will be 100% in ASG 1 and 0% in ASG 2, since ASG 1 cannot have any more tasks placed without running additional instances, and ASG 2 has no tasks running or desired.

@zbintliff commented Jun 11, 2019

Great, that makes sense! I know it can get complicated, so thank you for the examples.

@rdawemsys commented Jun 26, 2019

@coultn Thanks for your replies. In answer to your question:

Regarding the five minute delay you mentioned, that is not correct in general. The timing and latency of target tracking scaling alarms depends on the frequency of the metric being published. What is the maximum scaling latency that you would find acceptable?

We're running a latency-sensitive service in ECS, which has big spikes in traffic, and we need to scale out ECS pretty rapidly. Ideally within 1-2 minutes for scale out. Scale in can be slower.

We're currently using ECS average CPU for triggering scale-out, which seems to happen reliably within 1-2 minutes of CPU spiking. We're using ALB RequestCountPerTarget for scale-in, which is observed to have a ~5 minute lag for triggering an alarm.

I understand it may not be possible to scale out EC2 within that 1-2 minute timescale, which is why we're leaving some headroom on our ECS cluster, and then scaling out EC2 to maintain that headroom.

I should also add that we've introduced autoscaling relatively recently into this particular service, so we're gaining some understanding of how autoscaling and our traffic patterns interact. We're going to be tweaking things as we go along. It may turn out that our autoscaling rules do not need to be so aggressive -- we're being cautious with our roll-out to maintain service levels.

@talawahtech commented Sep 2, 2019

@coultn will this feature depend on Container Insights, and therefore require us to pay for Container Insights in order to take advantage of integrated autoscaling?

@coultn commented Sep 4, 2019

@talawahtech No, it will not require container insights.

@dhakad-rakesh commented Oct 4, 2019

It would be a great feature. Is there any estimate of when it will be available?

@coultn moved this from We're Working On It to Coming Soon in containers-roadmap Oct 9, 2019

@jaredjstewart commented Oct 9, 2019

I have found this suggested mechanism for managing ECS + EC2 updates to be more complex than seems reasonable: https://aws.amazon.com/blogs/compute/automatically-update-instances-in-an-amazon-ecs-cluster-using-the-ami-id-parameter/

Will this feature have any impact on the necessity of that approach for safely deploying things like an AMI update together with an ECS task change?

Edit: It looks like this is a separate issue tracked at #256

@coultn commented Oct 27, 2019

@jaredjstewart The feature outlined in this issue isn't specifically targeted at making AMI updates easier, although it may indirectly help with that. What would you like to see us do with AMI updates?

@wcoc commented Oct 29, 2019

@coultn do you have any forecast for when this feature will be released?

I am really very interested in using it.

Thank you!

@ganeshmannamal commented Oct 29, 2019

+1 on the proposal.
This is close to what we have implemented, and it would be great to have native integration with ECS. What we currently do is something like this:

  • First off, we run 1 application container per instance (apart from the sidecar containers for monitoring, logging, etc.). This allows us to operate our application optimised for CPU usage and provides a buffer during spikes in traffic.
  • We scale up based on service CPU utilisation but have 2 CloudWatch alarms: the first one triggers after 3 minutes above the CPU threshold and scales up the ASG; the second one triggers after 5 minutes and scales up the ECS service.
  • The ECS service scales down again based on CPU utilisation, but for scaling down the ASG we use a custom cluster utilisation metric. This is similar to what is proposed here, except that it looks for unused instances in the cluster; the metric value is the difference between the number of instances and the number of tasks in service (simple!!). A CloudWatch alarm is triggered if the metric value is greater than or equal to 1 for more than 10 minutes (a sketch of this metric follows this list).
  • This metric also helps us maintain 100% minimumHealthyPercent during deployments. A pre-deployment hook scales up the ASG by a fixed % (max 100%). ECS deploys the new version to the new instances, registers them with the ALB, and retires the old tasks. The CPU utilisation metric then scales the ASG down within a few minutes, ensuring that we over-provision the cluster for only a few minutes per deployment but continue serving traffic even during peak loads.
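
A compact sketch of that unused-instance metric, assuming one application task per instance as described above; the cluster, namespace, and metric names are placeholders.

```python
import boto3

ecs = boto3.client("ecs")
cloudwatch = boto3.client("cloudwatch")

CLUSTER = "my-cluster"   # placeholder

def publish_cluster_utilisation(service_names):
    """Publish (instances - running service tasks); >= 1 means an unused instance."""
    cluster = ecs.describe_clusters(clusters=[CLUSTER])["clusters"][0]
    instance_count = cluster["registeredContainerInstancesCount"]

    # Sum running tasks across the services (assumes one application task per instance).
    services = ecs.describe_services(cluster=CLUSTER, services=service_names)["services"]
    running_tasks = sum(s["runningCount"] for s in services)

    unused = instance_count - running_tasks
    cloudwatch.put_metric_data(
        Namespace="Custom/ECS",                        # placeholder
        MetricData=[{
            "MetricName": "UnusedInstances",           # placeholder
            "Dimensions": [{"Name": "ClusterName", "Value": CLUSTER}],
            "Value": unused,
            "Unit": "Count",
        }],
    )
    return unused
```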

@jaredjstewart commented Nov 1, 2019

@coultn In a nutshell, I would like ECS to manage the graceful termination of EC2 instances in an autoscaling group that backs an ECS cluster (i.e. make sure to drain any tasks and deregister the instance from associated load balancers before proceeding with termination).

This would include updates to the EC2 AMI, changes to different instance types, etc.

I believe this request is already captured at #256.

Thanks again,
Jared
