t2 type auto scaling by CPU credit @ AWS Elastic Beanstalk
In AWS EC2, the 't2' instance types are great: they are the cheapest and they can burst above their baseline CPU.
And Elastic Beanstalk is great for managing an application (deploying, scaling, ...).
I wanted to combine both, but hit one challenge: which metric should drive auto scaling?
't2' instances pay for CPU with 'credits' and are throttled when the balance runs out, so we can't auto scale on CPU utilization.
The suggestion from AWS folks was latency, but prevention is better than cure: by the time latency rises, the credits are already gone.
So I made this: auto scaling by the amount of CPU credits available.
- auto scales your Elastic Beanstalk environment (t2 instance types only) by CPU credit
- uses the average CPU credit across the environment's EC2 instances; scaling options can be configured per environment, for multiple environments
- runs as a Lambda function (triggered by a scheduled event)
- (optional) publishes CloudWatch custom metrics: current average CPU credit and current EC2 scale
- index.js - main handler & flow, using async (a rough sketch of the flow follows this list)
- tasks.js - the detailed work, using the AWS SDK
- config.js - scaling rule configuration
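The overall flow looks roughly like the sketch below. This is a simplified illustration, not the actual index.js/tasks.js: error handling and helper structure are reduced, and the final resize step is only hinted at in a comment.

var async = require('async');
var AWS = require('aws-sdk');
var config = require('./config');

exports.handler = function (event, context) {
  var eb = new AWS.ElasticBeanstalk({ region: config.region });
  var cw = new AWS.CloudWatch({ region: config.region });

  // handle each configured environment one after another
  async.eachSeries(config.envs, function (env, next) {
    // 1. list the environment's instances (enhanced health reporting required)
    eb.describeInstancesHealth({ EnvironmentName: env.nameEnv, AttributeNames: ['All'] }, function (err, health) {
      if (err) return next(err);
      var ids = health.InstanceHealthList.map(function (i) { return i.InstanceId; });

      // 2. read each instance's CPUCreditBalance from CloudWatch and average it
      async.map(ids, function (id, cb) {
        cw.getMetricStatistics({
          Namespace: 'AWS/EC2',
          MetricName: 'CPUCreditBalance',
          Dimensions: [{ Name: 'InstanceId', Value: id }],
          StartTime: new Date(Date.now() - 10 * 60 * 1000),
          EndTime: new Date(),
          Period: 300,
          Statistics: ['Average']
        }, function (err, data) {
          if (err) return cb(err);
          cb(null, data.Datapoints.length ? data.Datapoints[0].Average : 0);
        });
      }, function (err, credits) {
        if (err) return next(err);
        var avg = credits.length
          ? credits.reduce(function (a, b) { return a + b; }, 0) / credits.length
          : 0;
        // 3. compare avg with the thresholds in config.js and, if needed, resize the
        //    environment via elasticbeanstalk:UpdateEnvironment
        //    (namespace 'aws:autoscaling:asg', options MinSize / MaxSize)
        next();
      });
    });
  }, function (err) {
    if (err) return context.fail(err);
    context.succeed('done');
  });
};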
- $ git clone https://github.com/rockeee/t2.autoscaling-beanstalk.git (or download the zip)
- $ npm install (to download the npm modules)
- modify config.js for your configuration
- 1 credit provides the performance of a full CPU core for 1 minute
- the initial credit balance is 30 (t2.small) or 60 (t2.medium); for stable scaling, set creditThreshold_lower below that, e.g. 20 for a t2.small, so the environment scales out before the initial credits are used up
module.exports = {
  region : 'us-west-2',
  envs :
  [
    {
      nameApp : 'My First Elastic Beanstalk Application', // application name
      nameEnv : 'myFirstElasticBeans-env',                // environment name
      creditThreshold_upper : 40, // if average credit is above this, scale in
      creditThreshold_lower : 20, // if average credit is below this, scale out
      scale_inc : 2,              // scale-out amount
      scale_dec : 1,              // scale-in amount
      scale_max : 5,              // maximum scale
      scale_min : 1,              // minimum scale
      putCloudwatch : true        // set true to publish credit/scale info to CloudWatch
    }
    // additional environments...
  ]
};
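To make the thresholds concrete, here is an illustrative decision step (not the actual tasks.js code) built from the values above:

// purely illustrative: how the config values are meant to interact
function nextScale(env, avgCredit, currentScale) {
  if (avgCredit < env.creditThreshold_lower) {
    // credits running low -> scale out, but never above scale_max
    return Math.min(currentScale + env.scale_inc, env.scale_max);
  }
  if (avgCredit > env.creditThreshold_upper) {
    // plenty of credits left -> scale in, but never below scale_min
    return Math.max(currentScale - env.scale_dec, env.scale_min);
  }
  return currentScale; // between the thresholds: keep the current scale
}

With the example config, an average credit of 15 on 2 instances would scale out to 4 instances, and an average of 45 would scale in to 1.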
- deploy to a Lambda function with your favorite method (just zip it up, or use a tool like node-lambda)
- check the Lambda function's configuration
- memory: 128 MB, timeout: 10 sec
- set a CloudWatch Events rule to run your Lambda function on a schedule; for details, refer to the CloudWatch Events documentation (a CLI sketch is shown after the policy below)
- set & attach a role to the Lambda function - example role policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1453906343000",
      "Effect": "Allow",
      "Action": [
        "elasticbeanstalk:DescribeConfigurationSettings",
        "elasticbeanstalk:DescribeEnvironments",
        "elasticbeanstalk:DescribeInstancesHealth",
        "elasticbeanstalk:UpdateEnvironment"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Sid": "Stmt1453906416000",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:PutMetricData"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Sid": "Stmt1453907067000",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
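The scheduled trigger mentioned above can be set up with the AWS CLI roughly like this; the rule name and function name are only examples, and region / account id need to be replaced with your own:

$ aws events put-rule --name t2-autoscaling-schedule \
    --schedule-expression "rate(5 minutes)"
$ aws lambda add-permission --function-name <your-function-name> \
    --statement-id t2-autoscaling-schedule \
    --action lambda:InvokeFunction \
    --principal events.amazonaws.com \
    --source-arn arn:aws:events:<region>:<account-id>:rule/t2-autoscaling-schedule
$ aws events put-targets --rule t2-autoscaling-schedule \
    --targets "Id"="1","Arn"="arn:aws:lambda:<region>:<account-id>:function:<your-function-name>"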
- if you configured putCloudwatch : true, you can see the Beanstalk_t2_autoscaling custom metrics in the CloudWatch console.
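Publishing those values could look roughly like the sketch below; the metric and dimension names here are just an illustration, only the Beanstalk_t2_autoscaling namespace comes from the project itself.

var AWS = require('aws-sdk');
var cw = new AWS.CloudWatch({ region: 'us-west-2' });

// publish the average credit and the current instance count for one environment
cw.putMetricData({
  Namespace: 'Beanstalk_t2_autoscaling',
  MetricData: [
    { MetricName: 'CpuCreditAverage',
      Dimensions: [{ Name: 'EnvironmentName', Value: 'myFirstElasticBeans-env' }],
      Value: 42.5 },
    { MetricName: 'InstanceCount',
      Dimensions: [{ Name: 'EnvironmentName', Value: 'myFirstElasticBeans-env' }],
      Value: 2 }
  ]
}, function (err) {
  if (err) console.error(err);
});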
- add SNS notification when the environment is scaled or scaling fails