Libraries, samples, and tools to help AWS customers onboard with custom resource auto scaling.

In this aws-auto-scaling-custom-resource repository, we demonstrate how to set up automatic scaling for custom resources using AWS services. In this context, a custom resource is an object that allows you to introduce your own application or service to the automatic scaling features of AWS.

The included AWS CloudFormation template launches a collection of AWS resources, including a new Amazon API Gateway endpoint. The API Gateway endpoint allows secure access to scalable resources in the application or service that you want automatic scaling to work with.

Once everything is deployed and configured, you'll have the following environment in your AWS account.

[Image: Application Auto Scaling custom resource environment]

You can use this repository and the deployment steps below as the starting point for your customizations. More information about this approach to custom resource auto scaling is detailed in this blog post.

If you find this information useful, feel free to spread the word about custom resource auto scaling. Also, we welcome all feedback, pull requests, and other contributions!


  • Recommended for a technical audience looking to use AWS Application Auto Scaling to configure automatic scaling for in-house applications and services.
  • Assumes experience with AWS, including configuring auto scaling with target tracking and custom metrics.
  • Assumes fair knowledge of Amazon API Gateway, CloudWatch, Lambda, and the OpenAPI Specification (aka Swagger 2.0).

AWS Services Used

The core AWS components used by this deployment include API Gateway, CloudFormation, CloudWatch, Lambda, Amazon SNS, and Application Auto Scaling.

Regional Availability

Custom resource auto scaling is available in Canada (Central), US West (N. California), US East (N. Virginia), US East (Ohio), US West (Oregon), South America (São Paulo), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and China (Beijing).


Deployment Steps

Follow the step-by-step instructions in this section to build and test the custom resource auto scaling environment in your AWS account. The CloudFormation template provided with this repository creates the core AWS components from scratch.

1. Test your REST Endpoint URL

Before running the CloudFormation template, you need an HTTP/HTTPS endpoint to expose your REST resources. Make sure that your application conforms to the REST API specification in the custom-resource-stack.yaml CloudFormation template.

Note: If you need a test environment and are familiar with Docker, a sample REST endpoint is provided as a Dockerized Apache Python CGI. For more information, see sample-api-server.

After you create an endpoint that contains the required REST resources, you can verify that the endpoint URL works by issuing GET and PATCH requests to it, for example:

$ curl -i -X GET --header 'Accept: application/json' ''

If the endpoint is set up properly, it should return a standard 200 OK response message and a payload that represents the requested resource and its status.

The response for GET and PATCH requests will look something like:

{
  "actualCapacity": 2.0,
  "desiredCapacity": 2.0,
  "dimensionName": "MyDimension",
  "resourceName": "MyService",
  "scalableTargetDimensionId": "1-23456789",
  "scalingStatus": "Successful",
  "version": "MyVersion"
}

2. Launch the Stack

Download the custom-resource-stack.yaml CloudFormation template from GitHub.

Run the following create-stack command, adding your details to the following parameters:

  1. SNSSubscriptionEmail: Replace email-address with an email address to send certificate expiry notifications to.
  2. IntegrationHttpEndpoint: Replace endpoint-url with your REST endpoint URL ending in {scalableTargetDimensionId}, where {scalableTargetDimensionId} is replaced with the dimension in your backend API.

Make a note of the AWS region where you created this stack. You need it later. Note: The examples in this repository use us-west-2, but the steps will be the same if you deploy into a different region.

$ aws cloudformation create-stack \
    --stack-name CustomResourceAPIGatewayStack \
    --template-body file://~/custom-resource-stack.yaml \
    --region us-west-2 \
    --parameters \
        ParameterKey=SNSSubscriptionEmail,ParameterValue="email-address" \
        ParameterKey=IntegrationHttpEndpoint,ParameterValue="endpoint-url"

The stack takes only a few minutes to deploy. It creates a new REST API in API Gateway with two stages: “PreProd” and “Prod”. A stage defines the path through which an API deployment is accessible. Each stage is deployed with its own client-side certificate.

When the deployment has completed successfully, you’ll receive an email to confirm a subscription to the Amazon SNS topic created by the template. Choose the Confirm subscription link in the message to subscribe to emails that are sent whenever there is an expiring certificate. A Lambda function checks once a day to see if the client certificate is expiring in 7, 3, or 1 days.
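The daily certificate check described above amounts to a simple date comparison. The 7/3/1-day thresholds come from the text; the function name and structure below are illustrative, not the repository's actual Lambda code.

```python
import datetime

# Notify when a client certificate expires in exactly 7, 3, or 1 days
# (thresholds from the walkthrough text; this is an illustrative sketch).
NOTIFY_DAYS = {7, 3, 1}

def should_notify(expiration_date: datetime.date, today: datetime.date) -> bool:
    """Return True if today is one of the notification days before expiry."""
    days_left = (expiration_date - today).days
    return days_left in NOTIFY_DAYS
```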

3. Get the Resource ID & API Gateway client certificate IDs

To continue with the deployment steps, you need the HTTPS link (aka Resource ID) for your API Gateway endpoint.

After the stack launches, run the describe-stacks command and copy the output.

$ aws cloudformation describe-stacks --region us-west-2 --stack-name CustomResourceAPIGatewayStack  | jq '.Stacks[0]["Outputs"]'

This returns the following response:

[
  {
    "Description": "Application Auto Scaling Resource ID prefix for Preprod",
    "OutputValue": "",
    "OutputKey": "PreProdResourceIdPrefix"
  },
  {
    "OutputValue": "customresourceapigatewaystack-s3bucket-ha8id2l1wpo6",
    "OutputKey": "S3BucketName"
  },
  {
    "Description": "Application Auto Scaling Resource ID prefix for Prod",
    "OutputValue": "",
    "OutputKey": "ProdResourceIdPrefix"
  },
  {
    "Description": "API Gateway Client Cert",
    "OutputKey": "PreProdClientCertificate",
    "OutputValue": "MIIDoTCCAwqgAwIBAgIMCRkox...tt3rdw"
  },
  {
    "Description": "API Gateway Client Cert",
    "OutputKey": "ProdClientCertificate",
    "OutputValue": "MIIDVDCCAr0CAQAweTEeMBwG...frw3tnx"
  }
]

The Resource ID has the following syntax: [OutputValue][identifier]

The OutputValue is one of the HTTPS prefixes ("Prod" or "Preprod") from the describe-stacks output.

The identifier is a string that identifies a scalable resource in your backend system (the value for scalableTargetDimensionId in step 1).

Example: Resource ID where “1-23456789” is the identifier in your backend system
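For scripting, the [OutputValue][identifier] concatenation above can be captured in a tiny helper. The prefix below is a made-up example endpoint, not a value from your stack; substitute your own OutputValue.

```python
# Illustrative helper that assembles an Application Auto Scaling Resource ID
# from the stack's OutputValue prefix and a backend identifier, following the
# [OutputValue][identifier] syntax. The example prefix is hypothetical.

def build_resource_id(output_value_prefix: str, identifier: str) -> str:
    return f"{output_value_prefix}{identifier}"

example_prefix = (
    "https://example123.execute-api.us-west-2.amazonaws.com"
    "/prod/scalableTargetDimensions/"
)
resource_id = build_resource_id(example_prefix, "1-23456789")
```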

4. Configure SSL/HTTPS

To configure the SSL/HTTPS connection between the API Gateway and your backend system, you need to download the ProdClientCertificate and PreProdClientCertificate from API Gateway.

Using the describe-stacks command from the previous step, get the API Gateway Client Cert output values.

Run the following AWS CLI commands, replacing the client-certificate-id values with your own, and save the certificate output:

$ aws apigateway get-client-certificate --client-certificate-id MIIDVDCCAr0CAQAweTEeMBwG...frw3tnx --output text

$ aws apigateway get-client-certificate --client-certificate-id MIIDoTCCAwqgAwIBAgIMCRkox...tt3rdw --output text

For more information, see Use Client-Side SSL Certificates for Authentication by the Backend in the Amazon API Gateway Developer Guide.

5. Test the API Gateway Integration

The next step is to verify that the API in API Gateway is integrated with your application. The Postman app is a convenient testing tool for this because it provides fields for adding your signing information to the HTTPS request.

Follow the instructions in Use Postman to Call an API to send a test request in Postman. You can use the code snippet generator to convert the request to cURL and view the headers and body, if desired. The responses for GET and PATCH requests should be similar to the response displayed in step 1.

For a GET request, the cURL command generated by Postman and its response will look something like:

curl -X GET \
  -H 'Authorization: AWS4-HMAC-SHA256 Credential=example/20180704/us-west-2/execute-api/aws4_request, SignedHeaders=content-type;host;x-amz-date;x-amz-security-token, Signature=SIGNATURE' \
  -H 'Cache-Control: no-cache' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -H 'Host:' \
  -H 'Postman-Token: POSTMANTOKEN' \
  -H 'X-Amz-Date: 20180704T023500Z' \
  -H 'X-Amz-Security-Token: SESSIONTOKEN'

{
  "actualCapacity": 2.0,
  "desiredCapacity": 2.0,
  "dimensionName": "MyDimension",
  "resourceName": "MyService",
  "scalableTargetDimensionId": "1-23456789",
  "scalingStatus": "Successful",
  "version": "MyVersion"
}

6. Register a Scalable Target

You will now register your resource's capacity as a scalable target with Application Auto Scaling. A scalable target is a resource that Application Auto Scaling can scale out or scale in.

Note: Be sure to use the correct permissions when registering a scalable target, so that the service-linked role is automatically created. Otherwise, the scaling function will not work.

Before you register your scalable target, run the following command to save the Resource ID (from step 3) in a text file, with no newline character at the end of the file.

The command will look like this, but with your Resource ID:

$ echo -n "" > ~/custom-resource-id.txt

This saves the file as custom-resource-id.txt in your home directory. You can now use the register-scalable-target command to register your scalable target:

$ aws application-autoscaling register-scalable-target \
    --service-namespace custom-resource \
    --scalable-dimension custom-resource:ResourceType:Property \
    --resource-id file://~/custom-resource-id.txt \
    --min-capacity 0 --max-capacity 10

This registers your scalable target with Application Auto Scaling, and allows it to manage capacity, but only within the range of 0 to 10 capacity units.
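The effect of those bounds can be illustrated with a one-line clamp. This is a simplification of what Application Auto Scaling enforces, not its actual code; the min/max values match the register-scalable-target command above.

```python
# Simplified illustration of the registered capacity bounds: Application Auto
# Scaling never sets desired capacity outside [min_capacity, max_capacity].

def clamp_capacity(desired: int, min_capacity: int = 0, max_capacity: int = 10) -> int:
    return max(min_capacity, min(max_capacity, desired))
```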

7. Create a Scaling Policy

In this step, you create a sample scaling policy for your custom resource that specifies how the scalable target should be scaled when CloudWatch alarms are triggered.

For example, for target tracking, you define a target tracking scaling policy that meets your resource's specific requirements by creating a custom metric. You can define a custom metric based on any metric that changes in proportion to scaling.

Not all metrics work for target tracking. The metric must be a valid utilization metric, and it must describe how busy your custom resource is. The value of the metric must increase or decrease in inverse proportion to the number of capacity units. That is, the value of the metric should decrease when capacity increases.
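A toy model makes the inverse relationship concrete: for a fixed workload, a utilization-style metric falls as capacity grows. The numbers and names here are illustrative, not from the repository.

```python
# Toy illustration of the inverse-proportion requirement: with a fixed total
# load, average utilization (the metric) decreases as capacity increases.

def average_utilization(total_load: float, capacity_units: int) -> float:
    """Utilization per capacity unit for a fixed workload (made-up model)."""
    return total_load / capacity_units

# With a fixed load of 140 "work units":
# 2 units -> 70.0 percent, 4 units -> 35.0 percent
```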

Create a target tracking configuration for your scalable target in a config.json file in your home directory. You can inspect the file with the following cat command:

$ cat ~/config.json
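If you still need to create config.json, the following sketch builds a plausible target tracking configuration and prints it as JSON. The metric name, namespace, and dimension mirror the put-metric-data command in step 8, and the 50 percent target matches the policy created next; the exact JSON shape is an assumption modeled on Application Auto Scaling's customized metric specification, not the authoritative file from this repository.

```python
import json

# Assumed target tracking configuration; field names follow Application Auto
# Scaling's TargetTrackingScalingPolicyConfiguration with a customized metric.
config = {
    "TargetValue": 50.0,  # keep average utilization at 50 percent
    "CustomizedMetricSpecification": {
        "MetricName": "MyAverageUtilizationMetric",  # matches step 8
        "Namespace": "MyNamespace",                  # matches step 8
        "Dimensions": [
            {"Name": "MyMetricDimensionName", "Value": "MyMetricDimensionValue"}
        ],
        "Statistic": "Average",
        "Unit": "Percent",
    },
}

print(json.dumps(config, indent=2))
```

Save the printed JSON as ~/config.json before running the put-scaling-policy command below.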

Use the following put-scaling-policy command, along with the config.json file you created previously, to create a scaling policy named custom-tt-scaling-policy that keeps the average utilization of your custom resource at 50 percent:

$ aws application-autoscaling put-scaling-policy \
--policy-name custom-tt-scaling-policy \
--policy-type TargetTrackingScaling \
--service-namespace custom-resource \
--scalable-dimension custom-resource:ResourceType:Property \
--resource-id file://~/custom-resource-id.txt \
--target-tracking-scaling-policy-configuration file://~/config.json
   "Alarms": [
            "AlarmName": "TargetTracking-",
            "AlarmARN": "arn:aws:cloudwatch:us-west-2:544955126770:alarm:TargetTracking-"
            "AlarmName": "TargetTracking-",
            "AlarmARN": "arn:aws:cloudwatch:us-west-2:544955126770:alarm:TargetTracking-"
    "PolicyARN": "arn:aws:autoscaling:us-west-2:544955126770:scalingPolicy:ac852aff-b04f-427d-a80a-3e7ef31d492d:resource/custom-resource/"

This creates two alarms: one for scaling out and one for scaling in. It also returns the Amazon Resource Name (ARN) of the policy that is registered with CloudWatch, which CloudWatch uses to invoke scaling whenever the metric is in breach.

You can find additional information about custom metrics in the CloudWatch documentation under Publish Custom Metrics.

8. Test the Scaling Policy

Now you can test your scaling policy by publishing sample metric data to CloudWatch. CloudWatch alarms will trigger the scaling policy and calculate the scaling adjustment based on the metric and the target value. To do this, you will run a bash script.

Type the following command to run the bash script:

# Command to put metric data that breaches AlarmHigh
$ while sleep 3
  do
    aws cloudwatch put-metric-data --metric-name MyAverageUtilizationMetric \
        --namespace MyNamespace --value 70 --unit Percent \
        --dimensions MyMetricDimensionName=MyMetricDimensionValue
    echo -n "."
  done

It may take a few minutes before your scaling policy is invoked. When the target ratio exceeds 50 percent for a sustained period of time, Application Auto Scaling notifies your custom resource to adjust capacity upward, so that the 50 percent target utilization can be maintained.
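A rough mental model of the adjustment scales capacity proportionally to the ratio of the observed metric to the target. This is a simplification, not the exact algorithm Application Auto Scaling uses, but it shows why a sustained 70 percent reading against a 50 percent target triggers a scale-out.

```python
import math

# Simplified model of target tracking sizing: choose a capacity so that
# metric * current / new is approximately at the target (not AWS's actual code).

def estimated_desired_capacity(current_capacity: int, metric_value: float,
                               target_value: float) -> int:
    return math.ceil(current_capacity * metric_value / target_value)

# e.g. 2 units running at 70 percent with a 50 percent target -> 3 units
```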

9. View Application Auto Scaling Actions

In this step, you view the Application Auto Scaling actions that are initiated on your behalf.

Run the describe-scaling-activities command:

$ aws application-autoscaling describe-scaling-activities --service-namespace custom-resource --resource-id file://~/custom-resource-id.txt --max-results 20

You should eventually see output that looks like this:

    "ScalingActivities": [
            "ScalableDimension": "custom-resource:ResourceType:Property",
            "Description": "Setting desired capacity to 6.",
            "ResourceId": "",
            "ActivityId": "2fca0873-3e4d-4c05-a83d-40c6394e6b9b",
            "StartTime": 1530744698.087,
            "ServiceNamespace": "custom-resource",
            "EndTime": 1530744730.766,
            "Cause": "monitor alarm TargetTracking- in state ALARM triggered policy custom-tt-scaling-policy",
            "StatusMessage": "Successfully set desired capacity to 6. Change successfully fulfilled by custom-resource.",
            "StatusCode": "Successful"

Note: If you are using the sample-api-server that is provided in this project, you can also see the scaling events in the API log.

Once you've viewed the scaling activity and verified scaling works, you can press Ctrl+C to stop the bash script.

License Summary

This sample code is made available under a modified MIT-0 license. See the LICENSE file.