Terraform_Aws_Autoscaling_Group_And_Aws_Load_Balancer

We are a preferred AWS consultant and offer the best cloud AWS consulting services. Our AWS-certified expert consultants conduct a thorough review and evaluation of your existing IT infrastructure and service interaction model to provide top-notch solutions.


Once you have learned how to manage basic network infrastructure in AWS using Terraform (see “Terraform recipe – Managing AWS VPC – Creating Public Subnet” and “Terraform recipe – Managing AWS VPC – Creating Private Subnets”), the natural next step is building auto-scalable infrastructure.

Auto Scaling Groups

Usually, Auto Scaling Groups are used to control the number of instances executing the same task, such as rendering dynamic web pages for your website, decoding videos and images, or computing machine learning models.

Auto Scaling Groups also allow you to dynamically control your server pool size – increasing it when your web servers are processing more traffic or tasks than usual, and decreasing it when things quiet down.

In either case, this feature helps you reduce costs and makes your infrastructure significantly more fault-tolerant.

Let’s build a simple infrastructure consisting of several web servers serving website traffic. In a following article, we’ll add an RDS database to it.

Our infrastructure will consist of an Elastic Load Balancer distributing traffic across web server instances managed by an Auto Scaling Group, spread over two public subnets in different Availability Zones.

Setting up the VPC

Let’s assemble it in a new main.tf file. First of all, let’s declare a VPC, two Public Subnets, an Internet Gateway, and a Route Table (we may take this example as a base):

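A minimal sketch of what this declaration might look like — the region, Availability Zones, CIDR blocks, and resource names here are assumptions:

```hcl
provider "aws" {
  region = "us-east-1" # assumed region
}

resource "aws_vpc" "my_vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true

  tags = {
    Name = "My VPC"
  }
}

resource "aws_subnet" "public_us_east_1a" {
  vpc_id                  = aws_vpc.my_vpc.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true

  tags = {
    Name = "Public Subnet us-east-1a"
  }
}

resource "aws_subnet" "public_us_east_1b" {
  vpc_id                  = aws_vpc.my_vpc.id
  cidr_block              = "10.0.2.0/24"
  availability_zone       = "us-east-1b"
  map_public_ip_on_launch = true

  tags = {
    Name = "Public Subnet us-east-1b"
  }
}

resource "aws_internet_gateway" "my_vpc_igw" {
  vpc_id = aws_vpc.my_vpc.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.my_vpc.id

  # Send all non-local traffic to the Internet Gateway
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.my_vpc_igw.id
  }
}

resource "aws_route_table_association" "public_us_east_1a" {
  subnet_id      = aws_subnet.public_us_east_1a.id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "public_us_east_1b" {
  subnet_id      = aws_subnet.public_us_east_1b.id
  route_table_id = aws_route_table.public.id
}
```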

Next, we need to describe a Security Group for our web servers, which will allow HTTP connections to our instances:

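A sketch of such a Security Group — the resource name is an assumption; it opens port 80 to the world and allows all outbound traffic:

```hcl
resource "aws_security_group" "allow_http" {
  name        = "allow_http"
  description = "Allow HTTP inbound connections"
  vpc_id      = aws_vpc.my_vpc.id

  # Accept HTTP from anywhere
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic (needed e.g. to install packages at boot)
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "Allow HTTP Security Group"
  }
}
```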

Launch configuration

As soon as we have the Security Group, we can describe a Launch Configuration. Think of it as a template containing all the instance settings applied to each new instance launched by the Auto Scaling Group. We use the aws_launch_configuration resource in Terraform to describe it.

Most of the parameters should be familiar to you, as we have already used them in the aws_instance resource.

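A sketch of the Launch Configuration — the AMI ID is a hypothetical placeholder (replace it with an Ubuntu AMI for your region), and the instance type is an assumption:

```hcl
resource "aws_launch_configuration" "web" {
  name_prefix     = "web-"
  image_id        = "ami-0123456789abcdef0" # hypothetical Ubuntu AMI; replace for your region
  instance_type   = "t2.micro"
  security_groups = [aws_security_group.allow_http.id]

  associate_public_ip_address = true

  # Boot-time script: install Nginx and publish the instance's
  # local IP so we can tell instances apart in the browser
  user_data = <<-EOF
    #!/bin/bash
    apt-get update
    apt-get install -y nginx
    echo "$(hostname -I)" > /var/www/html/index.html
    systemctl start nginx
    EOF

  # Create replacement instances before destroying old ones
  # when the launch configuration changes
  lifecycle {
    create_before_destroy = true
  }
}
```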

The new ones are user_data and lifecycle:

user_data:

a special interface created by AWS for EC2 instance automation. Usually, this option is filled with scripted instructions for the instance, which need to be executed at boot time. On most operating systems this is handled by cloud-init.

lifecycle:

a special instruction declaring how new launch configuration rules are applied during an update. We’re using create_before_destroy here to create new instances from a new launch configuration before destroying the old ones. This option is commonly used during rolling deployments.

The user_data option is filled with a simple bash script, which installs the Nginx web server and writes the instance’s local IP address to the index.html file, so we can see it once the instance is up and running.

Load Balancer

Before we create an Auto Scaling Group, we need to declare a Load Balancer. There are three types of Load Balancers available in AWS right now:

Classic Load Balancer (ELB):

the previous generation of Load Balancers in AWS.

Application Load Balancer (ALB):

operates at the application layer and provides a rich feature set for managing HTTP and HTTPS traffic for your web applications.

Network Load Balancer (NLB):

operates at the connection layer and is capable of handling millions of requests per second.

For simplicity, let’s create an Elastic Load Balancer in front of our EC2 instances (I’ll show how to use the other types in future articles). To do that, we need to declare an aws_elb resource.

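A sketch of the ELB together with its own Security Group — the names, thresholds, and intervals are assumptions:

```hcl
resource "aws_security_group" "elb_http" {
  name        = "elb_http"
  description = "Allow HTTP traffic to the ELB"
  vpc_id      = aws_vpc.my_vpc.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_elb" "web_elb" {
  name            = "web-elb"
  security_groups = [aws_security_group.elb_http.id]
  subnets = [
    aws_subnet.public_us_east_1a.id,
    aws_subnet.public_us_east_1b.id,
  ]

  # Distribute traffic evenly across instances in all AZs
  cross_zone_load_balancing = true

  # Mark an instance unhealthy when HTTP port 80 stops responding
  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    interval            = 30
    target              = "HTTP:80/"
  }

  # Forward HTTP traffic on port 80 to the instances
  listener {
    lb_port           = 80
    lb_protocol       = "http"
    instance_port     = 80
    instance_protocol = "http"
  }
}
```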

Here we’re setting the Load Balancer’s name and its own Security Group, so we can make the traffic rules more restrictive later if we want to.

We’re specifying the two subnets in which the Load Balancer will look for launched instances, declaring a listener configuration, and enabling the cross_zone_load_balancing feature, so we can have instances in different Availability Zones.

And finally, we’ve specified the health_check configuration, which determines when the Load Balancer should transition instances between the healthy and unhealthy states, depending on its ability to reach HTTP port 80 on the target instance. If the ELB cannot reach an instance on the specified port, it stops sending it traffic.

Auto Scaling Group

Now we’re ready to create an Auto Scaling Group by describing it using aws_autoscaling_group resource:

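A sketch of the Auto Scaling Group — the max_size, metric list, and names are assumptions:

```hcl
resource "aws_autoscaling_group" "web" {
  name = "${aws_launch_configuration.web.name}-asg"

  min_size         = 1
  desired_capacity = 2
  max_size         = 4

  # Use the ELB health check to decide instance health
  health_check_type = "ELB"
  load_balancers    = [aws_elb.web_elb.id]

  launch_configuration = aws_launch_configuration.web.name

  # Publish group-level metrics to CloudWatch
  enabled_metrics = [
    "GroupMinSize",
    "GroupMaxSize",
    "GroupDesiredCapacity",
    "GroupInServiceInstances",
    "GroupTotalInstances",
  ]
  metrics_granularity = "1Minute"

  # Spread instances across both public subnets (two AZs)
  vpc_zone_identifier = [
    aws_subnet.public_us_east_1a.id,
    aws_subnet.public_us_east_1b.id,
  ]

  lifecycle {
    create_before_destroy = true
  }

  # Propagated to every instance launched by this group
  tag {
    key                 = "Name"
    value               = "web"
    propagate_at_launch = true
  }
}
```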

Here we have the following configuration:

  1. There will be a minimum of one instance serving the traffic.
  2. The Auto Scaling Group will be launched with 2 instances, each placed in a separate Availability Zone in a different Subnet.
  3. The Auto Scaling Group will get information about instance availability from the ELB.
  4. We’ve set up collection of several CloudWatch metrics to monitor the Auto Scaling Group’s state.
  5. Each instance launched from this Auto Scaling Group will have its Name tag set to web.

Now we are almost ready. Let’s get the Load Balancer DNS name as an output from the Terraform infrastructure description:

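A sketch of the output declaration (the output name is an assumption):

```hcl
output "elb_dns_name" {
  value = aws_elb.web_elb.dns_name
}
```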

And try to deploy our infrastructure:

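Deployment follows the usual Terraform workflow, assuming the configuration above lives in the current directory:

```sh
# Download the AWS provider plugin
terraform init

# Preview the changes
terraform plan

# Create the infrastructure; review the plan and confirm
terraform apply
```

Once the apply finishes, Terraform prints the Load Balancer DNS name output.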

Starting from this point, you can open the provided ELB URL in your browser and refresh the page several times to see the different local IP addresses of your newly launched instances.

In a couple of minutes, you’ll see an alarm fire in CloudWatch.

This will trigger the termination of one of the two instances.

Summary

In this article, you’ve learned how to set up a dynamic Auto Scaling Group and a Load Balancer to distribute traffic to your instances across several Availability Zones. I hope this article was helpful. If so, please help us spread it to the world! Stay tuned!
