This is the capstone project of my 4-part portfolio, demonstrating skills in traditional infrastructure automation. Where Projects 1-3 (IDP Flagship, FinOps, Observability) focused on a modern, containerized EKS platform, this project proves breadth by building and managing a highly-available, auto-scaling fleet of traditional web servers.
This "ops toolkit" uses Terraform to provision the production-grade infrastructure (ALB + ASG) and Ansible to configure and manage the fleet in real-time.
- Infrastructure as Code (Terraform): Deploys a production-ready AWS stack, including an Application Load Balancer (ALB), an Auto Scaling Group (ASG) maintaining a fleet of 3 servers, and all necessary networking and security.
- Dynamic Inventory (Ansible): Uses the `aws_ec2` Ansible plugin to discover and manage servers in the ASG dynamically. No static IP addresses are used, which is critical for a self-healing fleet.
- "Day 1" Fleet Configuration: A single playbook (`playbook-setup-webserver.yml`) configures all servers in parallel, installing and hardening Nginx, UFW, and Fail2ban.
- "Day 2" Rolling Restart: A "break-glass" playbook (`playbook-rolling-restart.yml`) that safely restarts services one server at a time (a "rolling restart") to simulate a production-safe maintenance operation without downtime.
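The dynamic inventory behavior above is driven by a small plugin config file. A minimal sketch of what `aws_ec2.yml` could look like — the region, tag key, and tag value here are illustrative assumptions, not copied from this repo:

```yaml
# aws_ec2.yml — minimal sketch of an aws_ec2 dynamic inventory config.
# Region and tag filter are illustrative assumptions.
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
filters:
  # Only manage running instances that the ASG tagged for this project
  tag:ansible-project: project4-fleet
  instance-state-name: running
# Build groups from instance tags, e.g. one group per project tag value
keyed_groups:
  - key: tags
    prefix: tag
# Connect over the public IP; a bastion/private-subnet setup would differ
compose:
  ansible_host: public_ip_address
```

Because the filter matches tags rather than IP addresses, instances replaced by the ASG are picked up automatically on the next run.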
- A user accesses the Application Load Balancer (ALB) DNS Name.
- The ALB forwards the request to one of the three EC2 Instances in the Auto Scaling Group.
- Ansible, running on our local machine, uses the Dynamic Inventory script to query the AWS API, find the IPs of all three instances, and apply configurations via SSH.
This repository contains all the Terraform and Ansible code needed to build and manage the fleet.
```
/Ansible-Fleet-Automation
├── README.md
├── terraform/
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
└── ansible/
    ├── ansible.cfg                    # Configures Ansible to use dynamic inventory
    ├── aws_ec2.yml                    # The dynamic inventory config file
    ├── playbook-setup-webserver.yml   # "Day 1" build playbook
    ├── playbook-rolling-restart.yml   # "Day 2" operations playbook
    └── templates/
        └── index.html.j2              # Jinja2 template for the custom homepage
```
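The homepage template is what lets each server identify itself during verification. One plausible shape for `templates/index.html.j2`, assuming the playbook renders it with standard Ansible facts (the real template in this repo may differ):

```jinja2
<!-- index.html.j2 — illustrative sketch; the repo's actual template may differ -->
<html>
  <body>
    <!-- ansible_hostname is a built-in fact; on EC2 it typically looks like ip-10-x-x-x -->
    <h1>Hello from server: {{ ansible_hostname }}</h1>
    <p>Served by an Nginx fleet managed via Ansible dynamic inventory.</p>
  </body>
</html>
```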
A step-by-step guide to provision the infrastructure and configure the fleet.
- Terraform & AWS CLI configured.
- Ansible installed (`pip install ansible`).
- AWS SDK for Python (`pip install boto3`). This is required by the Ansible `aws_ec2` dynamic inventory plugin.
This is the key Ansible will use to connect. Our Terraform script will automatically find and provision this key.
```bash
# Create a new SSH key at ~/.ssh/ansible-key
# (Press Enter for no passphrase)
ssh-keygen -t rsa -b 2048 -f ~/.ssh/ansible-key
```

(This command creates the `~/.ssh/ansible-key` private key and the `~/.ssh/ansible-key.pub` public key.)
This builds the ALB, ASG, Launch Template, and Security Groups.
- Navigate to the Terraform directory:

```bash
cd Ansible-Fleet-Automation/terraform
```

- Initialize and apply Terraform:

```bash
terraform init
terraform apply
```
- Terraform will run for 3-5 minutes. When finished, it will output the DNS name of your load balancer. Copy this URL.
```
Outputs:

alb_dns_name = "project4-alb-1234567890.us-east-1.elb.amazonaws.com"
```
- Navigate to the Ansible directory:

```bash
cd ../ansible
```

- Wait 60 seconds for the new EC2 instances to boot and register with the AWS API.
- Run the dynamic inventory plugin to "ping" your new fleet:

```bash
# This command lists all "hosts" (servers) that Ansible found.
# It proves the dynamic inventory is working.
ansible-inventory -i aws_ec2.yml --graph
```

You should see output showing your three new servers grouped under the `ansible-project_project4-fleet` tag.
This command will configure all 3 servers in parallel.

```bash
# This will take 2-3 minutes
ansible-playbook -i aws_ec2.yml playbook-setup-webserver.yml
```

Verification:
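For context, a minimal sketch of what a "Day 1" playbook like this might contain. The target group name, package manager, and task details are assumptions for illustration, not the repo's exact playbook:

```yaml
# playbook-setup-webserver.yml — illustrative sketch, not the repo's actual playbook.
- name: Configure and harden the web fleet
  hosts: tag_ansible_project_project4_fleet   # assumed group name from the dynamic inventory
  become: true
  tasks:
    - name: Install Nginx, UFW, and Fail2ban
      apt:
        name: [nginx, ufw, fail2ban]
        state: present
        update_cache: true

    - name: Deploy the custom homepage from the Jinja2 template
      template:
        src: templates/index.html.j2
        dest: /var/www/html/index.html

    - name: Allow SSH and HTTP through UFW
      community.general.ufw:
        rule: allow
        name: "{{ item }}"
      loop: ["OpenSSH", "Nginx HTTP"]

    - name: Enable UFW with a default-deny policy
      community.general.ufw:
        state: enabled
        policy: deny

    - name: Ensure Nginx and Fail2ban are running and enabled on boot
      service:
        name: "{{ item }}"
        state: started
        enabled: true
      loop: [nginx, fail2ban]
```

Running this against the dynamic inventory applies the same hardening to every instance the ASG has launched, current or future.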
Open your web browser and paste in the `alb_dns_name` URL from Step 2.
You will see "Hello from server: ip-10-x-x-x".
Hit refresh several times. You will see the hostname change as the ALB sends you to the different servers in the fleet.
Now, simulate a safe operational task.

```bash
ansible-playbook -i aws_ec2.yml playbook-rolling-restart.yml
```

Watch the terminal. You will see it restart Nginx on one server at a time (due to `serial: 1`), proving a safe, rolling update with zero downtime.
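The zero-downtime behavior comes from Ansible's `serial` keyword, which caps how many hosts run the play at once. A minimal sketch of such a playbook — the host group and health-check details are assumptions, not the repo's exact code:

```yaml
# playbook-rolling-restart.yml — illustrative sketch of a serial: 1 rolling restart.
- name: Rolling restart of Nginx across the fleet
  hosts: tag_ansible_project_project4_fleet   # assumed dynamic inventory group
  become: true
  serial: 1          # one host at a time; the ALB keeps serving from the others
  tasks:
    - name: Restart Nginx on this host
      service:
        name: nginx
        state: restarted

    - name: Wait until Nginx answers locally before moving to the next host
      uri:
        url: http://localhost/
        status_code: 200
      register: health
      retries: 5
      delay: 3
      until: health.status == 200
```

The health check matters: without it, a failed restart on server 1 would still let Ansible proceed to servers 2 and 3, taking the whole fleet down.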
This is important! This infrastructure runs 3 servers and an ALB, which will continue to incur AWS charges until destroyed.
```bash
cd ../terraform
terraform destroy
```

Ajay Kumar