Extraordinarytechy/ansible-fleet-automation
Satellite Project 4: Ansible Fleet Automation

This is the capstone project of my 4-part portfolio, demonstrating skills in traditional infrastructure automation. Where Projects 1-3 (IDP Flagship, FinOps, Observability) focused on a modern, containerized EKS platform, this project proves breadth by building and managing a highly-available, auto-scaling fleet of traditional web servers.

This "ops toolkit" uses Terraform to provision the production-grade infrastructure (ALB + ASG) and Ansible to configure and manage the fleet in real-time.

Key Features

  • Infrastructure as Code (Terraform): Deploys a production-ready AWS stack, including an Application Load Balancer (ALB), an Auto Scaling Group (ASG) that maintains a fleet of three servers, and all necessary networking and security groups.
  • Dynamic Inventory (Ansible): Uses the aws_ec2 Ansible plugin to discover and manage servers in the ASG dynamically. No static IP addresses are used, which is critical for a self-healing fleet.
  • "Day 1" Fleet Configuration: A single playbook (playbook-setup-webserver.yml) configures all servers in parallel, installing and hardening Nginx, UFW, and Fail2ban.
  • "Day 2" Rolling Restart: A "break-glass" playbook (playbook-rolling-restart.yml) that safely restarts services one server at a time (a "rolling restart"), simulating a production-safe maintenance operation with zero downtime.

Architecture Overview

  1. A user accesses the Application Load Balancer (ALB) DNS Name.
  2. The ALB forwards the request to one of the three EC2 Instances in the Auto Scaling Group.
  3. Ansible, running on your local machine, uses the aws_ec2 dynamic inventory plugin to query the AWS API, discover the IPs of all three instances, and apply configuration via SSH.
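The Ansible half of this flow is wired together in ansible.cfg. A minimal sketch of that file, assuming the file names from the project tree (the remote user and exact settings are assumptions, not confirmed by this repo):

```ini
; ansible.cfg — hypothetical sketch; actual settings may differ
[defaults]
inventory = aws_ec2.yml              ; default to the dynamic inventory file
remote_user = ubuntu                 ; assumed AMI login user
private_key_file = ~/.ssh/ansible-key
host_key_checking = False            ; fleet instances are ephemeral

[inventory]
enable_plugins = aws_ec2
```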

Project Structure

This repository contains all the Terraform and Ansible code needed to build and manage the fleet.


/Ansible-Fleet-Automation
├── README.md
├── terraform/
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
└── ansible/
    ├── ansible.cfg                   # Configures Ansible to use dynamic inventory
    ├── aws_ec2.yml                   # The dynamic inventory config file
    ├── playbook-setup-webserver.yml  # "Day 1" build playbook
    ├── playbook-rolling-restart.yml  # "Day 2" operations playbook
    └── templates/
        └── index.html.j2             # Jinja2 template for the custom homepage

How to Deploy and Run

A step-by-step guide to provision the infrastructure and configure the fleet.

Prerequisites

  • Terraform & AWS CLI configured.
  • Ansible installed (pip install ansible).
  • AWS SDK for Python (pip install boto3). This is required by the Ansible aws_ec2 dynamic inventory plugin.

Step 1: Create Your SSH Key Pair

This is the key Ansible will use to connect to the fleet; the Terraform code will automatically find it on disk and provision it as an AWS key pair.

# Create a new SSH key at ~/.ssh/ansible-key
# (Press Enter for no passphrase)
ssh-keygen -t rsa -b 2048 -f ~/.ssh/ansible-key

(This command creates the ~/.ssh/ansible-key private key and ~/.ssh/ansible-key.pub public key.)

Step 2: Provision Infrastructure (Terraform)

This builds the ALB, ASG, Launch Template and Security Groups.

  1. Navigate to the Terraform directory:
    cd Ansible-Fleet-Automation/terraform
  2. Initialize and apply Terraform:
    terraform init
    terraform apply
  3. Terraform will run for 3-5 minutes. When finished, it will output the DNS name of your load balancer. Copy this URL.
    Outputs:
    
    alb_dns_name = "project4-alb-1234567890.us-east-1.elb.amazonaws.com"
    
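For reference, that output could be defined in outputs.tf roughly as follows (a sketch; the resource label aws_lb.project4 is assumed, not taken from this repo):

```hcl
# outputs.tf — hypothetical sketch; the resource label "project4" is assumed
output "alb_dns_name" {
  description = "Public DNS name of the Application Load Balancer"
  value       = aws_lb.project4.dns_name
}
```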

Step 3: Verify Dynamic Inventory (Ansible)

  1. Navigate to the Ansible directory:
    cd ../ansible
  2. Wait 60 seconds for the new EC2 instances to boot and register with the AWS API.
  3. Query the dynamic inventory to confirm Ansible can see your new fleet:
    # This command lists all hosts (servers) that Ansible discovered,
    # proving the dynamic inventory is working.
    ansible-inventory -i aws_ec2.yml --graph
    You should see output grouping your three new servers under the ansible-project_project4-fleet tag group.
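A minimal sketch of what aws_ec2.yml might contain, inferred from the region in the sample output and the group name above (the exact tag key and filter are assumptions):

```yaml
# aws_ec2.yml — hypothetical sketch; tag names inferred from the group shown above
plugin: aws_ec2
regions:
  - us-east-1
filters:
  # Only manage instances launched by this project's fleet
  tag:ansible-project: project4-fleet
keyed_groups:
  # Produces groups like "ansible-project_project4-fleet"
  - key: tags['ansible-project']
    prefix: ansible-project
```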

Step 4: Run "Day 1" Setup Playbook

This command will configure all 3 servers in parallel.

# This will take 2-3 minutes
ansible-playbook -i aws_ec2.yml playbook-setup-webserver.yml
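The Day 1 playbook could be shaped roughly like this sketch, with the package list taken from the feature summary above (the host pattern and use of apt, which assumes an Ubuntu AMI, are assumptions):

```yaml
# playbook-setup-webserver.yml — hypothetical sketch, not the repo's actual playbook
- name: Day 1 fleet configuration
  hosts: ansible-project_project4-fleet
  become: true
  tasks:
    - name: Install Nginx, UFW, and Fail2ban
      ansible.builtin.apt:
        name: [nginx, ufw, fail2ban]
        state: present
        update_cache: true

    - name: Deploy the custom homepage from the Jinja2 template
      ansible.builtin.template:
        src: templates/index.html.j2
        dest: /var/www/html/index.html
```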

Verification: Open your web browser and paste in the alb_dns_name from Step 2. You should see "Hello from server: ip-10-x-x-x". Refresh several times; the hostname changes as the ALB routes you to different servers in the fleet.
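The per-server message that makes this round-robin visible could come from a template like the following sketch of templates/index.html.j2 (the markup is assumed; ansible_hostname is a standard Ansible fact, and the message matches the verification text above):

```html
<!-- index.html.j2 — hypothetical sketch -->
<html>
  <body>
    <h1>Hello from server: {{ ansible_hostname }}</h1>
  </body>
</html>
```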

Step 5: Run "Day 2" Rolling Restart Playbook

Now, simulate a safe, operational task.

ansible-playbook -i aws_ec2.yml playbook-rolling-restart.yml

Watch the terminal. You will see it restart Nginx on one server at a time (due to serial: 1), proving a safe, rolling update with zero downtime.
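The serial: 1 behavior described above could be expressed in a playbook shaped roughly like this (a sketch; the host pattern and the health-check step are assumptions):

```yaml
# playbook-rolling-restart.yml — hypothetical sketch, not the repo's actual playbook
- name: Rolling restart of Nginx across the fleet
  hosts: ansible-project_project4-fleet
  become: true
  serial: 1              # one server at a time => zero-downtime rolling update
  tasks:
    - name: Restart Nginx
      ansible.builtin.service:
        name: nginx
        state: restarted

    - name: Wait until Nginx is serving traffic again before moving on
      ansible.builtin.wait_for:
        port: 80
        timeout: 30
```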

Step 6: Clean Up

This is important! This stack runs three EC2 instances and an ALB, which incur costs for as long as they exist.

cd ../terraform
terraform destroy

Author

Ajay Kumar
