This repository contains Terragrunt configurations for managing AWS infrastructure across multiple environments (dev and prod).
The infrastructure includes:
- VPC: Multi-AZ VPC with public and private subnets
- EKS Cluster: Kubernetes 1.28 with ARM64 (Graviton) node groups
- Bastion Host: Secure SSH access point with SSM support
- S3 Buckets: Separate buckets for static content and user uploads
- CloudFront: CDN distribution for static content delivery
- Route53: DNS management with custom records
```
terragrunt/
├── terragrunt.hcl       # Root configuration with S3 backend
├── modules/             # Reusable Terraform modules
│   ├── vpc/
│   ├── eks/
│   ├── bastion/
│   ├── s3/
│   ├── cloudfront/
│   └── route53/
├── dev/                 # Development environment
│   ├── terragrunt.hcl
│   ├── vpc/
│   ├── eks/
│   ├── bastion/
│   ├── s3/
│   ├── cloudfront/
│   └── route53/
└── prod/                # Production environment
    ├── terragrunt.hcl
    ├── vpc/
    ├── eks/
    ├── bastion/
    ├── s3/
    ├── cloudfront/
    └── route53/
```
- AWS CLI: Configure with appropriate credentials

  ```shell
  aws configure
  ```

- Terraform: Version >= 1.5

  ```shell
  terraform version
  ```

- Terragrunt: Latest version

  ```shell
  terragrunt --version
  ```

- Environment Variables: Set your AWS account ID

  ```shell
  export AWS_ACCOUNT_ID="123456789012"
  ```
Edit the root `terragrunt.hcl` to set:
- AWS region
- Account ID
- Backend bucket name
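As a reference, the root backend block typically looks something like the sketch below. The bucket name, region, and lock table name are placeholders, not this repository's actual values:

```hcl
remote_state {
  backend = "s3"

  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }

  config = {
    bucket         = "my-terraform-state-${get_env("AWS_ACCOUNT_ID")}" # placeholder name
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-west-2"                                       # set your region
    encrypt        = true
    dynamodb_table = "terraform-locks"                                 # placeholder name
  }
}
```

Keying the state path on `path_relative_to_include()` gives each component its own state file under one bucket.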
For each environment (dev/prod):
- VPC CIDR: Update `vpc_cidr` in `{env}/vpc/terragrunt.hcl`
- Domain Names: Update domain names in `{env}/route53/terragrunt.hcl`
- SSH Access: Add your SSH public key in `{env}/bastion/terragrunt.hcl`
- IP Restrictions: Configure `allowed_cidr_blocks` for bastion access
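A sketch of what one of these per-environment files might look like, with illustrative values only:

```hcl
# dev/vpc/terragrunt.hcl (illustrative values, not the repository's actual ones)
include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "../../modules/vpc"
}

inputs = {
  vpc_cidr = "10.0.0.0/16"   # pick a non-overlapping range per environment
}
```

The same pattern applies to the bastion component, where `allowed_cidr_blocks` and your SSH public key are set in its `inputs` block.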
The first deployment creates the S3 state bucket and DynamoDB lock table automatically:

```shell
cd dev/vpc
terragrunt init
```

Deploy all resources in dependency order:

```shell
# Deploy VPC first
cd dev/vpc
terragrunt apply

# Deploy EKS cluster
cd ../eks
terragrunt apply

# Deploy remaining resources
cd ../bastion && terragrunt apply
cd ../s3 && terragrunt apply
cd ../cloudfront && terragrunt apply
cd ../route53 && terragrunt apply
```

Alternatively, use run-all to deploy everything:

```shell
cd dev
terragrunt run-all apply
```

- Creates VPC with configurable CIDR
- 3 public and 3 private subnets across AZs
- NAT gateways (single for dev, multi-AZ for prod)
- Internet gateway and route tables
- Tagged for EKS integration
- Kubernetes version 1.28
- ARM64 (Graviton) node groups
- OIDC provider for IRSA
- Essential add-ons (VPC CNI, CoreDNS, kube-proxy)
- CloudWatch logging
- Environment-specific scaling
Dev Configuration:
- SPOT instances
- 2 min, 3 max nodes
- t4g.medium instances
Prod Configuration:
- ON_DEMAND instances
- 3 min, 10 max nodes
- t4g.large/xlarge instances
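As an illustration of how the two profiles differ, a prod EKS input block might resemble the following. The variable names are assumptions for the sketch, not necessarily the actual module's interface:

```hcl
# prod/eks/terragrunt.hcl (illustrative)
inputs = {
  kubernetes_version = "1.28"

  node_groups = {
    default = {
      capacity_type  = "ON_DEMAND"                  # "SPOT" in dev
      instance_types = ["t4g.large", "t4g.xlarge"]  # Graviton (ARM64)
      ami_type       = "AL2_ARM_64"
      min_size       = 3
      max_size       = 10
    }
  }

  endpoint_private_access = true   # private API endpoint in prod
  endpoint_public_access  = false
}
```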
- Private API endpoint
- ARM-based Amazon Linux 2
- Elastic IP for consistent access
- SSM Session Manager support
- Security group with SSH access
- Optional SSH key authentication
Access via SSM:

```shell
aws ssm start-session --target <instance-id>
```

Access via SSH:

```shell
ssh -i ~/.ssh/id_rsa ec2-user@<bastion-ip>
```

Two buckets are created:

- Static Content Bucket
  - Server-side encryption
  - CORS configuration
  - CloudFront OAI access
  - Public access blocked

- User Content Bucket
  - Versioning (prod only)
  - Lifecycle rules (prod only)
  - Multipart upload cleanup
  - Glacier archival after 180 days
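The Glacier archival and multipart cleanup rules can be expressed with the standard `aws_s3_bucket_lifecycle_configuration` resource; a sketch, with resource names and the cleanup window assumed:

```hcl
resource "aws_s3_bucket_lifecycle_configuration" "user_content" {
  bucket = aws_s3_bucket.user_content.id  # assumed bucket resource name

  rule {
    id     = "archive-old-objects"
    status = "Enabled"

    # Glacier archival after 180 days
    transition {
      days          = 180
      storage_class = "GLACIER"
    }

    # Clean up stalled multipart uploads (the 7-day window is an assumed value)
    abort_incomplete_multipart_upload {
      days_after_initiation = 7
    }
  }
}
```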
- HTTPS-only distribution
- Custom caching behaviors
- CORS support
- Custom error pages
- Optional custom domains
- Geo-restriction support
Cache Behaviors:
- Default: 1 hour TTL
- Static assets: 1 day TTL
- Images: 1 day TTL
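These TTLs map onto CloudFront cache behaviors roughly as sketched below; the origin id and path pattern are assumptions, and cache/origin policy settings are omitted for brevity:

```hcl
# Default: 1 hour TTL
default_cache_behavior {
  target_origin_id       = "static-content"   # assumed origin id
  viewer_protocol_policy = "redirect-to-https"
  allowed_methods        = ["GET", "HEAD", "OPTIONS"]
  cached_methods         = ["GET", "HEAD"]
  min_ttl                = 0
  default_ttl            = 3600    # 1 hour
  max_ttl                = 86400
}

# Static assets and images: 1 day TTL
ordered_cache_behavior {
  path_pattern           = "/static/*"        # assumed path pattern
  target_origin_id       = "static-content"
  viewer_protocol_policy = "redirect-to-https"
  allowed_methods        = ["GET", "HEAD"]
  cached_methods         = ["GET", "HEAD"]
  min_ttl                = 0
  default_ttl            = 86400   # 1 day
  max_ttl                = 86400
}
```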
- Hosted zone management
- CloudFront A/AAAA records
- Bastion host record
- Custom DNS records
- Domain verification TXT records
- CAA records for SSL
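For example, the CloudFront alias and CAA records could be declared like this (zone, resource, and domain names are placeholders):

```hcl
# A/AAAA alias pointing at the CloudFront distribution
resource "aws_route53_record" "cdn" {
  zone_id = aws_route53_zone.main.zone_id   # assumed zone resource name
  name    = "cdn.example.com"               # placeholder domain
  type    = "A"

  alias {
    name                   = aws_cloudfront_distribution.cdn.domain_name
    zone_id                = aws_cloudfront_distribution.cdn.hosted_zone_id
    evaluate_target_health = false
  }
}

# CAA record restricting certificate issuance to Amazon's CA
resource "aws_route53_record" "caa" {
  zone_id = aws_route53_zone.main.zone_id
  name    = "example.com"
  type    = "CAA"
  ttl     = 300
  records = ["0 issue \"amazon.com\""]
}
```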
Update kubeconfig:

```shell
aws eks update-kubeconfig --name dev-eks-cluster --region us-west-2
kubectl get nodes
```

Dev Configuration:
- Single NAT gateway
- SPOT instances for EKS nodes
- Smaller instance types
- Reduced log retention (7 days)
- Versioning disabled
- Lifecycle rules disabled

Prod Configuration:
- Multi-AZ NAT gateways for HA
- ON_DEMAND instances
- Larger instance types
- Extended log retention (30 days)
- Versioning enabled
- Lifecycle rules enabled
```shell
cd dev/vpc
terragrunt init -upgrade
terragrunt apply
```

Edit `kubernetes_version` in `{env}/eks/terragrunt.hcl` and apply:

```shell
cd dev/eks
terragrunt apply
```

Destroy in reverse dependency order:

```shell
cd dev
terragrunt run-all destroy
```

- Bastion Access: Restrict `allowed_cidr_blocks` to known IPs
- SSH Keys: Use SSH keys instead of passwords
- EKS API: Use private endpoint for production
- S3 Encryption: Server-side encryption enabled by default
- CloudFront: HTTPS-only with modern TLS versions
- IAM Roles: Use IRSA for pod-level permissions
- VPC: Private subnets for EKS nodes
If the S3 state bucket already exists:

```shell
terragrunt init -reconfigure
```

Ensure dependencies are deployed first:

```shell
terragrunt graph-dependencies
```

Release a stuck DynamoDB lock:

```shell
terragrunt force-unlock <lock-id>
```

Check node status:

```shell
kubectl get nodes
kubectl describe node <node-name>
```

Dev Configuration:
- Single NAT gateway
- SPOT instances
- Smaller resources
- Public EKS endpoint
- Minimal log retention

Prod Configuration:
- Multi-AZ NAT gateways
- ON_DEMAND instances
- Larger resources
- Private EKS endpoint
- Extended log retention
- Versioning enabled
For issues or questions:
- Check Terragrunt logs:

  ```shell
  terragrunt apply --terragrunt-log-level debug
  ```

- Review AWS CloudWatch logs
- Consult AWS documentation
This infrastructure code is provided as-is for reference and customization.