A Django web application deployed on AWS EKS using Terraform for infrastructure management. Features a one-click deployment script that provisions infrastructure with Terraform, builds and pushes Docker images from your local machine, and deploys the application to Kubernetes.
I chose Django for the front end because it is a relatively batteries-included framework and is written in Python, the language I know best. I have worked with Vue, Next.js, and React with TypeScript in the past, but those would require much more upfront work in a situation where extensibility is not a requirement.
The oneclick.sh script was written to be as easy to use as possible, with safeguards such as pre-run checks and validation. It could easily be ported to run in CI/CD on any major platform and broken up into separate steps. In my testing, the script recovers from mid-run failures, mainly thanks to Terraform's resilience; the remaining steps are either idempotent or can simply be re-run, such as the Docker image build.
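To illustrate, the pre-run safeguards can be approximated with a small fail-fast guard; the function name and final demo call below are illustrative, not the script's actual code:

```shell
#!/usr/bin/env bash
# Hypothetical fail-fast guard: refuse to run if a required CLI is absent.
check_tools() {
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "Missing required tool: $tool" >&2
      return 1
    fi
  done
  echo "All prerequisites found"
}

# oneclick.sh checks something like: check_tools docker terraform aws
# Demo with tools guaranteed to exist on a POSIX system:
check_tools sh ls
```

Running the checks up front means a missing tool fails the script in seconds instead of halfway through a Terraform apply.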
Integrating with ECR was a no-brainer. If this were production I would use a more deliberate tagging scheme, but in this case I simply want the latest image to run.
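For reference, a timestamp-based tag like the one the deploy script pushes can be generated as follows (a minimal sketch; the image name is only illustrative):

```shell
# Derive a unique, sortable tag from the build time rather than
# overwriting :latest on every push.
TAG="$(date -u +%Y%m%d-%H%M%S)"
IMAGE="interview-challenge:${TAG}"
echo "$IMAGE"
```

Unique tags make rollbacks trivial and pair well with ECR lifecycle policies that expire old images.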
For the EKS cluster, I manually configured the basic required add-ons, based on what I've seen and used in my production clusters. There is more I would want to configure around metrics, logging, and observability, but since we're not integrating with anything else at this stage, it's unnecessary.
- Django 5.2 - Modern Python web framework
- AWS EKS - Managed Kubernetes cluster with auto-scaling
- Terraform IaC - Complete infrastructure as code
- One-Click Deployment - Automated build, push, and deploy
- ECR Integration - Private Docker registry with lifecycle policies
- ALB Ingress - AWS Load Balancer Controller for external access
- Docker - For building container images
- Terraform (>= 1.13.3) - Infrastructure provisioning
- AWS CLI - AWS authentication and operations
AWS account with appropriate permissions:
- VPC, EC2, EKS
- ECR (Elastic Container Registry)
- IAM roles and policies
- Application Load Balancer
AWS CLI configured with credentials:
```bash
aws configure
```
The oneclick.sh script handles everything from infrastructure provisioning to application deployment.
```bash
./oneclick.sh
```

What it does:
- Validates prerequisites (Docker, Terraform, AWS CLI)
- Provisions AWS infrastructure (VPC, EKS cluster, ECR repository)
- Builds Docker image for linux/amd64
- Pushes image to ECR with unique timestamp tag
- Deploys application to Kubernetes
- Waits for ALB to be ready (up to 5 minutes)
- Outputs the application URL
Expected output:
```
Checking prerequisites...
Verifying AWS credentials...
Stage 1: Provisioning infrastructure...
[Terraform output showing infrastructure changes]
Building and pushing Docker image...
Deploying application to Kubernetes...
[Terraform output showing deployment changes]
Waiting for ALB (up to 5 minutes)...
Application URL: http://k8s-default-interview-xxxxx.us-west-2.elb.amazonaws.com
```
```bash
./oneclick.sh --delete
```

What it does:
- Removes Kubernetes workloads (deployment, service, ingress)
- Empties ECR repository (deletes all images)
- Destroys all AWS infrastructure (EKS, VPC, subnets, etc.)
Alternative flags:

```bash
./oneclick.sh -d
./oneclick.sh delete
./oneclick.sh --destroy
./oneclick.sh destroy
```

To run the application locally:

```bash
cd django-site
./run-local-django.sh
```

This script:
- Creates a Python virtual environment
- Installs dependencies from requirements.txt
- Collects static files
- Starts the Django development server on http://127.0.0.1:8000
Or, manually:

```bash
cd django-site/interview_challenge
python3 -m venv venv
source venv/bin/activate
pip install -r ../requirements.txt
python manage.py collectstatic --noinput
python manage.py runserver
```

Or with Docker:

```bash
cd django-site
docker build -t interview-challenge:latest .
docker run -p 8000:8000 interview-challenge:latest
```

Access at http://localhost:8000
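For context, the container image for this setup typically looks something like the sketch below. This is an assumption about the image's shape, based on the Gunicorn and WhiteNoise dependencies listed later; the repo's actual Dockerfile in django-site/ may differ:

```dockerfile
# Illustrative sketch only -- the actual django-site/Dockerfile may differ.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY interview_challenge/ .
# WhiteNoise serves the collected static files from inside the container
RUN python manage.py collectstatic --noinput
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "interview_challenge.wsgi:application"]
```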
- VPC - 10.0.0.0/16 with public/private subnets across 3 AZs
- EKS Cluster - Kubernetes 1.33 with managed node group
- ECR Repository - Private Docker registry with lifecycle policies
- ALB - Application Load Balancer via AWS Load Balancer Controller
- IAM Roles - Service accounts and node permissions
- Security Groups - Network access controls
Default configuration (in terraform/terraform.tfvars):
- Region: us-west-2
- Node Instance Type: t3.medium
- Node Group Size: 1-3 nodes (desired: 2)
- EKS Version: 1.33
To customize, edit terraform/terraform.tfvars before running oneclick.sh.
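As an example, overriding the defaults might look like this; the variable names are assumptions and should be checked against terraform/variables.tf:

```hcl
# terraform/terraform.tfvars - illustrative overrides; variable names assumed
region             = "us-east-1"
node_instance_type = "t3.large"
node_group_min     = 2
node_group_max     = 5
node_group_desired = 3
eks_version        = "1.33"
```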
- Deployment: 1 replica, 256Mi-512Mi memory, 250m-500m CPU
- Service: ClusterIP on port 80 → container port 8000
- Ingress: ALB with internet-facing scheme, HTTP on port 80
- Probes: Liveness and readiness checks on /
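A sketch of how those settings map onto the Deployment's container spec (abbreviated, with field values taken from the list above; the container name is assumed):

```yaml
# Abbreviated container spec matching the settings above (name assumed)
containers:
  - name: interview-challenge
    ports:
      - containerPort: 8000
    resources:
      requests:
        memory: "256Mi"
        cpu: "250m"
      limits:
        memory: "512Mi"
        cpu: "500m"
    livenessProbe:
      httpGet:
        path: /
        port: 8000
    readinessProbe:
      httpGet:
        path: /
        port: 8000
```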
```bash
aws eks update-kubeconfig --region us-west-2 --name interview-challenge-eks
kubectl get pods
kubectl get ingress
kubectl logs -l app=interview-challenge
kubectl scale deployment interview-challenge --replicas=3
```

After making code changes:
```bash
./oneclick.sh
```

The script automatically:
- Builds a new image with a unique timestamp tag
- Pushes to ECR
- Updates the Kubernetes deployment (triggers rollout)
```bash
kubectl get deployment interview-challenge
kubectl rollout status deployment/interview-challenge
```

- Home (/) - Welcome page with feature cards
- About (/about/) - Project information and technology stack
To run the Django tests:

```bash
cd django-site/interview_challenge
source venv/bin/activate
python manage.py test
```

To validate the Terraform configuration:

```bash
cd terraform
terraform validate
terraform fmt -check
```

- Django 5.2.7 - Web framework
- Gunicorn 23.0.0 - WSGI server
- WhiteNoise 6.11.0 - Static file serving
- Terraform ~> 1.13.3 - Infrastructure as Code
- AWS EKS - Managed Kubernetes
- AWS ECR - Container registry
- AWS VPC - Network isolation
- AWS ALB - Load balancing
- Deployment - Application workload
- Service - Internal networking
- Ingress - External access via ALB
- Helm - AWS Load Balancer Controller