This project demonstrates a production-grade, highly available containerized web application deployed on Amazon Elastic Kubernetes Service (EKS). It features a fully automated CI/CD pipeline using GitHub Actions, implements dynamic scaling with the Kubernetes Horizontal Pod Autoscaler (HPA), and establishes full-stack observability with Prometheus, Grafana, and Telegram alerting. All underlying AWS infrastructure is provisioned exclusively as code (IaC) using AWS CloudFormation.
- `make deploy-all` provisions the VPC, EKS, ECR, and HPA configuration.
- GitHub Actions builds and pushes Docker images to ECR, then updates the Kubernetes deployment with `github.sha` tags.
- Helm deploys kube-prometheus-stack (Prometheus + Grafana) for observability.
- HPA scales pods based on CPU utilization (>50%).
- Alertmanager routes warnings to a Telegram bot for rapid incident response.
- Cloud Provider: Amazon Web Services (EKS, ECR, Elastic Load Balancing, VPC, IAM)
- Infrastructure as Code (IaC): AWS CloudFormation, Helm
- Containerization: Docker (Multi-stage builds, Alpine Linux)
- Orchestration: Kubernetes (Deployments, Services, HPA, Metrics Server)
- Monitoring & Alerting: Prometheus, Grafana, Alertmanager
- CI/CD Automation: GitHub Actions
- Operations & Notifications: AWS CLI, kubectl, Makefile, Telegram Bot API
Designed and deployed custom VPCs, subnets, security groups, and the EKS cluster purely through CloudFormation templates, completely eliminating manual AWS Console configuration.
Engineered a robust GitHub Actions workflow that automatically builds, tags, and pushes Docker images to Amazon ECR upon every main branch commit.
Utilized the unique Git commit SHA (`github.sha`) as the immutable image tag, ensuring Kubernetes reliably triggers zero-downtime Rolling Updates on every deploy.
Decoupled sensitive AWS Account IDs from Kubernetes manifests using dynamic image injection.
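A sketch of what such a CI step could look like (the repository name `sl-cicd-app`, the container name, and the step layout are illustrative assumptions, not taken from the actual workflow):

```yaml
# Hypothetical GitHub Actions step: build, tag with the commit SHA,
# push to ECR, and inject the image URI at deploy time so no AWS
# Account ID is hardcoded in the Kubernetes manifests.
- name: Build, push, and deploy
  env:
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    IMAGE_TAG: ${{ github.sha }}
  run: |
    docker build -t "$ECR_REGISTRY/sl-cicd-app:$IMAGE_TAG" .
    docker push "$ECR_REGISTRY/sl-cicd-app:$IMAGE_TAG"
    # Changing the tag on every commit forces a rolling update
    kubectl set image deployment/sl-cicd-app \
      app="$ECR_REGISTRY/sl-cicd-app:$IMAGE_TAG"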
Successfully deployed the Kubernetes Metrics Server, applying the `--kubelet-insecure-tls` flag to work around kubelet TLS certificate verification failures that block metrics collection on EKS with the AWS VPC CNI.
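The fix amounts to adding the flag to the Metrics Server container arguments, roughly as follows (a sketch; the exact Deployment layout and any companion flags may differ in practice):

```yaml
# Relevant fragment of the metrics-server Deployment (kube-system).
spec:
  template:
    spec:
      containers:
        - name: metrics-server
          args:
            # Skip kubelet serving-certificate verification
            - --kubelet-insecure-tls
            # Commonly paired with the flag above on EKS (assumption)
            - --kubelet-preferred-address-types=InternalIP
```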
Configured HPA to monitor CPU utilization, automatically scaling Pod replicas up during traffic spikes (threshold > 50%) and scaling down during idle periods to minimize AWS EC2 costs.
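An HPA matching the behaviour described above could look like this sketch (the target Deployment name `sl-cicd-app` and the replica bounds are assumptions):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sl-cicd-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sl-cicd-app
  minReplicas: 1          # scale down to minimize EC2 cost when idle
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # scale up past 50% average CPU
```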
Deployed the kube-prometheus-stack via Helm to monitor cluster health, node metrics, and application performance.
Troubleshooting Highlight: Resolved a critical configuration sync issue between Helm and Alertmanager with a Direct Secret Injection strategy: bypassing the Helm templating engine and writing the Alertmanager configuration as plain YAML directly into its Kubernetes Secret, which routed critical alerts to a Telegram bot with <10s latency.
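Conceptually, the injected configuration is an `alertmanager.yaml` routed to Telegram, written straight into the Alertmanager Secret instead of being templated through Helm values. A sketch (the bot token and chat ID come from the `.env` file; the placeholder values here are assumptions):

```yaml
# Sketch of an Alertmanager config with a Telegram receiver.
route:
  receiver: telegram
receivers:
  - name: telegram
    telegram_configs:
      - bot_token: "<TELEGRAM_BOT_TOKEN>"   # from .env, never committed
        chat_id: -1001234567890             # placeholder chat ID
        parse_mode: HTML
```

This YAML is then base64-encoded into the Alertmanager Secret with `kubectl`, so Helm never touches (or overwrites) it.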
Image Security: Integrated security patching directly into the Dockerfile, resolving high-severity vulnerabilities (e.g., CVE-2022-37434 in zlib) identified by ECR vulnerability scanning via an automated `apk upgrade`.
Credential Protection: Addressed Copilot Security Scan warnings by implementing an Environment Variable (.env) isolation model, preventing the hardcoding of sensitive Telegram API tokens in the codebase.
- AWS CLI configured with administrative access.
- kubectl, helm, and docker installed locally.
- .env file created in the root directory containing your TELEGRAM_BOT_TOKEN and TELEGRAM_CHAT_ID.
- GitHub Repository Secrets configured (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION).
Provision the entire underlying infrastructure (VPC, EKS Cluster, ECR Repository, Metrics Server, and Prometheus Stack) in a strict dependency order:
```shell
make deploy-all
```

Note: CloudFormation and Helm orchestration typically takes 20-25 minutes. Cloud automation is at work! ☕
Once the infrastructure and metrics are ready, push your code. GitHub Actions will automatically build the image, push it to ECR, and deploy it to EKS:
```shell
make git-push m="feat: initial application deployment via CI/CD"
```

(If you are reviving an existing repository, simply navigate to the GitHub Actions tab, select the latest workflow run, and click "Re-run all jobs" to deploy the application into the fresh cluster.)
Retrieve the public-facing URL of your web application:
```shell
make get-url
```

Retrieve your Grafana admin credentials and the public dashboard URL to view real-time cluster metrics:

```shell
make get-grafana-password
make get-grafana-url
```

Paste the Grafana URL into your browser and log in:
- Username: `admin`
- Password: run `make get-grafana-password` to fetch the current admin password.
Once logged into Grafana, do the following:
- Go to Kubernetes / Compute Resources / Cluster.
- In this dashboard, select the CPU Quota panel and open the default tab to view the live per-pod usage breakdown.
- You will see the current number of running pods.
- When you run `make stress-test` or `make start-load-army-test`, the corresponding load generator pods are created and reflected here.
- Go to Alerting / Alert Rules, and search for `highpodcount`.
- You will see the default rule in its normal (inactive) state.
- When the number of `sl-cicd-app` pods exceeds 3, the rule changes from normal to firing and sends an alert to Telegram.
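Such a rule could be expressed as a PrometheusRule resource along these lines (a sketch; the metric, label selector, and `for` duration are assumptions, and the actual rule in the repo may differ):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: highpodcount
spec:
  groups:
    - name: app.rules
      rules:
        - alert: HighPodCount
          # Fires when more than 3 sl-cicd-app pods are running
          expr: count(kube_pod_info{pod=~"sl-cicd-app.*"}) > 3
          for: 1m
          labels:
            severity: warning
          annotations:
            summary: "🚨 EKS Alert: HighPodCount"
```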
To validate the auto-scaling architecture and the Telegram alerting pipeline, trigger a traffic load using two levels of intensity:
Level 1: Light Stress Test (Single Pod generator):
```shell
make stress-test
```

Level 2: Heavy "Load Army" Test (20-replica concurrent bombardment):

```shell
make start-load-army-test
```

Monitor the reaction:
- Run `make status` to see the HPA triggering Pod scale-ups as CPU load crosses the 50% threshold.
- Check your Telegram for "🚨 EKS Alert: HighPodCount" notifications.
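Under the hood, a load generator like the "Load Army" can be as simple as a 20-replica busybox Deployment hammering the app service in a loop — a sketch in which the manifest and the target service name `sl-cicd-app` are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: load-army
spec:
  replicas: 20            # 20 concurrent generators
  selector:
    matchLabels: { app: load-army }
  template:
    metadata:
      labels: { app: load-army }
    spec:
      containers:
        - name: load
          image: busybox
          # Continuously request the app service to drive CPU load
          command: ["/bin/sh", "-c",
                    "while true; do wget -q -O- http://sl-cicd-app; done"]
```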
Stop commands:

```shell
# 1. Light stress test (runs in the foreground)
#    Stop by pressing Ctrl+C in the same terminal
make stress-test

# 2. Heavy load army test
#    Stop via the Make target:
make stop-load-army-test
```

To avoid incurring unnecessary AWS charges, tear down the entire application, monitoring stack, ECR repository, EKS cluster, and VPC networking:

```shell
make destroy-all
```

FinOps Feature: The script intelligently uninstalls Helm charts and releases all AWS Load Balancers before cluster termination to prevent "orphaned" billing resources.
If you found this project helpful and want to support my cloud engineering journey, feel free to buy me a coffee!

