A comprehensive example of a scalable backend architecture using Docker containers for learning purposes. This project demonstrates microservices architecture, containerization, load balancing, and caching strategies.
```
┌─────────────────┐      ┌─────────────────┐      ┌─────────────────┐
│      Nginx      │      │     Node.js     │      │   PostgreSQL    │
│  Load Balancer  │──────│   API Server    │──────│    Database     │
│    (Port 80)    │      │   (Port 3000)   │      │   (Port 5432)   │
└─────────────────┘      └─────────────────┘      └─────────────────┘
         │                        │
         │                        │      ┌─────────────────┐
         │                        └──────│      Redis      │
         │                               │      Cache      │
         └───────────────────────────────│   (Port 6379)   │
                                         └─────────────────┘
```
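The Nginx tier above fans requests out across the API instances. A minimal sketch of what the upstream configuration could look like (directive choices and the `api` service name are illustrative; the project's actual `nginx.conf` may differ):

```nginx
upstream api_servers {
    least_conn;          # route each request to the instance with the fewest active connections
    server api:3000;     # Docker's embedded DNS resolves "api" to all replicas of the service
}

server {
    listen 80;

    location /api/ {
        proxy_pass http://api_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```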
- RESTful API with user authentication and task management
- PostgreSQL database with automatic migrations
- Redis caching for improved performance
- Nginx reverse proxy with load balancing
- JWT authentication with secure password hashing
- Rate limiting and security headers
- Health checks and monitoring endpoints
- Docker Compose orchestration
- Data persistence with Docker volumes
- Docker Desktop or Docker Engine with Docker Compose
- Minimum System Requirements:
- 4GB RAM (8GB recommended for auto-scaling)
- 2 CPU cores (4+ recommended)
- 10GB free disk space
- Available Ports: 80, 3000, 5432, 6379, 8080, 8090
- Operating System: Windows 10/11, macOS, or Linux
This project offers two deployment modes:
- **Development Mode** - Single instances, easy Docker Desktop management
- **Production Auto-scaling** - Docker Swarm with intelligent auto-scaling
Perfect for learning, testing, and development work.
```powershell
# Copy environment template
Copy-Item env.example .env

# Edit .env file with your preferred text editor
notepad .env  # Windows

# Set your database passwords and JWT secret
```

```powershell
# Deploy with automatic image building
.\deploy-dev.ps1 -Build

# Check all services are running
docker-compose -f docker-compose.dev.yml ps

# Test API health
curl http://localhost/api/health
```

- ✅ Single API instance (easy to debug)
- ✅ PostgreSQL database with sample data
- ✅ Redis cache
- ✅ Nginx load balancer
- ✅ Monitoring services
- ✅ Clean container names in Docker Desktop
- ✅ All grouped under "autoscaling_" prefix
Experience real auto-scaling with Docker Swarm orchestration.
```powershell
# Copy and configure environment
Copy-Item env.example .env

# Edit scaling parameters (optional)
# MIN_REPLICAS=2
# MAX_REPLICAS=10
# SCALE_UP_THRESHOLD=80
```

```powershell
# Deploy full auto-scaling system
.\deploy-autoscaling.ps1
```

```powershell
# Watch services scale
docker service ls

# Check autoscaler logs
docker service logs -f scalable-backend-production_autoscaler

# Test scaling with load
.\testing\stress-test-simple.ps1 -MaxConcurrentUsers 50 -TestDurationMinutes 3
```

- ✅ 2-10 API instances (auto-scaling based on load)
- ✅ PostgreSQL with read replicas (auto-scaling)
- ✅ Redis clustering (auto-scaling)
- ✅ Advanced load balancing with service discovery
- ✅ Comprehensive monitoring and metrics
- ✅ Production-ready configuration
- ✅ Docker Desktop Grouping: All containers organized under "scalable-backend-production" project
After deployment, test your setup:
```powershell
# 1. Test API health
curl http://localhost/api/health

# 2. Register a test user
$userData = @{
    email    = "test@example.com"
    username = "testuser"
    password = "password123"
} | ConvertTo-Json
Invoke-RestMethod -Uri "http://localhost/api/users/register" -Method POST -Headers @{"Content-Type"="application/json"} -Body $userData

# 3. Login and get token
$loginData = @{
    email    = "test@example.com"
    password = "password123"
} | ConvertTo-Json
$loginResponse = Invoke-RestMethod -Uri "http://localhost/api/users/login" -Method POST -Headers @{"Content-Type"="application/json"} -Body $loginData

# 4. Create a task (save token from login)
$taskData = @{
    title       = "Learn Docker Auto-scaling"
    description = "Master container orchestration"
    priority    = "high"
} | ConvertTo-Json
Invoke-RestMethod -Uri "http://localhost/api/tasks" -Method POST -Headers @{"Content-Type"="application/json"; "Authorization"="Bearer $($loginResponse.token)"} -Body $taskData
```

```powershell
# API Health
curl http://localhost/api/health/detailed

# Autoscaler Status (if using auto-scaling mode)
curl http://localhost:8080/health

# Metrics (if using auto-scaling mode)
curl http://localhost:8090/metrics
```

- Open Docker Desktop
- Go to Containers tab
- Look for containers grouped by project:
  - Development Mode: `auto-scaling-backend` project group (containers with `autoscaling_` prefix)
  - Production Mode: `scalable-backend-production` project group (containers with `scalable-backend_` prefix)
- All containers should show "Running" status
- Services are organized by type: API (multiple instances), Database, Cache, Load Balancer, Monitoring
```powershell
# Connect to PostgreSQL
docker-compose exec postgres psql -U postgres -d scalable_backend

# In PostgreSQL shell:
# \dt                   -- List tables
# SELECT * FROM users;  -- View users
# \q                    -- Quit
```

```powershell
# All services
docker-compose -f docker-compose.dev.yml logs -f

# Specific service
docker-compose -f docker-compose.dev.yml logs -f api
```

```powershell
# Restart all
docker-compose -f docker-compose.dev.yml restart

# Restart specific service
docker-compose -f docker-compose.dev.yml restart api
```

```powershell
# Development mode
docker-compose -f docker-compose.dev.yml down

# Auto-scaling mode
docker stack rm scalable-backend-production
```
- **Read the Documentation:**
  - `AUTOSCALING_GUIDE.md` - Comprehensive auto-scaling guide
  - `testing/TESTING_GUIDE.md` - API testing guide
  - `testing/STRESS_TEST_README.md` - Load testing guide
- **Try Load Testing:**

  ```powershell
  .\testing\stress-test-simple.ps1 -MaxConcurrentUsers 25 -TestDurationMinutes 1
  ```

- **Explore the API:**
  - Open http://localhost/ in your browser
  - Use the API endpoints listed below
- **Scale and Monitor:**
  - Watch containers scale with load
  - Monitor resource usage with `docker stats`
- **Docker Desktop Management:**
  - Development Mode: Look for "auto-scaling-backend" project group
  - Production Mode: Look for "scalable-backend-production" project group
  - Use project filters to view only your containers
  - Each service type is labeled for easy identification
- `POST /api/users/register` - Register new user
- `POST /api/users/login` - User login
- `GET /api/users/profile` - Get user profile (auth required)
- `POST /api/users/logout` - Logout user (auth required)
- `GET /api/tasks` - Get user tasks (with pagination and filters)
- `POST /api/tasks` - Create new task (auth required)
- `GET /api/tasks/:id` - Get specific task (auth required)
- `PUT /api/tasks/:id` - Update task (auth required)
- `DELETE /api/tasks/:id` - Delete task (auth required)
- `GET /api/tasks/stats/summary` - Get task statistics (auth required)
- `GET /api/health` - Basic health check
- `GET /api/health/detailed` - Detailed health with dependencies
- `GET /api/health/ready` - Kubernetes readiness probe
- `GET /api/health/live` - Kubernetes liveness probe
```bash
curl -X POST http://localhost/api/users/register \
  -H "Content-Type: application/json" \
  -d '{
    "email": "user@example.com",
    "username": "testuser",
    "password": "password123"
  }'
```

```bash
curl -X POST http://localhost/api/users/login \
  -H "Content-Type: application/json" \
  -d '{
    "email": "user@example.com",
    "password": "password123"
  }'
```

```bash
curl -X POST http://localhost/api/tasks \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_JWT_TOKEN" \
  -d '{
    "title": "Learn Docker",
    "description": "Master Docker containerization",
    "priority": "high"
  }'
```

```powershell
# Start all services
docker-compose up -d

# View logs
docker-compose logs -f api

# Restart a specific service
docker-compose restart api

# Stop all services
docker-compose down

# Stop and remove volumes (deletes data)
docker-compose down -v

# Rebuild and start
docker-compose up --build -d
```

```powershell
# Scale API instances
docker-compose up -d --scale api=3

# Execute commands in running containers
docker-compose exec api npm run dev
docker-compose exec postgres psql -U postgres -d scalable_backend
docker-compose exec redis redis-cli
```

```powershell
# All services
docker-compose logs -f

# Specific service
docker-compose logs -f api
docker-compose logs -f postgres
docker-compose logs -f redis
docker-compose logs -f nginx
```

```powershell
docker-compose ps
```

```powershell
docker stats
```

- Email: `demo@example.com`, Password: `demo123`
- Email: `test@example.com`, Password: `demo123`
```powershell
docker-compose exec postgres psql -U postgres -d scalable_backend
```

```sql
-- View all users
SELECT id, email, username, created_at FROM users;

-- View task statistics
SELECT * FROM task_stats;

-- Check recent tasks
SELECT * FROM tasks ORDER BY created_at DESC LIMIT 5;
```

- JWT Authentication with configurable secret
- Password Hashing using bcrypt
- Rate Limiting (100 requests/minute, 5 login attempts/minute)
- Security Headers via Nginx
- Input Validation using Joi
- Non-root Container execution
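The rate limits above (100 requests/minute overall, 5 login attempts/minute) boil down to counting hits per client per time window. A minimal fixed-window sketch of the idea in Node.js; the actual stack likely enforces this in Nginx or via middleware, and the class and method names here are illustrative:

```javascript
// Fixed-window rate limiter sketch. One instance per policy.
class RateLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;        // max requests per window, e.g. 100
    this.windowMs = windowMs;  // window length in ms, e.g. 60000
    this.hits = new Map();     // key (e.g. client IP) -> { count, windowStart }
  }

  // Returns true if the request identified by `key` is allowed.
  allow(key, now = Date.now()) {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // First request, or the previous window expired: start a new window.
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}
```

Mirroring the limits listed above would mean `new RateLimiter(100, 60000)` for general traffic and `new RateLimiter(5, 60000)` keyed to login attempts.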
This project teaches:
- Docker Fundamentals:
  - Dockerfile best practices
  - Multi-stage builds
  - Container networking
  - Volume management
- Docker Compose:
  - Service orchestration
  - Environment variables
  - Health checks
  - Scaling strategies
- Microservices Architecture:
  - Service separation
  - Database per service
  - Inter-service communication
- Production Readiness:
  - Load balancing
  - Caching strategies
  - Monitoring & logging
  - Security best practices
```powershell
# Scale API servers
docker-compose up -d --scale api=3

# Scale with custom compose file
docker-compose -f docker-compose.yml -f docker-compose.scale.yml up -d
```

```powershell
# Use the included PowerShell stress test script
.\testing\stress-test-simple.ps1 -MaxConcurrentUsers 25 -TestDurationMinutes 2
```

- Port conflicts:

  ```bash
  # Check what's using ports
  netstat -tulpn | grep :80
  netstat -tulpn | grep :3000
  ```

- Container won't start:

  ```powershell
  # Check logs
  docker-compose logs service_name

  # Check container status
  docker-compose ps
  ```

- Database connection issues:

  ```powershell
  # Wait for PostgreSQL to be ready
  docker-compose exec postgres pg_isready -U postgres
  ```

- Clear everything and restart:

  ```powershell
  docker-compose down -v
  docker system prune -f
  docker-compose up -d
  ```
If autoscaler is active, Docker Swarm will automatically recreate containers to maintain the desired replica count. Simply deleting individual containers won't work because Swarm will immediately recreate them. Here's how to properly stop the autoscaler:
If you deployed using `deploy-autoscaling.ps1`, you're running Docker Swarm mode:
```powershell
# Stop the entire auto-scaling stack
docker stack rm scalable-backend-production
```

This will:
- Stop all services (API, autoscaler, PostgreSQL, Redis, Nginx, metrics)
- Remove all containers
- Remove the stack network
- Keep your data volumes intact
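Under the hood, the autoscaler periodically compares observed load against the configured thresholds and adjusts the service's replica count between `MIN_REPLICAS` and `MAX_REPLICAS`. A hedged sketch of what that decision step might look like (the real autoscaler's logic lives in its own container and may differ; `downThreshold` is an assumed parameter, since the env template only shows `SCALE_UP_THRESHOLD`):

```javascript
// Sketch of the per-interval scale decision. Parameter names mirror the
// stack's env vars; this is illustrative, not the autoscaler's actual code.
function decideReplicas({ current, cpuPercent, min, max, upThreshold, downThreshold }) {
  if (cpuPercent >= upThreshold && current < max) return current + 1;   // scale up
  if (cpuPercent <= downThreshold && current > min) return current - 1; // scale down
  return current;                                                       // hold steady
}
```

The returned count would then be applied with `docker service scale`, and `COOLDOWN_PERIOD` would suppress further changes for a while to avoid flapping.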
If that doesn't work, let's see what deployment method you used:
```powershell
# Check if Docker Swarm stack is running
docker stack ls

# Check Docker Swarm services
docker service ls

# Check regular Docker Compose containers
docker-compose ps
```

```powershell
# Stop everything
docker stack rm scalable-backend-production

# Verify it's stopped
docker service ls
```

```powershell
# Stop development environment
docker-compose -f docker-compose.dev.yml down

# Or stop original compose setup
docker-compose down
```

```powershell
# Stop ALL Docker containers
docker stop $(docker ps -q)

# Remove ALL containers
docker rm $(docker ps -aq)

# Leave Docker Swarm (if needed)
docker swarm leave --force
```

If the autoscaler is creating too many containers:

```powershell
# 1. Stop the autoscaler service immediately
docker service rm scalable-backend-production_autoscaler

# 2. Scale down API instances
docker service scale scalable-backend-production_api=1

# 3. Stop the entire stack
docker stack rm scalable-backend-production
```

After stopping, verify everything is clean:
```powershell
# Check for any remaining services
docker service ls

# Check for any remaining containers
docker ps -a

# Check for any remaining stacks
docker stack ls

# Check Docker Swarm status
docker info | findstr "Swarm"
```

While stopping, monitor your system:

```powershell
# Monitor Docker resource usage
docker stats

# Check system processes
Get-Process | Where-Object {$_.ProcessName -like "*docker*"}
```

To avoid runaway scaling in the future:
```powershell
# Use this for testing instead of production Swarm
.\deploy-dev.ps1 -Build
```

Create a `.env` file with:

```
MIN_REPLICAS=1
MAX_REPLICAS=3
SCALE_UP_THRESHOLD=90
CHECK_INTERVAL=60
COOLDOWN_PERIOD=300
```

```powershell
# Always monitor when load testing
docker stats
```

If you want to start fresh:
```powershell
# 1. Stop everything
docker stack rm scalable-backend-production
docker-compose down

# 2. Wait for cleanup
Start-Sleep -Seconds 10

# 3. Clean images (optional)
docker image prune -f

# 4. Restart with development mode
.\deploy-dev.ps1 -Build
```

```powershell
# STOP EVERYTHING NOW:
docker stack rm scalable-backend-production

# If that doesn't work:
docker service rm $(docker service ls -q)
docker stop $(docker ps -q)

# Nuclear option:
docker system prune -af --volumes
```

The key is using `docker stack rm scalable-backend-production` instead of trying to delete individual containers. Docker Swarm will keep recreating them until you remove the entire stack service definition.
Copy `env.example` to `.env` and customize:

- `JWT_SECRET` - Change for production
- `DB_PASSWORD` - Use a strong password
- `NODE_ENV` - Set to `production` for production
The setup creates an isolated network for all services to communicate securely.
- PostgreSQL data: `postgres_data` volume
- Redis data: `redis_data` volume