A high-performance, production-ready reverse proxy built with modern Go practices. Reverxy demonstrates advanced backend engineering concepts including load balancing, caching, rate limiting, observability, and graceful shutdown handling.
Reverxy showcases expertise in:
- Systems Programming: Low-level networking, connection pooling, and resource management
- Concurrent Architecture: Efficient goroutine usage, synchronization patterns, and context propagation
- Cloud-Native Design: Docker readiness, Kubernetes-friendly configuration, and observability patterns
- Production Concerns: Health checks, graceful shutdown, configuration management, and security best practices
- Performance Optimization: Minimizing allocations, efficient caching algorithms, and connection reuse
- Multiple Load Balancing Strategies: Round-robin, weighted round-robin, least-connections
- Intelligent Caching: LRU cache with TTL, max-age enforcement, and cache-control header compliance
- Rate Limiting: Fixed-window algorithm with X-Forwarded-For header support for trusted proxies
- Active Health Checking: Configurable intervals, timeouts, and concurrent checks
- Graceful Shutdown: Connection draining and proper resource cleanup on SIGTERM/SIGINT
- Dual-Server Architecture: Separate proxy (`:8080`) and probe (`:8085`) servers for security isolation
- Health Endpoints: `/live`, `/ready`, and `/metrics` for orchestration platform integration
- External Configuration: YAML-based with environment variable overrides (Kubernetes-style)
- Structured Logging: Timestamped, leveled output suitable for log aggregation systems
- Docker Multi-Stage Build: Production-ready minimal container images
- Comprehensive Testing: Unit test coverage with integration test scaffolding
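As an illustration of the first strategy in the list above, a round-robin picker can be as small as a lock-free atomic counter over the backend slice. This is a hypothetical sketch, not Reverxy's actual implementation; the type and field names are assumptions:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// RoundRobin cycles through backends using a lock-free atomic counter,
// so concurrent requests never contend on a mutex just to pick a target.
type RoundRobin struct {
	backends []string
	next     atomic.Uint64
}

// Pick returns the next backend in rotation; safe for concurrent use.
func (rr *RoundRobin) Pick() string {
	n := rr.next.Add(1) - 1
	return rr.backends[n%uint64(len(rr.backends))]
}

func main() {
	rr := &RoundRobin{backends: []string{"http://localhost:8081", "http://localhost:8082"}}
	for i := 0; i < 4; i++ {
		fmt.Println(rr.Pick()) // alternates between the two backends
	}
}
```

Weighted round-robin and least-connections need more bookkeeping (per-backend weights or in-flight counts), but the same pick-from-pool interface applies.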
```
┌─────────────────┐     ┌──────────────────┐     ┌──────────────────┐
│     Client      │────▶│  Reverxy Proxy   │────▶│   Backend Pool   │
└─────────────────┘     └────────┬─────────┘     └────────┬─────────┘
                                 │                        │
                    ┌────────────▼────────────┐   ┌───────▼────────────┐
                    │     Health Checker      │   │    Cache Store     │
                    │   (Active Monitoring)   │   │    (LRU + TTL)     │
                    └────────────┬────────────┘   └────────────────────┘
                                 │
                    ┌────────────▼────────────┐
                    │      Rate Limiter       │
                    │     (Fixed Window)      │
                    └─────────────────────────┘
```
- Go 1.25+ (for source build)
- Docker 20.10+ (for containerized deployment)
- Make (optional, for convenience commands)
```bash
# Clone and enter directory
git clone https://github.com/Lucascluz/reverxy.git
cd reverxy

# Build binary
make build

# Run with default configuration
make run

# Alternative direct build
go build -o reverxy ./cmd/main.go
./reverxy
```

```bash
# Build container image
make docker-build

# Run container (mounting local config)
docker run -d \
  --name reverxy \
  -p 8080:8080 \
  -p 8085:8085 \
  -v $(pwd)/config.yaml:/etc/config/config.yaml:ro \
  lcluz/reverxy:latest

# Or use the convenience target
make docker-run
```

Reverxy uses a comprehensively commented `config.yaml` demonstrating:
- Externalized Configuration: All runtime parameters live outside the binary for environment-specific tuning
- Kubernetes Patterns: Environment variable override (`CONFIG_PATH`) for ConfigMaps/Secrets
- Production Defaults: Sensible defaults with clear documentation for tuning
- Security Considerations: Trusted proxy configuration for secure header forwarding
Key sections:

```yaml
proxy:
  host: "0.0.0.0"        # Bind to all interfaces (container best practice)
  port: "8080"           # Main traffic port
  probe_port: "8085"     # Separate port for health checks (security boundary)
  default_ttl: 5m        # Cache TTL when no backend headers are present
  max_age: 24h           # Maximum cache duration regardless of headers

load_balancer:
  type: "round-robin"    # Algorithm selection

pool:
  health_checker:
    interval: 10s        # Backend health check frequency
    timeout: 2s          # Health check response timeout
  backends:              # Configure backend services
    - name: "backend-1"
      url: "http://localhost:8081"
      health_url: "/health"
```

- Worker Pools: Bounded goroutines for health checking to prevent resource exhaustion
- Context Propagation: Proper timeout and cancellation handling throughout request lifecycle
- Sync Primitives: WaitGroups for graceful shutdown, mutexes for shared state protection
- Channel Patterns: Buffered channels for error reporting and signal handling
- Connection Reuse: HTTP client with optimized transport settings (keep-alive, connection pooling)
- Allocation Minimization: Buffer reuse, pre-allocated slices where beneficial
- Efficient Data Structures: LRU cache implementation with O(1) operations
- Non-blocking I/O: Leveraging Go's netpoll for scalable connection handling
- Graceful Degradation: Continues serving cached responses during backend outages
- Resource Bounds: Maximum connection limits, memory usage controls
- Security Boundaries: Separate ports for traffic vs. management interfaces
- Observability: Structured logs, metrics endpoints, health checks for SRE teams
```bash
# Run unit tests
make test

# Format code according to Go standards
make fmt

# Lint for potential issues
make lint

# Tidy dependencies
make mod-tidy
```

This foundation could be extended with:
- TLS termination with automatic certificate management (Let's Encrypt)
- Advanced load balancing algorithms (consistent hashing, response-time based)
- Circuit breaker patterns for failure isolation
- Distributed tracing integration (OpenTelemetry)
- Prometheus metrics endpoint with detailed latency histograms
- WebSocket support with proper upgrade handling
- Admin API for runtime configuration changes
MIT License - see LICENSE file for details.
Lucas Cluz - Backend Engineer specializing in distributed systems, networking, and cloud-native infrastructure.
⭐ Star this repository if you appreciate the technical depth and would like to see similar projects!