A simple, lightweight HTTP caching proxy server implemented in Python. It acts as an intermediary between clients and origin servers, caching GET and HEAD requests to improve performance and reduce load on upstream servers.
**In-Memory Caching**
- Fast lookups using an in-memory dictionary
- Caches GET and HEAD requests automatically
- Query parameter-aware cache keys for precise matching
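
The in-memory store can be as simple as a dictionary guarded by a lock. A minimal sketch (class and method names here are illustrative, not necessarily the project's actual API in `proxy/cache.py`):

```python
import threading
import time

class CachedResponse:
    """Holds everything needed to replay a response later."""
    def __init__(self, status, headers, body):
        self.status = status
        self.headers = headers
        self.body = body
        self.stored_at = time.time()

class Cache:
    """Thread-safe, dictionary-backed response cache."""
    def __init__(self):
        self._store = {}
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            return self._store.get(key)

    def put(self, key, response):
        with self._lock:
            self._store[key] = response

    def clear(self):
        with self._lock:
            self._store.clear()

    def stats(self):
        with self._lock:
            return {"entries": len(self._store)}
```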
**Proxy Functionality**
- Forwards requests to the origin server
- Returns cached responses on cache hits
- Supports multiple HTTP methods (GET, POST, PUT, PATCH, DELETE, HEAD)
**Admin Controls**
- Health check endpoint (`/health`)
- Cache clearing endpoint (`/__admin/clear-cache`)
- Real-time cache statistics endpoint (`/__admin/stats`)
This project uses uv as the package manager.
```bash
# Clone the repository
git clone https://github.com/Rohit025005/caching-reverse-proxy.git
cd caching-proxy

# Install dependencies with uv
uv sync
```

```bash
# Using uv
uv run python main.py --port 8080 --origin http://dummyjson.com

# Or with Python directly
python main.py --port 3000 --origin https://api.github.com
```

```bash
# Make requests through the proxy
curl http://localhost:8080/products/1

# Check server health
curl http://localhost:8080/health

# Clear the cache
curl -X POST http://localhost:8080/__admin/clear-cache
```
**Cache Key Generation**: Each request generates a unique cache key based on:
- HTTP method (GET, HEAD)
- Origin server URL
- Request path
- Sorted query parameters
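
Sorting the query parameters makes the key order-insensitive, so `/products?limit=10&skip=5` and `/products?skip=5&limit=10` hit the same cache entry. A sketch of how such a key could be built (the function name is illustrative):

```python
from urllib.parse import urlsplit, urlencode, parse_qsl

def make_cache_key(method, origin, path_with_query):
    """Build a cache key from method, origin, path, and sorted query params."""
    parts = urlsplit(path_with_query)
    # Sort query pairs so parameter order does not fragment the cache
    sorted_query = urlencode(sorted(parse_qsl(parts.query)))
    return (method.upper(), origin, parts.path, sorted_query)
```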
**Cache Flow**:
- Cache HIT: Returns cached response immediately
- Cache MISS: Forwards to origin, caches response, returns to client
**Non-Cacheable Requests**: POST, PUT, PATCH, and DELETE requests bypass the cache.
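
The hit/miss/bypass decision above boils down to a small dispatch step; a sketch under the assumption of a plain dict cache (names are illustrative, the real logic lives in `proxy/handler.py`):

```python
CACHEABLE_METHODS = {"GET", "HEAD"}

def handle(method, key, cache, forward):
    """Serve from cache when possible; otherwise forward to the origin."""
    if method not in CACHEABLE_METHODS:
        return forward()          # POST/PUT/PATCH/DELETE bypass the cache
    cached = cache.get(key)
    if cached is not None:        # cache HIT: return immediately
        return cached
    response = forward()          # cache MISS: fetch from origin
    cache[key] = response         # store for future requests
    return response
```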
```
caching-proxy/
├── proxy/
│   ├── __init__.py    # Package exports
│   ├── cache.py       # Cache and CachedResponse classes
│   ├── handler.py     # HTTP request handler with caching logic
│   ├── server.py      # Server initialization
│   └── constants.py   # Configuration constants
├── main.py            # Entry point
└── pyproject.toml     # Project dependencies (uv)
```
- Memory Usage: Cache is stored in RAM, limited by available memory
- No Cache Eviction: Cache grows indefinitely without an LRU or TTL policy
- No Persistence: Cache is lost when the server stops
- Single-threaded: Uses Python's built-in HTTP server
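
The single-threaded limitation is worth noting because the standard library already ships a drop-in alternative: `http.server.ThreadingHTTPServer` serves each request on its own thread, so one slow origin fetch no longer blocks every other client. A minimal sketch (the handler here is a stand-in, not the project's handler):

```python
from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

# Each incoming request gets its own thread; port 0 picks any free port.
server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
# server.serve_forever()  # blocks; call this in a real deployment
```

If you go this route, the cache must be thread-safe (e.g. guarded by a `threading.Lock`), since multiple handler threads will read and write it concurrently.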
- Add cache eviction policies (LRU, TTL)
- Implement cache persistence to disk
- Add configurable cache size limits
- Support for cache-control headers
- Multi-threaded request handling
- Redis/Memcached backend support
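
As a starting point for the eviction items above, LRU and TTL can be combined in one structure using `collections.OrderedDict`. This is only a sketch of the idea, not code from the project:

```python
import time
from collections import OrderedDict

class LRUTTLCache:
    """Evicts least-recently-used entries and expires stale ones (sketch)."""
    def __init__(self, max_entries=1024, ttl_seconds=300):
        self.max_entries = max_entries
        self.ttl = ttl_seconds
        self._store = OrderedDict()  # key -> (stored_at, value)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        stored_at, value = item
        if time.time() - stored_at > self.ttl:   # TTL expired: drop entry
            del self._store[key]
            return None
        self._store.move_to_end(key)             # mark as recently used
        return value

    def put(self, key, value):
        self._store[key] = (time.time(), value)
        self._store.move_to_end(key)
        while len(self._store) > self.max_entries:
            self._store.popitem(last=False)      # evict least-recently-used
```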
- Python 3.8+
- uv package manager
- Dependencies: `requests`, `colorama`
This is a learning project demonstrating basic caching proxy concepts. Contributions and suggestions are welcome!
This project serves as a solution to the roadmap.sh Caching Server Challenge.
MIT License - feel free to use this code for learning and experimentation.

