Rohit025005/caching-reverse-proxy

Caching Proxy Server

A simple, lightweight HTTP caching proxy server implemented in Python. It acts as an intermediary between clients and origin servers, caching responses to GET and HEAD requests to improve performance and reduce load on upstream servers.

Features

  • In-Memory Caching

    • Fast lookups using an in-memory dictionary
    • Caches GET and HEAD requests automatically
    • Query parameter-aware cache keys for precise matching
  • Proxy Functionality

    • Forwards requests to the origin server
    • Returns cached responses on cache hits
    • Supports multiple HTTP methods (GET, POST, PUT, PATCH, DELETE, HEAD)
  • Admin Controls

    • Health check endpoint (/health)
    • Cache clearing endpoint (/__admin/clear-cache)
    • Real-time cache statistics endpoint (/__admin/stats)

Installation

This project uses uv as the package manager.

# Clone the repository
git clone https://github.com/Rohit025005/caching-reverse-proxy.git
cd caching-reverse-proxy

# Install dependencies with uv
uv sync

Usage

Start the server

# Using uv
uv run python main.py --port 8080 --origin http://dummyjson.com

# Or with Python directly
python main.py --port 3000 --origin https://api.github.com
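The flags above suggest main.py parses its command line with argparse; a minimal sketch (the actual option handling in the repository may differ):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # --port: local port the proxy listens on; --origin: upstream base URL
    parser = argparse.ArgumentParser(description="HTTP caching reverse proxy")
    parser.add_argument("--port", type=int, default=8080,
                        help="port to listen on (default: 8080)")
    parser.add_argument("--origin", required=True,
                        help="origin server base URL, e.g. http://dummyjson.com")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(f"Proxying :{args.port} -> {args.origin}")
```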

Test the proxy

# Make requests through the proxy
curl http://localhost:8080/products/1

# Check server health
curl http://localhost:8080/health

# Clear the cache
curl -X POST http://localhost:8080/__admin/clear-cache

How It Works

  1. Cache Key Generation: Each request generates a unique cache key based on:

    • HTTP method (GET, HEAD)
    • Origin server URL
    • Request path
    • Sorted query parameters
  2. Cache Flow:

    • Cache HIT: Returns cached response immediately
    • Cache MISS: Forwards to origin, caches response, returns to client
  3. Non-Cacheable Requests: POST, PUT, PATCH, DELETE requests bypass the cache
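The key scheme and hit/miss flow above can be sketched as follows (the function names and the plain-dict cache are illustrative, not the repository's actual code):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit

def make_cache_key(method: str, origin: str, path_with_query: str) -> str:
    """Canonical key: method + origin + path + sorted query parameters."""
    parts = urlsplit(path_with_query)
    # Sorting the query string makes /p?b=2&a=1 and /p?a=1&b=2 share one entry.
    query = urlencode(sorted(parse_qsl(parts.query)))
    return f"{method.upper()} {origin.rstrip('/')}{parts.path}?{query}"

def handle(method, origin, path_with_query, cache, fetch_from_origin):
    key = make_cache_key(method, origin, path_with_query)
    cacheable = method.upper() in ("GET", "HEAD")
    if cacheable and key in cache:
        return cache[key]            # cache HIT: answer from memory
    response = fetch_from_origin()   # cache MISS: forward to origin
    if cacheable:
        cache[key] = response        # store for the next identical request
    return response
```

POST, PUT, PATCH, and DELETE fall through to the origin on every call, matching step 3.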

Sequence diagram

(The request/response sequence diagram is rendered as an image in the repository.)
Project Structure

caching-proxy/
├── proxy/
│   ├── __init__.py      # Package exports
│   ├── cache.py         # Cache and CachedResponse classes
│   ├── handler.py       # HTTP request handler with caching logic
│   ├── server.py        # Server initialization
│   └── constants.py     # Configuration constants
├── main.py              # Entry point
└── pyproject.toml       # Project dependencies (uv)

Limitations

  • Memory Usage: Cache is stored in RAM, limited by available memory
  • No Cache Eviction: Cache grows indefinitely without an LRU or TTL policy
  • No Persistence: Cache is lost when the server stops
  • Single-threaded: Uses Python's built-in HTTP server
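The single-threaded limitation comes from http.server.HTTPServer; Python's ThreadingHTTPServer is a drop-in replacement that serves each request on its own thread. A sketch (the trivial handler below is a placeholder, not the project's handler.py):

```python
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class DemoHandler(BaseHTTPRequestHandler):
    """Placeholder handler; the real proxy logic would live here."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging

# Port 0 lets the OS pick a free port; each request runs in its own thread,
# so one slow origin response no longer blocks every other client.
server = ThreadingHTTPServer(("127.0.0.1", 0), DemoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
```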

Future Improvements

  • Add cache eviction policies (LRU, TTL)
  • Implement cache persistence to disk
  • Add configurable cache size limits
  • Support for cache-control headers
  • Multi-threaded request handling
  • Redis/Memcached backend support
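The first three improvements (LRU eviction, TTL expiry, a size limit) could be prototyped with collections.OrderedDict; a hedged sketch, with illustrative class and parameter names:

```python
import time
from collections import OrderedDict

class LRUTTLCache:
    """Bounded cache: drops least-recently-used entries and expired ones."""

    def __init__(self, max_entries: int = 1024, ttl_seconds: float = 60.0):
        self.max_entries = max_entries
        self.ttl = ttl_seconds
        self._store = OrderedDict()  # key -> (stored_at, value)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        stored_at, value = item
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]        # expired: treat as a miss
            return None
        self._store.move_to_end(key)    # mark as most recently used
        return value

    def set(self, key, value):
        self._store[key] = (time.monotonic(), value)
        self._store.move_to_end(key)
        while len(self._store) > self.max_entries:
            self._store.popitem(last=False)  # evict least recently used
```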

Requirements

  • Python 3.8+
  • uv package manager
  • Dependencies: requests, colorama

Contributing

This is a learning project demonstrating basic caching proxy concepts. Contributions and suggestions are welcome!

Credits

This project serves as a solution to the roadmap.sh Caching Server Challenge.

License

MIT License - feel free to use this code for learning and experimentation.

About

CLI-based HTTP reverse proxy with in-memory caching, canonical cache keys, and header filtering
