A multi-node in-memory cache system built in Go with LRU eviction, TTL expiration, and consistent hashing for efficient key distribution.
- Consistent Hashing: Efficiently distributes keys across multiple cache nodes
- TTL Support: Automatic expiration of cached items
- LRU Eviction: Least Recently Used eviction policy when cache is full
- Docker Support: Easy deployment with Docker Compose
- HTTP API: Simple REST API for cache operations
- Load Balancing: Nginx for distributing requests
- Multiple Nodes: Scalable multi-node architecture
The system consists of:
- Multiple cache server nodes
- Consistent hash ring for key distribution
- Nginx load balancer
- HTTP handlers for GET/SET/DELETE operations
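A consistent hash ring like the one in this architecture can be sketched as follows. This is an illustrative stand-in for the project's `hashring/` package, using CRC32 and virtual replicas as assumed implementation choices:

```go
package main

import (
	"fmt"
	"hash/crc32"
	"sort"
	"strconv"
)

// HashRing maps keys to nodes with consistent hashing; each real node
// gets several virtual positions on the ring to smooth the distribution.
type HashRing struct {
	replicas int
	ring     []uint32          // sorted hashes of virtual nodes
	nodes    map[uint32]string // virtual-node hash -> real node name
}

func NewHashRing(replicas int) *HashRing {
	return &HashRing{replicas: replicas, nodes: make(map[uint32]string)}
}

// AddNode places `replicas` virtual points for the node on the ring.
func (h *HashRing) AddNode(node string) {
	for i := 0; i < h.replicas; i++ {
		sum := crc32.ChecksumIEEE([]byte(node + "#" + strconv.Itoa(i)))
		h.nodes[sum] = node
		h.ring = append(h.ring, sum)
	}
	sort.Slice(h.ring, func(a, b int) bool { return h.ring[a] < h.ring[b] })
}

// GetNode returns the node owning the first virtual point at or after
// the key's hash, wrapping around the ring.
func (h *HashRing) GetNode(key string) string {
	if len(h.ring) == 0 {
		return ""
	}
	sum := crc32.ChecksumIEEE([]byte(key))
	i := sort.Search(len(h.ring), func(i int) bool { return h.ring[i] >= sum })
	if i == len(h.ring) {
		i = 0 // wrap past the largest hash back to the smallest
	}
	return h.nodes[h.ring[i]]
}

func main() {
	ring := NewHashRing(100)
	ring.AddNode("node1")
	ring.AddNode("node2")
	ring.AddNode("node3")
	fmt.Println(ring.GetNode("mykey")) // deterministic: same node every call
}
```

Because only the keys between a removed node's virtual points and their successors move, adding or removing a node remaps roughly 1/N of the keys rather than rehashing everything.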
- Go 1.16 or higher (for local development)
- Docker and Docker Compose (for containerized deployment)
- Clone the repository:

  ```shell
  git clone https://github.com/jknate/distributed-cache.git
  cd distributed-cache
  ```

- Start the cache cluster:

  ```shell
  docker-compose up
  ```

  This will start multiple cache nodes and an Nginx load balancer.
- Build the project:

  ```shell
  go build -o cache-server main.go
  ```

- Run a cache node:

  ```shell
  ./cache-server
  ```

Set a value:

```shell
curl -X POST http://localhost/set \
  -H "Content-Type: application/json" \
  -d '{"key": "mykey", "value": "myvalue", "ttl": 3600}'
```

Get a value:

```shell
curl http://localhost/get?key=mykey
```

Delete a value:

```shell
curl -X DELETE http://localhost/delete?key=mykey
```

Configuration options:

- Modify `docker-compose.yml` to adjust the number of cache nodes
- Update `nginx.conf` for load balancer settings
- Configure TTL and cache size in the cache initialization

Project layout:

- `main.go` - Entry point and server setup
- `cache/` - Cache implementation with LRU and TTL
- `hashring/` - Consistent hashing implementation
- `handlers/` - HTTP request handlers
- `Dockerfile` - Container image definition
- `docker-compose.yml` - Multi-container orchestration
- `nginx.conf` - Load balancer configuration