Open-source cache performance benchmark framework for comparing Cachee.ai, Redis, Memcached, ElastiCache, and other caching solutions.
| Metric | Cachee | Redis | Memcached | ElastiCache |
|---|---|---|---|---|
| GET Latency | 17 ns | 1.0 ms | 0.8 ms | 1.2 ms |
| SET Latency | 22 ns | 1.1 ms | 0.9 ms | 1.3 ms |
| P99 Latency | 24 ns | 2.5 ms | 1.8 ms | 3.0 ms |
| Throughput | 59M ops/s | ~250K ops/s | ~300K ops/s | ~200K ops/s |
| Hit Rate | 98.1% | ~95% | ~93% | ~95% |
Full methodology and reproduction steps: cachee.ai/benchmark-methodology
```shell
# Clone and run benchmarks
git clone https://github.com/HapPhi/cachee-benchmarks.git
cd cachee-benchmarks
docker-compose up -d
./run-benchmarks.sh
```

- Single-key GET/SET — Median, P95, P99, P999
- Batch operations — MGET/MSET with 10, 100, 1000 keys
- Hot-key contention — Zipfian distribution, 80/20 access pattern
- Sustained load — 1, 10, 100, 1000 concurrent connections
- Burst capacity — 10x traffic spikes sustained for 60 seconds
- Mixed workload — 80% read / 20% write realistic traffic
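The hot-key contention scenario relies on a skewed key distribution. As a rough sketch of how such a generator works (the `zipf_keys` name, the skew parameter, and the key format are illustrative, not taken from the framework), keys can be sampled from a Zipfian distribution where rank-1 keys dominate traffic:

```python
import bisect
import random
from collections import Counter

def zipf_keys(n_keys: int, count: int, skew: float = 1.0, seed: int = 42):
    """Yield `count` keys sampled from a Zipfian distribution over n_keys keys.

    Low-ranked keys are "hot": with skew around 1.0, roughly the top 20%
    of keys receive the bulk of accesses, approximating an 80/20 pattern.
    """
    rng = random.Random(seed)
    # Unnormalized weight for rank r is 1 / r^skew.
    weights = [1.0 / (rank ** skew) for rank in range(1, n_keys + 1)]
    total = sum(weights)
    # Build the cumulative distribution for inverse-transform sampling.
    cdf = []
    acc = 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    for _ in range(count):
        u = rng.random()
        # First rank whose cumulative probability covers u.
        idx = min(bisect.bisect_left(cdf, u), n_keys - 1)
        yield f"key:{idx}"

# Sanity check: how much traffic lands on the hottest 20% of keys?
counts = Counter(zipf_keys(1000, 100_000))
hot_share = sum(c for _, c in counts.most_common(200)) / 100_000
print(f"top 20% of keys received {hot_share:.0%} of accesses")
```

With `skew=1.0` the hot share comes out near 80%, which is why Zipfian sampling is the standard way to model hot-key contention in cache benchmarks.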
- Order lifecycle — 14-lookup sequence simulating pre-trade risk checks
- Market data fan-out — 1:N subscriber reads from shared state
- Smart order routing — Venue selection with real-time latency tables
See trading solutions for how Cachee achieves 17ns reads in production trading environments.
- KV cache lookup — LLM inference serving with shared prefix optimization
- Embedding retrieval — RAG pipeline feature store access patterns
- Agent chain — Multi-model orchestration with 10-50 cache reads per request
See AI solutions for how Cachee eliminates the GPU memory wall.
| Backend | Config Key | Notes |
|---|---|---|
| Cachee.ai | cachee | L1 in-process + L2 Redis |
| Redis | redis | Standalone or Cluster |
| Memcached | memcached | Single node |
| AWS ElastiCache | elasticache | Redis-compatible |
| Azure Cache | azure | Redis-compatible |
| GCP Memorystore | gcp | Redis-compatible |
| Upstash | upstash | Serverless Redis |
See all supported integrations
```yaml
# benchmark-config.yaml
backends:
  cachee:
    host: cachee-proxy
    port: 6380
  redis:
    host: redis
    port: 6379
scenarios:
  - name: single-key-get
    operations: 1000000
    concurrency: 100
    key_distribution: zipfian
```

Results are written to results/ as JSON and rendered as Markdown tables. See full benchmark results for production numbers.
- Fork the repo
- Add your backend adapter in backends/
- Run ./run-benchmarks.sh --backend=yours
- Open a PR with results
- Website: cachee.ai
- Documentation: cachee.ai/docs
- Benchmarks: cachee.ai/benchmarks
- How It Works: cachee.ai/how-it-works
- Pricing: cachee.ai/pricing
- Blog: cachee.ai/blog
Apache 2.0