A high-performance, distributed in-memory cache library for Go that synchronizes local LFU/LRU caches across multiple service instances using Redis as a backing store and pub/sub for set/invalidation events.
Version: v1.0.2
- Two-Level Caching: Fast local in-process LFU (Ristretto) / LRU (golang-lru) caches with a built-in Redis backing store, or plug in your own implementation
- Automatic Synchronization: Redis Pub/Sub for cache set/invalidation across distributed services
- High Performance: LFU/LRU provides excellent hit ratios and throughput
- Kubernetes Ready: Designed for containerized environments with pod-aware invalidation
- Simple API: Easy-to-use interface for Get, Set, Delete, and Clear operations
- Metrics: Built-in statistics collection for monitoring cache performance
- Flexible Configuration: Environment variable support for easy deployment
The distributed-cache library uses a two-level caching architecture with local in-process caches synchronized via Redis pub/sub.
When a pod sets a value, it's stored locally and in Redis, then propagated to all other pods so they can update their local caches immediately without fetching from Redis.
```mermaid
sequenceDiagram
participant AppA as Application (Pod A)
participant CacheA as SyncedCache (Pod A)
participant LocalA as Local Cache (Pod A)
participant Redis as Redis Store
participant PubSub as Redis Pub/Sub
participant CacheB as SyncedCache (Pod B)
participant LocalB as Local Cache (Pod B)
AppA->>CacheA: Set(key, value)
CacheA->>LocalA: Set(key, value)
CacheA->>CacheA: Marshal(value) → data
CacheA->>Redis: SET key data
Redis-->>CacheA: OK
CacheA->>PubSub: PUBLISH {action:set, key, data}
CacheA-->>AppA: Success
PubSub->>CacheB: {action:set, key, data}
CacheB->>CacheB: Unmarshal(data) → value
CacheB->>LocalB: Set(key, value)
Note over LocalB: Ready to serve immediately
```
For large values or lazy-loading scenarios, use SetWithInvalidate(): other pods receive only an invalidation event and fetch the value from Redis when it is next needed.
```mermaid
sequenceDiagram
participant AppA as Application (Pod A)
participant CacheA as SyncedCache (Pod A)
participant LocalA as Local Cache (Pod A)
participant Redis as Redis Store
participant PubSub as Redis Pub/Sub
participant CacheB as SyncedCache (Pod B)
participant LocalB as Local Cache (Pod B)
AppA->>CacheA: SetWithInvalidate(key, value)
CacheA->>LocalA: Set(key, value)
CacheA->>CacheA: Marshal(value) → data
CacheA->>Redis: SET key data
Redis-->>CacheA: OK
CacheA->>PubSub: PUBLISH {action:invalidate, key}
CacheA-->>AppA: Success
PubSub->>CacheB: {action:invalidate, key}
CacheB->>LocalB: Delete(key)
Note over LocalB: Will fetch from Redis on next Get
```
The fastest path: value is found in the local in-process cache (~100ns).
```mermaid
sequenceDiagram
participant App as Application
participant Cache as SyncedCache
participant Local as Local Cache
App->>Cache: Get(key)
Cache->>Local: Get(key)
Local-->>Cache: value ✓
Cache-->>App: value, true
Note over App: ~100ns response time
```
The value is not in the local cache but is found in Redis; it is fetched and stored locally for future requests.
```mermaid
sequenceDiagram
participant App as Application
participant Cache as SyncedCache
participant Local as Local Cache
participant Redis as Redis Store
App->>Cache: Get(key)
Cache->>Local: Get(key)
Local-->>Cache: nil (miss)
Cache->>Redis: GET key
Redis-->>Cache: data ✓
Cache->>Cache: Unmarshal(data) → value
Cache->>Local: Set(key, value)
Cache-->>App: value, true
Note over App: ~1-5ms response time
```
Value not found in either local cache or Redis.
```mermaid
sequenceDiagram
participant App as Application
participant Cache as SyncedCache
participant Local as Local Cache
participant Redis as Redis Store
App->>Cache: Get(key)
Cache->>Local: Get(key)
Local-->>Cache: nil (miss)
Cache->>Redis: GET key
Redis-->>Cache: nil (miss)
Cache-->>App: nil, false
Note over App: Application handles cache miss<br/>(e.g., fetch from DB and Set)
```
Removes value from local cache and Redis, then broadcasts deletion to all pods.
```mermaid
sequenceDiagram
participant AppA as Application (Pod A)
participant CacheA as SyncedCache (Pod A)
participant LocalA as Local Cache (Pod A)
participant Redis as Redis Store
participant PubSub as Redis Pub/Sub
participant CacheB as SyncedCache (Pod B)
participant LocalB as Local Cache (Pod B)
AppA->>CacheA: Delete(key)
CacheA->>LocalA: Delete(key)
CacheA->>Redis: DEL key
Redis-->>CacheA: OK
CacheA->>PubSub: PUBLISH {action:delete, key}
CacheA-->>AppA: Success
PubSub->>CacheB: {action:delete, key}
CacheB->>LocalB: Delete(key)
Note over LocalB: Cache invalidated
```
Clears all values from local cache and Redis, then broadcasts clear event to all pods.
```mermaid
sequenceDiagram
participant AppA as Application (Pod A)
participant CacheA as SyncedCache (Pod A)
participant LocalA as Local Cache (Pod A)
participant Redis as Redis Store
participant PubSub as Redis Pub/Sub
participant CacheB as SyncedCache (Pod B)
participant LocalB as Local Cache (Pod B)
AppA->>CacheA: Clear()
CacheA->>LocalA: Clear()
CacheA->>Redis: FLUSHDB
Redis-->>CacheA: OK
CacheA->>PubSub: PUBLISH {action:clear}
CacheA-->>AppA: Success
PubSub->>CacheB: {action:clear}
CacheB->>LocalB: Clear()
Note over LocalB: All caches cleared
```
- SyncedCache: Main API providing Get, Set, Delete, Clear operations
- Local Cache: In-process cache, built in (LFU via Ristretto or LRU via golang-lru) or a custom implementation
- Redis Store: Persistent backing store for cache data
- Redis Pub/Sub: Synchronization channel for cache invalidation and value propagation
- Marshaller: Pluggable serialization (JSON, MessagePack, Protobuf, etc.)
- Logger: Pluggable logging interface (Console, Zap, Slog, etc.)
- Set Operation: Value stored in local cache → Redis → Pub/sub event sent to other service instances (pods)
- Get Operation (Local Hit): Value retrieved from local cache (~100ns)
- Get Operation (Remote Hit): Value fetched from Redis → Stored in local cache (~1-5ms)
- Synchronization: Other service instances (pods) receive pub/sub event → Update/invalidate local cache
```bash
go get github.com/huykn/distributed-cache
```

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/huykn/distributed-cache/cache"
)

func main() {
	// Create cache with default options
	opts := cache.DefaultOptions()
	opts.PodID = "pod-1"
	opts.RedisAddr = "localhost:6379"

	c, err := cache.New(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Set a value
	if err := c.Set(ctx, "user:123", map[string]string{
		"name":  "John Doe",
		"email": "john@example.com",
	}); err != nil {
		log.Fatal(err)
	}

	// Get a value
	value, found := c.Get(ctx, "user:123")
	if found {
		log.Printf("Found: %v", value)
	}

	// Delete a value
	if err := c.Delete(ctx, "user:123"); err != nil {
		log.Fatal(err)
	}

	// Get cache statistics
	stats := c.Stats()
	log.Printf("Stats: %+v", stats)
}
```

```go
opts := cache.Options{
	PodID:               "pod-1",
	RedisAddr:           "redis.default.svc.cluster.local:6379",
	RedisPassword:       "secret",
	RedisDB:             0,
	InvalidationChannel: "cache:invalidate",
	SerializationFormat: "json",
	ContextTimeout:      5 * time.Second,
	EnableMetrics:       true,
	LocalCacheConfig: cache.LocalCacheConfig{
		NumCounters:        1e7,
		MaxCost:            1 << 30,
		BufferItems:        64,
		IgnoreInternalCost: false,
	},
}

c, err := cache.New(opts)
```

```go
type Cache interface {
	Get(ctx context.Context, key string) (any, bool)
	Set(ctx context.Context, key string, value any) error
	Delete(ctx context.Context, key string) error
	Clear(ctx context.Context) error
	Close() error
	Stats() Stats
}
```

- Local Cache Hit: ~100ns (in-process)
- Remote Cache Hit: ~1-5ms (Redis round-trip)
- Cache Miss: ~1-5ms (Redis lookup)
- Set Operation: ~1-5ms (Redis + Pub/Sub)
The library includes comprehensive examples demonstrating various features and use cases:
- Basic Example - Core functionality, multi-pod synchronization, and value propagation
- Debug Mode - Detailed logging and troubleshooting techniques
- LFU Cache - Least Frequently Used cache (default) for varying access patterns
- LRU Cache - Least Recently Used cache for sequential access patterns
- LFU vs LRU Comparison - Side-by-side comparison to help choose the right strategy
- Custom Logger - Integrate with Zap, Slog, Logrus, or custom logging systems
- Custom Marshaller - Use MessagePack, Protobuf, compression, or encryption
- Custom Local Cache - Implement custom eviction strategies and storage
- Custom Configuration - Advanced tuning for production environments
- Kubernetes - Multi-pod scenario deployment
- Heavy-Read API - High-performance demo with 1M+ req/s, APISIX gateway, Prometheus/Grafana monitoring
Each example includes:
- Detailed README with explanation
- Runnable code demonstrating the feature
- Expected output and behavior
- Configuration options and best practices
- Troubleshooting tips
Quick Start: Begin with the Basic Example to understand core concepts, then explore other examples based on your needs.
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
MIT License - see LICENSE file for details.