Abstract caching layer for the dg-framework. Provides a unified API for various cache drivers with support for serialization, tagging, atomic operations, and the "Remember" pattern.
go get github.com/donnigundala/dg-cache@v1.0.0

- Unified API - Simple, consistent interface across all drivers
- Multiple Stores - Use different drivers for different purposes
- Built-in Drivers - Memory (testing) and Redis (production) included
- Extensible - Easy to add custom drivers
- Serialization - Automatic marshaling/unmarshaling with JSON or Msgpack
- Compression - Transparent Gzip compression for large values
- Observability - Standardized metrics and Prometheus exporter
- Reliability - Circuit breaker and enhanced retry logic
- Tagged Cache - Group related items with tags (Redis and Memory drivers)
- Performance - LRU eviction, metrics, and optimized serialization
dg-cache/
├── config.go              # Configuration structures and validation
├── manager.go             # Cache manager (multi-store orchestration)
├── store.go               # Store, TaggedStore, and Driver interfaces
├── helpers.go             # Typed retrieval helpers (GetString, GetInt, etc.)
├── errors.go              # Custom error types
├── drivers/
│   ├── memory/            # In-memory cache driver
│   │   ├── memory.go      # Core driver implementation
│   │   ├── lru.go         # LRU eviction policy
│   │   ├── metrics.go     # Metrics collection
│   │   └── config.go      # Memory driver configuration
│   └── redis/             # Redis cache driver
│       ├── redis.go       # Core driver implementation
│       ├── tagged.go      # Tagged cache support
│       └── config.go      # Redis driver configuration
├── serializer/
│   ├── serializer.go      # Serializer interface
│   ├── json.go            # JSON serializer
│   └── msgpack.go         # Msgpack serializer
└── docs/
    ├── API.md             # Complete API reference
    ├── SERIALIZATION.md   # Serialization guide
    ├── MEMORY_DRIVER.md   # Memory driver documentation
    └── REDIS_DRIVER.md    # Redis driver documentation
The Manager is the central orchestrator that manages multiple cache stores, provides a unified interface across all drivers, handles driver registration, and routes cache operations to the appropriate store.
Defines the contract that all cache drivers must implement:
- Basic operations: Get, Put, Forget, Flush
- Batch operations: GetMultiple, PutMultiple
- Atomic operations: Increment, Decrement
- TTL support: Forever (no expiration)
- Existence checks: Has, Missing
Extends the Store interface with driver-specific functionality like Name() for identification and Close() for resource cleanup.
Optional interface for drivers that support cache tagging to group related cache items and flush them together. Supported by both Redis and Memory drivers.
- In-memory caching for development/testing
- LRU eviction with configurable size limits
- Tagged cache support (v1.6.1)
- Metrics tracking (hits, misses, evictions)
- Thread-safe operations
- Production-ready Redis caching
- JSON and Msgpack serialization
- Tagged cache support
- Shared client support
- Connection pooling
package main
import (
"context"
"log"
"time"
"github.com/donnigundala/dg-core/foundation"
"github.com/donnigundala/dg-cache"
)
func main() {
app := foundation.New(".")
// Register provider (uses 'cache' key in config)
app.Register(dgcache.NewCacheServiceProvider(nil))
if err := app.Boot(); err != nil {
log.Fatal(err)
}
// Usage
cacheMgr := dgcache.MustResolve(app)
ctx := context.Background()
cacheMgr.Put(ctx, "key", "value", 10*time.Minute)
val, _ := cacheMgr.Get(ctx, "key")
}

In your bootstrap/app.go, you typically use the declarative suite pattern:
func InfrastructureSuite(workerMode bool) []foundation.ServiceProvider {
return []foundation.ServiceProvider{
dgcache.NewCacheServiceProvider(nil),
// ... other providers
}
}

type User struct {
ID int
Name string
Email string
}
// Store any Go type - automatic serialization!
user := User{ID: 1, Name: "John", Email: "john@example.com"}
manager.Put(ctx, "user:1", user, 1*time.Hour)
// Retrieve with type assertion
val, _ := manager.Get(ctx, "user:1")
user = val.(User)
// Or use type-safe helper
var user User
manager.GetAs(ctx, "user:1", &user)

// Type-safe retrieval methods
name, err := manager.GetString(ctx, "name")
age, err := manager.GetInt(ctx, "age")
score, err := manager.GetFloat64(ctx, "score")
active, err := manager.GetBool(ctx, "active")
// Generic type-safe method
var config map[string]interface{}
err := manager.GetAs(ctx, "config", &config)

manager, _ := cache.NewManager(cache.Config{
DefaultStore: "memory",
Stores: map[string]cache.StoreConfig{
"memory": {
Driver: "memory",
Options: map[string]interface{}{
"max_items": 1000, // Max 1000 items
"max_bytes": 10 * 1024 * 1024, // Max 10MB
"eviction_policy": "lru", // LRU eviction
"cleanup_interval": 1 * time.Minute, // Cleanup every minute
"enable_metrics": true, // Enable statistics
},
},
},
})

import (
"github.com/donnigundala/dg-cache"
"github.com/donnigundala/dg-cache/drivers/redis"
)
// Option 1: Create driver with config
redisDriver, _ := redis.NewDriver(cache.StoreConfig{
Options: map[string]interface{}{
"host": "localhost",
"port": 6379,
"serializer": "msgpack", // or "json"
},
})
// Option 2: Use shared Redis client
client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
redisDriver := redis.NewDriverWithClient(client, "app")

Retrieve an item from the cache, or execute the callback and store the result if it doesn't exist.
user, err := manager.Remember(ctx, "user:1", 1*time.Hour, func() (interface{}, error) {
return db.FindUser(1)
})

// Increment
newVal, err := manager.Increment(ctx, "hits", 1)
// Decrement
newVal, err := manager.Decrement(ctx, "hits", 1)

// Access specific store
redisStore, err := manager.Store("redis")
redisStore.Put(ctx, "key", "value", 0)
// Access default store
manager.Put(ctx, "key", "value", 0)

As of v1.6.0, dg-cache is fully integrated with the dg-core container system. Named cache stores are automatically registered in the container as cache.<name>.
You can resolve specific stores directly from the container:
// Resolve named stores
redisStore, _ := app.Make("cache.redis")
memStore, _ := app.Make("cache.memory")
// Resolve main cache manager
cacheManager, _ := app.Make("cache")

Global helper functions provide a more convenient and type-safe way to resolve the cache:
import "github.com/donnigundala/dg-cache"
// Resolve main cache
mgr := cache.MustResolve(app)
// Resolve named store
redis := cache.MustResolveStore(app, "redis")

The Injectable struct simplifies dependency injection in your services:
import (
"github.com/donnigundala/dg-core/foundation"
"github.com/donnigundala/dg-cache"
)
type UserService struct {
inject *cache.Injectable
}
func NewUserService(app foundation.Application) *UserService {
return &UserService{
inject: cache.NewInjectable(app),
}
}
func (s *UserService) CacheUser(ctx context.Context, user *User) {
// Use default cache store
s.inject.Cache().Put(ctx, "user:1", user, 0)
// Use specific store (e.g. redis)
s.inject.Store("redis").Put(ctx, "user:1", user, 0)
}

The plugin uses the cache key in your configuration file.
| YAML Key | Environment Variable | Default | Description |
|---|---|---|---|
| cache.default_store | CACHE_DRIVER | memory | Default store name |
| cache.prefix | CACHE_PREFIX | dg_cache | Global key prefix |
| cache.stores.<name>.driver | - | - | redis, memory |
| cache.stores.<name>.prefix | - | - | Store-specific prefix |
| cache.stores.<name>.connection | - | default | Redis connection name |
cache:
  default_store: redis
  prefix: app_
  stores:
    memory:
      driver: memory
    redis:
      driver: redis
      connection: default

Enable transparent Gzip compression to save storage space for large values (Redis driver only):
Options: map[string]interface{}{
"compression": "gzip",
}

Options: map[string]interface{}{
"max_items": 1000, // Maximum number of items
"max_bytes": 10 * 1024 * 1024, // Maximum total size (10MB)
}

Automatically evicts least recently used items when limits are reached:
Options: map[string]interface{}{
"eviction_policy": "lru", // Least Recently Used
}

Options: map[string]interface{}{
"enable_metrics": true,
}
// Get statistics
store, _ := manager.Store("memory")
driver := store.(*memory.Driver)
stats := driver.Stats()
fmt.Printf("Hit rate: %.2f%%\n", stats.HitRate*100)
fmt.Printf("Items: %d, Bytes: %d\n", stats.ItemCount, stats.BytesUsed)

dg-cache is instrumented with OpenTelemetry metrics. As of v2.0.0, the legacy Prometheus collector has been replaced with native OpenTelemetry instruments.
The following metrics are automatically collected from all active cache stores via asynchronous observers:
- cache_hits_total: Counter (labels: cache_store)
- cache_misses_total: Counter (labels: cache_store)
- cache_sets_total: Counter (labels: cache_store)
- cache_deletes_total: Counter (labels: cache_store)
- cache_evictions_total: Counter (labels: cache_store)
- cache_items: Gauge (labels: cache_store)
- cache_bytes: Gauge (labels: cache_store)
To enable observability, ensure the dg-observability plugin is registered and configured:
observability:
  enabled: true
  service_name: "my-app"

The metrics are automatically registered on application boot. No manual collector registration is required.
Configure exponential backoff for Redis connections:
Options: map[string]interface{}{
"max_retries": 3,
"min_retry_backoff": 8 * time.Millisecond,
"max_retry_backoff": 512 * time.Millisecond,
}

Protect your application from cascading cache failures. If the cache becomes unresponsive, the circuit breaker opens and fails fast.
Options: map[string]interface{}{
"circuit_breaker": map[string]interface{}{
"enabled": true,
"threshold": 5, // Fail after 5 errors
"timeout": 1 * time.Minute, // Reset after 1 minute
},
}

Implement the cache.Driver interface:
type Driver interface {
Get(ctx context.Context, key string) (interface{}, error)
Put(ctx context.Context, key string, value interface{}, ttl time.Duration) error
Forget(ctx context.Context, key string) error
Flush(ctx context.Context) error
// ... other methods
}

BenchmarkJSON_Marshal 7,542,747 152.6 ns/op 128 B/op 2 allocs/op
BenchmarkMsgpack_Marshal 5,384,852 210.9 ns/op 272 B/op 4 allocs/op
BenchmarkJSON_Unmarshal 2,329,303 443.5 ns/op 216 B/op 4 allocs/op
BenchmarkMsgpack_Unmarshal 6,601,837 172.1 ns/op 96 B/op 2 allocs/op
Msgpack is 2.6x faster for unmarshal operations!
For detailed information, see the comprehensive documentation in the docs/ directory:
- API Reference - Complete API documentation for all packages
- Serialization Guide - Deep dive into JSON and Msgpack serialization
- Memory Driver - In-memory cache with LRU eviction and metrics
- Redis Driver - Production-ready Redis caching with tagged cache support
- Current Version: v1.3.0
- Go Version: 1.21+
- Test Coverage: 88%+
- Status: Production Ready
- dg-core - Core framework for dg-framework
- dg-database - Database abstraction layer
Note: The dg-redis package has been merged into this package as drivers/redis in v1.3.0. If you're using the old dg-redis package, please migrate to github.com/donnigundala/dg-cache/drivers/redis.
MIT License - see LICENSE file for details.