level cache provides a solution for high-concurrency query scenarios.
- Supports custom, pluggable cache components through interface constraints (see the interface sketch after this list)
- Elegantly supports both single and batch queries through callbacks
- Supports asynchronous, proactive refresh of hot data, so business code does not need to read from the source itself
- Prevents concurrent requests from penetrating to the source
- Comprehensive monitoring metrics
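The "interface constraints" mentioned above imply that any storage backend can be plugged in as a cache level, as long as it implements the required cache contract. The exact interface is defined by the library; the sketch below is only an assumption of what such a contract might look like (the `Cache` name and method set are hypothetical, not taken from the source).

```go
package cache

import (
	"context"
	"time"
)

// Cache is a hypothetical contract for a pluggable cache component.
// The method set here is an illustrative assumption, not the library's actual API.
// Any backend that satisfies it, such as an in-process map or Redis,
// could serve as one cache level.
type Cache interface {
	Get(ctx context.Context, key string) ([]byte, error)
	BatchGet(ctx context.Context, keys []string) (map[string][]byte, error)
	Set(ctx context.Context, key string, value []byte, ttl time.Duration) error
	Del(ctx context.Context, keys ...string) error
}
```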
Multi-level cache layering
```go
// 1. create cache
cache := NewYCache("my_first_test",
	WithCacheOptionCacheLevel(CacheL1,
		NewMemCache("my_first_mem_cache", 10000000, -1),
	),
)
// 2. create business cache instance
// Note: using CacheL2 here requires that a corresponding L2 component was
// registered via WithCacheOptionCacheLevel when the cache was created.
bizCacheIns, err := cache.CreateInstance("my_instance_1",
	[]CacheLevel{CacheL1, CacheL2},
	WithInstanceOptionCacheTtl(60),
)
// 3. use instance
// 3.1 get: the callback loads the value from the source on a cache miss
val, err := bizCacheIns.Get(context.Background(), "_abc_", key1, func(ctx context.Context, key string) ([]byte, error) {
	// todo your business: load the value for `key` from the source
	return nil, nil
})
// 3.2 batch get: the callback loads the missing keys from the source
keys := []string{key1, key2}
mp, _ := bizCacheIns.BatchGet(context.Background(), "_abc_", keys, func(ctx context.Context, keys []string) (map[string][]byte, error) {
	// todo your business: load the values for `keys` from the source
	return nil, nil
})
```
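One of the features listed above is that concurrent requests are prevented from penetrating to the source. The library handles this internally; the sketch below only illustrates the general technique with `golang.org/x/sync/singleflight`, so that concurrent misses for the same key trigger a single source load. The `loadFromSource` function is a hypothetical stand-in for your business code, not part of YCache.

```go
package main

import (
	"context"
	"fmt"
	"sync"

	"golang.org/x/sync/singleflight"
)

// loadFromSource is a hypothetical stand-in for an expensive source query (DB, RPC, ...).
func loadFromSource(ctx context.Context, key string) ([]byte, error) {
	return []byte("value-of-" + key), nil
}

func main() {
	var group singleflight.Group
	var wg sync.WaitGroup

	// Ten concurrent misses for the same key are coalesced into one source call.
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			v, err, shared := group.Do("key1", func() (interface{}, error) {
				return loadFromSource(context.Background(), "key1")
			})
			fmt.Println(string(v.([]byte)), err, shared)
		}()
	}
	wg.Wait()
}
```

Since YCache advertises this protection as built in, the Get/BatchGet callbacks in your business code should not need to add such coalescing themselves.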
See the demo: it contains the specific steps to run the demo application, along with the Grafana dashboard JSON model config. Eventually, the metrics show up in Grafana as:
The MIT License