Accelerator is a pluggable, async-first cache runtime for high-concurrency Rust services. It provides a unified API over a local cache (L1) and a remote cache (L2), with source-of-truth loading on miss, batch loading, invalidation broadcast, and built-in observability.
- 🧭 Multi-level modes: `Local`, `Remote`, `Both`
- 🧱 Default backends: `moka` (L1) + `redis` (L2)
- 🔌 Pluggable backends via traits: `LocalBackend<V>`, `RemoteBackend<V>`, `InvalidationSubscriber`
- 🍱 Unified runtime APIs: `get`, `mget`, `set`, `mset`, `del`, `mdel`, `warmup`
- 📥 Loader contracts:
  - single-key: `Loader<K, V>`
  - batch: `MLoader<K, V>`
- 🛡️ Miss handling:
  - single-key miss can use singleflight dedup (`penetration_protect`)
  - batch miss (`mget`) uses `MLoader::mload` directly
- 🔄 Resilience and stability:
  - negative cache (`cache_null_value`, `null_ttl`)
  - TTL jitter (`ttl_jitter_ratio`)
  - refresh-ahead (`refresh_ahead`)
  - stale fallback (`stale_on_error`)
- 📡 Cross-instance local cache consistency:
  - Redis Pub/Sub invalidation broadcast
- 👀 Observability:
  - runtime counters (`metrics_snapshot`)
  - diagnostic state (`diagnostic_snapshot`)
  - OTel-friendly metric points (`otel_metric_points`)
  - tracing spans on core paths
- 🪄 Procedural macros: `cacheable`, `cacheable_batch`, `cache_put`, `cache_evict`, `cache_evict_batch`
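TTL jitter spreads expirations over time so a burst of writes does not expire in the same instant and stampede the loader. As an illustration of the idea behind `ttl_jitter_ratio` (a sketch only — the crate's actual formula and randomness source may differ), using a deterministic per-key jitter derived from the key's hash:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::time::Duration;

/// Derive a per-key jittered TTL: base_ttl * (1.0 + r), where r is a
/// pseudo-random value in [-ratio, +ratio) derived from the key's hash.
fn jittered_ttl(key: &impl Hash, base_ttl: Duration, ratio: f64) -> Duration {
    let mut hasher = DefaultHasher::new();
    key.hash(&mut hasher);
    // Map the hash to [0.0, 1.0), then to a factor in [1 - ratio, 1 + ratio).
    let unit = (hasher.finish() % 10_000) as f64 / 10_000.0;
    let factor = 1.0 + (unit * 2.0 - 1.0) * ratio;
    Duration::from_secs_f64(base_ttl.as_secs_f64() * factor)
}

fn main() {
    let base = Duration::from_secs(60);
    let ttl = jittered_ttl(&42u64, base, 0.1);
    // With ratio = 0.1, the jittered TTL always stays within ±10% of the base.
    assert!(ttl >= Duration::from_secs_f64(54.0));
    assert!(ttl <= Duration::from_secs_f64(66.0));
    println!("jittered ttl: {:?}", ttl);
}
```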
- 📦 Installation
- 🤠 Quick Start
- 🍱 API Overview
- 🧩 Macro Usage
- 🏗️ Backend Extension
- 🪄 Examples
- 🏎️ Benchmark and Regression Gate
- 🧪 Integration Tests
- 🧰 Local Full Stack
- 📚 Documentation
## 📦 Installation

Use from crates.io (recommended):

```toml
[dependencies]
accelerator = "0.1.0"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```

## 🤠 Quick Start

Local-only cache with a single-key loader:

```rust
use std::time::Duration;

use accelerator::builder::LevelCacheBuilder;
use accelerator::config::CacheMode;
use accelerator::{ReadOptions, local};

#[tokio::main]
async fn main() -> accelerator::CacheResult<()> {
    let local_backend = local::moka::<String>().max_capacity(100_000).build()?;

    let cache = LevelCacheBuilder::<u64, String>::new()
        .area("user")
        .mode(CacheMode::Local)
        .local(local_backend)
        .local_ttl(Duration::from_secs(60))
        .null_ttl(Duration::from_secs(10))
        .loader_fn(|id: u64| async move { Ok(Some(format!("user-{id}"))) })
        .build()?;

    let v = cache.get(&42, &ReadOptions::default()).await?;
    assert_eq!(v, Some("user-42".to_string()));
    Ok(())
}
```

Two-level cache (`CacheMode::Both`) with invalidation broadcast:

```rust
use std::time::Duration;

use accelerator::builder::LevelCacheBuilder;
use accelerator::config::CacheMode;
use accelerator::{local, remote};

let local_backend = local::moka::<String>().max_capacity(100_000).build()?;
let remote_backend = remote::redis::<String>()
    .url("redis://127.0.0.1:6379")
    .key_prefix("demo")
    .build()?;

let cache = LevelCacheBuilder::<u64, String>::new()
    .area("user")
    .mode(CacheMode::Both)
    .local(local_backend)
    .remote(remote_backend)
    .local_ttl(Duration::from_secs(60))
    .remote_ttl(Duration::from_secs(300))
    .broadcast_invalidation(true)
    .build()?;
```

## 🍱 API Overview

Core runtime:
- `LevelCache<K, V, LD, LB, RB>`
- `ReadOptions { allow_stale, disable_load }`
- `CacheMode::{Local, Remote, Both}`
Main methods:
- Read: `get`, `mget`
- Write: `set`, `mset`
- Invalidate: `del`, `mdel`
- Warmup: `warmup`
Diagnostics and metrics:
- `metrics_snapshot() -> CacheMetricsSnapshot`
- `diagnostic_snapshot() -> CacheDiagnosticSnapshot`
- `otel_metric_points() -> Vec<OtelMetricPoint>`
Loader traits:
- `Loader<K, V>::load(&K) -> Future<CacheResult<Option<V>>>`
- `MLoader<K, V>::mload(&[K]) -> Future<CacheResult<HashMap<K, Option<V>>>>`
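The `mget` flow can be pictured as: probe the cache for every key, issue one `mload` call for the misses, and merge the results. A simplified synchronous sketch of that partition-and-merge logic (the crate's real implementation is async and layered, but the shape is the same):

```rust
use std::collections::HashMap;

/// Simplified stand-in for a batch read: keys already cached are served
/// from `cached`; the rest go to `mload` in a single batch call
/// (mirroring MLoader::mload(&[K])), and the results are merged.
fn mget_sketch(
    keys: &[u64],
    cached: &HashMap<u64, String>,
    mload: impl Fn(&[u64]) -> HashMap<u64, Option<String>>,
) -> HashMap<u64, Option<String>> {
    let mut out = HashMap::new();
    let mut misses = Vec::new();
    for k in keys {
        match cached.get(k) {
            Some(v) => {
                out.insert(*k, Some(v.clone()));
            }
            None => misses.push(*k),
        }
    }
    if !misses.is_empty() {
        // One batch call for all misses; hits never touch the loader.
        out.extend(mload(&misses));
    }
    out
}

fn main() {
    let cached: HashMap<u64, String> = [(1, "a".to_string())].into();
    let loaded = mget_sketch(&[1, 2], &cached, |misses| {
        misses.iter().map(|k| (*k, Some(format!("db-{k}")))).collect()
    });
    assert_eq!(loaded[&1], Some("a".to_string()));
    assert_eq!(loaded[&2], Some("db-2".to_string()));
}
```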
## 🧩 Macro Usage

Import macros from:

```rust
use accelerator::macros::{cache_evict, cache_evict_batch, cache_put, cacheable, cacheable_batch};
```

Macro behavior and constraints:

- `#[cacheable(...)]`: cache-first read; on miss, executes the function body, then writes back with `set`.
- `#[cacheable_batch(...)]`: `mget` first, loads misses only, then writes back with `mset`.
- `#[cache_put(...)]`: executes the function first, then `set`s to cache on success.
- `#[cache_evict(...)]` / `#[cache_evict_batch(...)]`: invalidates after success by default (`before = false`).
- Macros only support `async fn` methods in `impl` blocks (`&self` / `&mut self` receiver).
- `on_cache_error` supports `"ignore"` (default) or `"propagate"`.
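Conceptually, `#[cacheable]` rewrites the method into a get-then-load-then-set flow around the original body. A hand-written sketch of that shape (not the macro's actual expansion), using a plain `HashMap` in place of the cache handle:

```rust
use std::collections::HashMap;

// Hypothetical stand-ins: a cache keyed by u64 and a "repo" lookup.
struct UserService {
    cache: HashMap<u64, Option<String>>,
}

impl UserService {
    // Roughly what #[cacheable(cache = self.cache, key = user_id)] arranges:
    // 1) try the cache; 2) on miss, run the original body; 3) write back.
    fn get_user(&mut self, user_id: u64) -> Option<String> {
        if let Some(hit) = self.cache.get(&user_id) {
            return hit.clone(); // cache hit, including a cached None
        }
        let loaded = self.load_from_repo(user_id); // original function body
        self.cache.insert(user_id, loaded.clone()); // set write-back
        loaded
    }

    fn load_from_repo(&self, user_id: u64) -> Option<String> {
        Some(format!("user-{user_id}"))
    }
}

fn main() {
    let mut svc = UserService { cache: HashMap::new() };
    assert_eq!(svc.get_user(7), Some("user-7".to_string()));
    // The second call is served from the cache without touching the repo.
    assert_eq!(svc.get_user(7), Some("user-7".to_string()));
    assert_eq!(svc.cache.len(), 1);
}
```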
Minimal single-key example:

```rust
use accelerator::macros::{cache_evict, cache_put, cacheable};

impl UserService {
    #[cacheable(cache = self.cache, key = user_id, cache_none = true, on_cache_error = "ignore")]
    async fn get_user(&self, user_id: u64) -> accelerator::CacheResult<Option<User>> {
        self.repo.find_by_id(user_id).await
    }

    #[cache_put(cache = self.cache, key = user.id, value = Some(user.clone()))]
    async fn save_user(&self, user: User) -> accelerator::CacheResult<()> {
        self.repo.upsert(user.clone()).await
    }

    #[cache_evict(cache = self.cache, key = user_id, before = false)]
    async fn delete_user(&self, user_id: u64) -> accelerator::CacheResult<()> {
        self.repo.delete(user_id).await
    }
}
```

Minimal batch example:
```rust
use accelerator::macros::{cache_evict_batch, cacheable_batch};
use std::collections::HashMap;

impl UserService {
    #[cacheable_batch(cache = self.cache, keys = user_ids)]
    async fn batch_get(&self, user_ids: Vec<u64>) -> accelerator::CacheResult<HashMap<u64, Option<User>>> {
        self.repo.batch_find(&user_ids).await
    }

    #[cache_evict_batch(cache = self.cache, keys = user_ids, before = false)]
    async fn batch_delete(&self, user_ids: Vec<u64>) -> accelerator::CacheResult<()> {
        self.repo.batch_delete(&user_ids).await
    }
}
```

Runnable references:

- `examples/macro_best_practice.rs`
- `examples/macro_batch_best_practice.rs`
## 🏗️ Backend Extension

To replace the default backends:

- Implement `LocalBackend<V>` for your local cache.
- Implement `RemoteBackend<V>` and `InvalidationSubscriber` for your remote cache.
- Plug them into `LevelCacheBuilder::local(...)` and `LevelCacheBuilder::remote(...)`.

The runtime uses static dispatch (generics), not runtime `dyn` objects.
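To illustrate the shape of a custom backend, here is a sketch against a hypothetical, simplified trait — the crate's real `LocalBackend<V>` is async and its exact method set may differ, so treat this as the pattern, not the contract:

```rust
use std::collections::HashMap;

// Hypothetical, simplified stand-in for LocalBackend<V>; the real trait
// is async and may expose more methods (TTL handling, batch ops, ...).
trait SimpleLocalBackend<V> {
    fn get(&self, key: &str) -> Option<V>;
    fn set(&mut self, key: String, value: V);
    fn del(&mut self, key: &str);
}

/// A trivial HashMap-backed implementation, standing in for moka.
struct HashMapBackend<V> {
    map: HashMap<String, V>,
}

impl<V: Clone> SimpleLocalBackend<V> for HashMapBackend<V> {
    fn get(&self, key: &str) -> Option<V> {
        self.map.get(key).cloned()
    }
    fn set(&mut self, key: String, value: V) {
        self.map.insert(key, value);
    }
    fn del(&mut self, key: &str) {
        self.map.remove(key);
    }
}

fn main() {
    let mut backend = HashMapBackend { map: HashMap::new() };
    backend.set("user:42".to_string(), "alice".to_string());
    assert_eq!(backend.get("user:42"), Some("alice".to_string()));
    backend.del("user:42");
    assert_eq!(backend.get("user:42"), None);
}
```

Because the runtime is generic over the backend types, an implementation like this is selected at compile time; there is no `dyn` dispatch cost on the hot path.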
## 🪄 Examples

See `examples/`:

- `fixed_backend_best_practice.rs` (moka + redis)
- `macro_best_practice.rs` (macro-based single-key flow)
- `macro_batch_best_practice.rs` (macro-based batch flow)
- `clickstack_otlp.rs` (optional OTLP bootstrap, feature `otlp`)

Run:

```shell
cargo run --example fixed_backend_best_practice
```

If Redis is unavailable at `ACCELERATOR_REDIS_URL` (default `redis://127.0.0.1:6379`), the example exits gracefully.
## 🏎️ Benchmark and Regression Gate

One-click script:

```shell
./scripts/bench.sh
./scripts/bench.sh bench-local --runs 3 --sample-size 60
./scripts/bench.sh bench-redis --runs 3 --sample-size 60 --redis-url redis://127.0.0.1:6379
./scripts/bench.sh regression --threshold 0.15
```

Raw commands:

```shell
cargo bench --bench cache_path_bench -- --sample-size=60
ACCELERATOR_BENCH_REDIS_URL=redis://127.0.0.1:0 cargo bench --bench cache_path_bench -- --sample-size=60
cargo run --bin export_bench_baseline --
cargo run --bin check_bench_regression -- --threshold 0.15
```

Detailed playbook: `docs/performance-engineering-playbook.md`
## 🧪 Integration Tests

Redis integration tests are in `tests/redis_integration.rs`.

- They run with `cargo test`.
- If Redis is unavailable, tests skip gracefully where designed.
- Override the endpoint with `ACCELERATOR_TEST_REDIS_URL`.
## 🧰 Local Full Stack

Start the local stack:

```shell
cd scripts
docker compose up -d
```

Run end-to-end tests:

```shell
cargo test --test redis_integration
cargo test --test stack_integration
```

`stack_integration` uses a real sqlx + Postgres loader flow.

- ClickStack UI: http://127.0.0.1:8080
- OTLP ingest ports: 4317 and 4318
## 📚 Documentation

English is the default documentation language. Chinese versions are maintained under `docs/zh/`.

| Topic | English | 中文(简体) |
|---|---|---|
| README | README.md | README.zh-CN.md |
| Terminology Baseline | docs/terminology.md | docs/zh/terminology.zh-CN.md |
| Capability Model | docs/multi-level-cache-capability-model.md | docs/zh/multi-level-cache-capability-model.zh-CN.md |
| Performance Playbook | docs/performance-engineering-playbook.md | docs/zh/performance-engineering-playbook.zh-CN.md |
| Cache Ops Runbook | docs/cache-ops-runbook.md | docs/zh/cache-ops-runbook.zh-CN.md |
| Local Stack Guide | docs/local-stack-integration.md | docs/zh/local-stack-integration.zh-CN.md |
| Code Flattening Guideline | docs/code-flattening-guideline.md | docs/zh/code-flattening-guideline.zh-CN.md |