A high-performance, memory-efficient cache library for Rust supporting multiple eviction algorithms with O(1) operations.
- Multiple eviction algorithms: LRU, LFU, LFUDA, SLRU, GDSF
- High performance: All operations are O(1) with optimized data structures
- Memory efficient: Minimal overhead with careful memory layout
- `no_std` compatible: Works in embedded and resource-constrained environments
- Thread-safe ready: Easy to wrap with `Mutex`/`RwLock` for concurrent access
- Well documented: Comprehensive documentation with usage examples
Add to your `Cargo.toml`:
```toml
[dependencies]
cache-rs = "0.1.0"
```
Basic usage:

```rust
use cache_rs::LruCache;
use std::num::NonZeroUsize;

let mut cache = LruCache::new(NonZeroUsize::new(100).unwrap());
cache.put("key", "value");
assert_eq!(cache.get(&"key"), Some(&"value"));
```
Choose the right cache algorithm for your use case:
**LRU**: Best for general-purpose caching with temporal locality.

```rust
use cache_rs::LruCache;
use std::num::NonZeroUsize;

let mut cache = LruCache::new(NonZeroUsize::new(100).unwrap());
cache.put("recent", "data");
```
**SLRU**: Best for workloads with scan-resistance requirements.

```rust
use cache_rs::SlruCache;
use std::num::NonZeroUsize;

// Total capacity: 100, protected segment: 20
let mut cache = SlruCache::new(
    NonZeroUsize::new(100).unwrap(),
    NonZeroUsize::new(20).unwrap()
);
```
**LFU**: Best for workloads with strong frequency patterns.

```rust
use cache_rs::LfuCache;
use std::num::NonZeroUsize;

let mut cache = LfuCache::new(NonZeroUsize::new(100).unwrap());
cache.put("frequent", "data");
```
**LFUDA**: Best for long-running applications where access patterns change over time.

```rust
use cache_rs::LfudaCache;
use std::num::NonZeroUsize;

let mut cache = LfudaCache::new(NonZeroUsize::new(100).unwrap());
```
**GDSF**: Best for variable-sized objects (images, files, documents).

```rust
use cache_rs::GdsfCache;
use std::num::NonZeroUsize;

let mut cache = GdsfCache::new(NonZeroUsize::new(1000).unwrap());
cache.put("image.jpg", image_data, 250); // key, value, size
```
| Algorithm | Get Operation | Use Case | Memory Overhead |
|---|---|---|---|
| LRU | ~887ns | General purpose | Low |
| SLRU | ~983ns | Scan resistance | Medium |
| GDSF | ~7.5µs | Size-aware | Medium |
| LFUDA | ~20.5µs | Aging workloads | Medium |
| LFU | ~22.7µs | Frequency-based | Medium |

*Benchmarks run on mixed workloads with a Zipf key distribution.*
Works out of the box in `no_std` environments:

```rust
#![no_std]
extern crate alloc;

use cache_rs::LruCache;
use core::num::NonZeroUsize;
use alloc::string::String;

let mut cache = LruCache::new(NonZeroUsize::new(10).unwrap());
cache.put(String::from("key"), "value");
```
- `hashbrown` (default): Use the hashbrown `HashMap` for better performance
- `nightly`: Enable nightly-only optimizations
- `std`: Enable standard library features (disabled by default)
```toml
# Default: no_std + hashbrown (recommended for most use cases)
cache-rs = "0.1.0"

# std + hashbrown (recommended for std environments)
cache-rs = { version = "0.1.0", features = ["std"] }

# std + hashbrown + nightly optimizations
cache-rs = { version = "0.1.0", features = ["std", "nightly"] }

# no_std + nightly optimizations only
cache-rs = { version = "0.1.0", features = ["nightly"] }

# Only std::HashMap (not recommended - slower than hashbrown)
cache-rs = { version = "0.1.0", default-features = false, features = ["std"] }
```
The cache types are `!Send` and `!Sync` by design, for performance. For concurrent access:

```rust
use cache_rs::LruCache;
use std::sync::{Arc, Mutex};
use std::num::NonZeroUsize;

let cache = Arc::new(Mutex::new(
    LruCache::new(NonZeroUsize::new(100).unwrap())
));

// Clone the Arc for use in other threads
let cache_clone = Arc::clone(&cache);
```
A custom hasher can be supplied at construction:

```rust
use cache_rs::LruCache;
use std::collections::hash_map::RandomState;
use std::num::NonZeroUsize;

let cache = LruCache::with_hasher(
    NonZeroUsize::new(100).unwrap(),
    RandomState::new()
);
```
GDSF weighs object size when choosing eviction victims:

```rust
use cache_rs::GdsfCache;
use std::num::NonZeroUsize;

let mut cache = GdsfCache::new(NonZeroUsize::new(1000).unwrap());

// Cache different-sized objects
cache.put("small.txt", "content", 10);
cache.put("medium.jpg", image_bytes, 500);
cache.put("large.mp4", video_bytes, 2000);

// GDSF automatically considers size, frequency, and recency
```
Run the included benchmarks to compare performance:

```bash
cargo bench
```
Example results on modern hardware:
- LRU: Fastest for simple use cases (~887ns per operation)
- SLRU: Good balance of performance and scan resistance (~983ns)
- GDSF: Best for size-aware workloads (~7.5µs)
- LFUDA/LFU: Best for frequency-based patterns (~20µs)
Contributions welcome! Please see CONTRIBUTING.md for guidelines.
```bash
# Run all tests
cargo test --all

# Check formatting
cargo fmt --all -- --check

# Run clippy
cargo clippy --all-targets -- -D warnings

# Test no_std compatibility
cargo build --target thumbv6m-none-eabi --no-default-features --features hashbrown
```
Licensed under the MIT License.
For security concerns, see SECURITY.md.