Multi-driver caching system with support for Redis, file-based, and in-memory stores, plus distributed locking. Automatically cache expensive computations and manage cache expiration.
Install the package via npm, yarn, or pnpm:

```sh
npm install @atlex/cache
yarn add @atlex/cache
pnpm add @atlex/cache
```

First, configure your cache drivers in your application's config file:
```ts
// config/cache.ts
export const cacheConfig = {
  default: 'memory',
  prefix: 'app_cache',
  stores: {
    memory: {
      // In-memory cache store
    },
    file: {
      directory: './storage/cache',
    },
    redis: {
      host: process.env.REDIS_HOST || 'localhost',
      port: parseInt(process.env.REDIS_PORT || '6379'),
      password: process.env.REDIS_PASSWORD,
    },
    null: {
      // Disables caching
    },
  },
}
```

Get, set, and retrieve cached values:
```ts
import { Cache } from '@atlex/cache'

// Set a value
await Cache.set('user:1', userData, 3600) // expires in 1 hour

// Get a value
const user = await Cache.get('user:1')

// Get with a default value returned on a miss
const userOrNull = await Cache.get('user:1', null)

// Check if a key exists
const exists = await Cache.has('user:1')

// Forget a key
await Cache.forget('user:1')

// Flush the entire cache
await Cache.flush()
```

Automatically cache expensive computations:
```ts
const user = await Cache.remember('user:1', 3600, async () => {
  // This callback only runs if the key doesn't exist
  return await User.find(1)
})

// Remember forever (no expiration)
const allUsers = await Cache.rememberForever('users:all', async () => {
  return await User.all()
})
```

Cache a value and retrieve it, creating it if it doesn't exist:

```ts
const user = await Cache.sear('user:1', async () => {
  return await User.find(1)
})
```

- Multiple Drivers: Memory, File, Redis, and Null stores
- Fluent API: Simple and intuitive interface for cache operations
- Auto-expiration: Set TTL for automatic key expiration
- Tagging: Organize and flush cache by tags
- Rate Limiting: Built-in rate limiter for API protection
- Distributed Locking: Prevent cache stampedes with locks
- Remember Pattern: Automatically cache expensive computations
- Prefix Support: Namespace all cache keys automatically
- TypeScript: Full type safety throughout the API
The CacheManager is the main entry point for cache operations. It manages driver registration and provides the API for cache access.
```ts
import { CacheManager } from '@atlex/cache'

// Get the default store
const store = CacheManager.store()

// Switch to a specific driver
const redisStore = CacheManager.store('redis')

// Get a specific driver without creating a repository
const driver = CacheManager.driver('file')

// Extend with custom drivers
CacheManager.extend('custom', new CustomCacheDriver())
```

The Repository class provides the main caching API. It handles getting, setting, remembering, and managing cache data.
```ts
const cache = CacheManager.store()

// Set a value with a TTL in seconds
await cache.set('config:app', appConfig, 3600)

// Get a value
const config = await cache.get('config:app')

// Put is an alias for set
await cache.put('config:app', appConfig, 3600)

// Add only if the key doesn't exist
const added = await cache.add('config:app', appConfig, 3600)

// Remember: get, or compute and cache
const data = await cache.remember('expensive:operation', 3600, async () => {
  return await expensiveOperation()
})

// Remember forever
const staticData = await cache.rememberForever('static:data', async () => {
  return await fetchStaticData()
})

// Sear: like remember, but returns the cached value after caching
const item = await cache.sear('item:1', async () => {
  return await Item.find(1)
})

// Remove a key
await cache.forget('config:app')

// Flush all keys in the store
await cache.flush()

// Increment a numeric value
await cache.set('counter', 0)
await cache.increment('counter', 5) // now 5

// Decrement a numeric value
await cache.decrement('counter', 2) // now 3

// Pull: get and delete
const value = await cache.pull('temp:data')
```

Organize cache entries by tags for grouped operations:
```ts
const cache = CacheManager.store()

// Set with tags
await cache.tags(['user', 'user:1']).set('user:1:profile', userData, 3600)

// Get a tagged value
const user = await cache.tags(['user', 'user:1']).get('user:1:profile')

// Flush all keys with a specific tag
await cache.tags(['user']).flush()

// Multiple tags for filtering
await cache.tags(['user', 'active']).set('user:1', userData, 3600)

// Flush by multiple tags
await cache.tags(['user', 'active']).flush()
```

Prevent cache stampedes with distributed locks:
```ts
const cache = CacheManager.store()

// Get a lock
const lock = cache.lock('expensive:operation')

// Acquire the lock
const acquired = await lock.acquire()
if (acquired) {
  try {
    // Do the expensive operation
    await expensiveOperation()
  } finally {
    // Release the lock
    await lock.release()
  }
}

// Force-release a lock (use cautiously)
await lock.forceRelease()

// Block until the lock is available
await lock.block(10, async () => {
  // This code runs once the lock is acquired.
  // The lock is automatically released after this function completes.
  await expensiveOperation()
})

// Block with a timeout
const completed = await lock.block(
  5,
  async () => {
    await heavyProcessing()
  },
  { timeout: 10 }, // wait at most 10 seconds for the lock
)
```

Limit request rates to prevent abuse:
```ts
const cache = CacheManager.store()

const limiter = cache.rateLimiter('login-attempts', {
  maxAttempts: 5,
  decayMinutes: 15,
})

// Record a hit
await limiter.hit()

// Get the remaining attempts
const remaining = await limiter.remaining()

// Check if too many attempts have been made
if (remaining === 0) {
  throw new Error('Too many login attempts')
}

// Reset the limiter
await limiter.reset()

// Get the total hits
const hits = await limiter.hits()

// Get the number of seconds until attempts are allowed again
const backoffSeconds = await limiter.retry()
```

In-memory cache storage (data is lost on restart):
```ts
{
  memory: {
    // No configuration needed
  }
}
```

Best for development and testing. Not suitable for production with multiple processes.
File-based cache storage:
```ts
{
  file: {
    directory: './storage/cache',
  }
}
```

Suitable for single-server deployments. Slower than memory, but persists across restarts.
Distributed Redis cache:
```ts
{
  redis: {
    host: 'localhost',
    port: 6379,
    password: 'secret',
    db: 0,
  }
}
```

Best for distributed systems and high-performance caching. Supports tagging and locking across processes.
Disables caching (useful for testing):
```ts
{
  null: {
    // No configuration
  }
}
```

All cache operations execute immediately without storing values.
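Beyond the built-in stores, `CacheManager.extend` accepts any object that satisfies the driver contract. The sketch below is illustrative only: the `CacheDriver` interface shown here is an assumption (the package's real contract may include more methods), and `MapCacheDriver` is a hypothetical name.

```typescript
// Assumed shape of the driver contract -- the real CacheDriver
// interface in @atlex/cache may differ.
interface CacheDriver {
  get(key: string): Promise<unknown | undefined>
  set(key: string, value: unknown, seconds?: number): Promise<void>
  forget(key: string): Promise<boolean>
  flush(): Promise<void>
}

// A minimal Map-backed driver with per-key TTL tracking.
class MapCacheDriver implements CacheDriver {
  private store = new Map<string, { value: unknown; expiresAt: number | null }>()

  async get(key: string): Promise<unknown | undefined> {
    const entry = this.store.get(key)
    if (!entry) return undefined
    if (entry.expiresAt !== null && Date.now() > entry.expiresAt) {
      this.store.delete(key) // lazily evict expired entries
      return undefined
    }
    return entry.value
  }

  async set(key: string, value: unknown, seconds?: number): Promise<void> {
    // No TTL means the entry never expires
    const expiresAt = seconds ? Date.now() + seconds * 1000 : null
    this.store.set(key, { value, expiresAt })
  }

  async forget(key: string): Promise<boolean> {
    return this.store.delete(key)
  }

  async flush(): Promise<void> {
    this.store.clear()
  }
}
```

Assuming the contract matches, such a driver could be registered with `CacheManager.extend('map', new MapCacheDriver())` and selected via `CacheManager.store('map')`.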
CacheManager:

```ts
// Get the default store
static store(name?: string): Repository

// Get a specific driver
static driver(name: string): CacheDriver

// Register a custom driver
static extend(name: string, driver: CacheDriver): void
```

Repository:

```ts
// Get a value from the cache
async get(key: string, defaultValue?: any): Promise<any>

// Set a value in the cache with a TTL (seconds)
async set(key: string, value: any, seconds?: number): Promise<void>

// Put is an alias for set
async put(key: string, value: any, seconds?: number): Promise<void>

// Add only if the key doesn't exist
async add(key: string, value: any, seconds?: number): Promise<boolean>

// Get, or cache the result of the callback
async remember(key: string, seconds: number, callback: () => Promise<any>): Promise<any>

// Get, or cache the result forever
async rememberForever(key: string, callback: () => Promise<any>): Promise<any>

// Get and cache, returning the cached value immediately
async sear(key: string, callback: () => Promise<any>): Promise<any>

// Get and delete
async pull(key: string, defaultValue?: any): Promise<any>

// Delete a key
async forget(key: string): Promise<boolean>

// Delete all keys
async flush(): Promise<void>

// Increment a numeric value
async increment(key: string, value: number = 1): Promise<number>

// Decrement a numeric value
async decrement(key: string, value: number = 1): Promise<number>

// Get a tagged cache instance
tags(tags: string[]): TaggedCache

// Get a lock
lock(name: string, seconds?: number, owner?: string): Lock

// Get a rate limiter
rateLimiter(name: string, options: RateLimiterOptions): RateLimiter
```

TaggedCache extends Repository with tag-scoped operations:
```ts
// All Repository methods are available, scoped to the tags

// Get a tagged value
async get(key: string): Promise<any>

// Set with tags
async set(key: string, value: any, seconds?: number): Promise<void>

// Flush all keys with these tags
async flush(): Promise<void>
```

Lock:

```ts
// Acquire the lock
async acquire(): Promise<boolean>

// Release the lock
async release(): Promise<void>

// Force release (dangerous)
async forceRelease(): Promise<void>

// Block until available and execute the callback
async block(seconds: number, callback: () => Promise<any>, options?: BlockOptions): Promise<any>
```

RateLimiter:

```ts
// Record a hit
async hit(): Promise<number>

// Get the number of hits
async hits(): Promise<number>

// Get the remaining attempts
async remaining(): Promise<number>

// Reset the limiter
async reset(): Promise<void>

// Get the retry backoff in seconds
async retry(): Promise<number>

// Check if the limit has been reached
async tooManyAttempts(): Promise<boolean>

// Clear the limiter
async clear(): Promise<void>
```

Caching database lookups, with invalidation on write:

```ts
export class UserRepository {
  async find(id: number): Promise<User> {
    return await Cache.remember(`user:${id}`, 3600, async () => {
      return await database.table('users').where('id', id).first()
    })
  }

  async update(id: number, data: any): Promise<void> {
    // Invalidate the cache when the record is updated
    await Cache.forget(`user:${id}`)
    await database.table('users').where('id', id).update(data)
  }
}
```

Tag-based invalidation of related entries:

```ts
export class PostRepository {
  async getByAuthor(authorId: number): Promise<Post[]> {
    return await Cache.tags(['post', `author:${authorId}`]).remember(
      `author:${authorId}:posts`,
      3600,
      async () => {
        return await Post.where('author_id', authorId).get()
      },
    )
  }

  async publishPost(post: Post): Promise<void> {
    // Save the post...

    // Invalidate all posts by this author
    await Cache.tags([`author:${post.authorId}`]).flush()
  }
}
```

Rate-limiting login attempts:

```ts
export async function handleLogin(request: Request): Promise<Response> {
  const email = request.body.email
  const limiter = Cache.store().rateLimiter(`login:${email}`, {
    maxAttempts: 5,
    decayMinutes: 15,
  })

  if (await limiter.tooManyAttempts()) {
    const retrySeconds = await limiter.retry()
    return response.status(429).json({
      error: `Too many attempts. Try again in ${retrySeconds} seconds.`,
    })
  }

  await limiter.hit()

  try {
    const user = await User.findByEmail(email)
    if (!user || !(await user.verifyPassword(request.body.password))) {
      return response.status(401).json({ error: 'Invalid credentials' })
    }

    await limiter.reset()
    return response.json({ token: user.generateToken() })
  } catch (error) {
    return response.status(500).json({ error: 'Login failed' })
  }
}
```

Preventing a cache stampede with a lock:

```ts
export async function expensiveQuery(): Promise<any> {
  const cacheKey = 'expensive:computation'
  const lock = Cache.store().lock(cacheKey, 30)

  // Check the cache first
  const cached = await Cache.get(cacheKey)
  if (cached) return cached

  // Block until the lock is available
  try {
    await lock.block(10, async () => {
      // Double-check the cache: another process may have filled it
      const rechecked = await Cache.get(cacheKey)
      if (rechecked) return rechecked

      // Do the expensive operation
      const result = await performExpensiveQuery()
      await Cache.set(cacheKey, result, 3600)
    })
  } catch (error) {
    // Lock timeout: return an error or use a fallback
    console.error('Cache lock timeout')
  }

  return await Cache.get(cacheKey)
}
```

Disable caching in tests:
```ts
// config/cache.test.ts
export const cacheConfig = {
  default: 'null',
  stores: {
    null: {},
  },
}
```

Assert that caching is used where expected:

```ts
import { Cache } from '@atlex/cache'
import { vi } from 'vitest'

test('caches user lookup', async () => {
  const rememberSpy = vi.spyOn(Cache, 'remember')

  await userRepository.find(1)

  expect(rememberSpy).toHaveBeenCalledWith('user:1', expect.any(Number), expect.any(Function))
})
```

```ts
interface CacheConfig {
  // Default cache store
  default: string

  // Key prefix for all cached values
  prefix: string

  // Store configurations
  stores: {
    memory?: {}
    file?: { directory: string }
    redis?: { host: string; port: number; password?: string; db?: number }
    null?: {}
  }
}
```

Best practices:

- Use Appropriate TTLs: Set reasonable expiration times based on data freshness requirements.
- Implement Cache Invalidation: Invalidate the cache when the underlying data changes.
- Prevent Stampedes: Use locks to stop multiple processes from computing the same expensive operation.
- Monitor Cache Hit Rates: Track cache performance in production.
- Use Tags for Related Data: Group related cache entries with tags for bulk operations.
- Handle Cache Misses: Always provide fallback values or async computation.
- Test with the Null Driver: Use the null driver in tests to ensure your code works without a cache.
- Separate Cache Layers: Use different stores for different data types (e.g., sessions in Redis, config in memory).
- Set a Prefix: Use a cache prefix to avoid key collisions in shared environments.
- Rate Limit Strategically: Apply rate limiters to expensive operations and user-facing APIs.
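One concrete way to apply the TTL advice above is to add random jitter to expiration times, so that keys written at the same moment do not all expire together and hit the backend at once. A small generic helper (not part of @atlex/cache):

```typescript
// Add up to `spread` (a fraction of the base TTL) of random jitter,
// so keys cached together expire at slightly different times.
// Generic helper -- not part of the @atlex/cache API.
function jitteredTtl(baseSeconds: number, spread = 0.1): number {
  const jitter = Math.random() * spread * baseSeconds
  return Math.round(baseSeconds + jitter)
}
```

It would be used wherever a TTL is passed, e.g. `await Cache.set('user:1', userData, jitteredTtl(3600))`.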
For more information and advanced usage, visit the Atlex documentation.
MIT © Karen Hamazaspyan
Part of Atlex — A modern framework for Node.js.