Laravel caching, redesigned around normalized models.
Most cache packages store full query results. Normcache stores IDs and models separately, then reconstructs results at read time — the same way normalized frontend stores (Redux, Apollo) work. This makes invalidation instant and storage efficient, regardless of how many different queries touch the same model.
```php
// Traditional: cache the full result set
User::where('active', true)->get(); // → cache full User objects

// When any user changes, you must invalidate this key.
// But you also have:
User::where('role', 'admin')->get();
User::where('country', 'AU')->get();
User::orderBy('created_at')->paginate(20);
// ...and dozens more query shapes, all stale.
```
Tracking which cache keys to invalidate becomes a dependency graph problem. Most packages solve this with tags or scans — both expensive at scale.
```text
// Layer 1 — query cache (stores only IDs, versioned and model-scoped)
User::where('active', true)->get()
  → query:user:v14:a3f9... = [1, 5, 9, 22]

// Layer 2 — model cache (stores model attributes by PK)
  → model:user:1 = { id: 1, name: "Kai", ... }
  → model:user:5 = { id: 5, name: "Alice", ... }
  → ...
```
The same model entry `model:user:5` is reused across every query that includes user 5. There is no duplication.
Each model class has a Redis version counter:

```text
ver:user = 14
```
When a query is cached, both the model name and its current version are embedded in the key:

```text
query:user:v14:a3f9... → [1, 5, 9]   ← User query, version from ver:user
```
When any user is written:

```text
INCR ver:user → 15
```
All User query keys (v14) are now permanently bypassed. Stale keys expire naturally; the next User query writes fresh v15 keys.
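The mechanics can be shown with a minimal in-memory sketch; a plain array stands in for Redis, the key format follows the examples above, and `queryKey` is an illustrative helper, not part of Normcache's API.

```php
<?php
// Versioned query keys: bumping one counter makes every old key unreachable.
$store = ['ver:user' => 14];

// Build a query key from the model's *current* version counter.
function queryKey(array $store, string $model, string $hash): string {
    return "query:{$model}:v{$store["ver:$model"]}:{$hash}";
}

// Cache an ID list under the current version (v14)
$store[queryKey($store, 'user', 'a3f9')] = [1, 5, 9];

// A write bumps the counter: one O(1) INCR, no key scanning
$store['ver:user']++;

// Lookups now build v15 keys, so every v14 key is simply never read again
$hit = isset($store[queryKey($store, 'user', 'a3f9')]); // miss → fresh v15 key gets written
```

The stale `v14` entry is still physically present until its TTL expires; it is unreachable rather than deleted, which is what makes invalidation O(1).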
```text
┌─────────────────────────────────────────────────────┐
│ User::where('active', true)->get()                  │
│                                                     │
│ 1. Check query:user:v15:a3f9...  → cache miss       │
│ 2. SELECT id FROM users WHERE active = 1            │
│    → [1, 5, 9]                                      │
│ 3. MGET model:user:1, model:user:5, model:user:9    │
│    → hits: [1, 5]  misses: [9]                      │
│ 4. SELECT * FROM users WHERE id IN (9)  (miss only) │
│ 5. Return hydrated collection                       │
└─────────────────────────────────────────────────────┘
```
Individual model entries are reused across all query shapes. A cache hit on `model:user:5` serves every query that includes user 5, regardless of how the query was structured.
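The read path in the diagram can be sketched in a few lines of plain PHP; in-memory arrays stand in for Redis (`MGET`) and the database, and all names are illustrative rather than Normcache internals.

```php
<?php
// Read path: take IDs from the query layer, batch-read the model layer,
// fetch only the misses from the "database", and hydrate in order.
$cache = [
    'model:user:1' => ['id' => 1, 'name' => 'Kai'],
    'model:user:5' => ['id' => 5, 'name' => 'Alice'],
    // model:user:9 is absent → a model-layer miss
];
$db = [9 => ['id' => 9, 'name' => 'Noor']]; // row the DB would return for the miss

$ids = [1, 5, 9]; // from the query layer (or SELECT id ... on a query miss)

$hydrated = [];
foreach ($ids as $id) {
    $key = "model:user:$id";
    if (!isset($cache[$key])) {
        $cache[$key] = $db[$id]; // fetch the miss and warm the model layer
    }
    $hydrated[] = $cache[$key]; // result preserves the query's original order
}
```

After this pass, `model:user:9` is warm, so the next query touching user 9 — whatever its shape — hits the model layer directly.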
- PHP 8.2+
- Laravel 11+
- Redis (PhpRedis or Predis)
```shell
composer require kai-init/laravel-normcache
```

Publish the config:
```shell
php artisan vendor:publish --tag=normcache-config
```

Add the `NormCacheable` trait to any Eloquent model you want cached:
```php
use Illuminate\Database\Eloquent\Model;
use NormCache\Traits\NormCacheable;

class User extends Model
{
    use NormCacheable;
}
```

That's it. All queries on that model now go through the two-layer cache automatically.
```php
// Cached automatically
User::all();
User::where('active', true)->get();
User::find(1);
User::paginate(20);
User::cursorPaginate(20);
```

Bypass the cache for a single query:

```php
User::withoutCache()->get();
```

Override the TTL for a single query:

```php
// Cache this result for 10 minutes regardless of global TTL
User::query()->remember(600)->get();
```

Opt in to caching aggregates:

```php
// withCount, withSum, withAvg, withMin, withMax, withExists
User::cacheAggregates()->withCount('posts')->get();
```

Flush from the command line:

```shell
# Flush a specific model
php artisan normcache:flush --model="App\Models\User"

# Flush everything
php artisan normcache:flush
```

Or programmatically:
```php
use NormCache\Facades\NormCache;

NormCache::flushModel(User::class);
NormCache::flushAll();
```

Normcache fires events on every cache operation, with zero overhead when no listeners are registered:
```php
use NormCache\Events\QueryCacheHit;
use NormCache\Events\QueryCacheMiss;
use NormCache\Events\ModelCacheHit;
use NormCache\Events\ModelCacheMiss;

// Wire into Pulse, Telescope, StatsD, or a simple log
Event::listen(QueryCacheMiss::class, function (QueryCacheMiss $e) {
    Log::debug("Query miss: {$e->modelClass}", ['key' => $e->key]);
});

Event::listen(ModelCacheMiss::class, function (ModelCacheMiss $e) {
    Pulse::record('model_cache_miss', $e->modelClass, count($e->ids));
});
```

| Event | Fired when | Properties |
|---|---|---|
| `QueryCacheHit` | Query ID list served from Redis | `modelClass`, `key` |
| `QueryCacheMiss` | ID list not cached — DB queried | `modelClass`, `key` |
| `ModelCacheHit` | Model attributes served from Redis | `modelClass`, `ids[]` |
| `ModelCacheMiss` | Attributes not cached — DB queried | `modelClass`, `ids[]` |
```php
// config/normcache.php
return [
    'connection' => env('NORMCACHE_CONNECTION', 'cache'),
    'enabled'    => env('NORMCACHE_ENABLED', true),
    'ttl'        => env('NORMCACHE_TTL', 604800),     // model keys: 7 days
    'query_ttl'  => env('NORMCACHE_QUERY_TTL', 3600), // query keys: 1 hour
    'key_prefix' => env('NORMCACHE_PREFIX', ''),
    'cooldown'   => env('NORMCACHE_COOLDOWN', 0),     // debounce rapid writes (seconds)
];
```

`cooldown`: when set, consecutive writes to the same model within the cooldown window bump the version only once. Useful for write-heavy models where you want to avoid thrashing the version counter.
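The debounce behaviour can be sketched as follows; this mirrors the described semantics with timestamps passed in for determinism, and is not Normcache's actual implementation.

```php
<?php
// Cooldown debounce: within the window, repeated writes to the same
// model bump the version counter only once.
function bumpVersion(array &$store, string $model, int $now, int $cooldown): void {
    $last = $store["bumped:$model"] ?? PHP_INT_MIN;
    if ($now - $last < $cooldown) {
        return; // still inside the window: the earlier bump covers this write
    }
    $store["ver:$model"] = ($store["ver:$model"] ?? 0) + 1;
    $store["bumped:$model"] = $now;
}

$store = ['ver:user' => 14];
bumpVersion($store, 'user', 100, 5); // bumps to 15
bumpVersion($store, 'user', 102, 5); // inside the 5s window: still 15
bumpVersion($store, 'user', 106, 5); // window elapsed: bumps to 16
```

The trade-off is staleness bounded by the cooldown: reads inside the window may serve results that predate the most recent write.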
Normcache caches queries it can fully reconstruct from a list of primary keys. Queries with joins, GROUP BY, HAVING, UNION, raw ORDER BY, aggregate functions (unless opted-in), or pessimistic locks bypass the cache automatically and hit the database directly — no configuration needed.
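The bypass decision reduces to one question: can the result be rebuilt from a list of primary keys alone? A hypothetical sketch of that check, using illustrative clause names rather than Normcache's real internals:

```php
<?php
// A query is cacheable only if none of its clauses require data that
// lives outside the model rows themselves (joins, grouping, raw ordering, ...).
function isCacheable(array $clauses): bool {
    $unsupported = ['join', 'groupBy', 'having', 'union', 'orderByRaw', 'aggregate', 'lock'];
    return array_intersect($clauses, $unsupported) === [];
}
```

A `join` or `GROUP BY` changes the row shape, so an ID list can no longer reconstruct the result; such queries go straight to the database.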
Invalidations that happen inside a database transaction are deferred until the transaction commits. If the transaction rolls back, the cache is untouched — the version counter is never bumped and no model keys are evicted.
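A minimal sketch of commit-deferred invalidation, assuming pending bumps are simply queued during the transaction (class and method names are illustrative):

```php
<?php
// Version bumps requested inside a transaction are queued; commit applies
// them, rollback discards them, so the cache never reflects rolled-back writes.
class DeferredInvalidator {
    private array $versions = [];
    private array $pending = [];
    private bool $inTransaction = false;

    public function begin(): void { $this->inTransaction = true; }

    public function bump(string $model): void {
        if ($this->inTransaction) {
            $this->pending[$model] = true; // defer until commit
            return;
        }
        $this->versions[$model] = ($this->versions[$model] ?? 0) + 1;
    }

    public function commit(): void {
        $this->inTransaction = false;
        foreach (array_keys($this->pending) as $model) {
            $this->bump($model); // apply deferred bumps now
        }
        $this->pending = [];
    }

    public function rollback(): void {
        $this->pending = [];        // discard: version counters stay untouched
        $this->inTransaction = false;
    }

    public function version(string $model): int { return $this->versions[$model] ?? 0; }
}
```

Deferring also avoids a race: bumping mid-transaction would let a concurrent reader cache data the transaction is about to change.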
- Invalidation is O(1): one `INCR` on a version key, regardless of how many cached queries exist for that model.
- Bulk reads use `MGET`: all model keys for a result set are fetched in a single Redis round-trip.
- Writes use pipelining: cache warm-up for missed model keys is batched in one pipeline call.
- Bulk deletes use `UNLINK`: non-blocking async deletion (Redis 4.0+) with 1000-key chunking.
- No cache scanning on invalidation: the version shift makes stale keys unreachable without touching them.
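The chunked bulk delete can be sketched like this; only the batching logic is shown (a callback stands in for the Redis client, and the helper name is illustrative):

```php
<?php
// Split keys into batches of 1000 and issue one UNLINK per batch, so no
// single command blocks on a huge argument list.
function unlinkInChunks(array $keys, callable $unlink, int $chunkSize = 1000): int {
    $calls = 0;
    foreach (array_chunk($keys, $chunkSize) as $chunk) {
        $unlink($chunk); // e.g. $redis->unlink(...$chunk) with PhpRedis
        $calls++;
    }
    return $calls;
}

$keys  = array_map(fn ($i) => "model:user:$i", range(1, 2500));
$calls = unlinkInChunks($keys, function (array $chunk) {});
// 2500 keys → 3 UNLINK calls (1000 + 1000 + 500)
```

`UNLINK` unlinks the keys from the keyspace immediately and reclaims memory on a background thread, so even large flushes don't stall the Redis event loop.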
MIT