Conversation
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (3)

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ghost/core/core/server/adapters/lib/redis/AdapterCacheRedis.js`:
- Around line 234-245: The in-flight coalescing currently uses the external key,
causing a race where reset() changes the prefix and a pre-reset fetch writes into
the new generation; change the logic in AdapterCacheRedis to key
currentlyExecutingReads by the internal cache key produced by the lookup
(internalKey) instead of the external key: when you perform the lookup/get flow,
obtain the internalKey and use that internalKey as the map key for
currentlyExecutingReads before starting fetchData(), and when the fetchPromise
resolves call this.set(internalKey, data) (or the internal-key-aware writeback)
and delete the entry by internalKey in finally; update any places that join the
in-flight promise to do so by internalKey as well (refer to
currentlyExecutingReads, fetchData(), set(), get()/lookup/internalKey).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: e5c1b978-6c76-4f87-bb56-3127d4757a3e
📒 Files selected for processing (3)
- ghost/core/core/server/adapters/lib/redis/AdapterCacheRedis.js
- ghost/core/test/integration/adapters/redis/adapter-cache-redis.test.js
- ghost/core/test/unit/server/adapters/lib/redis/adapter-cache-redis.test.js
a3ac98b to 5425684
🧹 Nitpick comments (1)

ghost/core/core/server/adapters/lib/redis/AdapterCacheRedis.js (1)

234-259: Coalescing is skipped on the lookup-timeout / redis-failure path.

`internalKey` is `null` when `#lookupWithTimeout` times out or the underlying `cache.get` rejects (lines 187 and 195). In that branch, the miss handler neither consults nor populates `currentlyExecutingReads`, so concurrent callers each fire their own `fetchData()`, and the fallback `this.set(key, data)` writeback is not deduplicated either. That is precisely the scenario coalescing is most valuable for (Redis slow/unhealthy ⇒ origin thundering-herd), so it's worth calling out. The current behavior is safe (no cross-generation pollution, which was the reason `internalKey` scoping was introduced), so this is an optional improvement, not a blocker.

If you want to close the gap, you can compute an `internalKey` on demand in the miss branch (independently of the lookup result) and use that as the map key — e.g.:

♻️ Optional: coalesce even when lookup failed/timed out

```diff
-        } else {
-            if (internalKey && this.currentlyExecutingReads.has(internalKey)) {
-                return this.currentlyExecutingReads.get(internalKey);
-            }
-            const fetchPromise = fetchData().then(async (data) => {
-                if (internalKey) {
-                    try {
-                        debug('set', internalKey);
-                        await this.cache.set(internalKey, data);
-                    } catch (err) {
-                        logging.error(err);
-                    }
-                } else {
-                    await this.set(key, data);
-                }
-                return data;
-            }).catch((err) => {
-                logging.error(err);
-            }).finally(() => {
-                if (internalKey) {
-                    this.currentlyExecutingReads.delete(internalKey);
-                }
-            });
-            if (internalKey) {
-                this.currentlyExecutingReads.set(internalKey, fetchPromise);
-            }
-            return fetchPromise;
+        } else {
+            // Derive a coalescing key even if the lookup itself failed/timed out,
+            // so concurrent misses don't all stampede the origin.
+            let coalesceKey = internalKey;
+            if (!coalesceKey) {
+                try {
+                    coalesceKey = await this._buildKey(key);
+                } catch (e) {
+                    // If we can't build a key, fall through without coalescing.
+                }
+            }
+            if (coalesceKey && this.currentlyExecutingReads.has(coalesceKey)) {
+                return this.currentlyExecutingReads.get(coalesceKey);
+            }
+            const fetchPromise = fetchData().then(async (data) => {
+                if (coalesceKey) {
+                    try {
+                        debug('set', coalesceKey);
+                        await this.cache.set(coalesceKey, data);
+                    } catch (err) {
+                        logging.error(err);
+                    }
+                } else {
+                    await this.set(key, data);
+                }
+                return data;
+            }).catch((err) => {
+                logging.error(err);
+            }).finally(() => {
+                if (coalesceKey) {
+                    this.currentlyExecutingReads.delete(coalesceKey);
+                }
+            });
+            if (coalesceKey) {
+                this.currentlyExecutingReads.set(coalesceKey, fetchPromise);
+            }
+            return fetchPromise;
```

One small nit regardless of the above: the `.finally` deletes by key unconditionally. Since `internalKey` is generation-scoped this is safe today, but an identity guard (`if (this.currentlyExecutingReads.get(internalKey) === fetchPromise) …`) would be robust against any future change that re-seeds the entry mid-flight.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ghost/core/core/server/adapters/lib/redis/AdapterCacheRedis.js` around lines 234-259: The miss path skips coalescing when lookupWithTimeout/cache.get returns null by not using an internalKey, causing concurrent callers to each call fetchData() and write back via this.set; to fix, compute or derive an internalKey for the miss branch (e.g., based on key + generation) and use it when interacting with currentlyExecutingReads so concurrent misses share the same fetchPromise, ensure fetchPromise is stored with this.currentlyExecutingReads.set(internalKey, fetchPromise) for both hit and miss paths, and make the finally cleanup robust by only deleting if this.currentlyExecutingReads.get(internalKey) === fetchPromise to avoid accidentally removing a newer in-flight promise.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: d415b0b4-0a35-48a2-bfe0-7714b4553ed1
📒 Files selected for processing (3)
- ghost/core/core/server/adapters/lib/redis/AdapterCacheRedis.js
- ghost/core/test/integration/adapters/redis/adapter-cache-redis.test.js
- ghost/core/test/unit/server/adapters/lib/redis/adapter-cache-redis.test.js
🚧 Files skipped from review as they are similar to previous changes (1)
- ghost/core/test/unit/server/adapters/lib/redis/adapter-cache-redis.test.js
- When multiple requests hit the cache simultaneously for the same key and all encounter a miss, each one independently calls fetchData(), hammering the underlying data source with identical queries. This is the classic "thundering herd" problem.
- This adds a currentlyExecutingReads Map that stores the in-flight fetch promise on a cache miss. Subsequent callers for the same key reuse that promise instead of spawning redundant fetches. The promise is cleaned up via .finally() so the next request after completion goes through normally.
- The coalescing is scoped to cache misses only — cache hits continue to read from Redis independently, which is the fast path and doesn't benefit from serialization. The expensive part is fetchData() (the database or API call behind the cache), and that's what we protect.
- Unit tests cover: concurrent misses resolve from a single fetch, fetchData rejection propagates to all waiters, retry succeeds after a coalesced failure, and different keys fetch independently.
- Integration test verifies the same coalescing behavior against a real Redis instance.
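The miss-scoped coalescing pattern described above can be sketched with a minimal in-memory model. This is an illustrative stand-in, not the real adapter: `MiniCache`, its `Map`-backed store, and the demo `fetchData` are hypothetical names.

```javascript
// Minimal sketch of miss-scoped request coalescing. The Map stands in
// for Redis; only the miss branch consults the in-flight promise Map.
class MiniCache {
    constructor() {
        this.store = new Map();                   // stands in for Redis
        this.currentlyExecutingReads = new Map(); // in-flight fetches per key
    }

    async get(key, fetchData) {
        if (this.store.has(key)) {
            return this.store.get(key);           // hit: fast path, no coalescing
        }
        if (this.currentlyExecutingReads.has(key)) {
            return this.currentlyExecutingReads.get(key); // join in-flight fetch
        }
        const fetchPromise = fetchData().then((data) => {
            this.store.set(key, data);            // writeback on success
            return data;
        }).finally(() => {
            this.currentlyExecutingReads.delete(key); // free the key for retries
        });
        this.currentlyExecutingReads.set(key, fetchPromise);
        return fetchPromise;
    }
}

// Demo: three concurrent misses for the same key trigger exactly one fetch.
async function main() {
    const cache = new MiniCache();
    let calls = 0;
    const fetchData = () => new Promise((resolve) => {
        calls += 1;
        setTimeout(() => resolve('data'), 10);
    });
    const values = await Promise.all([
        cache.get('foo', fetchData),
        cache.get('foo', fetchData),
        cache.get('foo', fetchData)
    ]);
    return {calls, values};
}

main().then(({calls, values}) => {
    console.log(calls, values); // logs: 1 [ 'data', 'data', 'data' ]
});
```

Note the hit path returns straight from the store; serialization only kicks in where the expensive fetch would otherwise be duplicated.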
The coalescing Map was keyed by the external key (e.g. 'foo'),
and the writeback used this.set(key, data) which calls _buildKey
at write time to resolve the current prefix hash. This created a
race when reset() changed the prefix hash while a fetchData() was
in flight:
1. Caller A looks up 'foo', gets internalKey 'prefix:hashABC:foo',
misses, starts fetchData(), stores promise under 'foo'
2. reset() changes prefix hash from hashABC to hashXYZ
3. Caller B looks up 'foo', gets internalKey 'prefix:hashXYZ:foo',
misses, but finds the promise under 'foo' and joins it
4. Caller A's fetch resolves with stale data, this.set('foo', data)
re-derives the key using the NEW hash, writing stale data into
the new generation
Two bugs: Caller B incorrectly coalesces with a pre-reset fetch,
and the stale data pollutes the new cache generation.
Fix: key the coalescing Map by internalKey (which includes the
prefix hash) and write back directly to cache.set(internalKey, data)
using the key captured at lookup time. After a reset, a new caller
gets a different internalKey so it won't join the stale promise,
and the pre-reset write targets the old generation key (a harmless
orphan). When internalKey is null (timeout or Redis failure during
lookup), coalescing is skipped entirely — the degraded path doesn't
need this protection.
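The fix can be illustrated with a small model of prefix rotation. Assumptions: `GenerationCache`, the integer generation counter standing in for the prefix hash, and the plain `Map` store are simplifications of the real `_buildKey`/Redis machinery.

```javascript
// Sketch of generation-scoped coalescing and writeback: the internal key
// captured at lookup time is used for the in-flight Map AND the writeback.
class GenerationCache {
    constructor() {
        this.store = new Map();
        this.generation = 0; // stands in for the rotating prefix hash
        this.currentlyExecutingReads = new Map();
    }

    _buildKey(key) {
        return `prefix:${this.generation}:${key}`; // internal, generation-scoped
    }

    reset() {
        this.generation += 1; // old keys become invisible orphans
    }

    async get(key, fetchData) {
        const internalKey = this._buildKey(key);   // captured at lookup time
        if (this.store.has(internalKey)) {
            return this.store.get(internalKey);
        }
        if (this.currentlyExecutingReads.has(internalKey)) {
            return this.currentlyExecutingReads.get(internalKey);
        }
        const fetchPromise = fetchData().then((data) => {
            // Write back to the key captured BEFORE the fetch started, so a
            // reset() mid-flight cannot pollute the new generation.
            this.store.set(internalKey, data);
            return data;
        }).finally(() => {
            this.currentlyExecutingReads.delete(internalKey);
        });
        this.currentlyExecutingReads.set(internalKey, fetchPromise);
        return fetchPromise;
    }
}

async function main() {
    const cache = new GenerationCache();
    const slow = cache.get('foo', () => Promise.resolve('stale'));
    cache.reset(); // rotate the generation while the fetch is in flight
    // A post-reset caller gets a different internal key, so it does NOT
    // join the stale promise; it fetches fresh data for the new generation.
    const fresh = await cache.get('foo', () => Promise.resolve('fresh'));
    await slow;
    return {
        oldGen: cache.store.get('prefix:0:foo'), // harmless orphan
        newGen: cache.store.get('prefix:1:foo'), // fresh data only
        fresh
    };
}
```

Running `main()` shows the stale writeback landing only under the old generation's key (`prefix:0:foo`), while the new generation holds the fresh value.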
- Skip coalescing and Redis writeback when internalKey is null; just return fetchData() directly
- Return data to the caller immediately instead of waiting for the Redis SET to complete
c4dcff7 to a33f778
Problem

Concurrent requests for the same cache key that all encounter a miss each independently call `fetchData()`, causing a "thundering herd": N identical queries to the underlying data source when only 1 is needed. This adds request coalescing so that `fetchData()` runs exactly once per cache miss, and all concurrent callers share the same result.

What changed

- Added a `currentlyExecutingReads` Map to `AdapterCacheRedis` that tracks in-flight fetch promises per key
- On a miss, we store the `fetchData()` promise in the Map; subsequent callers for the same key return the existing promise instead of spawning a new fetch
- The promise is cleaned up via `.finally()` so the next request after completion goes through normally
- Errors from `fetchData()` are caught and logged within the promise chain (matching the existing error-swallowing behavior), so all concurrent callers receive `undefined` and the key is freed for retry on the next request

Why coalescing is scoped to cache misses only

An earlier approach (#19627) coalesced the entire `get()` flow — including the Redis read, TTL check, and refresh-ahead logic. This means even cache hits were serialized: if 10 requests arrived concurrently for a cached key, 9 of them would wait for the first one's full Redis round-trip + TTL check to complete before returning.

Cache hits are the fast path and don't benefit from serialization — Redis handles concurrent reads efficiently. The expensive part is `fetchData()` (the database or API call behind the cache), and that only runs on a miss. By scoping coalescing to the miss branch, `fetchData()` runs once and all waiters share the result, while cache hits keep their existing fast path through the `get()` method.

Why the Map is keyed by `internalKey` (not the external `key`)

The cache uses prefix-hash rotation for invalidation — `reset()` changes the prefix hash so all existing keys become invisible. If the coalescing Map were keyed by the external key (e.g. `'foo'`), a `reset()` during an in-flight `fetchData()` would cause two bugs:

- A new caller looks up `'foo'`, gets a new `internalKey` (`prefix:newHash:foo`), misses, but finds the pre-reset promise under `'foo'` and joins it instead of fetching fresh data
- The `this.set(key, data)` call re-derives the internal key at write time using the new prefix hash, so stale data from the pre-reset fetch lands in the new cache generation

Keying by `internalKey` (which includes the prefix hash) fixes both: after a reset, a new caller gets a different `internalKey` so it won't join the stale promise, and the pre-reset writeback targets the old generation key directly via `this.cache.set(internalKey, data)` — a harmless orphan that nobody will read.

When `internalKey` is null (timeout or Redis failure during lookup), coalescing is skipped entirely — the degraded path doesn't need this protection.

Test plan

Unit tests (`adapter-cache-redis.test.js`):

- Concurrent misses call `fetchData` only once; all callers receive the value
- `fetchData` rejection propagates to all concurrent callers (all get `undefined`, error logged once)
- Behavior across a `prefix_hash` cycle (`reset`)

Integration test (`adapter-cache-redis.test.js`):

- Against a real Redis instance, concurrent misses call `fetchData` only once; a subsequent cached read also works