[Segment Cache] Always upsert on prefetch completion #91488

acdlite merged 1 commit into vercel:canary
Failing test suites (Commit: 0de21b5)

- instant-nav-panel › should show client nav state after clicking Start and navigating
- instant-nav-panel › should show loading skeleton during SPA navigation after clicking Start
- Client navigation with URL hash › when hash changes with state › when passing state via hash change › should increment the shallow history state counter
**Current:**
1. #91487

**Up next:**
2. #91488
3. #89297

---

Prefetch responses include metadata (in the Flight stream sense, not HTML document metadata) that describes properties of the overall response — things like the stale time and the set of params that were accessed during rendering. Conceptually these are like late HTTP headers: information that's only known once the response is complete. Since we can't rely on actual HTTP late headers being supported everywhere, we encode this metadata in the body of the Flight response.

The mechanism works by including an unresolved thenable in the Flight payload, then resolving it just before closing the stream. On the client, after the stream is fully received, we unwrap the thenable synchronously. This synchronous unwrap relies on the assumption that the server resolved the thenable before closing the stream.

The server already buffers prefetch responses before sending them, so the resolved thenable data is always present in the response. However, HTTP chunking in the browser layer can introduce taskiness when processing the response, which could prevent Flight from decoding the full payload synchronously. The existing code includes fallback behavior for this case (e.g. treating the vary params as unknown), so this doesn't fix a semantic issue — it strengthens the guarantee so that the fallback path is never reached.

To do this, we buffer the full response on the client and concatenate it into a single chunk before passing it to Flight. A single chunk is necessary because Flight's `processBinaryChunk` processes all rows synchronously within one call. Multiple chunks would not be sufficient even if pre-enqueued: the `await` continuation from `createFromReadableStream` can interleave between chunks, causing promise value rows to be processed after the root model initializes, which leaves thenables in a pending state.
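The client-side buffering step can be sketched as a small standalone helper. This is a minimal illustration, not the PR's actual code; the name `bufferToSingleChunk` is assumed for the example:

```typescript
// Sketch: drain an entire ReadableStream of bytes, then concatenate the
// chunks into one contiguous Uint8Array. Handing a single chunk to the
// Flight decoder lets it process every row in one synchronous pass.
async function bufferToSingleChunk(
  stream: ReadableStream<Uint8Array>
): Promise<Uint8Array> {
  const chunks: Uint8Array[] = []
  let totalLength = 0
  const reader = stream.getReader()
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    chunks.push(value)
    totalLength += value.byteLength
  }
  // Copy each chunk into a single buffer at its running offset.
  const result = new Uint8Array(totalLength)
  let offset = 0
  for (const chunk of chunks) {
    result.set(chunk, offset)
    offset += chunk.byteLength
  }
  return result
}
```

The concatenated buffer would then be wrapped in a fresh single-chunk stream (or passed directly) so the decoder never observes a task boundary between rows.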
Since the server already buffers these responses and they complete during a prefetch (not during a navigation), this is not a performance consideration. Full (dynamic) prefetches are not affected by this change. These are streaming responses — even though they are cached, they are a special case where dynamic data is treated as if it were cached. They don't need to be buffered on either the server or the client the way normal cached responses are.
When a prefetch response includes vary params, the segment cache rekeys entries to a more generic path based on which params the segment actually depends on. Previously, the rekeying only happened when vary params were provided. Now that vary params are tracked for more response types (and eventually will always be tracked), entries are rekeyed in more cases than before.

This exposed a potential race condition: the scheduler would capture a vary path at scheduling time and upsert the entry at that path when the fetch completed. But the fetch functions themselves rekey entries to a different (more generic) path upon fulfillment. The deferred upsert could then move the entry back to the less generic path, undoing the rekeying.

To fix this, move the upsert logic inline into the fetch functions that fulfill entries, rather than deferring it to an external callback. This removes the race condition, simplifies the model, and reduces implementation complexity. The previous structure existed to avoid the rekeying cost when vary params weren't available, but rekeying is inexpensive and not worth the added indirection.

The upsert function itself already handles concurrent writes by comparing fetch strategies and checking whether the new entry provides more complete data than any existing entry. So it's safe to always call it inline — whichever entry wins will be the most complete one.
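The "always upsert, let the most complete entry win" rule can be illustrated with a toy version of such an upsert. All names and fields here are hypothetical, chosen only to show why calling the function unconditionally on every fulfillment is safe under concurrent fetches:

```typescript
// Hypothetical cache entry: a fetch strategy rank plus a completeness flag.
type CacheEntry = {
  key: string
  isPartial: boolean
  fetchPriority: number // higher = produced by a more complete fetch strategy
}

// Upsert that never lets a less complete entry overwrite a more complete
// one, so callers can invoke it inline on every fulfillment.
function upsertEntry(
  cache: Map<string, CacheEntry>,
  entry: CacheEntry
): CacheEntry {
  const existing = cache.get(entry.key)
  if (
    existing !== undefined &&
    (existing.fetchPriority > entry.fetchPriority ||
      (!existing.isPartial && entry.isPartial))
  ) {
    // The entry already in the cache is at least as complete; keep it.
    return existing
  }
  cache.set(entry.key, entry)
  return entry
}
```

Because the comparison lives inside the upsert itself, the ordering of concurrent completions no longer matters, which is what makes removing the deferred scheduler callback safe.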
**Previous:**
1. #91487
2. #91488

**Current:**
3. #89297

---

Most of the vary params infrastructure was already implemented in previous PRs for static prerenders. This wires up the remaining pieces for runtime prefetches — creating the accumulator, setting it on the prerender store, and resolving it before abort — and adds additional test cases covering empty/full vary sets, searchParams, metadata, and per-segment layout/page splits with runtime prefetching.
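The accumulator lifecycle described above (create, set on the store, resolve before abort) could look roughly like the following. Every name in this sketch is illustrative rather than the actual Next.js API:

```typescript
// Hypothetical accumulator for params accessed during a runtime prefetch
// render: a mutable set plus a deferred promise that is resolved with the
// final set just before the render is aborted.
type VaryParamsAccumulator = {
  accessed: Set<string>
  promise: Promise<Set<string>>
  resolve: (params: Set<string>) => void
}

function createVaryParamsAccumulator(): VaryParamsAccumulator {
  const accessed = new Set<string>()
  let resolve!: (params: Set<string>) => void
  const promise = new Promise<Set<string>>((r) => {
    resolve = r
  })
  return { accessed, promise, resolve }
}

// During rendering, each param read records its name on the accumulator
// (which would be reached through the prerender store).
function trackParamAccess(acc: VaryParamsAccumulator, name: string): void {
  acc.accessed.add(name)
}

// Just before aborting the prerender, resolve the deferred with the final
// set so the metadata row can be emitted before the stream closes.
function resolveBeforeAbort(acc: VaryParamsAccumulator): void {
  acc.resolve(acc.accessed)
}
```

Resolving before abort is what upholds the invariant from the earlier PR in the stack: the thenable carrying the vary params is always settled by the time the stream closes.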
Merging this PR will not alter performance