fix revalidation/refresh behavior with parallel routes #63607
Conversation
This stack of pull requests is managed by Graphite.
Failing test suites (commit: 42e6cc6)

● AMP Validation on Export › production mode › should have shown errors during build

Read more about building and testing Next.js in contributing.md.
Stats from current PR

Default Build (Increase detected)
| | vercel/next.js canary | vercel/next.js 03-21-simply_applyRouterStatePatchToTree_handling | Change |
|---|---|---|---|
buildDuration | 14s | 14.1s | N/A |
buildDurationCached | 7.5s | 6.4s | N/A |
nodeModulesSize | 198 MB | 198 MB | |
nextStartRea..uration (ms) | 403ms | 418ms | N/A |
Client Bundles (main, webpack) Overall increase ⚠️
| | vercel/next.js canary | vercel/next.js 03-21-simply_applyRouterStatePatchToTree_handling | Change |
|---|---|---|---|
2453-HASH.js gzip | 31 kB | 31.3 kB | |
3304.HASH.js gzip | 181 B | 181 B | ✓ |
3f784ff6-HASH.js gzip | 53.7 kB | 53.7 kB | ✓ |
8299-HASH.js gzip | 5.04 kB | 5.04 kB | N/A |
framework-HASH.js gzip | 45.2 kB | 45.2 kB | ✓ |
main-app-HASH.js gzip | 241 B | 240 B | N/A |
main-HASH.js gzip | 32.2 kB | 32.2 kB | N/A |
webpack-HASH.js gzip | 1.68 kB | 1.68 kB | N/A |
Overall change | 130 kB | 130 kB |
Legacy Client Bundles (polyfills)
| | vercel/next.js canary | vercel/next.js 03-21-simply_applyRouterStatePatchToTree_handling | Change |
|---|---|---|---|
polyfills-HASH.js gzip | 31 kB | 31 kB | ✓ |
Overall change | 31 kB | 31 kB | ✓ |
Client Pages
| | vercel/next.js canary | vercel/next.js 03-21-simply_applyRouterStatePatchToTree_handling | Change |
|---|---|---|---|
_app-HASH.js gzip | 196 B | 197 B | N/A |
_error-HASH.js gzip | 184 B | 184 B | ✓ |
amp-HASH.js gzip | 505 B | 505 B | ✓ |
css-HASH.js gzip | 324 B | 325 B | N/A |
dynamic-HASH.js gzip | 2.5 kB | 2.5 kB | N/A |
edge-ssr-HASH.js gzip | 258 B | 258 B | ✓ |
head-HASH.js gzip | 352 B | 352 B | ✓ |
hooks-HASH.js gzip | 370 B | 371 B | N/A |
image-HASH.js gzip | 4.21 kB | 4.21 kB | ✓ |
index-HASH.js gzip | 259 B | 259 B | ✓ |
link-HASH.js gzip | 2.67 kB | 2.67 kB | N/A |
routerDirect..HASH.js gzip | 314 B | 312 B | N/A |
script-HASH.js gzip | 386 B | 386 B | ✓ |
withRouter-HASH.js gzip | 309 B | 309 B | ✓ |
1afbb74e6ecf..834.css gzip | 106 B | 106 B | ✓ |
Overall change | 6.57 kB | 6.57 kB | ✓ |
Client Build Manifests
| | vercel/next.js canary | vercel/next.js 03-21-simply_applyRouterStatePatchToTree_handling | Change |
|---|---|---|---|
_buildManifest.js gzip | 481 B | 484 B | N/A |
Overall change | 0 B | 0 B | ✓ |
Rendered Page Sizes
| | vercel/next.js canary | vercel/next.js 03-21-simply_applyRouterStatePatchToTree_handling | Change |
|---|---|---|---|
index.html gzip | 530 B | 529 B | N/A |
link.html gzip | 541 B | 541 B | ✓ |
withRouter.html gzip | 524 B | 524 B | ✓ |
Overall change | 1.06 kB | 1.06 kB | ✓ |
Edge SSR bundle Size
| | vercel/next.js canary | vercel/next.js 03-21-simply_applyRouterStatePatchToTree_handling | Change |
|---|---|---|---|
edge-ssr.js gzip | 95.3 kB | 95.3 kB | N/A |
page.js gzip | 3.04 kB | 3.04 kB | ✓ |
Overall change | 3.04 kB | 3.04 kB | ✓ |
Middleware size
| | vercel/next.js canary | vercel/next.js 03-21-simply_applyRouterStatePatchToTree_handling | Change |
|---|---|---|---|
middleware-b..fest.js gzip | 625 B | 624 B | N/A |
middleware-r..fest.js gzip | 151 B | 151 B | ✓ |
middleware.js gzip | 25.5 kB | 25.5 kB | N/A |
edge-runtime..pack.js gzip | 839 B | 839 B | ✓ |
Overall change | 990 B | 990 B | ✓ |
Next Runtimes
| | vercel/next.js canary | vercel/next.js 03-21-simply_applyRouterStatePatchToTree_handling | Change |
|---|---|---|---|
app-page-exp...dev.js gzip | 170 kB | 170 kB | N/A |
app-page-exp..prod.js gzip | 97 kB | 97 kB | N/A |
app-page-tur..prod.js gzip | 98.7 kB | 98.7 kB | N/A |
app-page-tur..prod.js gzip | 93 kB | 93 kB | N/A |
app-page.run...dev.js gzip | 144 kB | 144 kB | N/A |
app-page.run..prod.js gzip | 91.5 kB | 91.5 kB | N/A |
app-route-ex...dev.js gzip | 21.4 kB | 21.4 kB | ✓ |
app-route-ex..prod.js gzip | 15.1 kB | 15.1 kB | ✓ |
app-route-tu..prod.js gzip | 15.1 kB | 15.1 kB | ✓ |
app-route-tu..prod.js gzip | 14.8 kB | 14.8 kB | ✓ |
app-route.ru...dev.js gzip | 21 kB | 21 kB | ✓ |
app-route.ru..prod.js gzip | 14.8 kB | 14.8 kB | ✓ |
pages-api-tu..prod.js gzip | 9.55 kB | 9.55 kB | ✓ |
pages-api.ru...dev.js gzip | 9.82 kB | 9.82 kB | ✓ |
pages-api.ru..prod.js gzip | 9.55 kB | 9.55 kB | ✓ |
pages-turbo...prod.js gzip | 22.5 kB | 22.5 kB | ✓ |
pages.runtim...dev.js gzip | 23.1 kB | 23.1 kB | ✓ |
pages.runtim..prod.js gzip | 22.4 kB | 22.4 kB | ✓ |
server.runti..prod.js gzip | 51 kB | 51 kB | ✓ |
Overall change | 250 kB | 250 kB | ✓ |
build cache Overall increase ⚠️
| | vercel/next.js canary | vercel/next.js 03-21-simply_applyRouterStatePatchToTree_handling | Change |
|---|---|---|---|
0.pack gzip | 1.57 MB | 1.58 MB | |
index.pack gzip | 106 kB | 105 kB | N/A |
Overall change | 1.57 MB | 1.58 MB |
Diff details
Diff for middleware.js: too large to display
Diff for 2453-HASH.js: too large to display
Diff for app-page-exp..ntime.dev.js: too large to display
Diff for app-page-exp..time.prod.js: too large to display
Diff for app-page-tur..time.prod.js: too large to display
Diff for app-page-tur..time.prod.js: too large to display
Diff for app-page.runtime.dev.js: too large to display
Diff for app-page.runtime.prod.js: too large to display
Force-push history:
ac74fa8 → 6a3ac1f
4d248ba → 22b1baf
6a3ac1f → 41fcbf3
22b1baf → d2ff9d6
41fcbf3 → fed2c5f
cf2489c → 9a4f4a4
```diff
  // TODO-APP: remove ''
  [''],
  currentTree,
- treePatch
+ treePatch,
+ location.pathname
```
Can using the global `window.location` cause any weird bugs here? Like race conditions with pending navigations or something?
Good question -- I don't think so, since actions are processed sequentially; the pending navigation would have had to occur before this reducer had a chance to run.

But it might be safer to use `state.canonicalUrl` instead, since that's meant to be the same value. The only reason I didn't is because it includes `location.search`, and I want to store only the pathname for these references. So I could probably do something like `new URL(state.canonicalUrl, location.origin).pathname` -- or maybe just `state.canonicalUrl.split('?')[0]`. Just felt kinda ugly either way 😆
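A quick standalone sketch of the two pathname-extraction options mentioned above. `canonicalUrl` stands in for the router's `state.canonicalUrl`, and a dummy origin replaces `location.origin` so this runs outside the browser:

```typescript
// A stand-in for `state.canonicalUrl`, which may include a search string.
const canonicalUrl = '/dashboard/settings?tab=profile'

// Option 1: let the URL parser do the work. The base origin is only needed
// to satisfy the URL constructor for a relative path; it doesn't affect
// the resulting pathname.
const viaUrl = new URL(canonicalUrl, 'http://localhost').pathname

// Option 2: drop everything after the first '?'.
const viaSplit = canonicalUrl.split('?')[0]

console.log(viaUrl)   // '/dashboard/settings'
console.log(viaSplit) // '/dashboard/settings'
```

One subtle difference: the `URL`-based version also strips a `#fragment`, while the `split('?')` version would keep it; that only matters if the canonical URL can ever carry a hash.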
### What

When a parallel segment in the current router tree is no longer "active" during a soft navigation (i.e., it no longer matches a page component on the particular route), it remains on-screen until the page is refreshed, at which point it would switch to rendering the `default.tsx` component. However, when revalidating the router cache via `router.refresh`, or when a server action finishes & refreshes the router cache, this would trigger the "hard refresh" behavior. This had the unintended consequence of triggering a 404 (the default behavior of `default.tsx`) or making inactive segments disappear unexpectedly.

### Why

When the router cache is refreshed, it currently fetches new data for the page by fetching from the current URL the user is on. This means that the server will never respond with the data it needs if the segment wasn't "activated" via the URL we're fetching from, as it came from someplace else. Instead, the server will give us data for the `default.tsx` component, which we don't want to render when doing a soft refresh.

### How

This updates the `FlightRouterState` to encode information about the URL that caused the segment to become active. That way, when some sort of revalidation event takes place, we can both refresh the data for the current URL (existing handling) and recursively refetch segment data for anything that is still present in the tree but requires fetching from a different URL. We patch this new data into the tree before committing the final `CacheNode` to the router.

**Note**: I re-used the existing `refresh` and `url` arguments in `FlightRouterState` to avoid introducing more options to a data structure that is already a bit tricky to work with. Initially I was going to re-use `"refetch"` as-is, which seemed to work ok, but I'm worried about the potential implications, considering they have different semantics. In an abundance of caution, I added a new marker type (`"refresh"`; alternative suggestions welcome).

This has some trade-offs: namely, if there are a lot of different segments in this stale state that require data from different URLs, the refresh is going to be blocked while we fetch all of them. Having to do a separate round-trip for each of these segments could be expensive. In an ideal world, we'd be able to enumerate the segments we want to refetch and where they came from, so it could be handled in a single round-trip. There are some ideas on how to improve per-segment fetching that are out of scope for this PR. However, due to the implicit contract that `middleware.ts` creates with URLs, we still need to identify these resources by URLs.

Fixes #60815
Fixes #60950
Fixes #51711
Fixes #51714
Fixes #58715
Fixes #60948
Fixes #62213
Fixes #61341
Closes NEXT-1845
Closes NEXT-2030
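The inactive-segment scenario described above can be pictured with a minimal parallel-routes layout. This structure is hypothetical (the slot name and paths are illustrative, not taken from the PR):

```
app/
├── layout.tsx             // renders both `children` and the `@modal` slot
├── page.tsx
├── @modal/
│   ├── default.tsx        // rendered when the slot is inactive (404s by default)
│   └── photos/
│       └── [id]/
│           └── page.tsx   // activates the slot when navigating to e.g. /photos/1
└── photos/
    └── [id]/
        └── page.tsx
```

After a soft navigation away from `/photos/1`, the `@modal` slot stays on-screen even though the current URL no longer matches it; a `router.refresh` fetched from the current URL would only ever get `default.tsx` back for that slot.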
Force-pushed from 649c362 to 42e6cc6
(Original PR #63608 was erroneously merged into this branch. As a result, I've copied the notes from it below.)
What

When a parallel segment in the current router tree is no longer "active" during a soft navigation (i.e., it no longer matches a page component on the particular route), it remains on-screen until the page is refreshed, at which point it would switch to rendering the `default.tsx` component. However, when revalidating the router cache via `router.refresh`, or when a server action finishes & refreshes the router cache, this would trigger the "hard refresh" behavior. This had the unintended consequence of triggering a 404 (the default behavior of `default.tsx`) or making inactive segments disappear unexpectedly.

Why

When the router cache is refreshed, it currently fetches new data for the page by fetching from the current URL the user is on. This means that the server will never respond with the data it needs if the segment wasn't "activated" via the URL we're fetching from, as it came from someplace else. Instead, the server will give us data for the `default.tsx` component, which we don't want to render when doing a soft refresh.

How

This updates the `FlightRouterState` to encode information about the URL that caused the segment to become active. That way, when some sort of revalidation event takes place, we can both refresh the data for the current URL (existing handling) and recursively refetch segment data for anything that is still present in the tree but requires fetching from a different URL. We patch this new data into the tree before committing the final `CacheNode` to the router.

Note: I re-used the existing `refresh` and `url` arguments in `FlightRouterState` to avoid introducing more options to a data structure that is already a bit tricky to work with. Initially I was going to re-use `"refetch"` as-is, which seemed to work ok, but I'm worried about the potential implications, considering they have different semantics. In an abundance of caution, I added a new marker type (`"refresh"`; alternative suggestions welcome).

This has some trade-offs: namely, if there are a lot of different segments in this stale state that require data from different URLs, the refresh is going to be blocked while we fetch all of them. Having to do a separate round-trip for each of these segments could be expensive. In an ideal world, we'd be able to enumerate the segments we want to refetch and where they came from, so it could be handled in a single round-trip. There are some ideas on how to improve per-segment fetching that are out of scope for this PR. However, due to the implicit contract that `middleware.ts` creates with URLs, we still need to identify these resources by URLs.
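The recursive-refetch idea above can be sketched roughly as follows. This is an illustrative approximation, not the actual Next.js implementation: the tuple shape mimics `FlightRouterState` (segment, parallel routes, optional `url`, optional marker), and the function name is invented:

```typescript
// Illustrative FlightRouterState-like tuple: a segment node can carry the
// URL that originally activated it plus a refresh/refetch marker.
type RouterStateNode = [
  segment: string,
  parallelRoutes: { [parallelRouteKey: string]: RouterStateNode },
  url?: string | null,
  refreshMarker?: 'refetch' | 'refresh' | null,
]

// Walk the tree and collect every segment that was marked "refresh" and
// must be refetched from a URL other than the one the user is currently on.
function collectRefreshTargets(
  node: RouterStateNode,
  currentPathname: string,
  out: { segment: string; url: string }[] = []
): { segment: string; url: string }[] {
  const [segment, parallelRoutes, url, marker] = node
  if (marker === 'refresh' && url && url !== currentPathname) {
    out.push({ segment, url })
  }
  for (const key of Object.keys(parallelRoutes)) {
    collectRefreshTargets(parallelRoutes[key], currentPathname, out)
  }
  return out
}

// Example: a "modal" slot that was activated from /photos/1 while the user
// is now on /dashboard. Refreshing /dashboard alone would never return this
// slot's data, so it has to be refetched from /photos/1 separately.
const tree: RouterStateNode = [
  '',
  {
    children: ['dashboard', {}],
    modal: ['photo', {}, '/photos/1', 'refresh'],
  },
]

console.log(collectRefreshTargets(tree, '/dashboard'))
// [{ segment: 'photo', url: '/photos/1' }]
```

Each collected entry corresponds to one extra round-trip during the refresh, which is exactly the trade-off described above.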
Other Notes:

`applyRouterStatePatchToTree` had been refactored to support the case of not skipping the `__DEFAULT__` segment, so that `router.refresh` or revalidating in a server action wouldn't break the router (more details in #59585). This was a stop-gap and not an ideal solution, as this behavior means `router.refresh()` would effectively behave like reloading the page, where "stale" segments (ones that went from `__PAGE__` -> `__DEFAULT__`) would disappear.

Fixes #60815
Fixes #60950
Fixes #51711
Fixes #51714
Fixes #58715
Fixes #60948
Fixes #62213
Fixes #61341
Closes NEXT-1845
Closes NEXT-2030
Closes NEXT-2903