
[pull] canary from vercel:canary #1008

Merged
pull[bot] merged 9 commits into code:canary from vercel:canary
Apr 29, 2026

Conversation


pull[bot] commented Apr 29, 2026

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.4)

Can you help keep this open source service alive? 💖 Please sponsor : )

unstubbable and others added 9 commits April 29, 2026 08:20
A prerendered route's `expire` — set via `cacheLife({ expire })` inside
`'use cache'` or via the `expireTime` config fallback — lands in the
prerender manifest as `initialExpireSeconds` / `fallbackExpire`
(#76207), but the runtime never read it: `IncrementalCache.get` only
considered `revalidate`. So past expire, Next.js served stale with a
background refresh instead of the blocking regeneration the `cacheLife`
`expire` docs describe.

The fix is three coordinated changes:

1. The render-time `responseGenerator` in `app-page.ts`, `app-route.ts`, and
`pages-handler.ts` now applies the `expireTime` fallback as soon as it has the
render's `cacheControl`, so every downstream consumer (the cache entry stored
via `IncrementalCache.set`, the response `Cache-Control` header, and the entry
returned to `handleResponse`) sees a finalized `cacheControl` with a populated
`expire`, mirroring the build-time fallback.
2. `IncrementalCache.get` returns `isStale = -1` when the entry is past its
expire time (`lastModified + expire * 1000 < now`).
3. `response-cache.handleGet` skips its early `resolve(previousEntry)` for
`isStale === -1`, so the blocking revalidation inside `responseGenerator`
(which already picks `BLOCKING_STATIC_RENDER` on that signal) can return its
fresh output to the user.

Previously, the early resolve committed the stale value to the response first,
so even though `responseGenerator` still ran a fresh render, its output only
warmed the cache for the next request. As a side effect, this also closes the
same early-resolve hole on the existing tag-expired `isStale = -1` path.
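The staleness decision described above can be sketched as follows (a minimal,
hypothetical helper for illustration only — the real `IncrementalCache.get`
takes different inputs and returns a richer entry):

```javascript
// Sketch of the expire-aware staleness check. Returns:
//   false -> entry is fresh
//   true  -> past `revalidate`: serve stale, refresh in the background
//   -1    -> past `expire`: block and regenerate before responding
// `lastModified` is in ms; `revalidate`/`expire` are in seconds.
function computeIsStale(entry, now = Date.now()) {
  const { lastModified, revalidate, expire } = entry
  if (expire !== undefined && lastModified + expire * 1000 < now) {
    return -1 // expired: blocking regeneration required
  }
  if (revalidate !== undefined && lastModified + revalidate * 1000 < now) {
    return true // stale-while-revalidate window
  }
  return false // still fresh
}
```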

On Vercel, ISR cache decisions live at the Proxy, which currently ignores
`staleExpiration` (using a hard-coded one-year value instead). Once the Proxy
starts honoring `staleExpiration`, it is expected to pick up updated values
from the `stale-while-revalidate` response header. Until that lands, this
change is only observable on `next start`; deploy-mode behavior is tracked
independently of Next.js.

Two test suites cover the new behavior.
`test/production/app-dir/use-cache-expire` uses `cacheComponents` +
`cacheLife({ expire: 300 })` with a custom cache handler that shifts
`lastModified` via an `x-test-cache-age-offset-ms` header, exercising
the fully-static shell, the partially-static route shell for a known
param, and the partially-static fallback shell for unknown params.
`test/e2e/app-dir/expire-time` covers classic ISR (`revalidate = 1`,
`expireTime: 2`) with a real three-second wait and is `it.failing` on
deploy, so it will flip the moment the Proxy honors the expire value.

fixes #78269
…ers same-origin when `assetPrefix` is a CDN (#93271)

### What?

Adds `experimental.turbopackWorkerAssetPrefix` — a Turbopack counterpart
to webpack's
[`output.workerPublicPath`](https://webpack.js.org/configuration/output/#outputworkerpublicpath).
Opt-in, off by default, fully backward compatible.

```js
// next.config.js
module.exports = {
  assetPrefix: 'https://cdn.example.com',
  experimental: {
    // Same shape as `assetPrefix`: bare prefix, no trailing slash, no `/_next`.
    // `/_next/` is appended automatically. `''` → same-origin `/_next/...`.
    turbopackWorkerAssetPrefix: '',
  },
}
```

### Why?

When `assetPrefix` points to a cross-origin CDN (common production
setup), Turbopack emits Worker URLs under that CDN origin. Browsers
reject cross-origin `new Worker(url)` with `SecurityError`, breaking
every feature that uses `new Worker(new URL('./worker.ts',
import.meta.url))` — image decoders, WebP/GIF renderers, compression,
OffscreenCanvas, etc.

webpack solved this with `output.workerPublicPath`. Turbopack ignores
the `webpack()` callback in `next.config.js`, so the existing
same-origin workaround silently breaks when projects switch to
Turbopack.

Full design doc, alternatives considered, related issues (#88602,
#92676, #74621, #29468): **#93044**

Repro repo (`pnpm dev` reproduces, `pnpm dev:webpack` works):
https://github.com/GuinsooRocky/next-turbopack-worker-cdn-repro

### How?

Threads a new optional `turbopack_worker_asset_prefix: Option<RcStr>`
from `ExperimentalConfig` → `BrowserChunkingContext` → the JS runtime,
where `createWorker` reads it via a new `WORKER_BASE_PATH` global
emitted alongside the existing `CHUNK_BASE_PATH`. When unset, behavior
is identical to today.

Semantics, modeled on `assetPrefix`:

- **Bare prefix.** `/_next/` is appended automatically (same as
`computed_asset_prefix`); the user supplies just the origin (or empty
string for same-origin).
- **`undefined` vs `''`.** `undefined` falls back to the regular chunk
base path; `''` is a literal empty prefix that resolves to same-origin
`/_next/...`. The runtime distinguishes the two by injecting a JS `null`
literal vs a quoted string, and uses `WORKER_BASE_PATH ??
CHUNK_BASE_PATH` (not `||`).
- **Applies to entrypoint and module chunks.** The Turbopack worker
bootstrap rejects cross-origin module chunks (`Refusing to load script
from foreign origin`), so the override has to cover both — overriding
only the entrypoint would leave the worker unable to load any chunks
under a cross-origin `assetPrefix`.
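The `??`-vs-`||` distinction above can be sketched like this (a minimal
illustration with the injected globals passed as plain arguments; the real
`createWorker` reads them as runtime globals):

```javascript
// WORKER_BASE_PATH is emitted as a JS `null` literal when the option is
// undefined, or as a quoted string (possibly '') when it is set.
function resolveWorkerBase(workerBasePath, chunkBasePath) {
  // `??` only falls back for null/undefined, so an explicit '' survives and
  // resolves worker URLs same-origin; `||` would wrongly treat '' as falsy
  // and fall back to the cross-origin chunk base path.
  return workerBasePath ?? chunkBasePath
}

resolveWorkerBase(null, 'https://cdn.example.com/_next/') // option unset: CDN
resolveWorkerBase('', 'https://cdn.example.com/_next/')   // '' : same-origin
```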

Touchpoints:

- `packages/next/src/server/config-{shared,schema}.ts` — public option +
zod under `experimental`
- `crates/next-{api,core}/...` — wiring through to
`ClientChunkingContextOptions`;
`NextConfig::turbopack_worker_asset_prefix()` resolves to
`Some("<prefix>/_next/")` or `None`
- `turbopack/crates/turbopack-browser/src/chunking_context.rs` — new
field + builder + getter
- `turbopack/crates/turbopack-ecmascript-runtime/src/browser_runtime.rs`
— emit `WORKER_BASE_PATH` declaration as `null` or a quoted string
- `turbopack/.../runtime-base.ts` — `createWorker` resolves
`workerBasePath` and uses it for both the entrypoint URL and
module-chunk URLs delivered via params
- `test/e2e/turbopack-worker-asset-prefix/` — e2e fixture using real
cross-origin (`localhost` page, `127.0.0.1` `assetPrefix`); intercepts
`new Worker()` to assert on the URL the runtime helper resolved. Three
cases: no override (cross-origin URL → `SecurityError`), explicit
override origin, and `''` (relative `/_next/...`).
- `docs/01-app/03-api-reference/08-turbopack.mdx` — option reference
- `telemetry/events/version.ts` — `useTurbopackWorkerAssetPrefix:
boolean` (set on any explicit value, including empty string)

### Notes

- Opening as **Draft** to invite API-shape feedback before going to
review. Per the contributing guide, the design discussion is at #93044.
- All checklist items the PR template requires for a feature are
covered: e2e tests, docs, telemetry, no error-link path applies.

cc @sokra

Closes #93044

<!-- NEXT_JS_LLM_PR -->

---------

Co-authored-by: Tobias Koppers <tobias.koppers@googlemail.com>

### What?

Adds a new `TurboMalloc::memory_pressure()` method that returns a
normalized OS-level memory pressure value in the range `0..=100` as
`Option<u8>`. That value is attached to every memory sample in the
tracing layer (`TraceRow::MemorySample`) and propagated through the
trace-server so that span queries return a `memory_pressure_samples`
vector next to the existing `memory_samples`.

### Why?

Our current tracing only records the in-process allocator usage
(`TurboMalloc::memory_usage()`), which does not tell us when the
*operating system* is actually under memory pressure. We want that
signal in the trace output to:

1. Surface real OS memory pressure in trace dashboards alongside our own
allocation totals.
2. Eventually use it as input to task-eviction decisions in
`turbo-tasks` (see branch description). This PR lands the plumbing; the
eviction heuristic is not part of this change.

### How?

**`TurboMalloc::memory_pressure() -> Option<u8>`** — new, in
`turbopack/crates/turbo-tasks-malloc/`. Values are normalized so that
`0` = no pressure, `100` = maximum pressure. Platform-specific backends:

| Platform | Source | Notes |
|---|---|---|
| Linux | `/proc/pressure/memory` (`some` `avg10`), fallback to `(MemTotal - MemAvailable) / MemTotal` from `/proc/meminfo` | PSI is not available on all kernels (< 4.20, without `CONFIG_PSI`, restricted containers). The meminfo fallback keeps the signal meaningful on any standard Linux system and matches the semantics of Windows' `dwMemoryLoad`. |
| macOS | `kern.memorystatus_level` sysctl (% free memory) | Pressure = `100 - level`, read via `libc::sysctlbyname`. |
| Windows | `MEMORYSTATUSEX::dwMemoryLoad` via `GlobalMemoryStatusEx` (`windows-sys`) | Already a 0–100 percentage of physical memory in use. |
| Other / wasm | — | Returns `None`. |

All runtime failures (missing file, sysctl error, failed API call,
unparseable content) silently yield `None` rather than panicking.
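The `/proc/meminfo` fallback from the table above can be sketched as follows
(in JavaScript for illustration — the actual backend is Rust; `meminfoPressure`
is a hypothetical name):

```javascript
// Compute pressure = (MemTotal - MemAvailable) / MemTotal as a 0..=100
// percentage from /proc/meminfo text. Returns null (the `None` analogue)
// instead of throwing when the content is missing or unparseable.
function meminfoPressure(meminfo) {
  const read = (key) => {
    // /proc/meminfo lines look like "MemTotal:       16384 kB"
    const m = meminfo.match(new RegExp(`^${key}:\\s+(\\d+) kB`, 'm'))
    return m ? Number(m[1]) : null
  }
  const total = read('MemTotal')
  const available = read('MemAvailable')
  if (total === null || available === null || total === 0) return null
  const pct = Math.round(((total - available) / total) * 100)
  return Math.min(100, Math.max(0, pct)) // clamp to 0..=100
}
```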

**Wiring into tracing:**

- `TraceRow::MemorySample` gains a `memory_pressure: u8` field. `0` is
used when `memory_pressure()` returns `None` on unsupported platforms.
- `RawTraceLayer::maybe_report_memory_sample` populates it on every
sample (sampling cadence unchanged).
- This is a breaking change to the postcard wire format of
`MemorySample`; old trace files cannot be read by the new
`turbopack-trace-server`. Given the dev-only nature of this data that
seemed acceptable — let me know if a migration is desired.

**Wiring into the trace-server:**

- `Store::memory_samples` is now `Vec<(Timestamp, u64, u8)>` and
`add_memory_sample(ts, memory, memory_pressure)`.
- A new `Store::memory_pressure_samples_for_range(start, end) ->
Vec<u8>` mirrors `memory_samples_for_range`: same `MAX_MEMORY_SAMPLES =
200` cap and same group-and-max downsampling, so both vectors align
index-by-index for a given span query.
- `ServerToClientMessage::QueryResult` gains a `memory_pressure_samples:
Vec<u8>` field next to `memory_samples`.
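The shared group-and-max downsampling can be sketched like this (a simplified
JS illustration of the scheme described above; the real trace-server code is
Rust and also carries timestamps):

```javascript
// Cap a sample vector at `max` entries by grouping consecutive samples and
// keeping each group's maximum. Applying the same grouping to the memory and
// pressure vectors keeps them aligned index-by-index for a given span query.
const MAX_MEMORY_SAMPLES = 200

function downsampleMax(samples, max = MAX_MEMORY_SAMPLES) {
  if (samples.length <= max) return samples
  const groupSize = Math.ceil(samples.length / max)
  const out = []
  for (let i = 0; i < samples.length; i += groupSize) {
    out.push(Math.max(...samples.slice(i, i + groupSize)))
  }
  return out
}
```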

**Dependencies:** `libc` (macOS only, `cfg`-gated), `windows-sys` with
the `Win32_System_SystemInformation` feature (Windows only,
`cfg`-gated). No new deps on Linux.

### Verification

- `cargo build -p turbo-tasks-malloc -p turbopack-trace-utils -p
turbopack-trace-server`
- `cargo clippy -p turbo-tasks-malloc -p turbopack-trace-utils -p
turbopack-trace-server --all-targets -- -D warnings`
- `cargo test -p turbo-tasks-malloc` — 7 tests pass, including:
  - `memory_pressure_is_in_range`: asserts `Some(_)` and `≤ 100` on Linux,
    macOS, and Windows (via `cfg`-gated `.expect()`), and allows `None`
    elsewhere.
  - Parser tests for both PSI and `/proc/meminfo` code paths (typical
    content, malformed input, clamping).
- Runtime sanity check on the Linux sandbox: `/proc/pressure/memory` is
absent (kernel 5.10 without `CONFIG_PSI`); the `/proc/meminfo` fallback
returned `Some(3)` as expected.

<!-- NEXT_JS_LLM_PR -->

---------

Co-authored-by: Tobias Koppers <sokra@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>

- `SingleModuleReference` is dead code, never actually called.
- `SingleOutputAssetReference` was created, but never actually did anything
because its `ChunkingType` was `None`.

It never made sense to have module references that point to an output asset,
and the `ChunkingType` value on those made even less sense. We had already
migrated almost everything off of them anyway.
In #92575 we started using
Turborepo to build a fresh copy of Next.js before each test. This can
become quite noisy IMO since Turborepo replays logs by default. Now we
hide logs for cache hits.

If you want to debug logs from cached builds, you should use `pnpm
build` directly.


```terminal
@next/polyfill-nomodule#build > cache hit, suppressing logs 951e3e04258a486d 
 @next/playwright#build > cache hit, suppressing logs a37fc1279a93411b 
 @next/routing#build > cache hit, suppressing logs e29bf8207712d9ea 
 @next/env#build > cache hit, suppressing logs 6f68cdf7c8f202f6 
 @next/bundle-analyzer-ui#build > cache hit, suppressing logs 5dbf9112c4f7029e 
 @next/polyfill-module#build > cache hit, suppressing logs 601b6f5afa47853d 
 @vercel/devlow-bench#build > cache hit, suppressing logs dfd1c037ef4e8100 
 @next/codemod#build > cache hit, suppressing logs a5eac293134b4263 
 @next/react-refresh-utils#build > cache hit, suppressing logs 51d64c1f7e8e5cfb 
 @next/eslint-plugin-next#build > cache hit, suppressing logs 74ec27fec9910c71 
 create-next-app#build > cache hit, suppressing logs 0c4c9d38e0e0913e 
 @next/font#build > cache hit, suppressing logs d1d057c5ffe70285 
 @next/bundle-analyzer#pack-for-isolated-tests > cache hit, suppressing logs 867b1e9082440087 
 @next/env#pack-for-isolated-tests > cache hit, suppressing logs 50d7a018277cafb8 
 next-rspack#pack-for-isolated-tests > cache hit, suppressing logs 1046e9cfa3ba6879 
 @next/mdx#pack-for-isolated-tests > cache hit, suppressing logs 6da45b672b5b081a 
 eslint-config-next#build > cache hit, suppressing logs fcf529c5b1428c5d 
 @next/font#pack-for-isolated-tests > cache hit, suppressing logs 9675f7ed199b9841 
 next#build > cache hit, suppressing logs 8c62df3806949bad 
 @vercel/turbopack-next#build > cache hit, suppressing logs 94890b06e4b937bf 
 @next/third-parties#build > cache hit, suppressing logs f1da8c56f9222a0c 
 next#pack-for-isolated-tests > cache hit, suppressing logs f11a5a11e2584c14 
 @next/third-parties#pack-for-isolated-tests > cache hit, suppressing logs 988b5be9471a943e
```
Nesting TUIs is hard but also a bit excessive since Turborepo's TUI
isn't interactive if `stdin` is ignored.

Co-authored-by: Copilot <copilot@github.com>
@pull pull Bot locked and limited conversation to collaborators Apr 29, 2026
@pull pull Bot added the ⤵️ pull label Apr 29, 2026
@pull pull Bot merged commit 4945b6e into code:canary Apr 29, 2026
1 check passed