chore(wasm): rebuild runtimed-wasm for RuntimeLifecycle phase 4#2103
Merged
Conversation
The committed WASM bundle was last rebuilt in #2060. Since then, the RuntimeLifecycle stack landed in crates/runtime-doc:

- #2081 phase 1 (derived field)
- #2085 phase 2 (CRDT keys + typed writers)
- #2091 typed KernelErrorReason + set_activity stale-phase fix
- #2092 typed Rust callers

Phase 5 (#2093) then migrated the TS frontend to read `state.kernel.lifecycle.lifecycle` and `state.kernel.error_reason`.

With a stale WASM, read_state never emits those keys — the frontend sees `kernel.lifecycle === undefined`, derived-state.ts dereferences `.lifecycle` on it, and the TypeError is caught by App's ErrorBoundary. That's why every E2E spec on 2026-04-23 fails with "App not ready — toolbar not found within 15s": the toolbar is replaced by the "Something went wrong" fallback before the selector ever mounts.

Rebuilds only the .wasm binary. No JS glue changed because the affected types round-trip through `serde_wasm_bindgen::to_value` (a dynamic JsValue), not typed wasm-bindgen — the shape change doesn't touch the .d.ts.

Verification:

- `pnpm exec vp test packages/runtimed` — 194 passing.
- `pnpm exec vp test apps/notebook` — 444 passing.
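To make the failure mode concrete, here is a minimal sketch of the crashing read. The type and function names are illustrative, not the real derived-state.ts code; only the `kernel.lifecycle.lifecycle` access path comes from the PR description.

```typescript
// Illustrative shapes only — the real state comes from the WASM bundle.
type KernelState = {
  // Post-refactor shape: a tagged union nested under `lifecycle`.
  lifecycle?: { lifecycle: string };
  error_reason?: string | null;
};

// Pre-fix behavior: with a stale WASM, `kernel.lifecycle` is undefined,
// so this read throws a TypeError that the App ErrorBoundary catches,
// replacing the toolbar with the "Something went wrong" fallback.
function readLifecycle(kernel: KernelState): string {
  return kernel.lifecycle!.lifecycle; // throws when the bundle is stale
}

// A defensive variant that would surface the mismatch without unmounting:
function readLifecycleSafe(kernel: KernelState): string {
  return kernel.lifecycle?.lifecycle ?? "NotStarted";
}
```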
rgbkrk added a commit that referenced this pull request on Apr 23, 2026
…2104)

CI's existing WASM guard diffs the JS glue but skips the `.wasm` binary (non-reproducible across macOS/Linux — see #1172). That gap let a stale `runtimed_wasm_bg.wasm` ship alongside the RuntimeLifecycle refactor (#2081/#2085/#2091/#2092 + TS migration #2093): the frontend read `state.kernel.lifecycle.lifecycle`, the stale WASM never emitted `lifecycle`, and the ErrorBoundary swallowed the TypeError. Every E2E spec failed with "App not ready — toolbar not found." See #2103.

Adds a Deno smoke test that loads the committed WASM, reads a fresh RuntimeState via `get_runtime_state()`, and asserts the shape TS consumers in apps/notebook and packages/runtimed expect:

- `state.kernel.lifecycle.lifecycle === "NotStarted"` (the tagged-union shape is actually present, not the pre-refactor bare string)
- `state.kernel.error_reason` key exists (even if null)
- Top-level `queue`, `env`, `trust`, `executions` present

Catches the "forgot to rebuild WASM after a schema change" failure mode directly, and is platform-independent because it tests the emitted JSON shape rather than bytes.

Depends on #2103 (WASM rebuild) — without that commit, this test fails against the current main bundle, which is exactly the point.
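The assertions above can be sketched as a small shape check. In the real smoke test the state comes from the committed WASM via `get_runtime_state()`; here the helper names and error messages are assumptions, and only the asserted keys and values come from the commit message.

```typescript
// Tiny throw-based assert so the sketch has no test-framework dependency.
function assertShape(cond: boolean, what: string): void {
  if (!cond) throw new Error(`WASM shape mismatch: ${what}`);
}

// Checks a fresh RuntimeState for the shape TS consumers expect.
function checkRuntimeStateShape(state: Record<string, unknown>): void {
  const kernel = state.kernel as Record<string, unknown>;
  const lifecycle = kernel.lifecycle as { lifecycle?: unknown } | undefined;
  // Tagged-union shape must be present, not the pre-refactor bare string.
  assertShape(lifecycle?.lifecycle === "NotStarted", "kernel.lifecycle.lifecycle");
  // The key must exist even when the value is null.
  assertShape("error_reason" in kernel, "kernel.error_reason");
  // Top-level sections the frontend reads.
  for (const key of ["queue", "env", "trust", "executions"]) {
    assertShape(key in state, `top-level ${key}`);
  }
}
```

Because the check exercises the emitted JSON shape rather than binary bytes, it stays green across macOS/Linux builds of the same source.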
rgbkrk added a commit that referenced this pull request on Apr 23, 2026
* docs(specs): streaming Arrow IPC for DataFrame repr (#1816)

  Design for dx emitting a Parquet head + a pull handle for incremental Arrow IPC continuation, so huge DataFrames render a first screenful immediately and grow in place. Key shape:

  - Head: 100-ish rows serialized as Parquet through the existing dx path. Sift's existing load hits immediately.
  - Continuation: new `nteract.dx.stream.<id>` comm. Runtime agent pulls Arrow IPC chunks outside the execution-message hot path and appends them as blob refs in a new manifest field on the same output id.
  - Transport is shared with #1815 (query backend).
  - No mutable blobs, no ContentRef shape change — chunks are a JSON list of existing blob refs inside the manifest.
  - Late joiners replay from the CRDT because chunks go through normal sync, not a side channel.

* docs: update streaming Arrow IPC spec for runtime-doc crate changes

  - RuntimeStateDoc moved from notebook-doc to runtime-doc (#2056)
  - CRDT writes go through RuntimeStateHandle (#2059)
  - Pull task uses fork()/merge() for async blob work
  - Dead broadcasts removed (#2065); manifest updates propagate via CRDT
  - Updated review pointers to current file paths

* docs: fix reserved-comm-namespace pointers in streaming Arrow IPC spec

  The namespace rule moved out of CLAUDE.md and now lives in .claude/rules/architecture.md § "Reserved Comm Namespace: `nteract.dx.*`". Update the two spec references to point there. No change to the design itself.

* feat(runtimed-wasm): install console_error_panic_hook on module init

  Rust panics inside WASM currently surface to the frontend as an opaque `__wbg___wbindgen_throw_6b64449b9b9ed33c` stack with wasm-function indices and no file/line. The error reaches the App ErrorBoundary and the "Something went wrong" fallback renders, but the cause is invisible in packaged / CI builds.

  This is exactly what's happening on UV Pyproject + UV Prewarmed E2E today (post-#2103): something in the runtime-doc read path panics when the daemon syncs a RuntimeState that walks through the full lifecycle starting → running, and we have no way to name it.

  Install `console_error_panic_hook::set_once()` from a `#[wasm_bindgen(start)]` function so it runs exactly once before any `NotebookHandle` is constructed. Panics now log with file, line, message, and a Rust backtrace. Combined with #2101 (ErrorBoundary → host logger), the next failing E2E run will emit both the React component stack and the Rust panic payload into `e2e-logs/app.log`.

  Rebuilds the WASM bundle to pick up the hook wiring.

  Verification:
  - `cargo xtask wasm runtimed` — succeeds
  - `deno test --allow-read crates/runtimed-wasm/tests/` — shape test still passes (51 filtered + 1 ok, the expected set)

* feat(notebook-app): forward console.error to host logger

  The wasm panic hook from the previous commit calls `console.error`. In dev builds `attachConsole()` from tauri-plugin-log is DEV-only (see packages/notebook-host/src/tauri/index.ts:280), and the plugin only bridges Rust log output *into* the browser console — it doesn't forward browser console output *out* to Rust. In packaged / CI builds the panic message goes to `console.error` and stops there.

  Install a small forwarder in main.tsx: wrap `console.error` to also call `logger.error` (host-log). WASM panics now land in notebook.log alongside everything else, visible in CI's `e2e-logs/app.log`. Preserves the original console.error behavior so devtools stays unchanged. The forwarding call is in a try/catch so a logger failure can't swallow the original error.
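A minimal sketch of such a console.error forwarder, assuming a host logger with an `error(msg: string)` method (the `logger` shape and `installConsoleErrorForwarder` name here are illustrative, not the real main.tsx code):

```typescript
// Stand-in for the host logger; the real one bridges to tauri-plugin-log.
type HostLogger = { error: (msg: string) => void };

function installConsoleErrorForwarder(logger: HostLogger): void {
  const original = console.error.bind(console);
  console.error = (...args: unknown[]) => {
    // Preserve the original devtools behavior first.
    original(...args);
    // Forward to the host log; a logger failure must not swallow the error.
    try {
      const msg = args
        .map((a) => (a instanceof Error ? a.stack ?? String(a) : String(a)))
        .join(" ");
      logger.error(msg);
    } catch {
      // Intentionally ignore logger failures.
    }
  };
}
```

With this in place, a WASM panic reported through `console.error` reaches the host log even in packaged builds where `attachConsole()` is not active.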
Summary
Rebuilds `apps/notebook/src/wasm/runtimed-wasm/runtimed_wasm_bg.wasm`. The JS glue and `.d.ts` are unchanged (the affected types round-trip through `serde_wasm_bindgen::to_value`, not typed `wasm-bindgen`), so this is a pure binary refresh.

Why — fixes every E2E failure on main
The committed WASM was last rebuilt in #2060. Since then, the RuntimeLifecycle refactor landed in crates/runtime-doc:

- #2081 phase 1 (derived field)
- #2085 phase 2 (CRDT keys + typed writers)
- #2091 typed KernelErrorReason + set_activity stale-phase fix
- #2092 typed Rust callers

Phase 5 (#2093) then migrated the TS frontend to the new shape:
With a stale WASM, `read_state` does not emit `kernel.lifecycle` at all. The frontend reads `state.kernel.lifecycle === undefined`, dereferencing `.lifecycle` on it throws a `TypeError`, and the React ErrorBoundary in `App.tsx` replaces the entire tree with "Something went wrong."

That's why every E2E spec from run 24839322000 and earlier on this week's main fails with "App not ready — toolbar not found within 15s". The toolbar selector never mounts because the subtree is unmounted. Screenshot evidence:

- `should-load-app-and-show-toolbar-2026-04-23T14-07-55-241Z.png` → the "Something went wrong" fallback
- `app.log` shows a healthy startup followed immediately by `Cleanup: flushing and stopping engine` (the boundary tearing down the bad tree)

Test plan
- `cargo xtask wasm runtimed` — rebuild succeeds
- `pnpm exec vp test packages/runtimed` — 194 passing
- `pnpm exec vp test apps/notebook` — 444 passing

Ship after #2101 (ErrorBoundary logger) or independently — they're orthogonal.
Follow-up
The underlying process bug — landing Rust runtime-doc changes without rebuilding the WASM bundle that consumers depend on — is worth a guard: a CI check that greps for changes under `crates/runtime-doc/`, `crates/notebook-doc/`, or `crates/runtimed-wasm/` and fails if `runtimed_wasm_bg.wasm` hasn't been re-committed. Not in this PR.
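The core of that guard could be a pure check over the PR's changed-file list, with the diff collection (e.g. `git diff --name-only`) left to the CI wrapper. This is a hypothetical sketch; the watched paths come from the note above, and the bundle path from the Summary.

```typescript
// Paths taken from this PR; the guard logic itself is an assumption.
const WATCHED_PREFIXES = [
  "crates/runtime-doc/",
  "crates/notebook-doc/",
  "crates/runtimed-wasm/",
];
const WASM_BUNDLE =
  "apps/notebook/src/wasm/runtimed-wasm/runtimed_wasm_bg.wasm";

// Returns true when Rust runtime-doc code changed but the committed
// WASM bundle was not re-committed alongside it — the failure mode
// this PR is fixing.
function wasmRebuildMissing(changedFiles: string[]): boolean {
  const touchesWatchedCrates = changedFiles.some((f) =>
    WATCHED_PREFIXES.some((prefix) => f.startsWith(prefix)),
  );
  return touchesWatchedCrates && !changedFiles.includes(WASM_BUNDLE);
}
```

Note this is path-based and so complements (rather than replaces) the shape-based Deno smoke test from #2104, which also catches a stale bundle committed in an unrelated PR.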