Conversation

rgbkrk added a commit that referenced this pull request on Feb 20, 2026
- Derive envType from envSource when the kernel is running, fixing bugs where both-deps notebooks showed the uv panel but the backend chose conda (#4) and pixi auto-detection showed the uv panel instead of conda (#6)
- Replace the 1s sleep + single connection attempt in start_with_uv_run with a retry loop (up to 8 attempts with increasing delays) that checks for process exit, emits progress events, and parses uv stderr for status
- Add NOTEBOOK_PATH and E2E_SPEC env vars to wdio.conf.js for fixture testing
- Add E2E specs for the both-deps panel, pixi env detection, and pyproject startup

Co-Authored-By: QuillAid <261289082+quillaid@users.noreply.github.com>
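The retry loop replacing the fixed sleep can be sketched as follows. This is a minimal illustration of the shape described in the commit, not the actual implementation; `try_connect`, `process_exited`, and `on_progress` are hypothetical callbacks standing in for the real kernel-connection machinery.

```python
import time

def connect_with_retry(try_connect, process_exited, on_progress,
                       max_attempts=8, base_delay=0.25):
    """Retry a connection with increasing delays instead of one fixed sleep."""
    for attempt in range(1, max_attempts + 1):
        # Bail out early if the child process already died.
        if process_exited():
            raise RuntimeError("process exited before a connection was made")
        conn = try_connect()
        if conn is not None:
            return conn
        on_progress(f"connection attempt {attempt}/{max_attempts} failed; retrying")
        time.sleep(base_delay * attempt)  # delay grows with each attempt
    raise TimeoutError(f"no connection after {max_attempts} attempts")
```

The key differences from a fixed sleep are the early exit on process death and the progress events, which are what fix the "beach-ball" symptom: the UI hears about each attempt instead of blocking silently.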
rgbkrk added a commit that referenced this pull request on Feb 20, 2026
- Derive envType from envSource when the kernel is running, fixing bugs where both-deps notebooks showed the uv panel but the backend chose conda (#4) and pixi auto-detection showed the uv panel instead of conda (#6)
- Replace the 1s sleep + single connection attempt in start_with_uv_run with a retry loop (up to 8 attempts with increasing delays) that checks for process exit, emits progress events, and parses uv stderr for status
- Add NOTEBOOK_PATH and E2E_SPEC env vars to wdio.conf.js for fixture testing
- Add E2E specs for the both-deps panel, pixi env detection, and pyproject startup

Co-Authored-By: QuillAid <261289082+quillaid@users.noreply.github.com>
rgbkrk added a commit that referenced this pull request on Feb 21, 2026
* Auto-detect pyproject.toml and pixi.toml in backend kernel auto-launch

  The backend `start_default_python_kernel_impl` now checks for project files when a notebook has no inline dependencies, matching the frontend's detection chain. This fixes the main UX gap where backend auto-launch would give a bare prewarmed kernel even when a pyproject.toml or pixi.toml was present.

  Detection priority chain (after inline deps):
  1. pyproject.toml → start with `uv run` (if has_dependencies or has_venv)
  2. pixi.toml → convert to conda deps via rattler
  3. (environment.yml placeholder for a future PR)
  4. Fall back to user preference for prewarmed envs

* Show environment source in kernel status indicator

  The backend now returns detailed env_source strings from start_default_python_kernel_impl (e.g. "uv:inline", "uv:pyproject", "conda:pixi", "uv:prewarmed") instead of just "uv" or "conda". A "ready" lifecycle event carries this to the frontend. The toolbar kernel status now shows the source alongside the status (e.g. "Idle · pyproject.toml" or "Idle · conda") so users always know what environment their kernel is using.

* Make conda dependency adding non-blocking with a "Sync Now" button

  Previously, adding a conda dependency blocked the UI for 30-150+ seconds because addDependency immediately called syncToKernel(), which triggers a full conda solve+download+install cycle with the input disabled. Now matches the uv pattern:
  - addDependency just updates metadata and checks sync state (~200ms)
  - A "Sync Now" button appears when deps are dirty
  - The input stays enabled during sync so users can add multiple deps quickly
  - A separate syncing state tracks sync progress without blocking the input

* Surface a warning when a notebook has both uv and conda dependencies

  Shows a visible warning banner in the dependency header area when a notebook has both uv and conda dependency metadata. Previously this was a log-only warning. Now users can see which env type is being used and are prompted to clean up the unused deps.

* Change default Python env from conda to uv

  uv is always available (bootstrapped via rattler), faster for installs, and has better UX (non-blocking sync, pyproject.toml support). With P1/P2 adding project file auto-detection, the defaults are now context-sensitive:
  - pyproject.toml nearby → uv (auto-detected)
  - pixi.toml nearby → conda (auto-detected)
  - environment.yml nearby → conda (auto-detected)
  - No project files → uv (this change)

  Users who prefer conda can still set it explicitly in settings.

* Add distinct "Use project env" and "Copy to notebook" actions for pyproject.toml

  The pyproject.toml banner now offers two distinct actions:
  - "Use project env" (primary, green) — starts the kernel via uv run and stays in sync with pyproject.toml. Shows an "Active" badge when already using it.
  - "Copy to notebook" (secondary, subtle) — copies deps as a snapshot into notebook metadata for portable sharing.

  Previously only "Import to notebook" existed, which was the copy action but wasn't clearly distinguishable from "use the project environment".
* Show read-only project-managed state when kernel uses uv run

  When the kernel was started via uv run (pyproject.toml), the dependency management UI now shows a read-only view: "Managed by pyproject.toml — restart kernel to pick up dependency changes." The add/remove dependency input is hidden since deps are managed by the project file.

* P4: Add "Import to notebook" for pixi.toml dependencies

  Adds pixi.toml detection and an import flow for conda environments.

  Backend:
  - New `import_pixi_dependencies` Tauri command that finds pixi.toml, converts dependencies to conda format, and writes them to notebook metadata

  Frontend:
  - useCondaDependencies detects pixi.toml on mount via `detect_pixi_toml`
  - CondaDependencyHeader shows a pixi.toml banner with the dep count and a "Copy to notebook" button when a pixi.toml with deps is found
  - Wired through App.tsx

* P10: Unify env_id to use only runt.env_id

  Previously env_id was stored in both metadata.runt.env_id (canonical) and metadata.conda.env_id (redundant copy). All reads already came from runt.env_id, so the conda copy was never used. Changes:
  - notebook_state.rs: stop writing env_id into conda metadata when creating new notebooks
  - lib.rs: remove the redundant conda.env_id update in clone_notebook_to_path

  The CondaDependencies.env_id struct field is kept as an internal carrier (populated from runt.env_id before calling the conda env functions).
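The single-canonical-location rule from P10 can be illustrated with a tiny sketch. The metadata key names (`runt.env_id`, the `CondaDependencies.env_id` carrier) come from the commit message; the dict-based representation and helper names are hypothetical.

```python
def get_env_id(metadata):
    """All reads go through the canonical metadata["runt"]["env_id"];
    the redundant copy under metadata["conda"] is no longer written."""
    return (metadata.get("runt") or {}).get("env_id")

def prepare_conda_call(metadata, conda_deps):
    # The internal env_id carrier is populated from runt.env_id just
    # before the conda env functions are called.
    conda_deps["env_id"] = get_env_id(metadata)
    return conda_deps
```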
* Fix review findings and add test fixtures

  Review fixes:
  - import_pixi_dependencies now preserves the python version constraint from pixi.toml when writing conda metadata (was silently dropped)
  - startKernel and startKernelWithDeno clear envSource to prevent stale env source labels when switching kernel types

  Test fixtures in crates/notebook/fixtures/audit-test/:
  - 1-vanilla: no deps, tests P8 default-to-uv
  - 2-uv-inline: notebook with uv deps
  - 3-conda-inline: notebook with conda deps
  - 4-both-deps: both uv+conda deps for the P6 warning
  - pyproject-project/: pyproject.toml + notebook for P1/P7/P9
  - pixi-project/: pixi.toml + notebook for P2/P4

* Fix panel mismatch, uv run beach-ball, and add E2E tests

  - Derive envType from envSource when the kernel is running, fixing bugs where both-deps notebooks showed the uv panel but the backend chose conda (#4) and pixi auto-detection showed the uv panel instead of conda (#6)
  - Replace the 1s sleep + single connection attempt in start_with_uv_run with a retry loop (up to 8 attempts with increasing delays) that checks for process exit, emits progress events, and parses uv stderr for status
  - Add NOTEBOOK_PATH and E2E_SPEC env vars to wdio.conf.js for fixture testing
  - Add E2E specs for the both-deps panel, pixi env detection, and pyproject startup

* Fix execution queue timeout and surface auto-launch errors

  - Increase the execution queue retry window from 5s (50*100ms) to 5min (600*500ms) to support uv run scenarios where deps need installing
  - Emit an "error" lifecycle event when auto-launch fails, so the frontend transitions out of the "Starting" state instead of hanging forever
  - Handle the "error" lifecycle event in useKernel to set kernel status

* Remove stale default_deno_permissions references from settings

  The field was removed from AppSettings but two references remained in the Default impl and a test, causing CI compilation failures.

* Add audit test fixture #7 for conda environment.yaml

  Adds a conda-env-project directory with an environment.yaml and a test notebook, following the same pattern as the pixi-project and pyproject-project fixtures.

* Add environment management documentation

  - AGENTS.md: environment system overview, detection priority chain, trust system notes, and key files reference
  - contributing/environments.md: architecture guide covering caching, prewarming, project file detection, frontend hooks, and testing
  - docs/environments.md: user-facing guide for inline deps, project files, cache cleanup, and troubleshooting
  - docs/sharing.md: user-facing guide for the two sharing models (inline portable vs project-level reference)

* Fix pixi E2E test by running it with fixture NOTEBOOK_PATH

  The pixi-env-detection spec requires the app to open a notebook next to pixi.toml so the backend can auto-detect it. Run it separately with the correct NOTEBOOK_PATH, and exclude it from the default test run where no fixture path is set.
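The execution-queue timing change above is easy to sanity-check: 50 attempts at 100ms is a 5-second window, while 600 attempts at 500ms waits up to 5 minutes. A minimal polling loop of that shape, with a hypothetical `is_ready` probe standing in for the real kernel check, might look like:

```python
import time

def wait_for_kernel(is_ready, max_attempts=600, interval=0.5):
    """Poll until the kernel is ready, up to max_attempts * interval seconds.
    600 * 0.5s = 300s (5 min), enough for `uv run` to install deps first;
    the old 50 * 0.1s = 5s window was not."""
    for _ in range(max_attempts):
        if is_ready():
            return True
        time.sleep(interval)
    return False
```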
* Fix auto-detection priority, env_source label, and E2E coverage

  - Move environment.yml detection after pyproject/pixi to match the documented priority: inline → pyproject → pixi → env.yml → prewarmed
  - Return "conda:env_yml" instead of generic "conda" so the toolbar shows the correct source label
  - Add setEnvSource("conda:env_yml") in the frontend startKernelWithEnvironmentYml
  - Add data-testid="notebook-toolbar" to the toolbar header for E2E assertions
  - Run the pyproject-startup and both-deps-panel E2E specs with their required fixture notebooks in CI

* Fix E2E spec exclusion and path resolution in CI

  The --exclude CLI flags didn't match absolute spec paths, so all 13 specs ran in the default run (including fixture-specific ones without their NOTEBOOK_PATH). Fixture-specific runs also failed because E2E_SPEC relative paths didn't resolve. Fix: move the exclusion logic into wdio.conf.js using the exclude config with absolute paths, and resolve E2E_SPEC with path.resolve().

* Add E2E step timeout and restart tauri-driver between runs

  The rich-outputs spec hung in CI: the app never loaded for the 10th sequential launch through the same tauri-driver instance. Fix by restarting tauri-driver before each wdio invocation and adding a 15-minute step timeout so hung specs can't block CI indefinitely.

* Fix uv run using bootstrapped uv path and extend CI timeout

  start_with_uv_run hardcoded Command::new("uv"), which fails with "No such file or directory" when uv is bootstrapped (not on the system PATH). Use tools::get_uv_path() to resolve the correct binary, matching what uv_env.rs does everywhere else. Also extend the E2E step timeout from 15 to 25 minutes to accommodate 4 sequential wdio runs, including the pyproject spec, which needs uv to install deps.

* Install uv in CI via the setup-uv action

  Having uv on PATH is the realistic user scenario. The bootstrapping path is better tested as a unit test rather than gating E2E tests on it.
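The env_source strings and toolbar labels mentioned across these commits suggest a simple source-to-label mapping. The sketch below is an assumption about how that rendering might work: "Idle · pyproject.toml" and "Idle · conda" appear in the commits, but the mapping table and function are hypothetical.

```python
# Hypothetical mapping from backend env_source strings to toolbar labels.
SOURCE_LABELS = {
    "uv:inline": "uv",
    "uv:pyproject": "pyproject.toml",
    "uv:prewarmed": "uv",
    "conda:pixi": "pixi.toml",
    "conda:env_yml": "environment.yml",
    "conda:prewarmed": "conda",
}

def status_label(status, env_source):
    """Render e.g. "Idle · pyproject.toml" for the kernel status indicator.
    Unknown sources fall back to the env-type prefix before the colon."""
    label = SOURCE_LABELS.get(env_source, env_source.split(":")[0])
    return f"{status} · {label}"
```

This is why the "conda:env_yml" fix above matters: a generic "conda" source would collapse environment.yml-backed kernels into the same label as prewarmed conda ones.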
* Skip the pyproject and both-deps E2E specs in CI for now

  pyproject fails because uv run exits with status 2 resolving deps in the CI sandbox. both-deps fails because inline deps are untrusted on a fresh CI machine (no trust key). Both tests are kept for local use and can be enabled once CI has proper fixture infrastructure.

---------

Co-authored-by: QuillAid <261289082+quillaid@users.noreply.github.com>
rgbkrk added a commit that referenced this pull request on Feb 24, 2026
…update_display_data
- Fix High #1: Normalize daemon output from JupyterMessageContent to nbformat shape
- Fix High #2: Add daemon_execution to save_setting_locally for local persistence
- Fix Medium #3: Add daemon_execution to from_json and apply_json_changes for migration
- Fix Medium #4: Add an onUpdateDisplayData callback for update_display_data handling
- Fix Low #5: Remove verbose broadcast logging (keep only error logs)
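The output normalization in Fix High #1 amounts to reshaping Jupyter iopub message content into the nbformat output shapes. The target shapes below follow the nbformat spec; how the daemon actually represents JupyterMessageContent is an assumption, so this is an illustrative sketch rather than the project's code.

```python
def normalize_output(msg_type, content):
    """Normalize a Jupyter iopub message into nbformat output shape."""
    if msg_type == "stream":
        return {"output_type": "stream",
                "name": content["name"],            # "stdout" or "stderr"
                "text": content["text"]}
    if msg_type == "execute_result":
        return {"output_type": "execute_result",
                "data": content["data"],            # mimetype -> payload
                "metadata": content.get("metadata", {}),
                "execution_count": content.get("execution_count")}
    if msg_type == "display_data":
        return {"output_type": "display_data",
                "data": content["data"],
                "metadata": content.get("metadata", {})}
    if msg_type == "error":
        return {"output_type": "error",
                "ename": content["ename"],
                "evalue": content["evalue"],
                "traceback": content.get("traceback", [])}
    raise ValueError(f"unhandled message type: {msg_type}")
```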
rgbkrk added a commit that referenced this pull request on Feb 25, 2026
…update_display_data
- Fix High #1: Normalize daemon output from JupyterMessageContent to nbformat shape
- Fix High #2: Add daemon_execution to save_setting_locally for local persistence
- Fix Medium #3: Add daemon_execution to from_json and apply_json_changes for migration
- Fix Medium #4: Add an onUpdateDisplayData callback for update_display_data handling
- Fix Low #5: Remove verbose broadcast logging (keep only error logs)
rgbkrk added a commit that referenced this pull request on Feb 25, 2026
* Add Tauri commands for daemon kernel execution

  Extends the notebook sync client to support request/response patterns:
  - Add NotebookBroadcastReceiver for kernel events from the daemon
  - Add a send_request() method with typed frames
  - Add recv_frame_any() for handling all frame types
  - Forward broadcasts to the frontend via the daemon:broadcast event

  New Tauri commands for daemon-owned execution:
  - launch_kernel_via_daemon
  - queue_cell_via_daemon
  - clear_outputs_via_daemon
  - interrupt_via_daemon
  - shutdown_kernel_via_daemon
  - get_daemon_kernel_info
  - get_daemon_queue_state

  This is Step 4 of the daemon-owned kernel execution plan. Frontend hooks (Step 5) still need to be updated to use these.

* Add frontend useDaemonKernel hook and broadcast types

  New frontend components for daemon-owned kernel execution:

  types.ts:
  - DaemonBroadcast: types for kernel status, outputs, queue changes, errors
  - DaemonNotebookResponse: types for daemon request responses

  useDaemonKernel.ts:
  - Hook for daemon kernel operations
  - Listens for daemon:broadcast events
  - Provides launchKernel, queueCell, clearOutputs, interruptKernel, shutdownKernel
  - Tracks kernel status, queue state, and kernel info
  - Parses output JSON from broadcasts and calls the onOutput callback

  This hook is separate from useKernel (local execution) to allow gradual migration. Apps can choose which execution mode to use.

* Add daemon_execution setting to synced settings

  New setting to enable/disable daemon-owned kernel execution:

  Backend (settings_doc.rs, sync_client.rs):
  - Add daemon_execution: bool to the SyncedSettings struct
  - Add get_bool/put_bool methods to SettingsDoc
  - Handle boolean values in put_value for Tauri commands

  Frontend (useSyncedSettings.ts):
  - Add daemonExecution state and a setDaemonExecution callback
  - Sync across windows via the settings:changed event

  The setting defaults to false. When enabled (in a future PR), the app will use useDaemonKernel instead of useKernel.

* Fix daemon_execution field in SyncedSettings tests

* Address review feedback: output normalization, settings persistence, update_display_data

  - Fix High #1: Normalize daemon output from JupyterMessageContent to nbformat shape
  - Fix High #2: Add daemon_execution to save_setting_locally for local persistence
  - Fix Medium #3: Add daemon_execution to from_json and apply_json_changes for migration
  - Fix Medium #4: Add an onUpdateDisplayData callback for update_display_data handling
  - Fix Low #5: Remove verbose broadcast logging (keep only error logs)

* Wire daemon execution switch in App.tsx

  When the daemon_execution setting is enabled:
  - Use the useDaemonKernel hook for kernel operations
  - Launch the kernel via the daemon instead of local ensureKernelStarted
  - Queue cells via the daemon with the cell source
  - Route outputs through daemon broadcasts

  The switch allows testing daemon-owned kernel execution while keeping local execution as the stable default.

* Fix daemon execution order: launch kernel before queuing cells

  The daemon returns NoKernel when QueueCell is called without a running kernel, dropping the request. Fix by ensuring the kernel is launched first, then queueing cells after it's ready.

* Bump runtimed to 0.1.0-dev.3 and speed up dev builds

  - Bump version from 0.1.0-dev.2 to 0.1.0-dev.3
  - cargo xtask build now uses debug mode for runtimed/runt-cli (~50s faster)
  - cargo xtask build-dmg/build-app still use release mode for distribution
  - cargo xtask install-daemon unchanged (always release for perf)

* Update Cargo.lock for the runtimed version bump
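The migration concern in Fix Medium #3 is that settings documents saved before daemon_execution existed must still load, taking the default (false) for the missing field. The field name comes from the commits; the JSON layout and helper below are an assumed Python sketch of that pattern.

```python
import json

DEFAULTS = {"daemon_execution": False}

def settings_from_json(raw):
    """Load synced settings; missing fields fall back to defaults so
    pre-migration documents still parse."""
    data = json.loads(raw)
    settings = dict(DEFAULTS)
    for key in DEFAULTS:
        if key in data:
            if not isinstance(data[key], bool):
                raise TypeError(f"{key} must be a boolean")
            settings[key] = data[key]
    return settings
```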
rgbkrk added a commit that referenced this pull request on Mar 1, 2026
1. Socket path mismatch (Issue #1): Session.connect() now respects the RUNTIMED_SOCKET_PATH env var, and test fixtures set it when spawning the daemon in CI mode.
2. Nested timeouts (Issue #2): Remove the outer 10ms timeout wrapping recv_frame_any(), since it already has an internal 100ms timeout. This prevents repeatedly canceling mid-read.
3. sync_rx not drained (Issue #3): Use try_send() instead of send().await for the changes channel. If the receiver isn't keeping up, skip the update rather than blocking. The Python bindings keep sync_rx alive but don't consume it.
4. Parse failure semantics (Issue #4): When output_type is "error" but parsing fails, create an error Output to preserve success=false semantics.
5. CONDUCTOR_WORKSPACE_PATH (Safia's comment): Use the env var as the preferred repo-root fallback before walking up parent directories.
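The try_send() fix in Issue #3 trades delivery for liveness: if the receiver isn't draining the channel, the update is dropped instead of the sync loop blocking forever. The same semantics can be shown with Python's standard-library queue as a stand-in for the tokio channel (a sketch, not the project's code).

```python
import queue

def forward_change(changes, update):
    """Non-blocking forward: drop the update if the channel is full,
    mirroring try_send() vs send().await in the Issue #3 fix."""
    try:
        changes.put_nowait(update)
        return True
    except queue.Full:
        return False  # receiver not keeping up; skip rather than block
```

This matters here because the Python bindings keep sync_rx alive without consuming it, so a blocking send would eventually wedge the sync task.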
rgbkrk added a commit that referenced this pull request on Mar 1, 2026
* feat(runtimed-py): add PyO3 bindings for daemon client

  Add Python bindings for runtimed daemon operations:
  - DaemonClient: pool status, ping, list rooms, flush, shutdown
  - Session: connect to a notebook room, start a kernel, execute code
  - ExecutionResult/Output: structured output types

  Uses PyO3 0.28 with pyo3-async-runtimes for tokio integration. Note: the current Session.execute() uses a QueueCell shortcut which bypasses the automerge doc. See the design gap notes for the proper doc-based execution flow.

* chore: add *.so to gitignore, remove stray architecture.md

  - Add *.so to the root gitignore for Python extension modules
  - Remove contributing/architecture.md (lives on the architectural-principles branch)

* feat(runtimed-py): implement document-first execution

  Switch the runtimed-py bindings to use ExecuteCell instead of the deprecated QueueCell. The daemon now reads cell source from the automerge document, ensuring all connected clients see the same code being executed. Changes:
  - Add a Cell class to expose cell info to Python
  - Add document operations: create_cell, set_source, get_cell, get_cells, delete_cell
  - Add execute_cell() using ExecuteCell (reads from the doc)
  - Add a run() convenience method (create + execute)
  - Fix a sync task exit bug by storing sync_rx in SessionState
  - Add a timeout to recv_frame_any() to prevent blocking
  - Add a biased select! to prioritize commands over polling

* test(runtimed-py): add daemon integration tests

  Add comprehensive integration tests for the document-first execution pattern. Tests cover:
  - Basic connectivity to the daemon
  - Document operations (create/update/get/delete cells)
  - Cell execution via ExecuteCell (reading from the automerge doc)
  - Multi-client synchronization (two sessions sharing a notebook)
  - Kernel lifecycle (start/interrupt/shutdown)
  - Output types (stdout/stderr/display_data)
  - Error handling

  Supports two modes:
  - Dev mode: uses an existing daemon via `cargo xtask dev-daemon`
  - CI mode: spawns an isolated daemon with log capture

  24 tests covering the core document-first architecture.

* ci(runtimed-py): add daemon integration tests to CI

  Add a new job to the build workflow that runs the runtimed-py integration tests. The job:
  - Builds the runtimed binary and the runtimed-py Python bindings
  - Runs the 24 integration tests in CI mode (spawns an isolated daemon)
  - Uploads test logs as artifacts for debugging

  Tests verify document-first execution, multi-client sync, kernel lifecycle, output capture, and error handling.

* docs: add Python bindings documentation

  Comprehensive guide for the runtimed Python package covering:
  - Session API for code execution
  - DaemonClient for low-level operations
  - The document-first execution pattern
  - Multi-client scenarios
  - Result types (ExecutionResult, Output, Cell)
  - Sidecar launcher for rich output

* docs(runtimed-py): add package README for PyPI

* fix(runtimed-py): address code review findings

  1. Socket path mismatch (Issue #1): Session.connect() now respects the RUNTIMED_SOCKET_PATH env var, and test fixtures set it when spawning the daemon in CI mode.
  2. Nested timeouts (Issue #2): Remove the outer 10ms timeout wrapping recv_frame_any(), since it already has an internal 100ms timeout. This prevents repeatedly canceling mid-read.
  3. sync_rx not drained (Issue #3): Use try_send() instead of send().await for the changes channel. If the receiver isn't keeping up, skip the update rather than blocking. The Python bindings keep sync_rx alive but don't consume it.
  4. Parse failure semantics (Issue #4): When output_type is "error" but parsing fails, create an error Output to preserve success=false semantics.
  5. CONDUCTOR_WORKSPACE_PATH (Safia's comment): Use the env var as the preferred repo-root fallback before walking up parent directories.
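The document-first pattern above can be illustrated with a toy in-memory document: execution reads the source stored in the shared doc, not source carried with the request, so an edit from any connected client changes what runs. The method names mirror the bindings (create_cell, set_source, execute_cell); the class itself, and eval as a stand-in for kernel execution, are simplifications.

```python
import uuid

class ToyDoc:
    """Toy stand-in for the automerge-backed notebook document."""

    def __init__(self):
        self.cells = {}

    def create_cell(self, source=""):
        cell_id = uuid.uuid4().hex
        self.cells[cell_id] = source
        return cell_id

    def set_source(self, cell_id, source):
        self.cells[cell_id] = source

    def execute_cell(self, cell_id):
        # Reads from the doc, so every connected client sees the same
        # code run -- the core of the ExecuteCell (vs QueueCell) change.
        return eval(self.cells[cell_id])  # eval stands in for the kernel
```

In the multi-client tests, the equivalent of `set_source` from a second session is what makes both clients agree on the executed code.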
This got started over in runtimed/runtimed. Now I'm bringing it into the runt setup.