fix(onboard): clarify preflight messages reference local NIM #2575

brandonpelfrey merged 2 commits into main
Conversation
Signed-off-by: zyang-dev <267119621+zyang-dev@users.noreply.github.com>
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
📒 Files selected for processing (1)
✅ Files skipped from review due to trivial changes (1)
📝 Walkthrough

Console messages in the preflight GPU-detection flow were updated to remove claims about falling back to cloud inference; messages now only report that local NIM is unavailable for Apple GPUs, no-GPU detection, or insufficient NVIDIA VRAM.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~2 minutes
🚥 Pre-merge checks: ✅ 5 passed checks
🧹 Nitpick comments (1)
src/lib/onboard.ts (1)
2489-2503: Unify and tighten “local NIM” preflight wording for consistency across GPU branches

The change direction is correct (no “cloud inference” implication), but the three messages aren’t phrased consistently:
- NVIDIA insufficient-VRAM says “GPU VRAM too small for local NIM” (not explicitly “Local NIM unavailable …” like the others).
- Apple and “no GPU” are close, but could be even clearer that this constraint is specifically for local NIM availability.
Suggested patch (wording-only, behavior unchanged) to make all three branches match the same pattern:
💡 Proposed wording diff
```diff
 if (gpu && gpu.type === "nvidia") {
   console.log(` ✓ NVIDIA GPU detected: ${gpu.count} GPU(s), ${gpu.totalMemoryMB} MB VRAM`);
   if (!gpu.nimCapable) {
-    console.log(" ⓘ GPU VRAM too small for local NIM");
+    console.log(" ⓘ Local NIM unavailable — NVIDIA GPU VRAM too small");
   }
 } else if (gpu && gpu.type === "apple") {
   console.log(
     ` ✓ Apple GPU detected: ${gpu.name}${gpu.cores ? ` (${gpu.cores} cores)` : ""}, ${gpu.totalMemoryMB} MB unified memory`,
   );
-  console.log(" ⓘ Local NIM requires NVIDIA GPU");
+  console.log(" ⓘ Local NIM unavailable on Apple GPUs — NVIDIA GPU required");
 } else {
-  console.log(" ⓘ No GPU detected — local NIM unavailable");
+  console.log(" ⓘ Local NIM unavailable — NVIDIA GPU required");
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/onboard.ts` around lines 2489 - 2503, The three GPU-branch console messages for local NIM are inconsistent: update the messaging in the nim.detectGpu() block so all branches explicitly state "Local NIM unavailable" with the same phrasing; specifically change the NVIDIA low-VRAM branch (when gpu exists, gpu.type === "nvidia" and !gpu.nimCapable) to log a message like " ⓘ Local NIM unavailable — GPU VRAM too small" and ensure the Apple branch (gpu.type === "apple") and the no-GPU branch also use "Local NIM unavailable — ..." wording to match; locate these messages in the onboarding code around the nim.detectGpu() call and update the console.log strings accordingly.
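The reviewer's unified wording can be sketched as a small pure helper. This is illustrative only: the helper name and the simplified `GpuInfo` shape are assumptions, and the real `onboard.ts` logs these strings inline rather than through a function.

```typescript
// Illustrative helper only: onboard.ts logs inline, but a pure function
// makes the unified "Local NIM unavailable — ..." pattern easy to test.
type GpuInfo =
  | { type: "nvidia"; nimCapable: boolean }
  | { type: "apple" }
  | null;

function localNimNotice(gpu: GpuInfo): string | null {
  if (gpu && gpu.type === "nvidia") {
    // NVIDIA GPU present; only warn when VRAM is below the local-NIM floor.
    return gpu.nimCapable
      ? null
      : " ⓘ Local NIM unavailable — NVIDIA GPU VRAM too small";
  }
  // Apple GPUs and the no-GPU case share the same constraint and wording.
  return " ⓘ Local NIM unavailable — NVIDIA GPU required";
}
```

With this shape, all three branches lead with the same "Local NIM unavailable" prefix, which is exactly the consistency the nitpick asks for.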
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
Run ID: 326d1a57-18e3-4732-b694-8c3054d91658
📒 Files selected for processing (1)
src/lib/onboard.ts
Signed-off-by: zyang-dev <267119621+zyang-dev@users.noreply.github.com>
Pulls in:
- a323104 fix(sandbox): include generate-openclaw-config.py in optimized build context (#2565) — same fix as the cherry-pick on this branch (ddb9e15), collapses cleanly.
- d392ec0 fix(onboard): clarify preflight messages reference local NIM (#2575)

# Conflicts:
# test/sandbox-build-context.test.ts
## Summary

Refreshes user-facing docs for the last 24 hours of merged NemoClaw history and bumps the docs metadata to 0.0.29, the next version after v0.0.28. The updates are limited to behavior supported by merged PR descriptions and diffs.

## Changes

- `docs/reference/commands.md`: documented `nemoclaw <name> policy-add --from-file` and `--from-dir`, including custom preset review guidance, from #2077 / commit `7720b175`.
- `docs/deployment/deploy-to-remote-gpu.md`: clarified that non-loopback `CHAT_UI_URL` disables OpenClaw device pairing for remote browser-only deployments, from #2449 / commit `f5ee8a4d`.
- `docs/inference/inference-options.md`: documented provider-aware credential retry validation and the NVIDIA-only `nvapi-` prefix check, from #2389 / commit `6f7f0c6d`.
- `docs/inference/switch-inference-providers.md`: documented `NEMOCLAW_INFERENCE_INPUTS` for text/image-capable model metadata baked into `openclaw.json`, from #2441 / commit `f4391892`.
- `docs/reference/troubleshooting.md`: added the Git certificate verification entry for proxy CA propagation through `GIT_SSL_CAINFO`, `GIT_SSL_CAPATH`, `CURL_CA_BUNDLE`, and `REQUESTS_CA_BUNDLE`, from #2345 / commit `fa0dc1ab`.
- `docs/versions1.json` and `docs/project.json`: promoted docs version `0.0.29`; `docs/versions1.json` omits unpublished `0.0.26`, `0.0.27`, and `0.0.28` entries.
- `.agents/skills/nemoclaw-user-*`: regenerated derived user skill references from the updated docs.
- Reviewed with no extra doc changes: #2575 / `d392ec07`, #2565 / `a3231049`, #1965 / `db1ef3ca`, #1990 / `db665834`, #2495 / `7da86fa3`, #2496 / `3192f4f4`, #2490 / `8c209058`, #2487 / `1f615e2f`, #2483 / `5653d33a`, #2482 / `31c782c0`, #2464 / `23bb5703`, #2472 / `a54f9a34`, and #2437 / `6bc860d7`.
- Skipped per docs policy: #2420 / `7b76df6b` touched the experimental sandbox config path listed in `docs/.docs-skip`; #2466 / `cc15689c` touched a skipped term and CI-only sandbox image files.

## Type of Change

- [ ] Code change (feature, bug fix, or refactor)
- [ ] Code change with doc updates
- [ ] Doc only (prose changes, no code sample modifications)
- [x] Doc only (includes code sample changes)

## Verification

- [x] `npx prek run --all-files` passes
- [ ] `npm test` passes — failed locally in installer-integration tests and one onboard helper timeout; the doc-scoped hook test projects passed under `prek`.
- [ ] Tests added or updated for new or changed behavior
- [x] No secrets, API keys, or credentials committed
- [x] Docs updated for user-facing behavior changes
- [ ] `make docs` builds without warnings (doc changes only) — build succeeded, but local Sphinx emitted the existing version-switcher file read message.
- [x] Doc pages follow the [style guide](https://github.com/NVIDIA/NemoClaw/blob/main/docs/CONTRIBUTING.md) (doc changes only)
- [ ] New doc pages include SPDX header and frontmatter (new pages only)

## AI Disclosure

- [x] AI-assisted — tool: Codex

Signed-off-by: Miyoung Choi <miyoungc@nvidia.com>

## Summary by CodeRabbit

* **New Features**
  * Support for custom YAML presets in policy configuration via --from-file and --from-dir.
  * New build-time inference input option to declare accepted modalities (text or text,image).
* **Improvements**
  * Credential validation now offers interactive recovery: re-enter key, retry, choose another provider, or exit.
  * Clarified provider-specific API key prefix handling (nvapi- only applies to NVIDIA keys).
* **Documentation**
  * TLS certificate troubleshooting for inspected networks.
  * Clarified remote dashboard security/device-pairing behavior; command docs updated; docs version bumped.

Signed-off-by: Miyoung Choi <miyoungc@nvidia.com>
Summary
Preflight currently says "will use cloud inference" whenever a NIM-capable GPU is missing, which overstates the constraint. Local Ollama on CPU and other CPU-friendly providers remain available, so the message misled CPU-only users into thinking cloud was their only option.
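To make the fix concrete, the three notices can be sketched as a pure function over a simplified GPU descriptor. The function name and the `Gpu` shape are hypothetical; the notice strings follow the review diff shown in this PR, and the detection logic itself is unchanged by the fix.

```typescript
// Sketch of the post-fix preflight notices: only the low-VRAM, Apple-GPU,
// and no-GPU branches emit a line, and none of them mention cloud inference.
interface Gpu {
  type: "nvidia" | "apple";
  nimCapable?: boolean;
}

function preflightGpuNotices(gpu: Gpu | null): string[] {
  const lines: string[] = [];
  if (gpu && gpu.type === "nvidia") {
    if (!gpu.nimCapable) {
      // The "will use cloud inference" tail was dropped from this notice.
      lines.push(" ⓘ GPU VRAM too small for local NIM");
    }
  } else if (gpu && gpu.type === "apple") {
    // Tail dropped here as well.
    lines.push(" ⓘ Local NIM requires NVIDIA GPU");
  } else {
    // Tail dropped here as well.
    lines.push(" ⓘ No GPU detected — local NIM unavailable");
  }
  return lines;
}
```

After the change, CPU-only users see only the local-NIM constraint and are no longer told that cloud inference is their only remaining option.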
Changes
- src/lib/onboard.ts:2494,2500,2502: drop the "will use cloud inference" tail from the three preflight GPU notices

Type of Change
Verification
- `npx prek run --all-files` passes
- `npm test` passes
- `make docs` builds without warnings (doc changes only)

AI Disclosure
Signed-off-by: zyang-dev <267119621+zyang-dev@users.noreply.github.com>