
fix(onboard): clarify preflight messages reference local NIM #2575

Merged
brandonpelfrey merged 2 commits into main from fix/preflight-nim-gpu-wording
Apr 27, 2026

Conversation

@zyang-dev
Contributor

@zyang-dev zyang-dev commented Apr 27, 2026

Summary

Preflight currently says "will use cloud inference" whenever a NIM-capable GPU is missing, which overstates the constraint. Local Ollama on CPU and other CPU-friendly providers remain available, so the message could mislead CPU-only users into thinking cloud was their only option.

Changes

  • src/lib/onboard.ts:2494,2500,2502: drop the "will use cloud inference" tail from the three preflight GPU notices
  • Add a "local NIM" qualifier so all three messages consistently describe local NIM availability rather than implying all local inference is gated by GPU
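For context, the reworded branching can be sketched roughly as follows. This is an illustrative sketch only: the `gpu` shape (`type`, `nimCapable`) follows the snippet quoted later in review, and the helper name is hypothetical, not the actual `onboard.ts` code.

```typescript
// Hypothetical sketch of the reworded preflight notices. Field names
// (type, nimCapable) mirror the snippet quoted in review; the function
// itself is illustrative, not the real onboard.ts implementation.
type GpuInfo = { type: "nvidia" | "apple"; nimCapable?: boolean } | null;

function preflightNimNotice(gpu: GpuInfo): string | null {
  if (gpu && gpu.type === "nvidia") {
    // Previously this branch appended "— will use cloud inference".
    return gpu.nimCapable ? null : "ⓘ GPU VRAM too small for local NIM";
  }
  if (gpu && gpu.type === "apple") {
    return "ⓘ Local NIM requires NVIDIA GPU";
  }
  return "ⓘ No GPU detected — local NIM unavailable";
}

// Prints "ⓘ No GPU detected — local NIM unavailable"
console.log(preflightNimNotice(null));
```

Note that every branch now speaks only to local NIM availability, leaving other local providers (e.g. CPU Ollama) unmentioned rather than implying a cloud fallback.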

Type of Change

  • Code change (feature, bug fix, or refactor)
  • Code change with doc updates
  • Doc only (prose changes, no code sample modifications)
  • Doc only (includes code sample changes)

Verification

  • npx prek run --all-files passes
  • npm test passes
  • Tests added or updated for new or changed behavior
  • No secrets, API keys, or credentials committed
  • Docs updated for user-facing behavior changes
  • make docs builds without warnings (doc changes only)
  • Doc pages follow the style guide (doc changes only)
  • New doc pages include SPDX header and frontmatter (new pages only)

AI Disclosure

  • AI-assisted — tool: Claude Code

Signed-off-by: zyang-dev <267119621+zyang-dev@users.noreply.github.com>

Summary by CodeRabbit

  • Bug Fixes
    • Corrected messaging when local NIM is unavailable due to GPU or no-GPU detection, removing inaccurate references to any cloud inference fallback and clarifying local availability.

Signed-off-by: zyang-dev <267119621+zyang-dev@users.noreply.github.com>
@coderabbitai
Contributor

coderabbitai Bot commented Apr 27, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: b902d190-5552-4abc-ab79-47738d4372f0

📥 Commits

Reviewing files that changed from the base of the PR and between 0e03abf and ac467d2.

📒 Files selected for processing (1)
  • src/lib/onboard.ts
✅ Files skipped from review due to trivial changes (1)
  • src/lib/onboard.ts

📝 Walkthrough

Walkthrough

Console messages in the preflight GPU-detection flow were updated to remove claims about falling back to cloud inference; messages now only report that local NIM is unavailable for Apple GPUs, no-GPU detection, or insufficient NVIDIA VRAM.

Changes

Cohort / File(s) | Summary
  • Preflight Messaging Update (src/lib/onboard.ts): Removed references to cloud inference fallback in GPU detection messages; messages now state only that local NIM is unavailable for Apple GPUs, no GPU, or insufficient NVIDIA VRAM.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

Poem

🐰 I hopped through logs with gentle care,

Cleared cloudy claims from the air,
If GPUs lack the needed grace,
I’ll speak only of the local case,
Honest and small — a rabbit’s trace.

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Description Check ✅ Passed: Check skipped - CodeRabbit’s high-level summary is enabled.
  • Title check ✅ Passed: The title clearly and specifically describes the main change: updating preflight messages to clarify they reference local NIM constraints rather than implying all local inference is unavailable.
  • Docstring Coverage ✅ Passed: Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.
  • Linked Issues check ✅ Passed: Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check ✅ Passed: Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
src/lib/onboard.ts (1)

2489-2503: Unify and tighten “local NIM” preflight wording for consistency across GPU branches

The change direction is correct (no “cloud inference” implication), but the three messages aren’t phrased consistently:

  • NVIDIA insufficient-VRAM says “GPU VRAM too small for local NIM” (not explicitly “Local NIM unavailable …” like the others).
  • Apple and “no GPU” are close, but could be even clearer that this constraint is specifically for local NIM availability.

Suggested patch (wording-only, behavior unchanged) to make all three branches match the same pattern:

💡 Proposed wording diff
   if (gpu && gpu.type === "nvidia") {
     console.log(`  ✓ NVIDIA GPU detected: ${gpu.count} GPU(s), ${gpu.totalMemoryMB} MB VRAM`);
     if (!gpu.nimCapable) {
-      console.log("  ⓘ GPU VRAM too small for local NIM");
+      console.log("  ⓘ Local NIM unavailable — NVIDIA GPU VRAM too small");
     }
   } else if (gpu && gpu.type === "apple") {
     console.log(
       `  ✓ Apple GPU detected: ${gpu.name}${gpu.cores ? ` (${gpu.cores} cores)` : ""}, ${gpu.totalMemoryMB} MB unified memory`,
     );
-    console.log("  ⓘ Local NIM requires NVIDIA GPU");
+    console.log("  ⓘ Local NIM unavailable on Apple GPUs — NVIDIA GPU required");
   } else {
-    console.log("  ⓘ No GPU detected — local NIM unavailable");
+    console.log("  ⓘ Local NIM unavailable — NVIDIA GPU required");
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/onboard.ts` around lines 2489 - 2503, The three GPU-branch console
messages for local NIM are inconsistent: update the messaging in the
nim.detectGpu() block so all branches explicitly state "Local NIM unavailable"
with the same phrasing; specifically change the NVIDIA low-VRAM branch (when gpu
exists, gpu.type === "nvidia" and !gpu.nimCapable) to log a message like "  ⓘ
Local NIM unavailable — GPU VRAM too small" and ensure the Apple branch
(gpu.type === "apple") and the no-GPU branch also use "Local NIM unavailable —
..." wording to match; locate these messages in the onboarding code around the
nim.detectGpu() call and update the console.log strings accordingly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 326d1a57-18e3-4732-b694-8c3054d91658

📥 Commits

Reviewing files that changed from the base of the PR and between a323104 and 0e03abf.

📒 Files selected for processing (1)
  • src/lib/onboard.ts

Signed-off-by: zyang-dev <267119621+zyang-dev@users.noreply.github.com>
@brandonpelfrey brandonpelfrey merged commit d392ec0 into main Apr 27, 2026
17 checks passed
@zyang-dev zyang-dev deleted the fix/preflight-nim-gpu-wording branch April 27, 2026 23:56
ericksoa added a commit that referenced this pull request Apr 28, 2026
Pulls in:
- a323104 fix(sandbox): include generate-openclaw-config.py in
  optimized build context (#2565) — same fix as the cherry-pick on
  this branch (ddb9e15), collapses cleanly.
- d392ec0 fix(onboard): clarify preflight messages reference local NIM (#2575)

# Conflicts:
#	test/sandbox-build-context.test.ts
@miyoungc miyoungc mentioned this pull request Apr 28, 2026
13 tasks
miyoungc added a commit that referenced this pull request Apr 28, 2026
## Summary
Refreshes user-facing docs for the last 24 hours of merged NemoClaw
history and bumps the docs metadata to 0.0.29, the next version after
v0.0.28. The updates are limited to behavior supported by merged PR
descriptions and diffs.

## Changes
- `docs/reference/commands.md`: documented `nemoclaw <name> policy-add
--from-file` and `--from-dir`, including custom preset review guidance,
from #2077 / commit `7720b175`.
- `docs/deployment/deploy-to-remote-gpu.md`: clarified that non-loopback
`CHAT_UI_URL` disables OpenClaw device pairing for remote browser-only
deployments, from #2449 / commit `f5ee8a4d`.
- `docs/inference/inference-options.md`: documented provider-aware
credential retry validation and the NVIDIA-only `nvapi-` prefix check,
from #2389 / commit `6f7f0c6d`.
- `docs/inference/switch-inference-providers.md`: documented
`NEMOCLAW_INFERENCE_INPUTS` for text/image-capable model metadata baked
into `openclaw.json`, from #2441 / commit `f4391892`.
- `docs/reference/troubleshooting.md`: added the Git certificate
verification entry for proxy CA propagation through `GIT_SSL_CAINFO`,
`GIT_SSL_CAPATH`, `CURL_CA_BUNDLE`, and `REQUESTS_CA_BUNDLE`, from #2345
/ commit `fa0dc1ab`.
- `docs/versions1.json` and `docs/project.json`: promoted docs version
`0.0.29`; `docs/versions1.json` omits unpublished `0.0.26`, `0.0.27`,
and `0.0.28` entries.
- `.agents/skills/nemoclaw-user-*`: regenerated derived user skill
references from the updated docs.
- Reviewed with no extra doc changes: #2575 / `d392ec07`, #2565 /
`a3231049`, #1965 / `db1ef3ca`, #1990 / `db665834`, #2495 / `7da86fa3`,
#2496 / `3192f4f4`, #2490 / `8c209058`, #2487 / `1f615e2f`, #2483 /
`5653d33a`, #2482 / `31c782c0`, #2464 / `23bb5703`, #2472 / `a54f9a34`,
and #2437 / `6bc860d7`.
- Skipped per docs policy: #2420 / `7b76df6b` touched the experimental
sandbox config path listed in `docs/.docs-skip`; #2466 / `cc15689c`
touched a skipped term and CI-only sandbox image files.

## Type of Change
- [ ] Code change (feature, bug fix, or refactor)
- [ ] Code change with doc updates
- [ ] Doc only (prose changes, no code sample modifications)
- [x] Doc only (includes code sample changes)

## Verification
- [x] `npx prek run --all-files` passes
- [ ] `npm test` passes — failed locally in installer-integration tests
and one onboard helper timeout; the doc-scoped hook test projects passed
under `prek`.
- [ ] Tests added or updated for new or changed behavior
- [x] No secrets, API keys, or credentials committed
- [x] Docs updated for user-facing behavior changes
- [ ] `make docs` builds without warnings (doc changes only) — build
succeeded, but local Sphinx emitted the existing version-switcher file
read message.
- [x] Doc pages follow the [style
guide](https://github.com/NVIDIA/NemoClaw/blob/main/docs/CONTRIBUTING.md)
(doc changes only)
- [ ] New doc pages include SPDX header and frontmatter (new pages only)

## AI Disclosure
- [x] AI-assisted — tool: Codex

---
Signed-off-by: Miyoung Choi <miyoungc@nvidia.com>


## Summary by CodeRabbit

* **New Features**
  * Support for custom YAML presets in policy configuration via --from-file and --from-dir.
  * New build-time inference input option to declare accepted modalities (text or text,image).

* **Improvements**
  * Credential validation now offers interactive recovery: re-enter key, retry, choose another provider, or exit.
  * Clarified provider-specific API key prefix handling (nvapi- only applies to NVIDIA keys).

* **Documentation**
  * TLS certificate troubleshooting for inspected networks.
  * Clarified remote dashboard security/device-pairing behavior; command docs updated; docs version bumped.

---------

Signed-off-by: Miyoung Choi <miyoungc@nvidia.com>