fix: require explicit confirmation instead of auto-selecting local inference #82
Closed
WuKongAI-CMU wants to merge 1 commit into NVIDIA:main from
Conversation
fix: require explicit confirmation instead of auto-selecting local inference

Both the plugin onboard command and the host CLI wizard silently auto-selected Ollama or vLLM when detected on the local machine, bypassing user confirmation entirely. This surprised users who expected to use NVIDIA Cloud but had Ollama installed for other purposes.

Now detected local engines are reported as informational messages and highlighted in the selection menu, but the user always chooses explicitly.

Closes NVIDIA#12

Signed-off-by: peteryuqin <peter.yuqin@gmail.com>
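As a rough sketch of the detection side described in this commit message (the function name, types, and port-probing details below are assumptions for illustration, not code from this PR): detection only gathers hints and never picks an engine on the user's behalf.

```typescript
// Sketch only: probe the default local ports mentioned in this PR and
// return whatever responds as hints; never pick an engine for the user.
interface DetectedEngine {
  name: "ollama" | "vllm";
  url: string;
}

async function detectLocalEngines(): Promise<DetectedEngine[]> {
  const candidates: DetectedEngine[] = [
    { name: "ollama", url: "http://localhost:11434" },
    { name: "vllm", url: "http://localhost:8000" },
  ];
  const found: DetectedEngine[] = [];
  for (const engine of candidates) {
    try {
      // A short timeout keeps onboarding responsive when nothing is listening.
      const res = await fetch(engine.url, { signal: AbortSignal.timeout(500) });
      if (res.ok) found.push(engine);
    } catch {
      // Not reachable locally; simply omit it from the hints.
    }
  }
  return found;
}
```

With detection split out this way, the wizard can print what it found and still fall through to the normal provider menu.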
Contributor
Thanks for the issue and the PR! We have a large PR #295 that addresses Ollama detection, making it a choice in the installer and adding the ability to choose the model.
Contributor
Author
Closing to reduce my open PR count below the repo policy limit and refocus on a smaller set of higher-signal changes. I can revive this branch later if it becomes the right path again.
Summary
Fixes #12 — onboarding no longer silently auto-selects Ollama or vLLM when detected locally.
Before: If Ollama was running on :11434 or vLLM on :8000, onboarding skipped the inference selection menu entirely and used the local engine without asking.

After: Detected local engines are reported as informational messages and highlighted in the selection hints, but the user always makes an explicit choice.
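The control-flow change can be sketched roughly like this (the provider names, menu text, and `chooseInference` helper are illustrative assumptions, not identifiers from onboard.ts): the old early return on a detected engine is gone, detection is printed and decorates the menu, and the prompt always runs.

```typescript
import * as readline from "node:readline/promises";

// Hypothetical names for illustration; the real wizard code in onboard.ts differs.
type EngineHint = { name: string; url: string };

const PROVIDERS = ["nvidia-cloud", "ollama", "vllm"];

async function chooseInference(detected: EngineHint[]): Promise<string> {
  // Detection becomes an informational message, not a decision.
  for (const engine of detected) {
    console.log(`Detected ${engine.name} running locally at ${engine.url}`);
  }

  // Detected engines are only highlighted in the menu text.
  const menu = PROVIDERS.map((provider, i) => {
    const hint = detected.some((e) => e.name === provider) ? " (detected locally)" : "";
    return `  ${i + 1}. ${provider}${hint}`;
  }).join("\n");

  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  try {
    // The user always makes an explicit choice; there is no early return.
    const answer = await rl.question(`Select an inference provider:\n${menu}\n> `);
    const index = Number.parseInt(answer, 10) - 1;
    return PROVIDERS[index] ?? PROVIDERS[0];
  } finally {
    rl.close();
  }
}
```

The same shape applies to the host CLI wizard in bin/lib/onboard.js: an informational message replaces the early return, and detection only affects how the options are labeled.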
Changes (in both wizards):

- nemoclaw/src/commands/onboard.ts (plugin wizard): removed the `if (ollama.running)` auto-select branch; always shows the selection menu with detection hints
- bin/lib/onboard.js (host CLI wizard): replaced the early-return auto-select blocks with informational messages; detection flows into the normal options menu

Test plan
- `nemoclaw onboard` → should show the detection message but still present the full menu
- `nemoclaw onboard` with no local engines → the menu appears as before

🤖 Generated with Claude Code