
fix: require explicit confirmation instead of auto-selecting local inference#82

Closed
WuKongAI-CMU wants to merge 1 commit into NVIDIA:main from WuKongAI-CMU:fix/no-silent-ollama-autoselect

Conversation

@WuKongAI-CMU
Contributor

Summary

Fixes #12 — onboarding no longer silently auto-selects Ollama or vLLM when detected locally.

Before: If Ollama was running on :11434 or vLLM on :8000, onboarding skipped the inference selection menu entirely and used the local engine without asking.

After: Detected local engines are reported as informational messages and highlighted in the selection hints, but the user always makes an explicit choice.

Changes (in both wizards):

  • nemoclaw/src/commands/onboard.ts (plugin wizard) — removed the if (ollama.running) auto-select branch, always shows the selection menu with detection hints
  • bin/lib/onboard.js (host CLI wizard) — replaced the early-return auto-select blocks with informational messages, so detection flows into the normal options menu
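The shape of the change can be sketched as follows. This is a minimal illustration, not the actual code from onboard.ts: the `Detection` type, `buildMenu` function, and engine names are hypothetical stand-ins for the real wizard internals.

```typescript
// Hypothetical sketch: detection results decorate the menu as hints
// instead of triggering an early-return auto-select.

interface Detection {
  name: string;    // e.g. "ollama", "vllm"
  running: boolean;
}

// Before (simplified): the wizard short-circuited on detection.
//   if (ollama.running) return "ollama";   // menu never shown

// After: detection only annotates the choices; the user always picks.
function buildMenu(detections: Detection[]): string[] {
  const engines = ["NVIDIA Cloud", "Ollama", "vLLM"];
  return engines.map((engine) => {
    const hit = detections.find(
      (d) => d.running && d.name.toLowerCase() === engine.toLowerCase()
    );
    return hit ? `${engine} (detected locally)` : engine;
  });
}

const menu = buildMenu([{ name: "ollama", running: true }]);
console.log(menu);
// All three options are always presented; Ollama is only highlighted.
```

The key point is that the detected engine influences presentation (a hint string) rather than control flow, so the selection prompt is reached unconditionally.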

Test plan

  • TypeScript builds clean
  • Existing tests pass (19/19)
  • Manual: start Ollama, run nemoclaw onboard → should show detection message but still present the full menu
  • Manual: run nemoclaw onboard with no local engines → menu appears as before

🤖 Generated with Claude Code

…ference

Both the plugin onboard command and the host CLI wizard silently
auto-selected Ollama or vLLM when detected on the local machine,
bypassing user confirmation entirely. This surprised users who
expected to use NVIDIA Cloud but had Ollama installed for other
purposes.

Now detected local engines are reported as informational messages
and highlighted in the selection menu, but the user always chooses
explicitly.

Closes NVIDIA#12

Signed-off-by: peteryuqin <peter.yuqin@gmail.com>
@kjw3
Contributor

kjw3 commented Mar 18, 2026

Thanks for the issue and the PR! We have a large PR #295 that addresses Ollama detection, making it a choice in the installer and adding the ability to choose the model.

@WuKongAI-CMU
Contributor Author

Closing to reduce my open PR count below the repo policy limit and refocus on a smaller set of higher-signal changes. I can revive this branch later if it becomes the right path again.



Development

Successfully merging this pull request may close these issues.

Require explicit inference-provider confirmation instead of silently auto-selecting detected Ollama
