Comprehensive Ollama Support PR#7432
Conversation
Fixes openclaw#4544

When Ollama is running locally with models installed, they were showing as "missing" in `openclaw models list` because the provider was only registered if an explicit API key was configured. Since Ollama is a local service that doesn't require authentication, this change:

- Auto-discovers Ollama models by attempting to query the local instance
- Registers the provider if models are found
- Uses a placeholder API key `ollama-local` for local instances
- Still respects explicit configuration if provided

This aligns with how other local/unauthenticated providers work (e.g., qwen-portal with an OAuth placeholder). Users can now use Ollama models immediately after installing Ollama and pulling models, without needing to configure authentication.
…covery fix: Auto-discover Ollama models without requiring explicit API key
Brucemacd/ollama onboard
Add ollama native tests
Pull request overview
This PR adds comprehensive Ollama support to the codebase, enabling users to configure and use Ollama for local and cloud-based model execution.
Changes:
- Added native Ollama API provider implementation with tool call support and availability checks
- Integrated Ollama as a new authentication choice in the onboarding flow
- Implemented automatic model discovery for local Ollama instances
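For context, a minimal sketch of what a native Ollama chat call with a tool definition looks like against the standard `/api/chat` endpoint. This is illustrative only; `OLLAMA_HOST`, `chatOnce`, and the tool shape are assumptions, not identifiers from this PR:

```ts
// Hypothetical sketch of a non-streaming /api/chat request with one tool.
const OLLAMA_HOST = "http://127.0.0.1:11434";

async function chatOnce(model: string, prompt: string) {
  const res = await fetch(`${OLLAMA_HOST}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      stream: false,
      messages: [{ role: "user", content: prompt }],
      tools: [
        {
          type: "function",
          function: {
            name: "get_weather", // illustrative tool, not part of the PR
            description: "Look up current weather for a city",
            parameters: {
              type: "object",
              properties: { city: { type: "string" } },
              required: ["city"],
            },
          },
        },
      ],
    }),
  });
  if (!res.ok) throw new Error(`Ollama chat failed: ${res.status}`);
  // Non-streaming responses return a single JSON object whose message
  // may carry tool_calls when the model decides to invoke a tool.
  return (await res.json()) as { message: { content: string; tool_calls?: unknown[] } };
}
```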
Reviewed changes
Copilot reviewed 14 out of 14 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| src/providers/ollama-native.ts | Core Ollama provider implementation with chat and availability check functions |
| src/providers/ollama-native.test.ts | Comprehensive test suite for Ollama native provider functionality |
| src/commands/onboard-types.ts | Added "ollama" as a valid auth choice type |
| src/commands/onboard-non-interactive/local/auth-choice.ts | Non-interactive Ollama setup with base URL configuration |
| src/commands/onboard-auth.ts | Exported Ollama configuration functions and constants |
| src/commands/onboard-auth.models.ts | Defined Ollama base URL and default API key constants |
| src/commands/onboard-auth.credentials.ts | Added credential storage function for Ollama |
| src/commands/onboard-auth.config-core.ts | Implemented Ollama provider configuration logic |
| src/commands/auth-choice.preferred-provider.ts | Mapped "ollama" auth choice to provider |
| src/commands/auth-choice.apply.ts | Registered Ollama auth choice handler |
| src/commands/auth-choice.apply.ollama.ts | Interactive Ollama setup with auto-detection |
| src/commands/auth-choice-options.ts | Added Ollama to auth choice UI options |
| src/cli/program/register.onboard.ts | Added CLI flag for Ollama base URL |
| src/agents/models-config.providers.ts | Auto-discovery of Ollama provider when available |
```ts
export async function setOllamaApiKey(key: string, agentDir?: string) {
  upsertAuthProfile({
    profileId: "ollama:default",
    credential: {
      type: "api_key",
      provider: "ollama",
      key,
    },
    agentDir: resolveAuthAgentDir(agentDir),
  });
}
```
Missing return statement. The function signature is async but doesn't return the result of upsertAuthProfile, which may return a Promise. Add 'return await' before upsertAuthProfile.
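A one-line sketch of that suggestion, assuming `upsertAuthProfile` returns a promise:

```ts
export async function setOllamaApiKey(key: string, agentDir?: string) {
  // Propagate the result (and any rejection) of upsertAuthProfile to the caller.
  return await upsertAuthProfile({
    profileId: "ollama:default",
    credential: { type: "api_key", provider: "ollama", key },
    agentDir: resolveAuthAgentDir(agentDir),
  });
}
```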
```ts
const { apiKey: existingApiKey, ...existingProviderRest } = (existingProvider ?? {}) as Record<
  string,
  unknown
> as { apiKey?: string };
```
Double type assertion pattern is unclear and error-prone. Consider using a more explicit approach: first cast to Record<string, unknown>, destructure, then use type guards if needed.
Suggested change:

```diff
- const { apiKey: existingApiKey, ...existingProviderRest } = (existingProvider ?? {}) as Record<
-   string,
-   unknown
- > as { apiKey?: string };
+ const existingProviderRecord = (existingProvider ?? {}) as Record<string, unknown>;
+ const { apiKey: rawApiKey, ...existingProviderRest } = existingProviderRecord;
+ const existingApiKey = typeof rawApiKey === "string" ? rawApiKey : undefined;
```
```ts
export const OLLAMA_BASE_URL = "http://127.0.0.1:11434/v1";
export const OLLAMA_DEFAULT_API_KEY = "ollama";
```
[P1] OLLAMA_DEFAULT_API_KEY is set to "ollama", but the rest of the PR treats the local/no-auth sentinel as "ollama-local" (e.g. models-config.providers.ts uses "ollama-local" and ollama-native.ts skips the Authorization header only for that value). As-is, onboarding writes "ollama" into the auth profile/config, which means downstream code may send Authorization: Bearer ollama to a local Ollama instance unnecessarily.
This mismatch also appears in src/commands/auth-choice.apply.ollama.ts and src/commands/onboard-non-interactive/local/auth-choice.ts (they store OLLAMA_DEFAULT_API_KEY).
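One minimal way to resolve the mismatch would be to align the constant with the sentinel the rest of the PR already checks for. A sketch, not the committed fix:

```ts
// src/commands/onboard-auth.models.ts
export const OLLAMA_BASE_URL = "http://127.0.0.1:11434/v1";
// Use the same sentinel that models-config.providers.ts and ollama-native.ts
// already treat as "local, no auth", so no Authorization header is sent.
export const OLLAMA_DEFAULT_API_KEY = "ollama-local";
```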
```ts
const ollamaProvider = await buildOllamaProvider();
if (ollamaProvider.models.length > 0) {
  const ollamaKey =
    resolveEnvApiKeyVarName("ollama") ??
    resolveApiKeyFromProfiles({ provider: "ollama", store: authStore }) ??
    "ollama-local"; // Placeholder for local Ollama instance
  providers.ollama = { ...ollamaProvider, apiKey: ollamaKey };
}
```
[P1] resolveImplicitProviders now always calls buildOllamaProvider() (which hits the network unless VITEST/NODE_ENV === "test"). This can introduce noticeable latency/hangs during startup/config resolution when Ollama isn't running, since it waits up to the fetch timeout.
If the intent is “auto-discover only when local Ollama is reachable”, consider a lightweight reachability check first (or make discovery lazy/on-demand) to avoid doing a 5s probe on every invocation in non-test environments.
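A sketch of such a pre-check, using a short `AbortController` timeout against `/api/tags` before running full discovery. `isOllamaReachable` and the timeout value are illustrative assumptions:

```ts
// Cheap reachability probe: fail fast instead of waiting out the full fetch timeout.
async function isOllamaReachable(
  baseUrl = "http://127.0.0.1:11434",
  timeoutMs = 300,
): Promise<boolean> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(`${baseUrl}/api/tags`, { signal: controller.signal });
    return res.ok;
  } catch {
    return false; // Not running, connection refused, or timed out.
  } finally {
    clearTimeout(timer);
  }
}

// Only pay the discovery cost when a local instance answers quickly:
// if (await isOllamaReachable()) { const ollamaProvider = await buildOllamaProvider(); ... }
```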
```ts
const providers = { ...cfg.models?.providers };
const existingProvider = providers.ollama;
const { apiKey: existingApiKey, ...existingProviderRest } = (existingProvider ?? {}) as Record<
  string,
  unknown
> as { apiKey?: string };
const resolvedApiKey = typeof existingApiKey === "string" ? existingApiKey : undefined;
const normalizedApiKey = resolvedApiKey?.trim() || OLLAMA_DEFAULT_API_KEY;
const baseUrl = params?.baseUrl ?? OLLAMA_BASE_URL;

providers.ollama = {
  ...existingProviderRest,
  baseUrl,
  api: "openai-completions",
  apiKey: normalizedApiKey,
  // Models are discovered dynamically via Ollama's /api/tags endpoint
  models: [],
};
```
[P2] applyOllamaProviderConfig always overwrites providers.ollama.models with an empty array. If a user has explicitly configured static models for Ollama in their config, this will wipe them during onboarding/apply and could make models list look empty until runtime discovery occurs.
If the desired behavior is “don’t change models here”, consider preserving any existing providers.ollama.models instead of forcing [].
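A sketch of preserving user-configured models instead of forcing `[]`, reusing the variables from the snippet above (the `models` shape is assumed here for illustration):

```ts
const existingModels = (existingProviderRest as { models?: unknown }).models;
providers.ollama = {
  ...existingProviderRest,
  baseUrl,
  api: "openai-completions",
  apiKey: normalizedApiKey,
  // Preserve any statically configured models; only default to [] when none exist.
  models: Array.isArray(existingModels) ? existingModels : [],
};
```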
```ts
const textDecoder = new TextDecoder();

function bodyToString(body: BodyInit | null | undefined): string {
  if (typeof body === "string") {
    return body;
  }

  if (!body) {
    return "";
  }

  if (body instanceof URLSearchParams) {
    return body.toString();
  }

  if (body instanceof ArrayBuffer) {
    return textDecoder.decode(new Uint8Array(body));
  }

  if (body instanceof Uint8Array) {
    return textDecoder.decode(body);
  }

  if (
    typeof body === "object" &&
    typeof (body as { toString?: () => string }).toString === "function"
  ) {
    return body.toString();
  }

  throw new Error("Unable to decode fetch body");
}
```
[P2] bodyToString uses a module-level TextDecoder. In some test runners/environments, TextDecoder may not exist (or be non-standard), which would make these unit tests fail even though the provider logic is fine.
If you want the helper to be robust across Node environments, consider instantiating TextDecoder lazily (or avoiding it entirely since body is always a JSON string in these tests).
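A sketch of the lazy-instantiation variant, keeping the same decoding behavior while deferring `TextDecoder` construction (`cachedDecoder` and `decodeBytes` are illustrative names):

```ts
// Construct the decoder only when a binary body actually needs decoding,
// so environments without a usable global TextDecoder can still run
// the string-only test paths.
let cachedDecoder: TextDecoder | undefined;

function decodeBytes(bytes: Uint8Array): string {
  cachedDecoder ??= new TextDecoder();
  return cachedDecoder.decode(bytes);
}

// In bodyToString, replace textDecoder.decode(...) with decodeBytes(...):
//   if (body instanceof ArrayBuffer) return decodeBytes(new Uint8Array(body));
//   if (body instanceof Uint8Array) return decodeBytes(body);
```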
Co-authored-by: charlieduzstuf <99685295+charlieduzstuf@users.noreply.github.com>
Co-authored-by: spiceoogway <105812383+spiceoogway@users.noreply.github.com>
Co-authored-by: alltomatos <179011971+alltomatos@users.noreply.github.com>
Integrate Ollama auto-discovery and installer
Good job, I want to test Ollama on my gaming GPU.
Takes most/all of the Ollama PRs and merges them into one more manageable PR.
Greptile Overview
Greptile Summary
This PR consolidates Ollama onboarding and provider support by:
- Adding an Ollama onboarding path with a `--ollama-base-url` CLI flag.
- Implementing a native Ollama provider (`/api/chat` + `/api/tags`) with unit tests.
- Auto-discovering the `ollama` provider when models are detected.

The changes fit into the existing onboarding/auth-choice flow (`src/commands/onboard-*`) and provider discovery (`src/agents/models-config.providers.ts`) by treating Ollama as an OpenAI-compatible provider and wiring it into the same config/profile mechanisms used by other providers.

Confidence Score: 3/5

The API-key sentinel is inconsistent (`ollama` vs `ollama-local`), and provider discovery now introduces a network probe during implicit provider resolution, which could degrade UX when Ollama isn't running. (2/5)
Context used:
- dashboard — CLAUDE.md (source)
- dashboard — AGENTS.md (source)