Comprehensive Ollama Support #7432

Open
charlieduzstuf wants to merge 18 commits into openclaw:main from charlieduzstuf:main

Conversation

@charlieduzstuf

@charlieduzstuf charlieduzstuf commented Feb 2, 2026

Takes most/all of the existing Ollama PRs and merges them into one more manageable PR.

Greptile Overview

Greptile Summary

This PR consolidates Ollama onboarding and provider support by:

  • Adding an interactive auth choice handler for Ollama plus a non-interactive --ollama-base-url path.
  • Introducing a native Ollama client (/api/chat + /api/tags) with unit tests.
  • Changing provider resolution to auto-discover a local Ollama instance and populate an ollama provider when models are detected.

The changes fit into the existing onboarding/auth-choice flow (src/commands/onboard-*) and provider discovery (src/agents/models-config.providers.ts) by treating Ollama as an OpenAI-compatible provider and wiring it into the same config/profile mechanisms used by other providers.

Confidence Score: 3/5

  • This PR looks reasonably safe to merge, but has a few integration mismatches that could cause confusing behavior in real Ollama setups.
  • Core changes are straightforward and covered by unit tests for the new native client, but there’s a likely sentinel mismatch for the local API key (ollama vs ollama-local) and provider discovery now introduces a network probe during implicit provider resolution, which could degrade UX when Ollama isn’t running.
  • src/commands/onboard-auth.models.ts, src/agents/models-config.providers.ts, src/commands/onboard-auth.config-core.ts


Context used:

  • Context from dashboard - CLAUDE.md (source)
  • Context from dashboard - AGENTS.md (source)

spiceoogway and others added 11 commits January 31, 2026 16:25
Fixes openclaw#4544

When Ollama is running locally with models installed, they were showing
as 'missing' in 'openclaw models list' because the provider was only
registered if an explicit API key was configured.

Since Ollama is a local service that doesn't require authentication,
this change:
- Auto-discovers Ollama models by attempting to query the local instance
- Registers the provider if models are found
- Uses a placeholder API key 'ollama-local' for local instances
- Still respects explicit configuration if provided

This aligns with how other local/unauthenticated providers work
(e.g., qwen-portal with OAuth placeholder).

Users can now use Ollama models immediately after installing Ollama
and pulling models, without needing to configure authentication.
…covery

fix: Auto-discover Ollama models without requiring explicit API key
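
The discovery flow this commit describes — query the local instance and register the provider only when models are found — can be sketched as follows. This is a hedged illustration, not the PR's actual code: `discoverOllamaModels` and the default base URL are assumptions, though `/api/tags` is Ollama's documented model-listing endpoint.

```typescript
// Illustrative sketch of local Ollama model discovery (assumes Node 18+ global fetch).
async function discoverOllamaModels(
  baseUrl = "http://127.0.0.1:11434",
): Promise<string[]> {
  try {
    const res = await fetch(`${baseUrl}/api/tags`, {
      signal: AbortSignal.timeout(2000),
    });
    if (!res.ok) return [];
    // /api/tags returns { models: [{ name: "llama3:latest", ... }, ...] }
    const body = (await res.json()) as { models?: { name: string }[] };
    return (body.models ?? []).map((m) => m.name);
  } catch {
    // Ollama not running (connection refused) or timed out: no models to register.
    return [];
  }
}
```

A caller would register the provider only when the returned list is non-empty, matching the commit's "registers the provider if models are found" behavior.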
Copilot AI review requested due to automatic review settings February 2, 2026 20:27
Contributor

Copilot AI left a comment


Pull request overview

This PR adds comprehensive Ollama support to the codebase, enabling users to configure and use Ollama for local and cloud-based model execution.

Changes:

  • Added native Ollama API provider implementation with tool call support and availability checks
  • Integrated Ollama as a new authentication choice in the onboarding flow
  • Implemented automatic model discovery for local Ollama instances

Reviewed changes

Copilot reviewed 14 out of 14 changed files in this pull request and generated 2 comments.

File summary:
src/providers/ollama-native.ts Core Ollama provider implementation with chat and availability check functions
src/providers/ollama-native.test.ts Comprehensive test suite for Ollama native provider functionality
src/commands/onboard-types.ts Added "ollama" as a valid auth choice type
src/commands/onboard-non-interactive/local/auth-choice.ts Non-interactive Ollama setup with base URL configuration
src/commands/onboard-auth.ts Exported Ollama configuration functions and constants
src/commands/onboard-auth.models.ts Defined Ollama base URL and default API key constants
src/commands/onboard-auth.credentials.ts Added credential storage function for Ollama
src/commands/onboard-auth.config-core.ts Implemented Ollama provider configuration logic
src/commands/auth-choice.preferred-provider.ts Mapped "ollama" auth choice to provider
src/commands/auth-choice.apply.ts Registered Ollama auth choice handler
src/commands/auth-choice.apply.ollama.ts Interactive Ollama setup with auto-detection
src/commands/auth-choice-options.ts Added Ollama to auth choice UI options
src/cli/program/register.onboard.ts Added CLI flag for Ollama base URL
src/agents/models-config.providers.ts Auto-discovery of Ollama provider when available


Comment on lines +182 to +192
export async function setOllamaApiKey(key: string, agentDir?: string) {
  upsertAuthProfile({
    profileId: "ollama:default",
    credential: {
      type: "api_key",
      provider: "ollama",
      key,
    },
    agentDir: resolveAuthAgentDir(agentDir),
  });
}

Copilot AI Feb 2, 2026


Missing return statement. The function signature is async but doesn't return the result of upsertAuthProfile, which may return a Promise. Add 'return await' before upsertAuthProfile.
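A minimal sketch of the fix Copilot suggests. Since the real `upsertAuthProfile` is not shown in this thread, it is stubbed here as a hypothetical async helper; only the `return await` change is the point.

```typescript
// Hypothetical stand-in for the real upsertAuthProfile, which may return a Promise.
type AuthProfile = {
  profileId: string;
  credential: { type: string; provider: string; key: string };
};

async function upsertAuthProfile(profile: AuthProfile): Promise<AuthProfile> {
  // The real implementation persists the profile; this stub just echoes it.
  return profile;
}

// Propagate the result (and any rejection) instead of dropping the Promise.
export async function setOllamaApiKey(key: string): Promise<AuthProfile> {
  return await upsertAuthProfile({
    profileId: "ollama:default",
    credential: { type: "api_key", provider: "ollama", key },
  });
}
```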

Comment on lines +472 to +475
const { apiKey: existingApiKey, ...existingProviderRest } = (existingProvider ?? {}) as Record<
  string,
  unknown
> as { apiKey?: string };

Copilot AI Feb 2, 2026


Double type assertion pattern is unclear and error-prone. Consider using a more explicit approach: first cast to Record<string, unknown>, destructure, then use type guards if needed.

Suggested change
- const { apiKey: existingApiKey, ...existingProviderRest } = (existingProvider ?? {}) as Record<
-   string,
-   unknown
- > as { apiKey?: string };
+ const existingProviderRecord = (existingProvider ?? {}) as Record<string, unknown>;
+ const { apiKey: rawApiKey, ...existingProviderRest } = existingProviderRecord;
+ const existingApiKey = typeof rawApiKey === "string" ? rawApiKey : undefined;

Contributor

@greptile-apps greptile-apps bot left a comment


4 files reviewed, 4 comments


Comment on lines +18 to +20
export const OLLAMA_BASE_URL = "http://127.0.0.1:11434/v1";
export const OLLAMA_DEFAULT_API_KEY = "ollama";


[P1] OLLAMA_DEFAULT_API_KEY is set to "ollama", but the rest of the PR treats the local/no-auth sentinel as "ollama-local" (e.g. models-config.providers.ts uses "ollama-local" and ollama-native.ts skips the Authorization header only for that value). As-is, onboarding writes "ollama" into the auth profile/config, which means downstream code may send Authorization: Bearer ollama to a local Ollama instance unnecessarily.

This mismatch also appears in src/commands/auth-choice.apply.ollama.ts and src/commands/onboard-non-interactive/local/auth-choice.ts (they store OLLAMA_DEFAULT_API_KEY).

File: src/commands/onboard-auth.models.ts
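One way to resolve the mismatch is to drive both the stored key and the Authorization-header logic from a single sentinel constant. The names below are illustrative, not the PR's actual identifiers; the point is that one shared constant prevents the `"ollama"` vs `"ollama-local"` drift.

```typescript
// Single sentinel for "local Ollama, no real credential".
export const OLLAMA_LOCAL_API_KEY = "ollama-local";

// Only attach an Authorization header when the key is a real credential.
export function buildOllamaHeaders(apiKey: string): Record<string, string> {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (apiKey !== OLLAMA_LOCAL_API_KEY) {
    headers.Authorization = `Bearer ${apiKey}`;
  }
  return headers;
}
```

Onboarding would then store `OLLAMA_LOCAL_API_KEY` everywhere it currently stores `OLLAMA_DEFAULT_API_KEY`, so a local instance never receives `Authorization: Bearer ollama`.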

Comment on lines 457 to 463
const ollamaProvider = await buildOllamaProvider();
if (ollamaProvider.models.length > 0) {
  const ollamaKey =
    resolveEnvApiKeyVarName("ollama") ??
    resolveApiKeyFromProfiles({ provider: "ollama", store: authStore }) ??
    "ollama-local"; // Placeholder for local Ollama instance
  providers.ollama = { ...ollamaProvider, apiKey: ollamaKey };

[P1] resolveImplicitProviders now always calls buildOllamaProvider() (which hits the network unless VITEST/NODE_ENV === "test"). This can introduce noticeable latency/hangs during startup/config resolution when Ollama isn't running, since it waits up to the fetch timeout.

If the intent is “auto-discover only when local Ollama is reachable”, consider a lightweight reachability check first (or make discovery lazy/on-demand) to avoid doing a 5s probe on every invocation in non-test environments.

File: src/agents/models-config.providers.ts
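A lightweight reachability probe along the lines the reviewer suggests might look like this. It assumes Node 18+ global `fetch` and `AbortSignal.timeout`; the function name, default URL, and short timeout are illustrative choices, not the PR's code.

```typescript
// Cheap pre-check before full model discovery: fail fast when Ollama isn't running.
export async function isOllamaReachable(
  baseUrl = "http://127.0.0.1:11434",
  timeoutMs = 300,
): Promise<boolean> {
  try {
    const res = await fetch(`${baseUrl}/api/tags`, {
      signal: AbortSignal.timeout(timeoutMs),
    });
    return res.ok;
  } catch {
    // Connection refused or timed out: treat as not running.
    return false;
  }
}
```

`resolveImplicitProviders` could call this first and skip `buildOllamaProvider()` entirely on `false`, bounding the worst case to a few hundred milliseconds instead of the full fetch timeout.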

Comment on lines +470 to +487
const providers = { ...cfg.models?.providers };
const existingProvider = providers.ollama;
const { apiKey: existingApiKey, ...existingProviderRest } = (existingProvider ?? {}) as Record<
  string,
  unknown
> as { apiKey?: string };
const resolvedApiKey = typeof existingApiKey === "string" ? existingApiKey : undefined;
const normalizedApiKey = resolvedApiKey?.trim() || OLLAMA_DEFAULT_API_KEY;
const baseUrl = params?.baseUrl ?? OLLAMA_BASE_URL;

providers.ollama = {
  ...existingProviderRest,
  baseUrl,
  api: "openai-completions",
  apiKey: normalizedApiKey,
  // Models are discovered dynamically via Ollama's /api/tags endpoint
  models: [],
};

[P2] applyOllamaProviderConfig always overwrites providers.ollama.models with an empty array. If a user has explicitly configured static models for Ollama in their config, this will wipe them during onboarding/apply and could make models list look empty until runtime discovery occurs.

If the desired behavior is “don’t change models here”, consider preserving any existing providers.ollama.models instead of forcing [].

File: src/commands/onboard-auth.config-core.ts
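A sketch of the preservation behavior the reviewer suggests. The type and helper name are hypothetical; the only substantive change from the PR's snippet is `existing?.models ?? []` in place of a hard-coded `[]`.

```typescript
// Illustrative provider shape; the real config type lives in the PR's codebase.
type OllamaProvider = {
  baseUrl: string;
  api: string;
  apiKey: string;
  models: string[];
};

export function mergeOllamaProvider(
  existing: Partial<OllamaProvider> | undefined,
  baseUrl: string,
  apiKey: string,
): OllamaProvider {
  return {
    ...existing,
    baseUrl,
    api: "openai-completions",
    apiKey,
    // Preserve any statically configured models; runtime discovery can still add more.
    models: existing?.models ?? [],
  };
}
```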

Comment on lines +120 to +150
const textDecoder = new TextDecoder();

function bodyToString(body: BodyInit | null | undefined): string {
  if (typeof body === "string") {
    return body;
  }

  if (!body) {
    return "";
  }

  if (body instanceof URLSearchParams) {
    return body.toString();
  }

  if (body instanceof ArrayBuffer) {
    return textDecoder.decode(new Uint8Array(body));
  }

  if (body instanceof Uint8Array) {
    return textDecoder.decode(body);
  }

  if (
    typeof body === "object" &&
    typeof (body as { toString?: () => string }).toString === "function"
  ) {
    return body.toString();
  }

  throw new Error("Unable to decode fetch body");

[P2] bodyToString uses a module-level TextDecoder. In some test runners/environments, TextDecoder may not exist (or be non-standard), which would make these unit tests fail even though the provider logic is fine.

If you want the helper to be robust across Node environments, consider instantiating TextDecoder lazily (or avoiding it entirely since body is always a JSON string in these tests).

File: src/providers/ollama-native.test.ts
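A lazy-instantiation sketch along the reviewer's suggestion (helper name illustrative): the decoder is only constructed on first use, so environments lacking a global `TextDecoder` fail only if a binary body actually appears.

```typescript
// Construct the TextDecoder on first use instead of at module load.
let decoder: TextDecoder | undefined;

function decodeBytes(bytes: Uint8Array): string {
  decoder ??= new TextDecoder();
  return decoder.decode(bytes);
}
```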

@openclaw-barnacle openclaw-barnacle bot added the cli (CLI command changes), commands (Command implementations), and agents (Agent runtime and tooling) labels on Feb 2, 2026
charlieduzstuf and others added 6 commits February 3, 2026 08:57
Co-authored-by: charlieduzstuf <99685295+charlieduzstuf@users.noreply.github.com>
Co-authored-by: spiceoogway <105812383+spiceoogway@users.noreply.github.com>
Co-authored-by: alltomatos <179011971+alltomatos@users.noreply.github.com>

Co-authored-by: charlieduzstuf <99685295+charlieduzstuf@users.noreply.github.com>
Co-authored-by: spiceoogway <105812383+spiceoogway@users.noreply.github.com>
Co-authored-by: alltomatos <179011971+alltomatos@users.noreply.github.com>

Co-authored-by: charlieduzstuf <99685295+charlieduzstuf@users.noreply.github.com>
Integrate Ollama auto-discovery and installer
@openclaw-barnacle openclaw-barnacle bot added the docs (Improvements or additions to documentation) and docker (Docker and sandbox tooling) labels on Feb 3, 2026
@fr0zenrain

fr0zenrain commented Feb 7, 2026

Good job! I want to test Ollama on my gaming GPU.
