v.2.7.0 #3
… compaction bypass (#11356) thanks @shadril238 Verified: - pnpm build - pnpm check - pnpm test (ran; one unrelated existing failure in models forward-compat test) - pnpm vitest src/agents/pi-embedded-runner.history-limit-from-session-key.test.ts Co-authored-by: shadril238 <63901551+shadril238@users.noreply.github.com> Co-authored-by: Tak Hoffman <781889+Takhoffman@users.noreply.github.com>
…loader (#11029) thanks @shadril238 Verified: - pnpm build - pnpm check - pnpm test Co-authored-by: shadril238 <63901551+shadril238@users.noreply.github.com> Co-authored-by: Tak Hoffman <781889+Takhoffman@users.noreply.github.com>
…shadril238 Verified: - pnpm install --frozen-lockfile - pnpm build - pnpm check - pnpm test Co-authored-by: shadril238 <63901551+shadril238@users.noreply.github.com> Co-authored-by: Tak Hoffman <781889+Takhoffman@users.noreply.github.com>
Reviewer's Guide

Refactors CLI routing and provider auth helpers into dedicated modules, introduces a native Ollama streaming implementation with config/schema/docs updates, tightens embedded-runner history limits for channel/group sessions, stops auto-creating HEARTBEAT.md on workspace bootstrap, and replaces ad-hoc logging/NDJSON/URL logic with reusable helpers plus expanded tests.

Sequence diagram for Ollama native streaming in embedded runner

```mermaid
sequenceDiagram
  actor User
  participant EmbeddedRunner as EmbeddedRunner_runEmbeddedAttempt
  participant Agent as PiEmbeddedAgent
  participant OllamaStream as createOllamaStreamFn
  participant OllamaAPI as Ollama_api_chat
  User->>EmbeddedRunner: start embedded run with model api=ollama
  EmbeddedRunner->>EmbeddedRunner: resolve model and config
  EmbeddedRunner->>EmbeddedRunner: detect model.api == ollama
  EmbeddedRunner->>OllamaStream: createOllamaStreamFn(ollamaBaseUrl)
  OllamaStream-->>EmbeddedRunner: StreamFn
  EmbeddedRunner->>Agent: set agent.streamFn = StreamFn
  User->>Agent: send messages context
  Agent->>Agent: agent.streamFn(model, context, options)
  Agent->>OllamaAPI: POST /api/chat (stream=true, tools, num_ctx, num_predict)
  OllamaAPI-->>Agent: NDJSON stream chunks
  Agent->>OllamaStream: parseNdjsonStream(reader)
  OllamaStream-->>Agent: OllamaChatResponse chunks
  Agent->>OllamaStream: buildAssistantMessage(finalResponse, modelInfo)
  OllamaStream-->>Agent: AssistantMessage (text, toolCall content)
  Agent-->>User: streamed assistant events
  User->>Agent: cancel request (abort)
  Agent->>OllamaAPI: abort fetch via signal
  OllamaAPI-->>Agent: stream closed
```
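The NDJSON parsing step in the diagram above can be sketched as a small line-buffered parser: accumulate chunks, split on newlines, and yield one parsed object per complete line. This is an illustrative sketch, not the PR's actual `parseNdjsonStream`; the generic `parseNdjson` name and string-chunk input are assumptions.

```typescript
// Illustrative NDJSON parser: buffers partial lines across chunks and
// yields one parsed object per newline-terminated line.
async function* parseNdjson<T>(chunks: AsyncIterable<string>): AsyncGenerator<T> {
  let buffer = "";
  for await (const chunk of chunks) {
    buffer += chunk;
    let newline: number;
    while ((newline = buffer.indexOf("\n")) !== -1) {
      const line = buffer.slice(0, newline).trim();
      buffer = buffer.slice(newline + 1);
      if (line) yield JSON.parse(line) as T;
    }
  }
  const tail = buffer.trim();
  if (tail) yield JSON.parse(tail) as T; // trailing data without a final newline
}

async function demo(): Promise<string[]> {
  // Simulate a streaming body whose chunk boundary falls mid-line.
  async function* chunks() {
    yield '{"message":{"content":"Hel"},"done":false}\n{"mess';
    yield 'age":{"content":"lo"},"done":true}\n';
  }
  const contents: string[] = [];
  for await (const obj of parseNdjson<{ message: { content: string }; done: boolean }>(chunks())) {
    contents.push(obj.message.content);
  }
  return contents;
}
```

The buffering matters because HTTP chunk boundaries do not align with NDJSON line boundaries, as the split-mid-line demo input shows.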
Sequence diagram for CLI routed commands with new routes module

```mermaid
sequenceDiagram
  actor User
  participant CLI as CliEntryPoint
  participant Router as Routes_findRoutedCommand
  participant Route as RouteSpec_run
  participant Command as CommandModule
  User->>CLI: invoke `openclaw status --json`
  CLI->>CLI: parse argv to path [status]
  CLI->>Router: findRoutedCommand(path)
  Router-->>CLI: RouteSpec for status
  CLI->>CLI: prepareRoutedCommand(argv)
  CLI->>Route: route.run(argv)
  Route->>Command: dynamic import statusCommand
  Command-->>Route: statusCommand
  Route->>Command: statusCommand({ flags }, defaultRuntime)
  Command-->>Route: Promise resolved
  Route-->>CLI: true (handled)
  CLI-->>User: print status output and exit
```
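The routed-command flow above amounts to a path-keyed route table whose `run` handler loads the command lazily. A minimal sketch follows; `RouteSpec` and `findRoutedCommand` are named after the diagram, the inline `statusCommand` stands in for the real dynamically imported module, and the actual signatures in `src/cli/program/routes.ts` may differ.

```typescript
// Minimal route table with lazy command resolution inside run().
type RouteSpec = {
  path: string[];
  run: (argv: string[]) => Promise<boolean>; // true = route handled the invocation
};

const routes: RouteSpec[] = [
  {
    path: ["status"],
    run: async (argv) => {
      // In the real CLI this would be `await import("./status.js")`;
      // inlined here so the sketch is self-contained.
      const statusCommand = async (opts: { json: boolean }) =>
        opts.json ? '{"ok":true}' : "ok";
      console.log(await statusCommand({ json: argv.includes("--json") }));
      return true;
    },
  },
];

function findRoutedCommand(path: string[]): RouteSpec | undefined {
  return routes.find(
    (r) => r.path.length === path.length && r.path.every((p, i) => p === path[i]),
  );
}
```

Keeping the import inside `run` means unmatched invocations never pay the cost of loading command modules.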
Class diagram for shared provider auth helper module

```mermaid
classDiagram
  class ProviderAuthHelpers {
    <<module>>
    +resolveProviderMatch(providers, rawProvider) ProviderPlugin
    +pickAuthMethod(provider, rawMethod) ProviderAuthMethod
    +mergeConfigPatch(base, patch) T
    +applyDefaultModel(cfg, model) OpenClawConfig
  }
  class ProviderPlugin {
    +id string
    +aliases string[]
    +auth ProviderAuthMethod[]
  }
  class ProviderAuthMethod {
    +id string
    +label string
  }
  class OpenClawConfig {
    +agents any
    +models any
  }
  class AuthCommands {
    <<module>>
    +resolveRequestedLoginProviderOrThrow(providers, rawProvider) ProviderPlugin
    +loginWithProvider()
  }
  class AuthChoicePluginProvider {
    <<module>>
    +applyAuthChoicePluginProvider(params, options)
  }
  ProviderAuthHelpers ..> ProviderPlugin : uses
  ProviderAuthHelpers ..> ProviderAuthMethod : uses
  ProviderAuthHelpers ..> OpenClawConfig : uses
  AuthCommands ..> ProviderAuthHelpers : uses
  AuthChoicePluginProvider ..> ProviderAuthHelpers : uses
  AuthCommands ..> ProviderPlugin : uses
  AuthCommands ..> ProviderAuthMethod : uses
  AuthChoicePluginProvider ..> ProviderPlugin : uses
  AuthChoicePluginProvider ..> ProviderAuthMethod : uses
```
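The helper shapes in the class diagram can be sketched as follows. This is an illustrative sketch of `resolveProviderMatch` and `pickAuthMethod` only; the matching rules (case-insensitive id/alias lookup, first auth method as default) are assumptions, and the real implementations in `src/commands/provider-auth-helpers.ts` may differ.

```typescript
// Types mirroring the class diagram.
type ProviderAuthMethod = { id: string; label: string };
type ProviderPlugin = { id: string; aliases: string[]; auth: ProviderAuthMethod[] };

// Find a provider by id or alias, case-insensitively (assumed behavior).
function resolveProviderMatch(
  providers: ProviderPlugin[],
  rawProvider: string,
): ProviderPlugin | undefined {
  const needle = rawProvider.toLowerCase();
  return providers.find((p) => p.id === needle || p.aliases.includes(needle));
}

// Pick a named auth method, falling back to the provider's first one.
function pickAuthMethod(
  provider: ProviderPlugin,
  rawMethod?: string,
): ProviderAuthMethod | undefined {
  if (!rawMethod) return provider.auth[0];
  return provider.auth.find((m) => m.id === rawMethod);
}
```

Both callers in the diagram (`AuthCommands`, `AuthChoicePluginProvider`) can then share one lookup path instead of duplicating it.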
Summary of Changes

Hello @PeterTheSavage, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces significant improvements to the platform's model integration, particularly for Ollama, by switching to its native API for enhanced streaming and tool-calling capabilities. It also refines agent behavior with more granular control over session history limits and optimizes workspace initialization by preventing unnecessary file creation. Developer experience is boosted through CLI routing refactoring, standardized hook logging, and more robust plugin installation. Additionally, several internal tests were updated and improved for better coverage and efficiency.
Hey - I've found 3 issues
Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments
### Comment 1
<location> `src/agents/ollama-stream.ts:326-329` </location>
<code_context>
+ headers.Authorization = `Bearer ${options.apiKey}`;
+ }
+
+ const response = await fetch(chatUrl, {
+ method: "POST",
+ headers,
+ body: JSON.stringify(body),
+ signal: options?.signal,
+ });
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Only a single `done` event is emitted; there is no token-by-token streaming despite using a streaming HTTP API.
Right now the NDJSON stream is fully buffered and only a final `{ type: "done" }` event is emitted, so callers see this as a non-streaming completion. If consumers (e.g. UIs) rely on incremental `type: "update"` events with partial content, this will break parity with other streaming providers. Either emit intermediate events as chunks arrive (keeping `done` for the final accumulated message), or clearly document/enforce that Ollama integrations only provide final messages so callers don’t expect token-level streaming.
Suggested implementation:
```typescript
// For Ollama we currently only expose a *final* assistant message.
// Emit a last `update` event with the accumulated content so that
// consumers relying on `type: "update"` continue to work, even
// though Ollama does not provide token-by-token updates here.
if (finalAssistantMessage && finalAssistantMessage.content?.length) {
controller.enqueue({
type: "update",
id: finalAssistantMessage.id,
role: finalAssistantMessage.role ?? "assistant",
delta: {
content: finalAssistantMessage.content,
},
// Mark this as the final update in the stream for parity with
// other providers that send multiple incremental updates.
isFinal: true,
});
}
// Finalize the stream.
controller.enqueue({ type: "done" });
controller.close();
```
I only see a small portion of the file, so you will need to align the above with your actual implementation:
1. Ensure you have a variable (here referenced as `finalAssistantMessage`) that contains the final, fully-accumulated assistant message just before the stream is closed.
- If you currently only have a plain string (e.g. `fullText`), then construct the `update` payload accordingly:
```ts
if (fullText) {
controller.enqueue({
type: "update",
role: "assistant",
delta: { content: fullText },
isFinal: true,
});
}
```
and drop the `id` field if you don't use it elsewhere.
2. If your event shape for other providers differs (different field names for `delta`, `role`, etc.), match the structure you use in the rest of your streaming layer so callers see a consistent `type: "update"` event.
3. Update any relevant documentation / JSDoc for `ollamaStream` (or the exported function in this file) to explicitly state that:
- Ollama currently produces a *single* `type: "update"` event that contains the full, final message.
- There is no token-level / chunk-level streaming, even though the HTTP API is streaming.
This satisfies the “clearly document/enforce that Ollama integrations only provide final messages” part of your review comment.
4. If you want *true* incremental updates in the future, you can refactor the NDJSON parsing loop so each incoming chunk updates the accumulated content and emits an `update` event before the final one; the `isFinal` flag can then be used to signal the last update instead of emitting only once at the end.
</issue_to_address>
### Comment 2
<location> `src/commands/models/list.list-command.ts:101-115` </location>
<code_context>
continue;
}
- const model = modelByKey.get(entry.key);
+ let model = modelByKey.get(entry.key);
+ if (!model && modelRegistry) {
+ const forwardCompat = resolveForwardCompatModel(
+ entry.ref.provider,
+ entry.ref.model,
+ modelRegistry,
+ );
+ if (forwardCompat) {
+ model = forwardCompat;
+ modelByKey.set(entry.key, forwardCompat);
+ }
+ }
+ if (!model) {
+ model = resolveModel(entry.ref.provider, entry.ref.model, undefined, cfg).model;
+ }
</code_context>
<issue_to_address>
**suggestion (performance):** Consider caching the `resolveModel` fallback into `modelByKey` as well to avoid repeated lookups.
Currently, when `modelByKey` and `resolveForwardCompatModel` both fail, you call `resolveModel(...)` but don’t cache the result. If multiple entries share the same key, this leads to repeated `resolveModel` calls. Mirroring the forward-compat caching here would avoid redundant work and keep behavior consistent:
```ts
if (!model) {
const resolved = resolveModel(entry.ref.provider, entry.ref.model, undefined, cfg);
model = resolved.model;
if (model) {
modelByKey.set(entry.key, model);
}
}
```
```suggestion
let model = modelByKey.get(entry.key);
if (!model && modelRegistry) {
const forwardCompat = resolveForwardCompatModel(
entry.ref.provider,
entry.ref.model,
modelRegistry,
);
if (forwardCompat) {
model = forwardCompat;
modelByKey.set(entry.key, forwardCompat);
}
}
if (!model) {
const resolved = resolveModel(entry.ref.provider, entry.ref.model, undefined, cfg);
model = resolved.model;
if (model) {
modelByKey.set(entry.key, model);
}
}
```
</issue_to_address>
### Comment 3
<location> `src/plugins/install.e2e.test.ts:23-26` </location>
<code_context>
-}
-
-function packToArchive({
+async function packToArchive({
pkgDir,
outDir,
</code_context>
<issue_to_address>
**question (bug_risk):** Switching from `npm pack` to a raw tarball changes packaging semantics; verify this still matches `installPluginFromArchive` expectations.
The old `npm pack` call applied npm’s packaging rules (`files` in `package.json`, `.npmignore`/`.gitignore`, pruning dev files). The new approach archives the directory verbatim, which may change which files are included. Please confirm that `installPluginFromArchive` only relies on a root `package.json` in a single top-level folder, or otherwise verify that the new tarball layout and contents still match its expectations.
</issue_to_address>
Pull request overview
This PR is a version release (v2.7.0) containing multiple bug fixes, feature enhancements, and test optimizations across the codebase.
Changes:
- Added native Ollama API integration with `/api/chat` endpoint support for reliable streaming and tool calling
- Implemented channel/group history limit support to prevent context overflow in long-running sessions
- Fixed workspace bootstrap to stop auto-creating HEARTBEAT.md file as documented
- Refactored test infrastructure for better performance and reliability
- Added subsystem logging to hooks loader
- Consolidated CLI routing code and extracted shared provider auth helpers
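For orientation on the first bullet: Ollama's `/api/chat` endpoint takes a JSON body with `model`, `messages`, a `stream` flag, and generation settings under `options` (including `num_ctx` and `num_predict`). The builder below is a hypothetical sketch of assembling such a body, not code from this PR.

```typescript
// Hypothetical builder for an Ollama /api/chat streaming request body.
// num_ctx / num_predict are Ollama generation options and live under "options".
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function buildOllamaChatBody(params: {
  model: string;
  messages: ChatMessage[];
  numCtx?: number;
  numPredict?: number;
}) {
  return {
    model: params.model,
    messages: params.messages,
    stream: true, // request NDJSON chunks rather than a single response
    options: {
      ...(params.numCtx !== undefined ? { num_ctx: params.numCtx } : {}),
      ...(params.numPredict !== undefined ? { num_predict: params.numPredict } : {}),
    },
  };
}
```

Omitting unset options (rather than sending `undefined`) keeps the serialized body minimal and lets the server apply its own defaults.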
Reviewed changes
Copilot reviewed 47 out of 47 changed files in this pull request and generated no comments.
Show a summary per file
| File | Description |
|---|---|
| src/agents/ollama-stream.ts | New native Ollama /api/chat streaming implementation with NDJSON parsing and tool calling support |
| src/agents/ollama-stream.test.ts | Comprehensive test coverage for Ollama streaming functions |
| src/agents/pi-embedded-runner/run/attempt.ts | Integrate Ollama native streaming when api type is 'ollama' |
| src/agents/models-config.providers.ts | Switch Ollama provider to use native API, remove streaming:false workaround |
| src/agents/models-config.providers.ollama.e2e.test.ts | Update tests for native Ollama API integration |
| src/agents/pi-embedded-runner/history.ts | Add channel/group historyLimit support, keep backward compat for dm/direct |
| src/agents/pi-embedded-runner.ts | Export getHistoryLimitFromSessionKey, keep deprecated alias |
| src/agents/pi-embedded-runner.history-limit-from-session-key.test.ts | Test coverage for history limit with channel/group sessions |
| src/agents/pi-embedded-runner.get-dm-history-limit-from-session-key.returns-undefined-sessionkey-is-undefined.e2e.test.ts | Extended tests for all session types |
| src/agents/workspace.ts | Remove HEARTBEAT.md auto-creation during bootstrap |
| src/agents/workspace.e2e.test.ts | Verify HEARTBEAT.md is not created |
| src/agents/model-forward-compat.ts | Export forward-compat candidates for reuse |
| src/hooks/loader.ts | Replace console.* with subsystem logger |
| src/hooks/loader.test.ts | Remove console spy tests |
| src/config/zod-schema.core.ts | Add 'ollama' to ModelApiSchema |
| src/config/types.models.ts | Add 'ollama' to ModelApi type |
| src/commands/provider-auth-helpers.ts | Extract shared auth helper functions |
| src/commands/models/auth.ts | Use extracted auth helpers |
| src/commands/auth-choice.apply.plugin-provider.ts | Use extracted auth helpers |
| src/commands/models/shared.ts | Add isLocalBaseUrl utility function |
| src/commands/models/list.registry.ts | Use isLocalBaseUrl from shared, extract forward-compat candidates |
| src/commands/models/list.list-command.ts | Add forward-compat + resolveModel fallback for missing models |
| src/commands/models/list.list-command.forward-compat.test.ts | Update mock for resolveForwardCompatModel |
| src/commands/models.list.test.ts | Simplify MockModelRegistry without SDK type dependencies |
| src/commands/models.list.e2e.test.ts | Deleted (784 lines removed) |
| src/cli/route.ts | Import findRoutedCommand from routes.ts |
| src/cli/program/routes.ts | New file extracting route definitions from command-registry |
| src/cli/program/routes.test.ts | Tests for routes module |
| src/cli/program/command-registry.ts | Remove route definitions and findRoutedCommand |
| src/plugins/install.e2e.test.ts | Replace npm pack with tar library for test reliability |
| src/web/auto-reply.web-auto-reply.compresses-common-formats-jpeg-cap.e2e.test.ts | Share randomBytes buffer across iterations |
| src/telegram/bot.media.downloads-media-file-path-no-file-download.e2e.test.ts | Reduce test timing constants for faster execution |
| src/browser/screenshot.e2e.test.ts | Use sharp.create instead of crypto.randomBytes |
| src/browser/server.skips-default-maxchars-explicitly-set-zero.test.ts | Merge tests from deleted file |
| src/browser/server.serves-status-starts-browser-requested.test.ts | Deleted (merged into skips-default-maxchars file) |
| src/auto-reply/reply.raw-body.test.ts | Remove duplicate test case |
| scripts/test-parallel.mjs | Remove deleted test file from isolation list |
| docs/providers/ollama.md | Update docs for native API, remove streaming workaround section |
| CHANGELOG.md | Document all fixes and changes |
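The history-limit rows in the table above key off the session type encoded in the session key. A hypothetical sketch of that dispatch follows; the `kind:id` key format, the `HistoryLimits` shape, and the `direct` alias handling are assumptions, and the exported `getHistoryLimitFromSessionKey` may differ.

```typescript
// Hypothetical: pick a history limit based on the session type parsed from
// a session key like "channel:123", "group:45", or "dm:7".
type HistoryLimits = { dm?: number; channel?: number; group?: number };

function getHistoryLimitFromSessionKey(
  sessionKey: string | undefined,
  limits: HistoryLimits,
): number | undefined {
  if (!sessionKey) return undefined;
  const kind = sessionKey.split(":", 1)[0];
  if (kind === "channel") return limits.channel;
  if (kind === "group") return limits.group;
  if (kind === "dm" || kind === "direct") return limits.dm; // backward compat
  return undefined;
}
```

Returning `undefined` for unknown or missing keys lets callers fall back to their global default limit.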
Code Review
This pull request updates the version to v.2.7.0, bringing a range of fixes, refactorings, and documentation updates. It includes a significant refactoring of the Ollama integration to use its native API, introduces lazy loading of CLI commands for improved performance, and offers more granular history limit controls. The codebase also sees improvements through refactored shared helper functions and enhanced logging consistency.

However, a critical vulnerability exists in the new hook loading mechanism within src/hooks/loader.ts. This logic allows the execution of arbitrary JavaScript files from the agent's workspace via dynamic import(), which can be exploited through prompt injection to achieve Remote Code Execution (RCE) and sandbox escape. It is strongly recommended to disable loading executable code from the workspace or implement a robust code signing and verification mechanism to address this security risk.
```typescript
// excerpt from src/agents/ollama-stream.ts
try {
  yield JSON.parse(trimmed) as OllamaChatResponse;
} catch {
  console.warn("[ollama-stream] Skipping malformed NDJSON line:", trimmed.slice(0, 120));
```
For consistency with other logging changes in this PR (e.g., in src/hooks/loader.ts), please use the subsystem logger here instead of console.warn. This ensures all logs are routed through the standard logging system.
You'll need to add the following at the top of the file:

```typescript
import { createSubsystemLogger } from "../logging/subsystem.js";

const log = createSubsystemLogger("ollama-stream");
```

Suggested change:

```typescript
log.warn(`Skipping malformed NDJSON line: ${trimmed.slice(0, 120)}`);
```
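For readers unfamiliar with the subsystem-logger pattern this comment asks for, a minimal sketch of such a factory looks like the following. This is illustrative only; the real `createSubsystemLogger` in `src/logging/subsystem.js` may have a different signature and route output elsewhere.

```typescript
// Sketch of a subsystem logger factory: every line is tagged with its
// subsystem so logs can be filtered by origin.
function formatSubsystemLine(subsystem: string, msg: string): string {
  return `[${subsystem}] ${msg}`;
}

function createSubsystemLogger(subsystem: string) {
  return {
    info: (msg: string) => console.info(formatSubsystemLine(subsystem, msg)),
    warn: (msg: string) => console.warn(formatSubsystemLine(subsystem, msg)),
  };
}
```

Centralizing the prefix in one factory is what makes "replace console.* with the subsystem logger" a mechanical, consistent change across files.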
```typescript
// excerpt from src/agents/ollama-stream.ts
console.warn(
  "[ollama-stream] Skipping malformed trailing data:",
  buffer.trim().slice(0, 120),
);
```
…arosstale Verified: - pnpm build - pnpm check - pnpm test Co-authored-by: arosstale <117890364+arosstale@users.noreply.github.com> Co-authored-by: Tak Hoffman <781889+Takhoffman@users.noreply.github.com>
… thanks @arosstale Verified: - pnpm build - pnpm check - pnpm test Co-authored-by: arosstale <117890364+arosstale@users.noreply.github.com> Co-authored-by: Tak Hoffman <781889+Takhoffman@users.noreply.github.com>
* fix: enforce Telegram 100-command limit with warning (#5787) Telegram's setMyCommands API rejects requests with more than 100 commands. When skills + custom + plugin commands exceed the limit, truncate to 100 and warn the user instead of silently failing on every startup. * fix: enforce Telegram menu cap + keep hidden commands callable (#15844) (thanks @battman21) --------- Co-authored-by: Peter Steinberger <steipete@gmail.com>
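The cap described in that commit amounts to truncating the merged command list before calling `setMyCommands`, and warning instead of letting the API reject the request. A minimal sketch, assuming a simple warn callback; the function name and shape are illustrative, not the PR's actual code:

```typescript
// Telegram's setMyCommands rejects lists longer than 100 entries, so cap
// the merged skills + custom + plugin command list and warn about the rest.
const TELEGRAM_COMMAND_LIMIT = 100;

function capCommands<T>(commands: T[], warn: (msg: string) => void): T[] {
  if (commands.length <= TELEGRAM_COMMAND_LIMIT) return commands;
  warn(
    `Telegram allows at most ${TELEGRAM_COMMAND_LIMIT} commands; ` +
      `truncating ${commands.length - TELEGRAM_COMMAND_LIMIT} extra command(s).`,
  );
  return commands.slice(0, TELEGRAM_COMMAND_LIMIT);
}
```

Hidden commands can stay callable because the cap only affects the menu registered via `setMyCommands`, not the bot's command handling.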
…isk (#11560) * fix: preserve ${VAR} env var references when writing config back to disk Fixes #11466 When config is loaded, ${VAR} references are resolved to their plaintext values. Previously, writeConfigFile would serialize the resolved values, silently replacing "${ANTHROPIC_API_KEY}" with "sk-ant-api03-..." in the config file. Now writeConfigFile reads the current file pre-substitution, and for each value that matches what a ${VAR} reference would resolve to, restores the original reference. Values the caller intentionally changed are kept as-is. This fixes all 50+ writeConfigFile call sites (doctor, configure wizard, gateway config.set/apply/patch, plugins, hooks, etc.) without requiring any caller changes. New files: - src/config/env-preserve.ts — restoreEnvVarRefs() utility - src/config/env-preserve.test.ts — 11 unit tests * fix: remove global config env snapshot race * docs(changelog): note config env snapshot race fix --------- Co-authored-by: Peter Steinberger <steipete@gmail.com>
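The restore step that commit describes can be sketched as: for each value in the config being written, if the pre-substitution file held a `${VAR}` reference at that key and the new value still equals what the reference resolves to, write the reference back; values the caller changed are kept. A simplified flat-key sketch (the real `restoreEnvVarRefs` in `src/config/env-preserve.ts` handles nested config):

```typescript
// Simplified sketch of restoring ${VAR} references for flat string maps.
function restoreEnvVarRefs(
  original: Record<string, string>, // pre-substitution file contents
  updated: Record<string, string>, // resolved values, possibly edited by caller
  env: Record<string, string | undefined>,
): Record<string, string> {
  const out: Record<string, string> = { ...updated };
  for (const [key, rawValue] of Object.entries(original)) {
    const match = /^\$\{([A-Z0-9_]+)\}$/.exec(rawValue);
    if (!match) continue; // not a ${VAR} reference in the original file
    const resolved = env[match[1]];
    if (resolved !== undefined && out[key] === resolved) {
      out[key] = rawValue; // value unchanged: restore the reference
    }
  }
  return out;
}
```

Comparing against the resolved env value is what lets intentional edits through: an edited value no longer matches the reference's resolution, so it is written as-is.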
…12364) thanks @arosstale Verified: - pnpm build - pnpm check - pnpm test Co-authored-by: arosstale <117890364+arosstale@users.noreply.github.com> Co-authored-by: Tak Hoffman <781889+Takhoffman@users.noreply.github.com>
#14532) thanks @arosstale Verified: - pnpm build - pnpm check - pnpm test Co-authored-by: arosstale <117890364+arosstale@users.noreply.github.com> Co-authored-by: Tak Hoffman <781889+Takhoffman@users.noreply.github.com>
Summary by Sourcery
Refine Ollama native API integration and history limits, centralize auth/provider helpers and CLI routing, update hook logging, and adjust workspace/bootstrap behavior while expanding coverage and tests.