
feat: LM Studio Integration#53248

Merged
frankekn merged 26 commits into openclaw:main from rugvedS07:feat/lmstudio-support
Apr 13, 2026

Conversation

@rugvedS07 (Contributor) commented Mar 24, 2026

Summary

  • Problem: Users worldwide rely on LM Studio to run open models on their own hardware, but connecting it to OpenClaw is currently cumbersome. This PR fixes that.
  • Why it matters: Makes it straightforward to use local models in OpenClaw through LM Studio.
  • What changed: Adds LM Studio to onboarding and loads the default embedding model when required. Users can configure LM Studio for both inference and embedding generation.
  • What did NOT change (scope boundary): Only LM Studio-related code is touched; all functionality is additive.

Change Type (select all)

  • Bug fix
  • Feature
  • Refactor
  • Docs
  • Security hardening
  • Chore/infra

Scope (select all touched areas)

  • Gateway / orchestration
  • Skills / tool execution
  • Auth / tokens
  • Memory / storage
  • Integrations
  • API / contracts
  • UI / DX
  • CI/CD / infra

Linked Issue/PR

User-visible / Behavior Changes

(Screenshots attached: three inline images and screenshot-2026-04-06_12-33-27.)

Security Impact (required)

  • New permissions/capabilities? No
  • Secrets/tokens handling changed? Yes — stores an lmstudio:default API-key auth profile and resolves LM_API_TOKEN (a placeholder is allowed for local no-auth setups); real authentication is enforced by the external LM Studio server.
  • New/changed network calls? Yes — calls LM Studio's API (/api/v1/load and /api/v1/models) on the base URL the user has configured.
  • Command/tool execution surface changed? No
  • Data access scope changed? No
  • If any Yes, explain risk + mitigation:
    Requests go only to the user's own LM Studio instance (localhost in most cases), and the API token is used only for that instance.
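For reviewers tracing the network surface, here is a minimal TypeScript sketch of how such a request could be assembled. The helper name, the placeholder value, and the header logic are illustrative assumptions, not the PR's actual code:

```typescript
// Illustrative sketch only — names below are not the PR's actual helpers.
const PLACEHOLDER_TOKEN = "lmstudio-local"; // assumed no-auth marker for local setups

interface ModelsRequest {
  url: string;
  headers: Record<string, string>;
}

// Build the GET /api/v1/models request against the user-configured base URL.
function buildModelsRequest(baseUrl: string, apiToken?: string): ModelsRequest {
  const url = new URL("/api/v1/models", baseUrl).toString();
  const headers: Record<string, string> = {};
  // Only attach Authorization when a real token is present; the placeholder
  // used for local no-auth setups is deliberately skipped.
  if (apiToken && apiToken !== PLACEHOLDER_TOKEN) {
    headers["Authorization"] = `Bearer ${apiToken}`;
  }
  return { url, headers };
}
```

The token never leaves the configured base URL, which is what the mitigation above relies on.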

Repro + Verification

Environment

  • OS: Linux
  • Runtime/container: Node 22
  • Model/provider: LM Studio
  • Integration/channel (if any): NA
  • Relevant config (redacted): Default for LM Studio

Steps

  1. Run `openclaw onboard`
  2. Select LM Studio as the provider and complete onboarding
  3. To use LM Studio as an embeddings provider, run `openclaw config set agents.defaults.memorySearch.provider lmstudio`

Expected

  • The LM Studio provider is configured, and all available models are listed during setup
  • LM Studio is used for LLM inference and embedding generation
  • The LM_API_TOKEN is stored in the auth profile

Actual

  • Works as above

Evidence

Attach at least one:

  • Failing test/log before + passing after
  • Trace/log snippets
  • Screenshot/recording
  • Perf numbers (if relevant)

Tests were added in

  • src/agents/lmstudio-models.test.ts
  • src/agents/lmstudio-runtime.test.ts
  • src/agents/memory-search.test.ts
  • src/agents/model-auth-markers.test.ts
  • src/agents/model-auth.test.ts
  • src/agents/models-config.providers.normalize-keys.test.ts
  • src/commands/doctor-memory-search.test.ts
  • src/commands/lmstudio-setup.test.ts
  • src/commands/onboard-non-interactive.provider-auth.test.ts
  • src/memory/embeddings-lmstudio.test.ts
  • src/memory/manager.mistral-provider.test.ts
  • src/plugins/bundled-provider-auth-env-vars.test.ts
  • src/plugins/contracts/discovery.contract.test.ts

Human Verification (required)

What you personally verified (not just CI), and how:

  • Verified scenarios: Using LM Studio with OpenClaw, chatting with the model, using tools and verifying if embeddings are generated when memory is updated
  • Edge cases checked: LM Studio server is off/unreachable, JIT setting in LM Studio is off
  • What you did not verify: NA

Review Conversations

  • I replied to or resolved every bot review conversation I addressed in this PR.
  • I left unresolved only the conversations that still need reviewer or maintainer judgment.

Compatibility / Migration

  • Backward compatible? Yes
  • Config/env changes? No
  • Migration needed? No

Failure Recovery (if this breaks)

  • How to disable/revert this change quickly: Reverting the commit removes LM Studio as a provider
  • Files/config to restore: None; the LM Studio provider will simply no longer be picked up
  • Known bad symptoms reviewers should watch for: NA

Risks and Mitigations

None

@rugvedS07 rugvedS07 requested a review from a team as a code owner March 24, 2026 00:32
@openclaw-barnacle bot added labels docs, scripts, commands, agents, size: XL on Mar 24, 2026
@rugvedS07 changed the title from "Feat: LM Studio Integration" to "feat: LM Studio Integration" on Mar 24, 2026
@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 26149a30b5

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment thread extensions/lmstudio/index.ts
@greptile-apps bot (Contributor) commented Mar 24, 2026

Greptile Summary

This PR integrates LM Studio as a first-class provider in OpenClaw, covering onboarding, runtime model discovery, and embedding generation. The implementation is well-structured and thorough, following established patterns from the Ollama and Mistral integrations already in the codebase.

Key changes:

  • New extensions/lmstudio/ plugin registering the provider for auth, discovery, and dynamic model preparation
  • src/agents/lmstudio-models.ts — model-wire parsing, URL normalization (idempotent /v1 stripping), and JIT model loading via /api/v1/models/load
  • src/agents/lmstudio-runtime.ts — auth header construction with lmstudio-local placeholder support for no-auth local setups; API key resolution chaining profile store → env → placeholder
  • src/memory/embeddings-lmstudio.ts — embedding provider with 120-second warmup and graceful warmup-failure handling
  • src/commands/lmstudio-setup.ts — interactive and non-interactive setup flows, plus provider discovery merging explicit config with live-discovered models
  • Adds lmstudio to the EmbeddingProviderId union and the Zod memory-search schema, completing the end-to-end embedding path
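The idempotent /v1 stripping mentioned above can be sketched as follows. The helper name is hypothetical; this is not the PR's actual implementation, just an illustration of what "idempotent" means here — applying it twice gives the same result as applying it once:

```typescript
// Hypothetical helper illustrating idempotent "/v1" suffix stripping.
function stripV1Suffix(pathname: string): string {
  // Drop trailing slashes first, then a trailing "/v1" if present,
  // so the result is stable no matter how many times it is applied.
  const trimmed = pathname.replace(/\/+$/, "");
  return trimmed.endsWith("/v1") ? trimmed.slice(0, -"/v1".length) : trimmed;
}
```

This lets users paste either an OpenAI-style base URL ending in /v1 or a bare host URL and arrive at the same normalized base.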

Minor observations noted inline:

  • The module-level cachedDynamicModels in the plugin is never invalidated within a process lifetime; if models change in LM Studio between sessions the stale entries persist until restart
  • resolveEmbeddingTimeout uses remote (60s query, 2-min batch) timeouts for LM Studio, consistent with Ollama but potentially tight for large models on slower hardware after an unexpected model eviction — the initial 120-second warmup covers first-load but not re-loads

Confidence Score: 5/5

  • Safe to merge — the implementation is well-tested, backward-compatible, and follows established provider patterns throughout the codebase
  • The PR adds a substantial but self-contained feature with 13 dedicated test files, careful URL normalization (idempotent), proper SSRF guard usage in the embedding path, and a solid auth marker system for no-auth local setups. All changes are additive (no modifications to existing provider logic), and the PR description explicitly confirms backward compatibility. The two inline comments are non-blocking style suggestions.
  • No files require special attention for merge; the two inline P2 notes on src/memory/manager-embedding-ops.ts and extensions/lmstudio/index.ts can be addressed in follow-up work

Comments Outside Diff (1)

  1. src/memory/manager-embedding-ops.ts, line 621-627 (link)

    P2 LM Studio uses remote timeouts despite being a local provider

    resolveEmbeddingTimeout classifies only "local" (node-llama-cpp) for the extended 5-minute query / 10-minute batch timeouts. LM Studio is treated as a remote provider (60-second query, 2-minute batch), which is consistent with how Ollama is handled.

    The initial warmup in createLmstudioEmbeddingProvider uses a generous 120-second timeout, so the first model load is covered. However, if LM Studio evicts the embedding model under memory pressure during a session, subsequent embedding calls go through the 60-second remote timeout, which could expire on slower hardware with large models before LM Studio can reload and infer.

    Consider whether lmstudio (and ollama) should use a longer timeout tier — or at minimum document this as a known limitation in the provider docs.

Inline comment — extensions/lmstudio/index.ts, line 16:

P2 Stale dynamic model cache

`cachedDynamicModels` is a module-level `Map` that is populated by `prepareDynamicModel` but never invalidated. If the user loads or unloads models in LM Studio after the cache was last populated, `resolveDynamicModel` will return stale (or missing) entries until the process is restarted.

This is consistent with how other providers handle dynamic models in this codebase, but it's worth considering whether a TTL or explicit invalidation signal on provider config changes would improve the experience for LM Studio users who frequently swap models between sessions.
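One possible mitigation for the stale cache is a small TTL wrapper. This is a hedged sketch under stated assumptions — the class name, TTL policy, and invalidation hook are illustrative, not the PR's code:

```typescript
// Illustrative TTL cache; a follow-up could wrap cachedDynamicModels in
// something like this so entries expire instead of living forever.
class TtlCache<K, V> {
  private entries = new Map<K, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  set(key: K, value: V): void {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }

  get(key: K): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() >= entry.expiresAt) {
      // Entry aged out: drop it so a fresh discovery call repopulates it.
      this.entries.delete(key);
      return undefined;
    }
    return entry.value;
  }

  // Explicit invalidation hook, e.g. on provider config changes.
  invalidate(): void {
    this.entries.clear();
  }
}
```

A TTL keeps the common fast path cached while bounding how stale the model list can get between LM Studio sessions.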

---

This is a comment left during a code review.
Path: src/memory/manager-embedding-ops.ts
Line: 621-627

Comment:
**LM Studio uses remote timeouts despite being a local provider**

`resolveEmbeddingTimeout` classifies only `"local"` (node-llama-cpp) for the extended 5-minute query / 10-minute batch timeouts. LM Studio is treated as a remote provider (60-second query, 2-minute batch), which is consistent with how Ollama is handled.

The initial warmup in `createLmstudioEmbeddingProvider` uses a generous 120-second timeout, so the first model load is covered. However, if LM Studio evicts the embedding model under memory pressure during a session, subsequent embedding calls go through the 60-second remote timeout, which could expire on slower hardware with large models before LM Studio can reload and infer.

Consider whether `lmstudio` (and `ollama`) should use a longer timeout tier — or at minimum document this as a known limitation in the provider docs.

```suggestion
  private resolveEmbeddingTimeout(kind: "query" | "batch"): number {
    const isLocal =
      this.provider?.id === "local" ||
      this.provider?.id === "lmstudio" ||
      this.provider?.id === "ollama";
    if (kind === "query") {
      return isLocal ? EMBEDDING_QUERY_TIMEOUT_LOCAL_MS : EMBEDDING_QUERY_TIMEOUT_REMOTE_MS;
    }
    return isLocal ? EMBEDDING_BATCH_TIMEOUT_LOCAL_MS : EMBEDDING_BATCH_TIMEOUT_REMOTE_MS;
  }
```

How can I resolve this? If you propose a fix, please make it concise.


Comment thread extensions/lmstudio/index.ts
💡 Codex Review (@chatgpt-codex-connector bot) — reviewed commit: 70050f6f6f

Comment thread extensions/lmstudio/index.ts
Comment thread src/commands/lmstudio-setup.ts
💡 Codex Review (@chatgpt-codex-connector bot) — reviewed commit: 4f9ceea5d0

Comment thread src/agents/lmstudio-runtime.ts
@vincentkoc vincentkoc self-assigned this Mar 24, 2026
@SeananTQ commented (translated from Chinese):

😀 This is great!
I really need this feature: I am an LM Studio user, and its interface is very intuitive, which lets me quickly check backend information and model loading status while debugging OpenClaw.

But today I updated to OpenClaw 2026.3.23 (ccfeecb) and found a bug: OpenClaw cannot read LM Studio's context usage. Specifically, after I enter /status:

🦞 OpenClaw 2026.3.23 (ccfeecb)
🧠 Model: lmstudio/qwen/qwen3.5-35b-a3b · 🔑 api-key (models.json)
📚 Context: 0/131k (0%) · 🧹 Compactions: 0
🧵 Session: agent:main:feishu:direct:ou_47c32081487e7930ea2b8cd7fcf7aa9f • updated just now
⚙️ Runtime: direct · Think: off
🪢 Queue: collect (depth 0)

Here "📚 Context: 0/131k (0%) · 🧹 Compactions: 0" is clearly wrong.

Then I tried /status in the TUI:

 - agent:main:feishu:direct:ou_47c32081487e7930ea2b8cd7fcf7aa9f [direct] | 2h ago | model
 qwen/qwen3.5-35b-a3b | tokens ?/131k | flags: system, id:b3dccaa9-cf57-4652-a375-555caeaa94e8

Here "| tokens ?/131k |" is clearly wrong.

I am a novice user, so I don't know how to fix this or how to report it properly. I found this page through a search engine and am sharing what I know in the hope that someone sees it.

@rugvedS07 (Contributor · Author) commented:

@SeananTQ thank you for flagging this, it will be fixed once this PR is merged!

💡 Codex Review (@chatgpt-codex-connector bot) — reviewed commit: b1e0cf903a

Comment thread extensions/lmstudio/index.ts
@rugvedS07 force-pushed the feat/lmstudio-support branch from b1e0cf9 to 71c7640 on March 25, 2026 16:26
@rugvedS07 force-pushed the feat/lmstudio-support branch from 9f99133 to fb42863 on April 6, 2026 18:54
@openclaw-barnacle bot added labels extensions: memory-core, cli on Apr 6, 2026
💡 Codex Review (@chatgpt-codex-connector bot) — reviewed commit: fb42863623

Comment thread src/cli/program/register.onboard.ts Outdated
💡 Codex Review (@chatgpt-codex-connector bot) — reviewed commit: fb42863623

Comment thread extensions/lmstudio/src/setup.ts Outdated
💡 Codex Review (@chatgpt-codex-connector bot) — reviewed commit: fb42863623

Comment thread extensions/lmstudio/src/setup.ts Outdated
💡 Codex Review (@chatgpt-codex-connector bot) — reviewed commit: 3c973a698c

Comment thread extensions/lmstudio/index.ts Outdated
Comment thread extensions/lmstudio/src/stream.ts Outdated
@openclaw-barnacle bot added the gateway label on Apr 6, 2026
💡 Codex Review (@chatgpt-codex-connector bot) — reviewed commit: 99da19495f

Comment thread src/commands/doctor-memory-search.ts Outdated
Comment thread extensions/lmstudio/src/setup.ts
@rugvedS07 force-pushed the feat/lmstudio-support branch from 99da194 to d1951ee on April 6, 2026 20:13
💡 Codex Review (@chatgpt-codex-connector bot) — reviewed commit: 48bc6245f3

Comment thread extensions/lmstudio/src/stream.ts
Comment thread extensions/lmstudio/src/setup.ts
💡 Codex Review (@chatgpt-codex-connector bot) — reviewed commit: 0d63251970

Comment thread src/commands/doctor-memory-search.ts
Member

Thanks for the PR. I did a clean-worktree review and local validation against a real LM Studio install. This is not merge-ready yet.

Blockers:

  1. `pnpm check` and `pnpm build` are both failing on the current branch.

  2. The LM Studio setup path is currently breaking the provider config type contract. In extensions/lmstudio/src/setup.ts, the header patch is built as Record<string, string | undefined>, but ModelProviderConfig.headers is typed as Record<string, SecretInput>. That mismatch is one of the reasons the branch is red.

  3. The branch also introduces a larger memory-core compile break. The current changes in extensions/memory-core/src/memory/* are not type-clean right now, so this PR is not only adding LM Studio support, it is also leaving the repo in a non-buildable state.

  4. I ran local LM Studio validation with a real server and model. Discovery, non-interactive onboarding, config writing, gateway startup, and model preload all worked. But the full live gateway lane still failed on tool probing for lmstudio/google/gemma-4-26b-a4b with `tool probe missing nonce`. So at the moment this proves the basic wiring is partially working, but it does not prove the LM Studio integration is end-to-end ready for the OpenClaw gateway live suite.

Please update the PR so that:

  • `pnpm check` passes
  • `pnpm build` passes
  • the memory-core compile regressions are removed from this branch
  • LM Studio is validated with a live gateway run that passes the tool path, not only discovery/onboarding

Once those are addressed, I can re-review.

@frankekn force-pushed the feat/lmstudio-support branch from cda2c06 to 7ed05cd on April 13, 2026 07:22
@frankekn frankekn merged commit 0cfb83e into openclaw:main Apr 13, 2026
10 checks passed
💡 Codex Review (@chatgpt-codex-connector bot) — reviewed commit: 7ed05cd3e9

Comment on lines +232 to +236:

```ts
}
if (isNonSecretApiKeyMarker(resolvedApiKey) && resolvedApiKey !== CUSTOM_LOCAL_AUTH_MARKER) {
  return await resolveConfiguredApiKeyOrThrow();
}
return resolvedApiKey;
```
P1 — Suppress profile API key when Authorization header is set

resolveLmstudioRuntimeApiKey returns any resolved profile/env key even when an Authorization header is configured, because the header-only check is only used in the fallback path. In header-auth LM Studio/proxy setups with a stale lmstudio:default profile key, this causes downstream request assembly to replace the configured header with Bearer <stale-key> (for example via resolveLmstudioRequestContext in dynamic model discovery), leading to avoidable 401s and failed model resolution.

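A hedged sketch of the kind of guard this comment asks for — skip profile/env keys whenever an explicit Authorization header is already configured. The function name and shapes are illustrative assumptions, not the PR's actual helpers:

```typescript
// Illustrative guard: do not let a stored profile/env key override a
// user-configured Authorization header (returning undefined means
// "leave the configured header alone").
function resolveApiKeyForRequest(
  configuredHeaders: Record<string, string>,
  profileKey?: string,
): string | undefined {
  const hasAuthHeader = Object.keys(configuredHeaders).some(
    (h) => h.toLowerCase() === "authorization",
  );
  if (hasAuthHeader) return undefined;
  return profileKey;
}
```

With this ordering, a stale lmstudio:default profile key can no longer shadow a header-auth proxy setup.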

Comment on lines +127 to +131:

```ts
const parsed = new URL(resolved);
const pathname = normalizeUrlPath(parsed.pathname);
parsed.pathname = pathname.length > 0 ? pathname : "/";
parsed.search = "";
parsed.hash = "";
```

P2 — Require http(s) when normalizing LM Studio base URLs

This path accepts any URL protocol and returns it as normalized base URL, but inputs like localhost:1234 are parsed by URL as a custom scheme (localhost:), not an HTTP host. Because setup validation only checks non-empty input, users can enter common host:port values without http://, which then normalize to non-fetchable endpoints and cause discovery/setup failures even when LM Studio is running.

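One way such a guard could look — a minimal sketch, assuming a hypothetical helper name; the WHATWG URL parser treats "localhost:1234" as a URL with scheme "localhost:", which is exactly the failure mode described above:

```typescript
// Illustrative: accept bare host:port by prefixing http://, and reject
// explicit non-http(s) schemes outright.
function parseHttpBaseUrl(input: string): URL {
  // Inputs like "localhost:1234" lack "://" and would otherwise parse
  // as a custom scheme; give them an explicit http:// prefix.
  const candidate = /^[a-z][a-z0-9+.-]*:\/\//i.test(input) ? input : `http://${input}`;
  const parsed = new URL(candidate);
  if (parsed.protocol !== "http:" && parsed.protocol !== "https:") {
    throw new Error(`LM Studio base URL must use http(s), got ${parsed.protocol}`);
  }
  return parsed;
}
```

This turns the silent "non-fetchable endpoint" failure into either a working URL or an immediate, explainable validation error.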


Labels

agents · commands · docs · extensions: memory-core · gateway · scripts · size: XL


Development

Successfully merging this pull request may close these issues.

[Bug]: For God's sake provide LM Studio in onboarding! (Feature request: native support for configuring local LM Studio models)

4 participants