Feature Request: Native LM Studio Integration
Description
Is your feature request related to a problem? Please describe.
Currently, VS Code has native integration for Ollama (the `ollama launch vscode` command, a built-in model picker, official docs), but LM Studio users must rely on a third-party extension with manual configuration.
As local LLM adoption grows, supporting multiple local runners matters: LM Studio offers a different model catalog, better GPU utilization on some systems, and an OpenAI-compatible API that makes integration straightforward.
Describe the solution you would like
Add native LM Studio support similar to Ollama:
- CLI Command: `lms launch vscode` (mirrors `ollama launch vscode`)
- Settings UI: LM Studio option in Language Models settings
- Model Picker: "LM Studio" provider alongside "Ollama"
- Auto-detection: VS Code detects the `lms` CLI and a running server
- Documentation: Official docs at code.visualstudio.com/docs/copilotchat/local-models
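The auto-detection step above could be quite small. A minimal sketch (hypothetical helper, not VS Code's actual code; `lms` on PATH and the default server port are the assumptions) might look like:

```python
import shutil
import urllib.request

def detect_lm_studio(base_url="http://localhost:1234"):
    """Report whether the lms CLI is on PATH and whether an
    OpenAI-compatible server answers at base_url."""
    cli_path = shutil.which("lms")  # None if the CLI is not installed
    server_up = False
    try:
        # LM Studio's server lists loaded models at /v1/models
        with urllib.request.urlopen(f"{base_url}/v1/models", timeout=1) as resp:
            server_up = resp.status == 200
    except OSError:
        pass  # nothing listening: treat the server as not running
    return {"cli": cli_path, "server_running": server_up}
```

If the CLI is found but the server is down, the extension could offer to start it, mirroring the Ollama flow.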
Describe alternatives you have considered
- Current third-party extension: `NullSetIndustries.lmstudio-byok-chat-provider` (892 installs)
- Not officially supported
- Manual configuration required
- Not discoverable in VS Code docs
- Unclear maintenance/longevity
Additional context
Why LM Studio?
- OpenAI-compatible API: easy integration (comparable effort to the Ollama work)
- Growing adoption: 892+ installs for unofficial extension (organic growth)
- Different model zoo: Some models available on LM Studio but not Ollama
- Headless deployment: `llmster` daemon for servers/CI
- Enterprise-friendly: LM Studio has commercial support options
Technical Details
LM Studio exposes an OpenAI-compatible API at `http://localhost:1234` by default.
Proposed integration would mirror Ollama pattern:
- Detect the `lms` CLI command
- Start LM Studio server if not running
- Register models in Copilot Chat model picker
- Add settings UI for configuration
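Because the server speaks the OpenAI wire format, registering models needs no custom protocol work. A hedged sketch of building a standard chat-completions request against the local endpoint (the model name and base URL here are placeholders):

```python
import json
import urllib.request

def chat_request(messages, model="local-model", base_url="http://localhost:1234"):
    """Build an OpenAI-style chat completion request for LM Studio's
    local server; send it with urllib.request.urlopen(req)."""
    payload = {"model": model, "messages": messages, "stream": False}
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_request([{"role": "user", "content": "Hello"}])
```

Any existing OpenAI-compatible client path in Copilot Chat should work against this endpoint with only the base URL swapped.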
Implementation Effort
Estimated: 4-6 weeks to MVP
- Phase 1: CLI detection + `lms launch vscode` command (2-3 weeks)
- Phase 2: Settings UI + model picker (1-2 weeks)
- Phase 3: Documentation (1 week)
Complexity: Low-Medium (LM Studio uses OpenAI-compatible API)
Community Interest
LM Studio users who want this: Please react with 👍 or comment below!
Use cases:
- Privacy-focused development (no cloud API calls)
- Custom fine-tuned models
- Offline development
- Cost savings (no API fees)
- Enterprise compliance (data sovereignty)