
Feature Request: Native LM Studio Integration (like Ollama) #308442

@CyberRaccoonTeam

Description


Is your feature request related to a problem? Please describe.

Currently, VS Code has native integration for Ollama (the ollama launch vscode command, a built-in model picker, official docs), while LM Studio users must rely on a third-party extension with manual configuration.

As local LLM adoption grows, supporting more than one local runner matters. LM Studio offers a different model catalog, better GPU utilization on some systems, and an OpenAI-compatible API that makes integration straightforward.

Describe the solution you would like

Add native LM Studio support similar to Ollama:

  1. CLI Command: lms launch vscode (mirrors ollama launch vscode)
  2. Settings UI: LM Studio option in Language Models settings
  3. Model Picker: "LM Studio" provider alongside "Ollama"
  4. Auto-detection: VS Code detects lms CLI and running server
  5. Documentation: Official docs at code.visualstudio.com/docs/copilotchat/local-models

Describe alternatives you have considered

  • Current third-party extension: NullSetIndustries.lmstudio-byok-chat-provider (892 installs)
    • Not officially supported
    • Manual configuration required
    • Not discoverable in VS Code docs
    • Unclear maintenance/longevity

Additional context

Why LM Studio?

  • OpenAI-compatible API: Easy integration (effort comparable to the Ollama work)
  • Growing adoption: 892+ installs for unofficial extension (organic growth)
  • Different model zoo: Some models available on LM Studio but not Ollama
  • Headless deployment: llmster daemon for servers/CI
  • Enterprise-friendly: LM Studio has commercial support options

Technical Details

LM Studio uses OpenAI-compatible API at http://localhost:1234 by default.
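Because the server follows the OpenAI API convention, listing available models is a single GET request. A minimal sketch of what the integration's discovery step could look like; the helper names (modelsEndpoint, listLocalModels) and the exact response shape are assumptions based on the OpenAI /v1/models convention, not an official API:

```typescript
// Assumption: LM Studio's default OpenAI-compatible server address.
const DEFAULT_BASE_URL = "http://localhost:1234";

// Build the /v1/models URL from a configurable base URL,
// tolerating a trailing slash in user settings.
function modelsEndpoint(baseUrl: string = DEFAULT_BASE_URL): string {
  return `${baseUrl.replace(/\/+$/, "")}/v1/models`;
}

// List the models the local server currently exposes.
// Requires a running LM Studio server; shown for illustration only.
async function listLocalModels(baseUrl?: string): Promise<string[]> {
  const res = await fetch(modelsEndpoint(baseUrl));
  if (!res.ok) throw new Error(`LM Studio server returned ${res.status}`);
  const body = (await res.json()) as { data: { id: string }[] };
  return body.data.map((m) => m.id);
}
```

The model IDs returned here are what would populate the Copilot Chat model picker.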

Proposed integration would mirror Ollama pattern:

  • Detect lms CLI command
  • Start LM Studio server if not running
  • Register models in Copilot Chat model picker
  • Add settings UI for configuration
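The CLI-detection step above could be sketched as follows. This is a hedged illustration: probing lms with a --version flag, and the version-string format being parsed, are assumptions about the lms CLI, not documented behavior.

```typescript
import { spawnSync } from "child_process";

// Hypothetical helper: pull a semver-looking version out of CLI output.
// The output format is an assumption for illustration.
function parseLmsVersion(output: string): string | null {
  const match = output.match(/(\d+\.\d+\.\d+)/);
  return match ? match[1] : null;
}

// Detect whether an `lms` CLI is on PATH by probing it.
// Returns the parsed version, or null if the CLI is missing or errors.
function detectLmsCli(): string | null {
  const result = spawnSync("lms", ["--version"], { encoding: "utf8" });
  if (result.error || result.status !== 0) return null; // CLI not installed
  return parseLmsVersion(result.stdout ?? "");
}
```

On a null result, VS Code could fall back to probing http://localhost:1234 directly before hiding the provider.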

Implementation Effort

Estimated: 4-6 weeks to MVP

  • Phase 1: CLI detection + lms launch vscode command (2-3 weeks)
  • Phase 2: Settings UI + model picker (1-2 weeks)
  • Phase 3: Documentation (1 week)

Complexity: Low-Medium (LM Studio uses OpenAI-compatible API)


Community Interest

LM Studio users who want this: Please react with 👍 or comment below!

Use cases:

  • Privacy-focused development (no cloud API calls)
  • Custom fine-tuned models
  • Offline development
  • Cost savings (no API fees)
  • Enterprise compliance (data sovereignty)
