Description
Describe the feature or problem you'd like to solve
No response
Proposed solution
I would like to propose an enhancement to the GitHub Copilot CLI that allows it to automatically detect, connect to, and prioritize local LLM instances (via Ollama or LM Studio) as a primary or secondary backend.
Key Proposed Features
Auto-Detection & Toggle:
The CLI should check for active local endpoints (e.g., localhost:11434 for Ollama) on startup.
Allow users to switch between Cloud and Local models instantly using a slash command (e.g., /local or /cloud).
Visual Feedback (The "Privacy Color" System):
To ensure the user always knows where their code context is being sent, the prompt line should change color based on the active provider.
Suggested Defaults: Green for Local LLM (Private/Free) and Blue for Cloud LLM (GitHub Hosted).
Make these colors user-configurable in the .copilot-config (e.g., prompt_color_local: "green", prompt_color_cloud: "blue").
Session Defaults for Token Management:
Add a configuration option to set the Local LLM as the default for every new session.
This prevents accidental token consumption or hitting rate limits for simple tasks that a local model (like Llama 3 or Mistral) can handle easily.
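The detection, toggle, and privacy-color ideas above could be sketched roughly as follows. This is a minimal illustration only, not existing Copilot CLI behavior; the endpoint table, color defaults, and prompt format are all hypothetical placeholders (Ollama's default port 11434 and LM Studio's default local-server port 1234 are the only real values assumed):

```python
import socket

# Hypothetical endpoint table -- only the default ports are real.
LOCAL_ENDPOINTS = {
    "ollama": ("localhost", 11434),    # Ollama's default port
    "lmstudio": ("localhost", 1234),   # LM Studio's default local-server port
}

# Proposed "privacy color" defaults: green = local (private),
# blue = cloud (GitHub hosted). ANSI SGR escape codes.
PROMPT_COLORS = {"local": "\033[32m", "cloud": "\033[34m"}
RESET = "\033[0m"

def detect_local_provider(timeout=0.2):
    """Return the name of the first reachable local LLM endpoint, or None.

    A plain TCP connect is enough for a startup liveness probe; a real
    implementation would likely also hit the provider's HTTP API to
    confirm it is actually an LLM server.
    """
    for name, (host, port) in LOCAL_ENDPOINTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return name
        except OSError:
            continue
    return None

def prompt_prefix(provider):
    """Color the prompt line by where code context is being sent."""
    mode = "local" if provider else "cloud"
    label = provider or "cloud"
    return f"{PROMPT_COLORS[mode]}copilot ({label})>{RESET} "
```

A `/local` or `/cloud` slash command would then simply re-run `detect_local_provider()` (or clear the provider) and redraw the prompt, and the user-configurable colors from `.copilot-config` would replace the `PROMPT_COLORS` defaults.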
The "Win-Win" Argument for GitHub
It is worth noting that even when using a local LLM, users would still maintain their active Copilot subscription for the CLI access itself and advanced cloud features. By allowing users to offload "simple" tasks (boilerplate, CLI command explanations, unit tests) to their own hardware, GitHub benefits from:
Reduced Server Load: Significant reduction in compute costs for GitHub’s backend.
Enhanced Reliability: Users can continue working even during GitHub outages or spotty connectivity.
User Retention: Keeps power users (like myself) within the official CLI ecosystem rather than switching to open-source alternatives.
Personal Context & Use Case
I recently started using the Copilot CLI and find it incredibly useful for my workflow, particularly because I work in a legacy Eclipse environment where the CLI is the most efficient way to get AI assistance.
Using the CLI as a "bridge" to local models would be a game-changer for privacy-conscious enterprise work. Knowing at a glance (via the prompt color) that my code is staying on my machine would provide immense peace of mind.
Example prompts or workflows
No response
Additional context
No response