Merged
- Introduce a new Local model provider to connect to services like Ollama.
- Add configuration settings for the local model's Base URL, Model Name, and API key.
- Implement a settings UI to manage Docker-based models (Ollama, Docker Model Runner).
- Include a file for easy local Ollama setup.
- Enhance the ESLint configuration with TypeScript support and apply the resulting fixes.
- Add roadmap and setup documentation for local LLM integration.
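The provider settings above (Base URL, Model Name, API key) can be sketched as a small config-resolution helper. This is a minimal illustration, not the extension's actual code: the field names, defaults, and the `resolveLocalConfig` function are all assumptions.

```typescript
// Hypothetical sketch of resolving local-provider settings with defaults.
// Field names and default values are assumptions, not the extension's
// actual configuration keys.
interface LocalModelConfig {
  baseUrl: string;
  modelName: string;
  apiKey?: string;
}

function resolveLocalConfig(raw: Partial<LocalModelConfig>): LocalModelConfig {
  return {
    // Fall back to a typical local Ollama OpenAI-compatible endpoint when unset.
    baseUrl: raw.baseUrl?.trim() || "http://localhost:11434/v1",
    modelName: raw.modelName?.trim() || "qwen2.5-coder",
    // Local servers generally ignore the key, but OpenAI-style clients
    // require a non-empty value, so a placeholder is supplied.
    apiKey: raw.apiKey || "ollama",
  };
}
```

A caller would pass whatever the user entered in settings and get back a fully populated config object.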
- Introduce support for running local models via a managed Docker Compose setup.
- Update the Terminal utility to locate and run the from the extension's path.
- Change the default local LLM endpoint to the Docker Model Runner service ().
- Dynamically set the in settings when selecting between a Docker Runner model () and an Ollama model.
- Enhance error reporting for model pulling to display specific Docker errors in the UI.
- Update the list of predefined local models and their descriptions.
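The managed Docker Compose setup described above might look roughly like the following. This is an illustrative sketch only: the service name, volume name, and port mapping are assumptions, not the file shipped with the extension (the 32G memory cap comes from a later commit in this PR).

```yaml
# Illustrative docker-compose.yml for running Ollama locally.
# Service and volume names are assumptions for this sketch.
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"              # Ollama's default API port
    volumes:
      - ollama-models:/root/.ollama  # persist pulled models across restarts
    deploy:
      resources:
        limits:
          memory: 32g              # cap container memory, per this PR

volumes:
  ollama-models:
```

The extension's Terminal utility would then invoke `docker compose up` against this file from the extension's install path.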
- Set qwen2.5-coder and the Ollama endpoint as the new default local configuration.
- Integrate Ollama into the developer agent using the OpenAI-compatible API.
- Add a default system prompt to local LLM calls to ensure plain-text responses.
- Limit Ollama container memory to 32G in docker-compose.yml.
- Update the Docker model runner command syntax in the terminal utility.
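Integrating Ollama through its OpenAI-compatible API with a prepended default system prompt could look like the sketch below. The prompt text, function name, and default model are assumptions for illustration; only the OpenAI-style message shape and the qwen2.5-coder default come from the PR.

```typescript
// Sketch of building an OpenAI-compatible chat request for a local endpoint.
// The system-prompt wording is an assumption, not the extension's actual text.
const DEFAULT_SYSTEM_PROMPT =
  "You are a coding assistant. Respond in plain text.";

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(userPrompt: string, model = "qwen2.5-coder") {
  const messages: ChatMessage[] = [
    // Prepend a system prompt so small local models return plain text.
    { role: "system", content: DEFAULT_SYSTEM_PROMPT },
    { role: "user", content: userPrompt },
  ];
  return { model, messages, stream: true };
}
```

The resulting body would be POSTed to the endpoint's `/chat/completions` route (for Ollama, typically `http://localhost:11434/v1/chat/completions`).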
- Add an async generator to the base webview provider and implement it for all LLM providers.
- Refactor the and to handle the streaming lifecycle (, , ).
- Enable real-time, token-by-token display of model responses in the webview.
- Update the main extension entry point in from to .
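The streaming lifecycle above follows a common pattern: each provider exposes an async generator of tokens, and the webview consumer accumulates them and re-renders on every chunk. The sketch below illustrates that pattern with hypothetical names (`streamResponse`, `renderStream`, `onToken`); it is not the extension's actual API.

```typescript
// Provider side: an async generator yielding response tokens. In the real
// provider this would read chunks from the model's HTTP stream.
async function* streamResponse(tokens: string[]): AsyncGenerator<string> {
  for (const token of tokens) {
    yield token;
  }
}

// Consumer side: accumulate tokens and notify the webview with the
// partial response after each one, enabling token-by-token display.
async function renderStream(
  stream: AsyncGenerator<string>,
  onToken: (partial: string) => void,
): Promise<string> {
  let text = "";
  for await (const token of stream) {
    text += token;
    onToken(text); // webview re-renders with the partial response
  }
  return text; // full response once the stream completes
}
```

Because the consumer only depends on the `AsyncGenerator<string>` shape, every LLM provider can plug into the same webview rendering path.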
- Introduce to intelligently select and rank code context based on model token budgets and relevance.
- Add support for file mentions in the chat UI, allowing users to provide explicit file context.
- Automatically include the currently active file in the context for all codebase-related questions.
- Prioritize user-mentioned files and the active file over auto-gathered code snippets.
- Update developer agent prompts to improve conversational responses and reduce unnecessary tool usage.
- Revise documentation and FAQs to explain the new context features and update recommended local models.
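The context-selection behavior described above can be sketched as a ranking pass followed by budget-constrained admission: user-mentioned files outrank the active file, which outranks auto-gathered snippets, and items are included until the token budget is spent. All field names, the priority scheme, and the 4-characters-per-token estimate are assumptions for this sketch.

```typescript
// Hedged sketch of budget-aware context selection; not the extension's code.
interface ContextItem {
  path: string;
  content: string;
  relevance: number;       // auto-gathered relevance score (higher = better)
  userMentioned?: boolean; // explicitly mentioned by the user in chat
  isActiveFile?: boolean;  // currently open in the editor
}

// Rough token estimate: ~4 characters per token (a common heuristic).
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

function selectContext(items: ContextItem[], tokenBudget: number): ContextItem[] {
  // Mentioned files beat the active file, which beats auto-gathered snippets;
  // ties are broken by relevance score.
  const priority = (i: ContextItem) =>
    i.userMentioned ? 2 : i.isActiveFile ? 1 : 0;
  const ranked = [...items].sort(
    (a, b) => priority(b) - priority(a) || b.relevance - a.relevance,
  );

  const selected: ContextItem[] = [];
  let used = 0;
  for (const item of ranked) {
    const cost = estimateTokens(item.content);
    if (used + cost > tokenBudget) continue; // skip items that don't fit
    selected.push(item);
    used += cost;
  }
  return selected;
}
```

Skipping oversized items (rather than stopping at the first one that doesn't fit) lets smaller, lower-priority snippets still use leftover budget.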
Stanley00 pushed a commit to stanley-fork/codebuddy that referenced this pull request on Mar 27, 2026:
…AL_LLMS Feature local llms