Feature local llms #299

Merged
olasunkanmi-SE merged 5 commits into feature-DOCKER_MCP from feature-LOCAL_LLMS on Feb 1, 2026
Conversation

@olasunkanmi-SE (Owner)

No description provided.

Oyinlola Olasunkanmi and others added 5 commits February 2, 2026 00:46
- Introduce a new Local model provider to connect to services like Ollama.
- Add configuration settings for local model Base URL, Model Name, and API key.
- Implement a UI in settings to manage Docker-based models (Ollama, Docker Model Runner).
- Include a  file for easy local Ollama setup.
- Enhance ESLint configuration with TypeScript support and apply fixes.
- Add roadmap and setup documentation for local LLM integration.
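The new Local provider described above connects through Ollama's OpenAI-compatible API using the Base URL, Model Name, and API key settings. A minimal sketch of what that request construction could look like — the names (`LocalModelConfig`, `buildChatRequest`) and default values are illustrative, not the actual CodeBuddy code:

```typescript
// Hypothetical sketch of a local-provider request builder targeting an
// OpenAI-compatible endpoint such as Ollama's /v1/chat/completions.
interface LocalModelConfig {
  baseUrl: string;   // e.g. "http://localhost:11434/v1" for a local Ollama
  modelName: string; // e.g. "qwen2.5-coder"
  apiKey?: string;   // Ollama ignores the key, but OpenAI clients require one
}

function buildChatRequest(config: LocalModelConfig, prompt: string) {
  return {
    // Strip a trailing slash so the path joins cleanly.
    url: `${config.baseUrl.replace(/\/$/, "")}/chat/completions`,
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${config.apiKey ?? "ollama"}`,
    },
    body: {
      model: config.modelName,
      messages: [{ role: "user", content: prompt }],
    },
  };
}

// Usage: the returned object can be passed to fetch() or any HTTP client.
const req = buildChatRequest(
  { baseUrl: "http://localhost:11434/v1/", modelName: "qwen2.5-coder" },
  "Hello"
);
```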
- Introduce support for running local models via a managed Docker Compose setup.
- Update the Terminal utility to locate and run the  from the extension's path.
- Change default local LLM endpoint to the Docker Model Runner service ().
- Dynamically set the  in settings when selecting between a Docker Runner model () and an Ollama model.
- Enhance error reporting for model pulling to display specific Docker errors in the UI.
- Update the list of predefined local models and their descriptions.
- Set qwen2.5-coder and the Ollama endpoint as the new default local configuration
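The dynamic base-URL switching mentioned above could be sketched as a small resolver. Note the endpoint values and the `ai/` model-name prefix convention for Docker Model Runner models are assumptions for illustration, not taken from the PR:

```typescript
// Illustrative only: endpoints and the "ai/" namespace check are assumed.
const DOCKER_RUNNER_BASE_URL = "http://localhost:12434/engines/v1"; // assumed default
const OLLAMA_BASE_URL = "http://localhost:11434/v1"; // Ollama's OpenAI-compatible path

function resolveBaseUrl(modelId: string): string {
  // Docker Model Runner publishes models under the "ai/" namespace
  // (e.g. "ai/qwen2.5-coder"); everything else is treated as Ollama.
  return modelId.startsWith("ai/") ? DOCKER_RUNNER_BASE_URL : OLLAMA_BASE_URL;
}
```

This keeps the settings UI simple: selecting a model implicitly selects the right backend, which matches the commit's description of setting the endpoint when switching between a Docker Runner model and an Ollama model.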
- Integrate Ollama into the developer agent using the OpenAI-compatible API
- Add a default system prompt to local LLM calls to ensure plain text responses
- Limit Ollama container memory to 32G in docker-compose.yml
- Update Docker model runner command syntax in the terminal utility
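The "default system prompt" change above can be sketched as a guard that only injects the prompt when the caller has not supplied one; the prompt wording here is invented for illustration:

```typescript
// Sketch: prepend a default system prompt so local models answer in
// plain text. The exact prompt text in the PR is not shown here.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

const DEFAULT_SYSTEM_PROMPT =
  "You are a helpful coding assistant. Respond in plain text.";

function withDefaultSystemPrompt(messages: ChatMessage[]): ChatMessage[] {
  const hasSystem = messages.some((m) => m.role === "system");
  return hasSystem
    ? messages // respect an explicit system prompt from the caller
    : [{ role: "system", content: DEFAULT_SYSTEM_PROMPT }, ...messages];
}
```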
- Add  async generator to the base webview provider and implement it for all LLM providers.
- Refactor the  and  to handle the streaming lifecycle (, , ).
- Enable real-time, token-by-token display of model responses in the webview.
- Update the main extension entry point in  from  to .
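The streaming lifecycle above — an async generator on the provider, consumed token by token for the webview — could look roughly like this; the function and event names are placeholders, not CodeBuddy's actual API:

```typescript
// Minimal sketch of token streaming. A real provider would yield chunks
// parsed from an SSE/HTTP response stream instead of an in-memory array.
async function* streamTokens(tokens: string[]): AsyncGenerator<string> {
  for (const token of tokens) {
    yield token;
  }
}

// Consumer side: accumulate the full response while forwarding each
// token, e.g. via postMessage({ type: "chunk", token }) to the webview.
async function renderStream(
  stream: AsyncGenerator<string>,
  onToken: (t: string) => void
): Promise<string> {
  let full = "";
  for await (const token of stream) {
    full += token;
    onToken(token);
  }
  return full; // the completed message, for history/persistence
}
```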
- Introduce  to intelligently select and rank code context based on model token budgets and relevance.
- Add support for  file mentions in the chat UI, allowing users to provide explicit file context.
- Automatically include the currently active file in the context for all codebase-related questions.
- Prioritize user-mentioned files and the active file over auto-gathered code snippets.
- Update developer agent prompts to improve conversational responses and reduce unnecessary tool usage.
- Revise documentation and FAQs to explain the new context features and update recommended local models.
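The prioritization described above — user-mentioned files first, then the active file, then auto-gathered snippets, all within a token budget — can be sketched as a greedy selector. The field names and the rough 4-characters-per-token estimate are assumptions:

```typescript
// Hypothetical sketch of budget-aware context selection.
interface Snippet {
  path: string;
  content: string;
  relevance: number; // higher is more relevant
  source: "mention" | "active" | "auto";
}

// Crude token estimate; a real implementation would use the model's tokenizer.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

function selectContext(snippets: Snippet[], tokenBudget: number): Snippet[] {
  // Mentioned files outrank the active file, which outranks auto-gathered
  // snippets; ties are broken by relevance score.
  const priority = { mention: 0, active: 1, auto: 2 };
  const ordered = [...snippets].sort(
    (a, b) => priority[a.source] - priority[b.source] || b.relevance - a.relevance
  );

  const chosen: Snippet[] = [];
  let used = 0;
  for (const s of ordered) {
    const cost = estimateTokens(s.content);
    if (used + cost > tokenBudget) continue; // skip snippets that don't fit
    chosen.push(s);
    used += cost;
  }
  return chosen;
}
```

The greedy pass keeps the highest-priority items that fit rather than truncating them, which favors whole files over fragments.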
@olasunkanmi-SE olasunkanmi-SE merged commit ccf5344 into feature-DOCKER_MCP Feb 1, 2026
1 check passed
Stanley00 pushed a commit to stanley-fork/codebuddy that referenced this pull request Mar 27, 2026