Description
Edited in response to roomote's comments.
Problem (one or two sentences)
Users cannot view or inspect the internal reasoning of models invoked through Roo Code with the Ollama provider, which limits their ability to diagnose failures, tune prompts, or audit model behavior.
Context (who is affected and when)
This affects users of the Ollama provider who would benefit from insight into model reasoning during API requests and subtasks.
Desired behavior (conceptual, not technical)
Roo Code should add support for viewing model reasoning and managing reasoning effort to the Ollama provider. A rough sketch of the underlying API call follows.
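For illustration only, here is a minimal sketch of the kind of request the provider could make, assuming a recent Ollama version whose `/api/chat` endpoint accepts a `think` field and returns the model's reasoning in `message.thinking` (the reasoning-effort levels mentioned in the comment are an assumption and may only apply to certain models):

```typescript
// Sketch, not Roo Code's actual provider code: request a completion from a
// local Ollama instance with thinking enabled and read the reasoning back.
// Assumes a thinking-capable model is already pulled locally.
interface OllamaChatResponse {
  message: { role: string; content: string; thinking?: string };
  done: boolean;
}

async function chatWithReasoning(prompt: string): Promise<OllamaChatResponse> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "qwen3", // hypothetical choice; any thinking-capable model
      messages: [{ role: "user", content: prompt }],
      stream: false,
      think: true, // request reasoning; some models may accept "low" | "medium" | "high"
    }),
  });
  return (await res.json()) as OllamaChatResponse;
}

// Usage: reply.message.thinking would hold the reasoning,
// reply.message.content the final answer.
```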
Constraints / preferences (optional)
- Clearly distinguish between when an API request is queued/loading and when it is actively streaming, so the user can tell a waiting request apart from a live reasoning stream.
- Ollama supports streaming, so this information should preferably be shown in real time when available (see the sketch after this list).
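As a hedged illustration of the queued-versus-streaming distinction, the sketch below consumes Ollama's newline-delimited JSON stream: the request counts as "queued/loading" until the first chunk arrives, and reasoning is "streaming" while chunks carry a `message.thinking` delta (again assuming a recent Ollama version with thinking support; this is not a proposed implementation):

```typescript
// Sketch only: stream a chat response and report state transitions.
// Assumes the same local /api/chat endpoint and message.thinking field as above.
async function streamWithStates(prompt: string): Promise<void> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "qwen3", // hypothetical choice of thinking-capable model
      messages: [{ role: "user", content: prompt }],
      stream: true,
      think: true,
    }),
  });

  console.log("state: queued/loading"); // nothing received yet
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffered = "";
  let firstChunkSeen = false;

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffered += decoder.decode(value, { stream: true });

    // Ollama streams newline-delimited JSON objects.
    const lines = buffered.split("\n");
    buffered = lines.pop() ?? "";
    for (const line of lines) {
      if (!line.trim()) continue;
      const chunk = JSON.parse(line);
      if (!firstChunkSeen) {
        console.log("state: streaming"); // first token arrived
        firstChunkSeen = true;
      }
      if (chunk.message?.thinking) {
        process.stdout.write(chunk.message.thinking); // live reasoning delta
      }
      if (chunk.message?.content) {
        process.stdout.write(chunk.message.content); // answer delta
      }
      if (chunk.done) console.log("\nstate: done");
    }
  }
}
```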
Request checklist
- I've searched existing Issues and Discussions for duplicates
- This describes a specific problem with clear context and impact
Roo Code Task Links (optional)
No response
Acceptance criteria (optional)
No response
Proposed approach (optional)
No response
Trade-offs / risks (optional)
No response