Description
Environment:
- Roo Code version: 3.28.14
- OS: Windows 11 with WSL2 (Ubuntu)
- VSCode: Running in WSL remote
- API Provider: Ollama (local)
- Model: llama3.1:8b-instruct-q4_K_M
- Embedder: Ollama all-minilm (384 dimensions)
- Vector DB: Qdrant (local Docker container)
- Ollama endpoint: http://localhost:11434/
- Qdrant endpoint: http://localhost:6333/
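For reference, both endpoints can be sanity-checked from inside WSL with a short script. This is a minimal sketch, assuming the standard Ollama (`/api/tags`) and Qdrant (`/collections`) REST routes; it is not part of Roo Code itself:

```python
import requests

OLLAMA = "http://localhost:11434"
QDRANT = "http://localhost:6333"

# Ollama: list installed models via GET /api/tags
models = requests.get(f"{OLLAMA}/api/tags", timeout=5).json()
print("Ollama models:", [m["name"] for m in models.get("models", [])])

# Qdrant: list collections via GET /collections
collections = requests.get(f"{QDRANT}/collections", timeout=5).json()
print("Qdrant collections:", [c["name"] for c in collections["result"]["collections"]])
```

Both services respond in this setup (see the verification steps below).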
Issue:
The LLM never calls the codebase_search tool. Instead, it:
- Guesses file locations based on directory structure
- Hallucinates code content (generates fake FastAPI/Flask/Click code that doesn't exist)
- Enters an infinite loop of follow-up questions
- Treats codebase_search as a bash command when explicitly asked to use it
Verification Steps Completed:
✅ Qdrant has 4,561 indexed points in collection ws-dd9beb7f436a9121
✅ Ollama embedder works (generates correct 384-dim vectors)
✅ Direct Qdrant search returns correct results with proper file paths and code chunks (see the sketch after this list)
✅ WSL networking confirmed working (localhost resolves correctly)
✅ Extension shows as "Enabled on WSL: Ubuntu"
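The Qdrant/Ollama checks above were done by hand; the following minimal sketch reproduces them end to end. Assumptions: the collection name is the one reported above, the query text is illustrative, and the payload field names are not guaranteed (the sketch falls back to printing the whole payload):

```python
import requests

OLLAMA = "http://localhost:11434"
QDRANT = "http://localhost:6333"
COLLECTION = "ws-dd9beb7f436a9121"  # collection name from this report

# 1. Point count (expected: 4,561)
count = requests.post(
    f"{QDRANT}/collections/{COLLECTION}/points/count",
    json={"exact": True},
    timeout=10,
).json()["result"]["count"]
print("indexed points:", count)

# 2. Embed a query with Ollama all-minilm (expected: 384 dimensions)
emb = requests.post(
    f"{OLLAMA}/api/embeddings",
    json={"model": "all-minilm", "prompt": "main function entry point"},
    timeout=30,
).json()["embedding"]
print("embedding dims:", len(emb))

# 3. Direct Qdrant vector search (expected: real file paths and code chunks)
hits = requests.post(
    f"{QDRANT}/collections/{COLLECTION}/points/search",
    json={"vector": emb, "limit": 5, "with_payload": True},
    timeout=10,
).json()["result"]
for h in hits:
    payload = h["payload"]
    # Payload key names are an assumption; print everything if "filePath" is absent.
    print(round(h["score"], 3), payload.get("filePath", payload))
```

All three steps succeed against this setup, which is why the failure looks like a tool-calling problem rather than an indexing problem.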
Expected Behaviour:
LLM should call the codebase_search tool and use the actual search results from Qdrant.
Actual Behaviour:
The LLM shows no awareness of its available tools: when asked "What tools do you have available?", it lists project files instead of its own tools. The extension host logs show no tool-related errors.
Additional Observations:
No "Roo: Index Codebase" or similar commands available in the command palette
Extension activates successfully, but doesn't appear in extension host activation logs
Reproduction:
1. Configure Roo Code with local Ollama and Qdrant as described above
2. Ask: "Find the main function in my codebase"
3. Observe the LLM guessing instead of searching
Is this a known limitation of local Ollama models, or is additional configuration required for tool-calling to work?