Problem (one or two sentences)
Using Roo Code with llama.cpp and local models is unusable: the model keeps re-reading the same file in chunks, over and over.
Context (who is affected and when)
It affects anyone whose prompt triggers access to a file in the codebase.
Reproduction steps
- Roo Code version > 3.45 in VS Code
- Ask a model served by llama.cpp to change a file
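The local-model setup in the steps above can be sketched as a minimal llama.cpp server invocation (the model file name, context size, and port below are placeholder assumptions, not from the report):

```shell
# Start llama.cpp's OpenAI-compatible server.
# model.gguf is a placeholder path; pick any local GGUF model.
llama-server -m model.gguf -c 8192 --port 8080

# Then, in Roo Code's provider settings, point the OpenAI-compatible
# endpoint at http://localhost:8080/v1 and ask it to change a file.
```

With this setup, the repeated chunked re-reads of the same file should be visible in the llama.cpp server log as a stream of near-identical completion requests.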
Expected result
The file is read once, without time-consuming, extra-long loops.
Actual result
Extra-long loops that repeatedly read the same file in chunks.
Variations tried (optional)
No response
App Version
3.45
API Provider (optional)
None
Model Used (optional)
No response
Roo Code Task Links (optional)
No response
Relevant logs or errors (optional)