I tried Continue in VS Code. The problem was that messages weren't being sent to the LLM correctly. Other than that, it had some features Copilot doesn't. But I still go back to GPT-4 when I can.
Thanks! I tried Llama and Mistral/Mixtral models, with LM Studio and llama.cpp as backends. Another problem was that the models would keep writing endless runs of spaces or letters in comments, and this happened only in Continue. I tried adding my own system prompts, but that didn't solve it. It seems like Continue formats the prompts in a way that doesn't match the models' prompt templates or tokenizers.
Yeah, in fact Continue includes four demo models, and CodeLlama 70B is one of them. It gives good, consistent results. Just not my local model running in llama.cpp. I wonder whether Continue has some custom prompt setup for the demo models that's hidden from the user config.
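For reference, here is what the prompt-template mismatch above looks like in practice. Instruct-tuned Llama 3 models expect their special tokens around every message; if a frontend sends plain concatenated text to a raw completions endpoint, the model can degenerate into repeated whitespace or characters. This is a minimal sketch of the Llama 3 chat format with a hypothetical helper (`build_llama3_prompt` is not part of Continue or llama.cpp):

```python
# Sketch: hand-building a Llama 3 instruct prompt for a raw
# completions endpoint. The helper name is hypothetical; the
# special tokens follow Meta's published Llama 3 chat format.
def build_llama3_prompt(system: str, user: str) -> str:
    """Wrap one system and one user message in Llama 3's special tokens."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # The trailing assistant header cues the model to generate its reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a coding assistant.",
    "Write hello world in C.",
)
print(prompt)
```

If the frontend skips this wrapping (or uses another model family's template), the tokenizer never sees the turn boundaries it was trained on, which matches the runaway-output symptom described above.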
To reproduce
No response
Log output
No response
I faced a similar issue and fixed it by switching the provider from llama.cpp to openai. The problem was that the llama.cpp provider uses completions instead of chat completions, and there is no correct prompt format for Llama 3 yet to handle raw completions properly.
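One way to apply that workaround, assuming llama.cpp's server is running with its OpenAI-compatible endpoint on the default port, is a Continue `config.json` entry along these lines (the title, model name, and port are placeholders, not values from this thread):

```json
{
  "models": [
    {
      "title": "Local Llama 3 (llama.cpp server)",
      "provider": "openai",
      "model": "llama-3-8b-instruct",
      "apiBase": "http://localhost:8080/v1",
      "apiKey": "none"
    }
  ]
}
```

With `provider` set to `openai` and `apiBase` pointed at the local server, Continue calls the chat-completions endpoint, so the server applies the model's own chat template instead of Continue sending a raw completion prompt.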
Before submitting your bug report
Relevant environment info
- IDE: VS Code
Description
from this r/LocalLlama thread: