Replies: 2 comments
-
Ok, so you are using it directly with llama.cpp. Copilot doesn't integrate with llama.cpp directly; it only has options for either Ollama or LM Studio for local models. This sounds like a bug report but is actually a feature request to support llama.cpp, right?
-
Details:
I am using `llama.cpp` with GPU support for my projects. However, I found that the `/api/generate` endpoint, which Copilot for Obsidian appears to expect, is not supported by `llama.cpp`. Instead, the correct endpoint for generating text is `/v1/completions`. This issue has already been reported to Ollama-Logseq; for more details, please refer to their report. You might want to update the plugin to accommodate the correct API path for `llama.cpp`.
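For illustration, here is a minimal sketch of the kind of request the `llama.cpp` server accepts on `/v1/completions`, assuming it is running locally on its default port 8080; the model name and sampling parameters below are placeholders, not values the plugin necessarily uses:

```ts
// Minimal sketch: querying a llama.cpp server via its OpenAI-compatible
// /v1/completions endpoint (rather than Ollama's /api/generate).
// Assumptions: llama.cpp's server is running locally on its default port 8080,
// and the model name is illustrative.

interface CompletionChoice {
  text: string;
}

interface CompletionResponse {
  choices: CompletionChoice[];
}

async function complete(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:8080/v1/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model", // placeholder; the server uses whatever model it was started with
      prompt,
      max_tokens: 128,
      temperature: 0.7,
    }),
  });

  if (!res.ok) {
    throw new Error(`llama.cpp server returned ${res.status}`);
  }

  const data = (await res.json()) as CompletionResponse;
  return data.choices[0]?.text ?? "";
}

// Example usage:
// complete("Summarize this note:").then(console.log);
```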
Reference:
Thank you for looking into this.