It is a great plugin and I love it, but I've run into an error:
[LLM] http error: error sending request for url (http://localhost:11434/api/generate): connection closed before message completed
I'm following the config from the README:
{
  "huggingface/llm.nvim",
  opts = {
    -- cf Setup
  },
  config = function()
    local llm = require("llm")
    llm.setup({
      api_token = nil, -- cf Install paragraph
      -- for ollama backend
      backend = "ollama", -- backend ID, "huggingface" | "ollama" | "openai" | "tgi"
      model = "starcoder2:7b",
      url = "http://localhost:11434/api/generate",
      tokens_to_clear = { "<|endoftext|>" }, -- tokens to remove from the model's output
      -- parameters added to the request body; any field:value pair set here is passed as-is to the backend
      request_body = {
        parameters = {
          max_new_tokens = 60,
          temperature = 0.2,
          top_p = 0.95,
        },
      },
      -- set this if the model supports fill-in-the-middle
      fim = {
        enabled = true,
        prefix = "<fim_prefix>",
        middle = "<fim_middle>",
        suffix = "<fim_suffix>",
      },
      debounce_ms = 150,
      accept_keymap = "<C-y>",
      dismiss_keymap = "<C-n>",
      tls_skip_verify_insecure = false,
      -- llm-ls configuration, cf llm-ls section
      lsp = {
        bin_path = nil,
        host = nil,
        port = nil,
        version = "0.5.2",
      },
      tokenizer = {
        repository = "bigcode/starcoder2-7b",
      }, -- cf Tokenizer paragraph
      -- tokenizer = nil, -- cf Tokenizer paragraph
      context_window = 4096, -- max number of tokens for the context window
      enable_suggestions_on_startup = true,
      enable_suggestions_on_files = "*", -- pattern matching syntax to enable suggestions on specific files, either a string or a list of strings
    })
  end,
}
The weirdest thing is that a curl request to the same model and API URL returns an answer just fine, and my VS Code Continue plugin can also talk to this Ollama instance (it runs in a Docker container), but this plugin cannot!
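For example, a request like this (the prompt here is just an illustration, not the exact one I used) gets a completion back right away:

```sh
curl http://localhost:11434/api/generate -d '{
  "model": "starcoder2:7b",
  "prompt": "def fibonacci(n):",
  "stream": false
}'
```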
Thank you for your time and reply!
This configuration worked for me when I tried the plugin for the first time today. I made only one change, and it shouldn't affect how the plugin behaves: according to lazy.nvim's documentation, "opts is the recommended way to configure plugins," so I moved the table you pass to setup() into opts (replacing the empty opts table) and removed the config function entirely; see the sketch below. My Ollama server runs in Docker on the same machine, like yours. I would still need some tweaks to make it convenient in my workflow, but I didn't get the error you described. Maybe it's been fixed in the month since you posted?
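Roughly, the spec looks like this (a sketch using the same settings as your snippet, trimmed for brevity; with opts set, lazy.nvim calls require("llm").setup(opts) for you):

```lua
{
  "huggingface/llm.nvim",
  opts = {
    backend = "ollama",
    model = "starcoder2:7b",
    url = "http://localhost:11434/api/generate",
    tokens_to_clear = { "<|endoftext|>" },
    request_body = {
      parameters = { max_new_tokens = 60, temperature = 0.2, top_p = 0.95 },
    },
    -- fill-in-the-middle tokens, same as in your config
    fim = {
      enabled = true,
      prefix = "<fim_prefix>",
      middle = "<fim_middle>",
      suffix = "<fim_suffix>",
    },
    tokenizer = { repository = "bigcode/starcoder2-7b" },
    context_window = 4096,
    -- remaining options (debounce_ms, keymaps, lsp, ...) unchanged from your snippet
  },
  -- no config function needed; lazy.nvim runs setup(opts) automatically
}
```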