Cross-posting this from gsuuon/model.nvim#9, which I thought was the same thing! Does this support ollama? So far ollama has been my smoothest experience setting up an LLM locally. Here's what the API use looks like (as documented here):

```sh
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "phind-codellama",
  "prompt": "Implement a linked list in C++"
}'
```
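For reference, here is a minimal Python sketch of the same request, assuming the documented streaming behaviour of `/api/generate` (ollama returns newline-delimited JSON objects, each carrying a `response` fragment, ending with one where `done` is true):

```python
import json
import requests

# Sketch only: same model and prompt as the curl example above.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "phind-codellama", "prompt": "Implement a linked list in C++"},
    stream=True,
)
for line in resp.iter_lines():
    if not line:
        continue
    chunk = json.loads(line)  # one JSON object per line
    print(chunk.get("response", ""), end="", flush=True)
    if chunk.get("done"):  # final object signals the end of the stream
        break
```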
-
I realised that a project with the same name existed after I created mine, sorry about that. Hopefully GitHub's namespacing is enough to differentiate them. I'm only discovering that project through your question, so no, llm.nvim doesn't support it. I haven't really looked into running models locally yet, though I'd like to get there at some point. I'm going the language server route for this plugin; I had initially imagined using candle to load and run models locally.
Ollama is now supported as of 0.5.0.