Replies: 6 comments 11 replies
-
$ git clone https://github.com/hqnicolas/devika
-
To make it work, you'll need to launch Ollama as described. I'll explain each step, assuming you haven't installed Ollama yet. If that's the case:
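For reference, the usual steps look like this (a sketch for Linux; the install script is Ollama's official one, and llama3 is just an example model, not the only option):

```shell
# Install Ollama via the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Start the Ollama server (listens on http://localhost:11434 by default)
ollama serve &

# Pull a model to use with Devika, e.g. llama3
ollama pull llama3

# Sanity check: the API should respond with the list of installed models
curl http://localhost:11434/api/tags
```

Once the last command returns JSON listing your models, point Devika's Ollama endpoint at http://localhost:11434.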
-
Being able to access the Ollama API does NOT mean it will work. You'll run into all kinds of issues with the local model itself, since it doesn't produce the same output as GPT-4.
-
I'd like to see an easy setup feature for Ollama.
-
Running the dockerized version, I had to set the Ollama endpoint to "http://ollama-service:11434": the container running Ollama on the devika-subnetwork is reachable through the hostname "ollama-service". After that, all good. Time to play with a local llama3 :)
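For anyone else hitting this, a minimal docker-compose sketch of the setup described above. The service and network names come from this comment; the image names and the endpoint variable are assumptions for illustration, not Devika's actual compose file (in practice the endpoint is set in Devika's config):

```yaml
services:
  ollama-service:
    image: ollama/ollama
    ports:
      - "11434:11434"
    networks:
      - devika-subnetwork

  devika:
    image: devika                # assumed image name
    environment:
      # hypothetical variable; the key point is the hostname matches the service name
      - OLLAMA_ENDPOINT=http://ollama-service:11434
    networks:
      - devika-subnetwork

networks:
  devika-subnetwork:
```

The key point is that Compose makes each service reachable by its service name on a shared network, so "ollama-service" resolves from inside the devika container while "localhost" would not.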
-
I need help with this configuration too.