[FEATURE] provide the option to use a self hosted AI (e.g. ollama) #3037
Comments
Yes, that would be the next logical step. 😉 As long as they are OpenAI-API compatible...
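"OpenAI-API compatible" here means the backend accepts the standard chat-completions request shape, so one client can talk to any of them. A minimal sketch of that request, assuming a hypothetical self-hosted base URL and model name (both placeholders, not from the thread):

```python
# Sketch of the request an OpenAI-compatible backend is expected to accept.
# The base URL and model name below are hypothetical examples.

def build_chat_request(base_url: str, model: str, prompt: str) -> tuple[str, dict]:
    """Build the endpoint URL and JSON body for a chat-completions call."""
    url = base_url.rstrip("/") + "/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, payload

url, body = build_chat_request("http://localhost:8080/v1", "llama3", "Hello!")
print(url)   # http://localhost:8080/v1/chat/completions
```

Any server that understands this shape can be swapped in by changing only the base URL.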
Which self-hosted AI did you have in mind?
Signed-off-by: Patrizio Bekerle <patrizio@bekerle.com>
Ok, thank you. I'm currently struggling to get this into the script engine. I may need to rewrite some parts of the AI service handling...
…penAiBackendsHook
…mentation, example script additions, changelog entry and settings information
I had to rewrite and rearrange a lot of stuff to make this happen. 😅

24.6.2
There is now a new release. Could you please test it and report whether it works for you?
I've already found some small issues (like you need to set an |
Hm, I only got an empty result back from Ollama over the API. 🤔 What is your experience?
I added a script |
The requests are the same for the OpenAI API and Ollama, but what Ollama returns at https://github.com/ollama/ollama?tab=readme-ov-file#chat-with-a-model is not at all what is returned at https://platform.openai.com/docs/api-reference/chat/create!
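The mismatch is that Ollama's native `/api/chat` puts the assistant message at the top level, while the OpenAI format nests it under `choices[0].message`. A sketch of both shapes (abbreviated to the relevant fields) and a small normalizer that handles either:

```python
# Both response shapes, abbreviated to the fields that matter here.
# A client that wants to support both APIs must look in different places.

ollama_native = {                      # Ollama's native /api/chat format
    "model": "llama3",
    "message": {"role": "assistant", "content": "Hello!"},
    "done": True,
}

openai_style = {                       # OpenAI chat/completions format
    "choices": [
        {"message": {"role": "assistant", "content": "Hello!"}}
    ],
}

def extract_answer(resp: dict) -> str:
    """Return the assistant text from either response shape."""
    if "choices" in resp:                          # OpenAI-style
        return resp["choices"][0]["message"]["content"]
    return resp["message"]["content"]              # Ollama native

print(extract_answer(ollama_native) == extract_answer(openai_style))
```

This is only an illustration of the structural difference; real responses carry many more fields (usage, timings, etc.).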
Hey, I'd like to thank you for the amazing program and the continuous improvements. I was curious to try this new feature with llama-cpp, since I don't use ollama, and it seems to be working:
With an API base URL: Is there anything specific you'd like me to test?
It seems that in Ollama, you also need to append the |
Nice! Thank you very much for the hint. I'll try that then!
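For context: Ollama serves its OpenAI-compatible API under the `/v1` path prefix (its native routes live under `/api`), so a configured base URL typically needs that suffix. A small helper sketching the adjustment, with Ollama's default port:

```python
def openai_base_url(host: str) -> str:
    """Append the /v1 prefix Ollama uses for its OpenAI-compatible API."""
    host = host.rstrip("/")
    return host if host.endswith("/v1") else host + "/v1"

print(openai_base_url("http://localhost:11434"))   # http://localhost:11434/v1
```

An already-suffixed URL passes through unchanged, so the helper is safe to apply to user-entered settings.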
Thank you! 😉 This feature took longer than anticipated (and will take some more tweaking)!
Works perfectly, thank you very much!
@eljamm, if you want to contribute a script for llama-cpp to https://github.com/qownnotes/scripts it would be great. 😉
@pbek Glad I could help, and I don't mind writing the script for llama-cpp, but in my opinion it would be better if the script were written for any local AI server in general, since they more than likely support the OpenAI endpoint and it would be redundant to rewrite the script for each one. What do you think?
Wrote a PR in qownnotes/scripts#237 |
Please see qownnotes/scripts#237 (comment). One script supporting only one endpoint will not do if you have multiple backends...
Plus, you need to find out the correct endpoints yourself...
Well, as far as I know, all OpenAI-compatible backends support the
In this regard, I thought a general script would be better, as it's like a template which users can use to make their own scripts. This means that we only have to maintain one script, and users won't think that only the ollama or llama-cpp backends are supported.
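The "one generic script" idea amounts to a backend registry where entries differ only in base URL and model list, while the protocol stays the same. A sketch under that assumption; all names, ports, and model lists below are hypothetical examples, not anything from the thread:

```python
# Hypothetical backend registry for a single generic script; every entry
# is assumed to speak the same OpenAI-style chat/completions protocol.

BACKENDS = {
    "ollama":    {"baseUrl": "http://localhost:11434/v1", "models": ["llama3", "mistral"]},
    "llama-cpp": {"baseUrl": "http://localhost:8080/v1",  "models": ["default"]},
}

def endpoint(backend: str) -> str:
    """Resolve the chat endpoint for a named backend."""
    return BACKENDS[backend]["baseUrl"] + "/chat/completions"

print(endpoint("llama-cpp"))   # http://localhost:8080/v1/chat/completions
```

Adding a new self-hosted server would then be one registry entry rather than a whole new script.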
Is something like qownnotes/scripts#238 what you had in mind?
It is already working with ollama; I could test it now. Thank you for your effort in including this.
But like I said, it is working for me, and again, thank you for adding this. Those suggestions are just usability improvements that I wanted to share. It did confuse me for a second why I had to add another script for my own AI endpoint, although there is a dedicated AI settings page.
I didn't want to go through the hassle of implementing a UI for adding multiple backends. Doing that in scripts turned out to be a hassle too, but at least it's more flexible now. There can be scripts preconfigured for certain backends, so you don't need to research the endpoint URL.
In the 3rd-party OpenAI UIs I worked with, you were supposed to configure the models too, because maybe you don't want to pay for an expensive one. Is there even an OpenAI API to get the models? But since the custom backends have scripts now, those network requests to fetch the models (even to non-OpenAI APIs) could be done in those scripts! 🥳🎉
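On the question of fetching models: the OpenAI API (and most compatible servers) does expose a model listing via `GET /v1/models`, returning the model ids under a `data` array. A sketch parsing that shape, with a hypothetical abbreviated response:

```python
# Abbreviated shape of a GET /v1/models response; OpenAI and most
# compatible servers return the available model ids under "data".

models_response = {
    "object": "list",
    "data": [
        {"id": "llama3", "object": "model"},
        {"id": "mistral", "object": "model"},
    ],
}

def model_ids(resp: dict) -> list[str]:
    """Extract the model ids from a /v1/models listing."""
    return [m["id"] for m in resp.get("data", [])]

print(model_ids(models_response))   # ['llama3', 'mistral']
```

A per-backend script could issue this request against its own base URL and populate the model dropdown from the result.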
24.6.3
There is now a new release. Could you please test it and report whether it works for you?
Tested again and everything is working as expected. |
Thanks for testing! |
@speedyconzales, you might like this: qownnotes/scripts@93735c6 ( |
I am happy to see support for AI integration, but I would prefer an option for a self-hosted AI.