Feature Request: Remote Host Configuration for Ollama Integration in Lexido #49
Labels: enhancement (New feature or request)

Comments
Interesting, I will look into it; it is probably something that I may be able to do if Ollama allows for something like this. I will keep you posted.

On Wed, Apr 10, 2024, 15:49 Observant4678 wrote:

*Is your feature request related to a problem? Please describe.*
If I understand correctly, the current setup for Lexido and Ollama requires running both applications on the same server. However, I have mini PCs as servers on my network and a gaming machine with a powerful graphics card. When trying to run Ollama on the mini PC servers, I encounter very slow performance because of the hardware, but the gaming machine runs LLMs just fine.

*Describe the solution you'd like*
I would like to request a new feature that allows users to set a remote host where Ollama is running, instead of having to run it on the same server as Lexido. This could be implemented, for example, through a new flag similar to "--setModel", such as "--setOllama" (to set Ollama's IP and port).

*Describe alternatives you've considered*
I tried relatively small and simple models (such as qwen:0.5b), but even those models are slow and not capable of doing the required task.

*Additional context*
I think that providing a way to set a remote host for Ollama would greatly improve Lexido's performance, and since Ollama can be run as a Docker container, it would also help set up a more secure network.
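The request above boils down to where the client sends its HTTP calls: Ollama exposes a REST API (by default on port 11434, with endpoints such as /api/generate), so reaching an instance on another machine only requires a configurable base URL. Below is a minimal Go sketch of that idea, not Lexido's actual implementation; the host address, model, and prompt are placeholders, and the proposed "--setOllama" flag is only the requester's suggestion.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical remote host, e.g. the gaming machine on the LAN.
	ollamaHost := "http://192.168.1.50:11434"

	// Request body for Ollama's /api/generate endpoint.
	body, _ := json.Marshal(map[string]any{
		"model":  "qwen:0.5b",
		"prompt": "list files modified in the last 24 hours",
		"stream": false,
	})

	resp, err := http.Post(ollamaHost+"/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The response is JSON containing the generated text.
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```

On the remote machine, Ollama also needs to listen on a network-reachable address (its OLLAMA_HOST setting) or be run from its Docker image with port 11434 published; the latter is presumably what the "more secure network" point above refers to.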
I want to update you that I am working on this and that soon Lexido will support all REST API LLMs, including remote local LLMs or even ChatGPT, Claude, and more!
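For readers wondering what "REST API LLMs" means in practice: hosted providers follow the same request/response pattern over HTTP, just with a different base URL, request schema, and an API key. The sketch below uses the standard OpenAI chat completions endpoint purely as an illustration; it says nothing about how Lexido itself wires these providers up, and the model name is a placeholder.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Standard OpenAI-style chat completions request.
	body, _ := json.Marshal(map[string]any{
		"model": "gpt-3.5-turbo",
		"messages": []map[string]string{
			{"role": "user", "content": "suggest a command to free up disk space"},
		},
	})

	req, _ := http.NewRequest("POST", "https://api.openai.com/v1/chat/completions", bytes.NewReader(body))
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The response is JSON containing the model's reply.
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```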
Amazing. Cannot wait to test it out. Thank you!!
Just posted version 1.4! Be sure to read the README and the section on remote LLMs. If you have any questions, feel free to ask. Good luck!
Re-open this if there are issues.