LiteLLM connection issue when deployed on another server #648

Open
kikohs opened this issue Feb 21, 2024 · 4 comments


kikohs commented Feb 21, 2024

I have successfully deployed an ollama model via LiteLLM on another machine.

When using guidance, I can't find a way to set the api_base to anything other than localhost; see the screenshot:

[screenshot of the guidance call]

The correct server address should be 192.168.1.42:11434 instead of localhost.
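
For context on why this fails: when no api_base is supplied, LiteLLM's ollama provider targets http://localhost:11434, so any wrapper that does not forward api_base will try to connect locally. A minimal illustration (the model name is an example, not from the issue):

```python
import litellm

messages = [{"role": "user", "content": "Hello"}]

# Without api_base, LiteLLM's ollama provider calls http://localhost:11434,
# which fails when ollama actually runs on another machine:
# litellm.completion(model="ollama/mistral", messages=messages)

# Pointing api_base at the remote host is what needs to happen instead.
resp = litellm.completion(
    model="ollama/mistral",
    messages=messages,
    api_base="http://192.168.1.42:11434",
)
print(resp.choices[0].message.content)
```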

And the error:
[error screenshot]

Thanks for your help


kikohs commented Feb 21, 2024

The solution is simply to capture api_base and pass it along to litellm in _lite_llm.py:

[screenshot of the patch]
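
Since the patch itself is only visible in the screenshot, here is a minimal sketch of the idea, assuming a wrapper along the lines of guidance's _lite_llm.py: accept api_base in the constructor and forward it to litellm.completion. The class and method names are illustrative, not the actual guidance code.

```python
import litellm


class LiteLLMEngineSketch:
    """Illustrative stand-in for the model class in guidance's _lite_llm.py."""

    def __init__(self, model, api_base=None, **litellm_kwargs):
        self.model = model
        self.api_base = api_base            # e.g. "http://192.168.1.42:11434"
        self.litellm_kwargs = litellm_kwargs

    def complete(self, messages, **call_kwargs):
        # The fix: forward api_base so LiteLLM talks to the remote ollama
        # server instead of defaulting to localhost.
        return litellm.completion(
            model=self.model,
            messages=messages,
            api_base=self.api_base,
            **{**self.litellm_kwargs, **call_kwargs},
        )
```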

@linkedlist771

Hi, it looks like you are trying to use guidance remotely. I also want to do that, but I found that the guidance Python package has not been updated yet. I wonder how you set up the server and the client described in this issue.


kikohs commented Feb 22, 2024

I ran ollama on a remote machine in server mode, then simply used the LiteLLM package with the modifications I posted, pointing api_base at the remote API.
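
For concreteness, a hedged end-to-end sketch of that setup: ollama runs in server mode on the remote machine and listens on port 11434 (reachable here as 192.168.1.42), and the client constructs the guidance LiteLLM model with api_base pointing at it. The import path, class name, and gen call pattern follow guidance's 0.1-era examples and assume the api_base patch above is applied; the model name is an example.

```python
from guidance import models, gen

# Assumed constructor keywords; the exact guidance API may differ by version.
remote = models.LiteLLMCompletion(
    "ollama/mistral",                       # example model served by ollama
    api_base="http://192.168.1.42:11434",   # remote ollama server from the issue
)

result = remote + "Q: What is the capital of France?\nA: " + gen("answer", max_tokens=16)
print(result["answer"])
```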

@linkedlist771

Your answer did solve my problem, but the guidance documentation is poorly written; I couldn't find any mention of using LiteLLM remotely before your explanation.
