[Bug]: Ollama: base_url is ignored in Windows #13493
Comments
To address the issue where the base_url parameter of the Ollama class appears to be ignored, pass the URL explicitly when constructing the instance:

```python
from llama_index.llms.ollama import Ollama

# Replace "http://your_custom_url:port" with your actual URL and port
ollama_instance = Ollama(base_url="http://your_custom_url:port", model="your_model_name", request_timeout=60.0)
```

Make sure to replace the placeholder URL, port, and model name with your real values. If the issue persists after this adjustment, verify that the Ollama server is actually reachable at that address from the machine running the script. Following these steps should help you resolve the issue with the base_url parameter.
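For a quick check that the parameter was picked up at all, you can inspect the field right after construction (a minimal sketch; base_url is a regular field on the llama-index Ollama class, and the URL below is a placeholder):

```python
from llama_index.llms.ollama import Ollama

llm = Ollama(base_url="http://192.168.1.50:11434", model="llama3", request_timeout=60.0)

# If the parameter is stored correctly, this prints the custom URL,
# not the default http://localhost:11434.
print(llm.base_url)
```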
I don't think Windows would cause this parameter to be ignored. Rather, this seems like some weird interaction with how ports work on Windows. I'd suggest trying http://127.0.0.1 on Windows?
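Concretely, that suggestion would look like this (a sketch; 11434 is Ollama's default port, and the idea is that some Windows setups resolve "localhost" to the IPv6 address ::1 first, which an IPv4-only listener won't answer):

```python
from llama_index.llms.ollama import Ollama

# Use the explicit IPv4 loopback address instead of "localhost"
# to sidestep any localhost -> ::1 resolution quirk on Windows.
llm = Ollama(base_url="http://127.0.0.1:11434", model="llama3", request_timeout=60.0)
```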
Good idea, but that doesn't work either. Even if my URL has nothing to do with localhost, like 'http://badurl.com:80', the host and port are both ignored. I always get the same exceptions, still pointing at localhost:11434.
Not to muddy the waters, but I may have some version of this same problem. The application runs fine on my local machine (macOS), but when deployed in a Linux Docker container, no matter how I set base_url for Ollama, I get the same error.

Edit: A bit more troubleshooting. I installed my application on a clean Debian VM and I have the same issue, so it is not Docker related. I confirmed that I can connect using curl from the VM and successfully query the Ollama server on the remote host. Somehow, the base_url isn't "sticking". What is very confusing to me is that it seems to work fine on my development machine.
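That curl sanity check can be reproduced from Python as well, which helps isolate the problem to llama-index rather than the network (a sketch using only the standard library; /api/tags is Ollama's model-listing endpoint, and http://remote-host:11434 is a placeholder):

```python
import json
import urllib.request

# Query the Ollama server directly, bypassing llama-index entirely.
# If this succeeds but the Ollama LLM class still fails, the problem
# is in how base_url is propagated, not in network reachability.
url = "http://remote-host:11434/api/tags"
with urllib.request.urlopen(url, timeout=10) as resp:
    payload = json.load(resp)

print([m["name"] for m in payload.get("models", [])])
```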
Bug Description

The base_url parameter of the Ollama class is ignored on Windows. My script works on Linux but not on Windows. As a result of the base_url being ignored, the llama-index library tries to talk to localhost at port 11434. This is why the following exception is thrown.

Version
0.10.36
Steps to Reproduce

1. ollama pull llama3
2. pip install -r requirements.txt (requirements.txt is below)
3. python bug.py (bug.py is below; a sketch follows)

requirements.txt

bug.py
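The contents of requirements.txt and bug.py are not shown above; a minimal bug.py along these lines would exercise the reported behavior (a sketch, assuming llama-index 0.10.x with the Ollama integration installed; the URL and model name are placeholders):

```python
# bug.py -- minimal repro for base_url being ignored on Windows
from llama_index.llms.ollama import Ollama

llm = Ollama(
    base_url="http://192.168.1.50:11434",  # placeholder: non-default Ollama host
    model="llama3",
    request_timeout=60.0,
)

# On the affected setup, this still tries http://localhost:11434
# and raises a connection error instead of hitting the host above.
print(llm.complete("Say hello"))
```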
Relevant Logs/Tracebacks