Remember my remote Ollama server:port in .env #615
Conversation
…moteOllamaServer (what is passed by the user upon execution) first, and only then fall back to os.environ['REMOTE_OLLAMA_SERVER'] (what the user has stored in .env).
installer/client/cli/fabric.py
Outdated
@@ -177,32 +177,32 @@ def main():
         else:
             text = standalone.get_cli_input()
         if args.stream and not args.context:
-            if args.remoteOllamaServer:
-                standalone.streamMessage(text, host=args.remoteOllamaServer)
+            if os.environ["REMOTE_OLLAMA_SERVER"] or args.remoteOllamaServer:
Calling os.environ["REMOTE_OLLAMA_SERVER"] will result in a KeyError: 'REMOTE_OLLAMA_SERVER' when a user doesn't have the REMOTE_OLLAMA_SERVER key in their .env file. You should be using something like os.environ.get('REMOTE_OLLAMA_SERVER', None) instead. (This comment applies to all similar calls you added.)
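The reviewer's point can be demonstrated in isolation. This is a minimal sketch, assuming the variable is absent from the environment, contrasting direct indexing with `os.environ.get`:

```python
import os

# Ensure the key is absent to reproduce the reviewer's scenario.
os.environ.pop("REMOTE_OLLAMA_SERVER", None)

# Direct indexing raises KeyError when the variable is not set:
try:
    host = os.environ["REMOTE_OLLAMA_SERVER"]
except KeyError:
    host = None  # only safe because we caught the exception

# .get() returns the fallback instead of raising:
host = os.environ.get("REMOTE_OLLAMA_SERVER", None)
print(host)  # None
```

`os.environ.get` makes the "variable may or may not be in .env" case a normal code path rather than a crash.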
Agree, thanks! I replaced the os.environ["REMOTE_OLLAMA_SERVER"] calls with os.environ.get('REMOTE_OLLAMA_SERVER', None) as you suggested.
Additionally, I applied the same change to the PraisonAI agents section, replacing the hardcoded http://localhost:11434/v1 so that agents call the remote Ollama server when needed.
…llama server usage possible.
We're getting ready to migrate to Golang; please resubmit then if this hasn't been addressed. Thank you!
Could we have this merged for those of us who have working Python installs with a private Llama server? It really does make a difference...
RATIONALE
To streamline local LLM use on servers, this is a tiny update that simplifies calling remote Ollama models from terminal machines. Currently, you must pass the --remoteOllamaServer flag on every single fabric call; for me personally that is about 40 extra characters each time (the --remoteOllamaServer flag plus IP and port).
To further simplify this process, I propose adding support for the REMOTE_OLLAMA_SERVER environment variable, which enables users to configure their remote Ollama server settings globally. This enhancement fits well within the general quick setup paradigm and caters to a broader range of users who may not require this feature in every situation.
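With the proposed variable, the per-call flag could be replaced by a one-time entry in the user's .env file. A sketch of such an entry (the host and port below are made-up placeholders, not values from this PR):

```shell
# Hypothetical .env entry; replace host and port with your own server.
REMOTE_OLLAMA_SERVER=http://192.168.1.50:11434
```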
CHANGES
- Added support for the REMOTE_OLLAMA_SERVER environment variable.
- Updated streamMessage to use REMOTE_OLLAMA_SERVER if available.
- Updated the Standalone class to prioritize REMOTE_OLLAMA_SERVER.
- Updated the ollama.Client initialization accordingly.
- Preserved the existing remoteOllamaServer argument.
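The changes above boil down to a three-level precedence when picking the Ollama host. A minimal sketch, assuming the ordering described in the review thread (explicit CLI flag first, then the environment variable, then the local default); `resolve_ollama_host` is a hypothetical helper, not a function from fabric:

```python
import os

def resolve_ollama_host(cli_host=None, default="http://localhost:11434"):
    """Pick the Ollama host: explicit CLI flag wins, then the
    REMOTE_OLLAMA_SERVER environment variable, then the local default.
    Hypothetical helper mirroring the precedence discussed in this PR."""
    return cli_host or os.environ.get("REMOTE_OLLAMA_SERVER") or default

# No flag, no env var -> local default
os.environ.pop("REMOTE_OLLAMA_SERVER", None)
print(resolve_ollama_host())  # http://localhost:11434

# Env var set -> used as the global setting
os.environ["REMOTE_OLLAMA_SERVER"] = "http://10.0.0.5:11434"
print(resolve_ollama_host())  # http://10.0.0.5:11434

# An explicit flag still overrides the env var
print(resolve_ollama_host("http://10.0.0.9:11434"))  # http://10.0.0.9:11434
```

The resolved host would then be passed as the `host` argument when constructing the `ollama.Client`, which is what the streaming path in fabric.py does with `args.remoteOllamaServer` today.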