Simple Example #179
Conversation
examples/ps/main.py
Outdated
```python
from ollama import ps

response = ps()
print(response)
```
This example should be a bit more involved. To really show `ps`, you should first load a model by calling `chat` or `generate`.
examples/ps/main.py
Outdated
```python
from ollama import ps, pull, chat

pull('mistral')
chat('mistral', messages=[{'role': 'user', 'content': 'Pick a number'}])
```
It might be useful to print the response content as well.
examples/ps/main.py
Outdated
```python
from ollama import ps, pull, chat

pull('mistral')
```
nit: `pull` blocks without feedback since it doesn't stream, which leaves the user wondering what's going on.
Could show CPU vs. GPU proportion calculations if helpful, but just printing the response for now.
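A sketch of the CPU vs. GPU split mentioned above, assuming the `ps` response exposes per-model `size` (total bytes) and `size_vram` (bytes resident in GPU memory), as the REST API's `/api/ps` fields do:

```python
def gpu_cpu_split(model):
    # Fraction of the model in GPU memory vs. main memory.
    # Assumed keys: 'size' (total bytes), 'size_vram' (bytes on GPU).
    size = model['size']
    if size == 0:
        return 0.0, 0.0
    gpu = model['size_vram'] / size
    return gpu, 1.0 - gpu

if __name__ == '__main__':
    from ollama import ps
    for model in ps().get('models', []):
        gpu, cpu = gpu_cpu_split(model)
        print(f"{model['name']}: {gpu:.0%} GPU / {cpu:.0%} CPU")
```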