[Ollama] Add RAG examples and a small README file #562
Conversation
Once models are downloaded you can run them with

```bash
ollama run <model-name>
```

for example

```bash
ollama run llama3.2
```
Running models is not mandatory, but serving them is, using the `ollama serve` command. By the way, an embedding model cannot be run this way with Ollama: its output is not human readable.
True, so to run the examples it would be:

```bash
ollama pull llama3.2
ollama pull nomic-embed-text
ollama serve
```
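Since an embedding model cannot be used via `ollama run`, the pulled `nomic-embed-text` model is reached through the HTTP API that `ollama serve` exposes. A minimal Python sketch, assuming the default local endpoint `http://localhost:11434/api/embeddings`; the model name and prompt are illustrative:

```python
import json
import urllib.request

# Default local Ollama endpoint (assumption: standard port and path).
OLLAMA_URL = "http://localhost:11434/api/embeddings"

def build_embed_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for an Ollama embedding call."""
    return json.dumps({"model": model, "prompt": prompt}).encode("utf-8")

def embed(model: str, prompt: str) -> list[float]:
    """POST the prompt to a locally served Ollama and return the embedding vector."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_embed_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The response carries a long vector of floats, not human-readable text.
        return json.loads(resp.read())["embedding"]
```

Calling `embed("nomic-embed-text", "Hello, RAG!")` against a running `ollama serve` returns a long list of floats, which is why the reviewer notes the raw output is not human readable.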
Force-pushed from 457ab21 to ea8cd38
Thank you @damijanc.
…and wrap up docs (chr-hertel)

This PR was merged into the main branch.

Discussion
----------

[Examples] Consistency about Ollama model config via env, and wrap up docs

| Q | A |
| ------------- | --- |
| Bug fix? | no |
| New feature? | no |
| Docs? | yes |
| Issues | |
| License | MIT |

Following #562 and #563

Commits
-------

5292d15 Consistency about Ollama model config via env, and wrap up docs.
Ollama RAG examples and a README file.