How to setup with Ollama
Additional setup is required to use Ollama with Obsidian BMO Chatbot.
These instructions are specific to macOS.
For additional resources, check out: How to Handle CORS Settings in OLLAMA: A Comprehensive Guide by Durgaprasad Budhwani on Medium.
- Go to the Ollama website.
- Click on 'Download' and select your operating system (Ollama Download Page).
- After downloading and installing the application, quit Ollama so it is no longer running in the background.
- Open your terminal and run:

```
OLLAMA_ORIGINS=app://obsidian.md* ollama serve
```

- Go to 'BMO Chatbot Settings > Ollama Connection > OLLAMA REST API URL' and insert Ollama's default URL: `http://localhost:11434` (a quick check that the server is reachable follows below).
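To confirm the server is up before switching back to Obsidian, you can query the local REST API from another terminal. A minimal check, assuming the default port; `/api/tags` is the Ollama endpoint that lists locally installed models:

```
# Should return a JSON list of locally installed models
curl http://localhost:11434/api/tags
```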
Another option is to run the server on a different port.

- Open the terminal and run:

```
OLLAMA_ORIGINS=* OLLAMA_HOST=127.0.0.1:11435 ollama serve
```

or

```
OLLAMA_ORIGINS=app://obsidian.md* OLLAMA_HOST=127.0.0.1:11435 ollama serve
```

This runs a separate server that bypasses the CORS policy within Obsidian.

- Go to 'BMO Chatbot Settings > Ollama Connection > OLLAMA REST API URL' and insert the alternate server's URL: `http://localhost:11435` (see the quick check after this list).
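To verify the alternate server is listening on the new port, you can hit its root endpoint; Ollama normally answers a plain GET on the root with a short status message:

```
# Should print "Ollama is running" if the alternate server is up
curl http://127.0.0.1:11435
```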
You can also set the environment variable persistently for your login session with launchctl.

- Open your terminal and run:

```
launchctl setenv OLLAMA_ORIGINS "app://obsidian.md*"
```

(To unset it later, run `launchctl unsetenv OLLAMA_ORIGINS`.)

- Restart the Ollama application.
- Go to 'BMO Chatbot Settings > Ollama Connection > OLLAMA REST API URL' and insert Ollama's default URL: `http://localhost:11434` (a check for the variable follows below).
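To confirm the variable is set in your session, `launchctl getenv` reads it back:

```
# Prints the value if set; prints nothing otherwise
launchctl getenv OLLAMA_ORIGINS
```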
- Go to https://ollama.com/library to browse available models.
- Install a model (e.g. `ollama run llama3`).
- Reload the model list (located in 'General > Model').
- In the 'Model' dropdown, select any model that you downloaded with Ollama (you can list them from the terminal as shown below).
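To see which models are already downloaded before selecting one in the dropdown, list them from the terminal:

```
# Lists downloaded models with name, size, and last-modified time
ollama list
```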