Welcome to Ollama, your friendly neighborhood local LLM service! Say goodbye to cloud dependency and hello to AI-driven tasks right at home. This repository is your trusty guide for deploying Ollama and its WebUI on Synology NAS and macOS using Docker Compose. Let's dive in!
This repository is inspired by the genius of Lixandru Marius Bogdan. We've crafted a one-stop-shop for deploying Ollama, making your setup as easy as pie (or should we say, as easy as a well-trained AI?). Whether you're on Synology NAS (via DSM Container Manager or Portainer) or macOS, we've got you covered!
Heads up! While Ollama plays nice in Docker on Synology NAS and macOS, performance can be a bit sluggish. Synology NAS might feel like it's running on a treadmill, and macOS might give your CPU a workout without tapping into that sweet GPU power. For the best experience on macOS, install Ollama natively and use this Docker Compose setup just for the WebUI. Check out Tips for Running Ollama Natively on macOS for the scoop on setting up a native Ollama service. And if you want to connect the WebUI to your native Ollama, see Connecting WebUI to Native Ollama.
- 🧠 Local AI Model Serving - Run LLM models locally, no cloud strings attached.
- 🌐 Web UI for Easy Interaction - Chat with your models through the Ollama WebUI.
- 🖥️ Synology & macOS Compatible - Seamless operation across platforms.
- ⚡ Lightweight & Efficient - Minimal resource hogging with optimized performance.
For setup instructions, hop on over to:
- 📖 Ollama Setup Guide
Let's talk Docker IPAM (IP Address Management). It's the superhero of container networking! To keep your containers chatting smoothly, you might need to tweak your firewall settings.
Time to get your hands dirty! Configure the following in your `.env` file:
```bash
# Define the subnet range for the network
COMPOSE_NETWORK_SUBNET="${COMPOSE_NETWORK_SUBNET:-172.24.0.0/16}"
# Define the IP range for containers
COMPOSE_NETWORK_IP_RANGE="${COMPOSE_NETWORK_IP_RANGE:-172.24.5.0/24}"
# Define the network gateway
COMPOSE_NETWORK_GATEWAY="${COMPOSE_NETWORK_GATEWAY:-172.24.5.254}"
```
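These variables typically feed the network definition in `docker-compose.yml`. As a rough sketch of how they could be wired into a Compose IPAM block (the network name `ollama_net` is an assumption for illustration, not necessarily what this repository uses):

```yaml
# Sketch only: how the .env values above could map onto a Compose IPAM block.
# The network name "ollama_net" is illustrative; check docker-compose.yml for the real one.
networks:
  ollama_net:
    driver: bridge
    ipam:
      config:
        - subnet: ${COMPOSE_NETWORK_SUBNET}       # e.g. 172.24.0.0/16
          ip_range: ${COMPOSE_NETWORK_IP_RANGE}   # e.g. 172.24.5.0/24
          gateway: ${COMPOSE_NETWORK_GATEWAY}     # e.g. 172.24.5.254
```

The subnet is what matters for the firewall rule below: Synology only needs to allow traffic from `172.24.0.0/16` (or whatever subnet you chose).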
Ready to unleash the power of communication for your Docker network? Update your Synology Firewall settings:
- Open Control Panel → Security (under Connectivity).
- Navigate to the Firewall tab → Click Edit Rules.
- Click Create to add a new rule:
  - Ports: Select **All**
  - Source IP: Select **Specific IP**
    - Click **Select** → Choose **Subnet**
    - Enter `172.24.0.0` for IP Address and `255.255.0.0` for Subnet mask/Prefix length
  - Action: Select **Allow**
- Click OK to apply the changes.
Now your containers can chat freely! 🎉
For more details, check out the Docker Compose IPAM documentation.
Let's get Ollama up and running! This guide is your roadmap to deploying Ollama with Docker Compose. You can do it on:
- 🖥️ Synology NAS (DSM Container Manager or Portainer)
- 💻 macOS (Docker Desktop)
We've included a Makefile to make your life easier. Here's what you can do:
| Command | Description |
|---|---|
| `make up` | Fire up the Ollama service stack. |
| `make down` | Stop and remove containers and volumes. |
| `make logs` | Peek at real-time logs of the running containers. |
| `make open` | Launch the Ollama WebUI in your favorite web browser. |
| `make run` | Start the stack, open the WebUI, and show logs. |
| `make clean` | Just like `make down`, but with flair. |
| `make help` | Get a rundown of available commands. |
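If you'd rather not use `make`, the targets above are thin wrappers around Docker Compose. Rough shell equivalents (an approximation, assuming the stack is driven by `docker-compose` and the WebUI port from this guide — the actual Makefile may differ):

```bash
docker-compose up -d            # ~ make up
docker-compose logs -f          # ~ make logs
open http://localhost:8271      # ~ make open (macOS)
docker-compose down -v          # ~ make down / make clean
```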
We've got a sample `example.env` file for you! Copy it and tweak it to your liking:

```bash
cp example.env .env
vim .env  # Edit as needed
```
Ready, set, deploy! Run this command to bring up the service stack:
```bash
make up
```
Once deployed, open the Ollama WebUI with:
```bash
make open
```
Or go the manual route:
- 🌐 Ollama WebUI: `http://localhost:8271` (on macOS)
- 📡 Ollama WebUI: `http://<synology-ip>:8271` (on Synology NAS)
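If the page doesn't load, a quick reachability check from the terminal can narrow things down. The second command assumes Ollama's API is reachable on its default port `11434` (for example, when running Ollama natively as described later):

```bash
# Is the WebUI answering on port 8271?
curl -I http://localhost:8271

# Is the Ollama API up? This lists the locally available models.
curl http://localhost:11434/api/tags
```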
To see real-time logs, just run:
```bash
make logs
```
To stop and remove the stack, do this:
```bash
make down
```
- Double-check the `OLLAMA_BASE_URL` in your `.env` file.
- Is the Ollama container running? Check with:

  ```bash
  docker ps | grep ollama
  ```

- Need a restart? Just run:

  ```bash
  docker restart ollama
  ```

- Verify the NAS IP and port (`8271`) are correct.
- Check running containers:

  ```bash
  docker ps
  ```

- Restart the stack like a pro:

  ```bash
  make down
  make up
  ```
The `docker-compose.yml` file is where the magic happens. Tweak it as needed!
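For orientation, the stack boils down to two services along the lines of the sketch below. Image names, container ports, and volume names here are assumptions for illustration — the repository's `docker-compose.yml` is the source of truth:

```yaml
# Illustrative outline only -- not a verbatim copy of this repository's compose file.
services:
  ollama:
    image: ollama/ollama              # assumed image
    volumes:
      - ollama_data:/root/.ollama     # assumed volume for downloaded models
    ports:
      - "11434:11434"                 # Ollama API (default port)

  ollama-webui:
    image: ghcr.io/open-webui/open-webui:main   # assumed WebUI image
    depends_on:
      - ollama
    environment:
      - OLLAMA_BASE_URL=${OLLAMA_BASE_URL:-http://ollama:11434}
    ports:
      - "8271:8080"                   # WebUI on host port 8271; 8080 inside is an assumption

volumes:
  ollama_data:
```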
We've included a `plist` file to manage the native Ollama service as a macOS Launch Agent.

Launch Agents are macOS-specific services managed by `launchd`. The provided `plist` file configures macOS to run Ollama as a background service on user login or on demand.
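For reference, a Launch Agent of this kind usually looks something like the sketch below. The binary path is an assumption (Homebrew on Apple Silicon); adjust it if your `ollama` lives elsewhere, and treat the repository's `config/plist/com.ollama.serve.plist` as the authoritative version:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Label launchd uses to identify the job -->
    <key>Label</key>
    <string>com.ollama.serve</string>

    <!-- Run "ollama serve"; the binary path is an assumption -->
    <key>ProgramArguments</key>
    <array>
        <string>/opt/homebrew/bin/ollama</string>
        <string>serve</string>
    </array>

    <!-- Start at login -->
    <key>RunAtLoad</key>
    <true/>

    <!-- Log locations referenced later in this guide -->
    <key>StandardOutPath</key>
    <string>/tmp/ollama.log</string>
    <key>StandardErrorPath</key>
    <string>/tmp/ollama.err</string>
</dict>
</plist>
```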
Copy the plist file to your LaunchAgents directory:
```bash
cp config/plist/com.ollama.serve.plist ~/Library/LaunchAgents/
```
Time to control your Ollama service:
```bash
# Start the ollama service
launchctl load ~/Library/LaunchAgents/com.ollama.serve.plist

# Stop the ollama service
launchctl unload ~/Library/LaunchAgents/com.ollama.serve.plist
```
Want to make your life even easier? Add these to your `.bashrc`, `.zshrc`, or preferred aliases file:

```bash
alias ollamaStart="launchctl load ~/Library/LaunchAgents/com.ollama.serve.plist"
alias ollamaStop="launchctl unload ~/Library/LaunchAgents/com.ollama.serve.plist"
```
Keep an eye on the action with:
```bash
tail -F /tmp/ollama.log /tmp/ollama.err
```
Want to connect the Ollama WebUI to a natively running Ollama service on macOS? Here's how:
- Open `docker-compose.yml`.
- Comment out the entire `ollama` service block (see the sketch below).
- Comment out the `depends_on` section under the `ollama-webui` service block.
- In your `.env` file, set:

  ```bash
  OLLAMA_HOSTNAME="${OLLAMA_HOSTNAME:-localhost}"
  OLLAMA_WEBUI_BASE_URL="${OLLAMA_WEBUI_BASE_URL:-http://ollama:11434}"
  ```
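In practice, the commenting-out in `docker-compose.yml` looks something like this (abridged — `# ...` stands in for the lines already present in the file):

```yaml
services:
  # ollama:                 # whole service commented out; Ollama runs natively instead
  #   image: ...
  #   ...

  ollama-webui:
    # depends_on:           # no longer needed without the containerized ollama service
    #   - ollama
    # ... the rest of the service stays as-is
```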
Now your WebUI is ready to connect to your native Ollama service!
After configuring your `.env` file and updating `docker-compose.yml`, run:

```bash
docker-compose up -d ollama-webui
```
To check logs for the WebUI service:
```bash
docker-compose logs -f ollama-webui
```
To stop the WebUI service:
```bash
docker-compose stop ollama-webui
```
You've successfully set up Ollama for local AI model deployment! 🎉 Now you're ready to unleash the power of local AI without the cloud's shackles.
For advanced configurations, don't forget to visit the Ollama documentation. Happy AI-ing!