Your lightweight, private, local AI chatbot (no GPU needed)
A lightweight yet powerful web interface for Ollama with markdown rendering, syntax highlighting, and intelligent conversation management.
- 💬 Multiple Conversations - Create, manage, and rename chat sessions
- 📝 Persistent History - SQLite database storage with search functionality
- 🤖 Model Selection - Choose from downloaded Ollama models
- 🚀 Lightweight - Minimal resource usage for local development
- 📖 Full Markdown Rendering - With GitHub-flavored syntax
- 📊 Response Metrics - Time, tokens, and speed tracking
For most users (auto-install):

```bash
curl -fsSL https://github.com/ukkit/chat-o-llama/raw/main/install.sh | bash
```

What happens?

- Installs Python and Ollama if missing (this can take a while)
- Downloads the recommended model (~380MB)
- Installs chat-o-llama

Access at: http://localhost:3000
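If the installer finishes but the page does not load, a quick reachability check confirms whether the UI is listening. This is a minimal sketch, assuming the default port 3000 shown above; `ui_up` is a hypothetical helper, not part of chat-o-llama.

```shell
#!/bin/sh
# ui_up: succeed if the chat-o-llama web UI answers on the given port.
# Assumes the default localhost:3000 from the quick install above.
ui_up() {
  curl -fsS "http://localhost:${1:-3000}" >/dev/null 2>&1
}

if ui_up 3000; then
  echo "chat-o-llama is responding"
else
  echo "UI not reachable yet - give the installer a moment, then retry"
fi
```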
🔧 Advanced Setup (Manual Install)

For detailed manual installation steps, see install.md.

```bash
git clone https://github.com/ukkit/chat-o-llama.git
cd chat-o-llama
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
./chat-manager.sh start
```
📷 App Screenshots

- First screen after installation
- New chat screen with the default model
- Chat bubble with a reply from the model
Quick Fixes:

- Port in use? → `./chat-manager.sh start 8080`
- No models? → `ollama pull tinyllama`
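The port-in-use fix above picks 8080 by hand; a small helper can probe for the first free port automatically instead. A sketch assuming a Linux host with `ss` available; `find_free_port` is a hypothetical helper, not something chat-manager.sh ships.

```shell
#!/bin/sh
# find_free_port: print the first TCP port >= $1 with no local listener.
# Hypothetical helper - not part of chat-manager.sh.
find_free_port() {
  port="${1:-3000}"
  # ss -ltn lists listening TCP sockets; column 4 is the local address:port.
  while ss -ltn 2>/dev/null | awk '{print $4}' | grep -q ":$port\$"; do
    port=$((port + 1))
  done
  echo "$port"
}

# Usage: start on whatever port is actually free, e.g.
# ./chat-manager.sh start "$(find_free_port 3000)"
```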
| Document | Description |
|---|---|
| Installation Guide | Detailed installation steps |
| Features | Detailed features guide |
| Startup & Process Guide | Startup and process management via chat-manager.sh |
| Config Guide | Configuration options |
| Config Comparison | Compare different configs |
| API Guide | API reference |
| Troubleshooting Guide | Common problems and fixes |
| Device | CPU | RAM | OS |
|---|---|---|---|
| Raspberry Pi 4 Model B Rev 1.4 | ARM Cortex-A72 | 8 GB | Raspberry Pi OS |
| Dell OptiPlex 3070 | Intel Core i3-9100T | 8 GB | Debian 12 |
| Nokia Purebook X14 | Intel Core i5-10210U | 16 GB | Windows 11 Home |
Made with ❤️ for the AI community

⭐ Star this project if you find it helpful!
MIT License - see LICENSE file.