A lightweight chat interface built with Python/Flask that connects to a locally running Ollama server. Inspired by Open WebUI, it lets you configure models via `.cfg` files in `models/` and select them from the sidebar at runtime.
- Multi-model support — add or swap models by dropping a `.cfg` file into `models/`; no code changes needed
- Clean chat UI — dark-themed interface with a sidebar model selector
- Fully local — all inference runs through your own Ollama server, nothing leaves your machine
- Tested — full pytest suite covering config loading, the Ollama client, and all API routes
| Layer | Technology |
|---|---|
| Backend | Python / Flask |
| LLM Integration | Ollama (/api/chat) |
| Frontend | HTML5, CSS3, JavaScript |
| Tests | pytest, pytest-cov |
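All model calls go through Ollama's `/api/chat` endpoint. The project's `ollama_client.py` isn't reproduced here, but a minimal non-streaming call looks roughly like the sketch below; the host, model name, and function shape are illustrative, not the module's actual API.

```python
# Rough sketch of a non-streaming call to Ollama's /api/chat endpoint.
# OLLAMA_HOST and the model name are assumptions; adjust to your setup.
import requests

OLLAMA_HOST = "http://localhost:11434"

def chat(model: str, messages: list[dict], timeout: int = 120) -> str:
    """Send a chat history to Ollama and return the assistant's reply text."""
    response = requests.post(
        f"{OLLAMA_HOST}/api/chat",
        json={"model": model, "messages": messages, "stream": False},
        timeout=timeout,
    )
    response.raise_for_status()
    return response.json()["message"]["content"]

if __name__ == "__main__":
    print(chat("llama3.2:3b", [{"role": "user", "content": "Hello!"}]))
```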
- Python 3.10+
- Ollama running locally with at least one model pulled
```bash
git clone https://github.com/malgorath/chatbot.git
cd chatbot
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

Edit `models/*.cfg` to point to your Ollama host and preferred model. The default config targets `http://192.168.1.3:11434` with `llama3.2:3b`.
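The key names inside the `.cfg` files aren't documented here, so the snippet below is only a hypothetical sketch of what an entry in `models/` might look like; use an existing file in that directory as the actual reference.

```ini
; models/llama3.cfg (hypothetical example: the section and key names are
; assumptions, not the project's actual schema)
[model]
name = llama3.2:3b
host = http://192.168.1.3:11434
label = Llama 3.2 (3B)
```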
```bash
python app.py
```

Open http://localhost:5000, select a model from the sidebar, and start chatting.
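Once the server is up, you can smoke-test the API from Python. The route below comes from the project layout; the exact JSON structure of the response is whatever `app.py` returns.

```python
# Quick smoke test against a locally running instance (default port 5000).
import requests

BASE = "http://localhost:5000"

# GET /api/models lists the models parsed from models/*.cfg;
# the response shape depends on app.py.
response = requests.get(f"{BASE}/api/models", timeout=10)
response.raise_for_status()
print(response.json())
```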
Run the test suite with:

```bash
pytest
pytest --cov=.   # with coverage
```

The project is laid out as follows:

```
app.py                # Flask routes (GET /, GET /api/models, POST /api/chat)
ollama_client.py      # Ollama API wrapper with error handling
models_config.py      # Parses models/*.cfg into typed ModelConfig objects
templates/chat.html   # Chat UI template
static/css/chat.css   # Styles
static/js/chat.js     # Frontend behaviour
```
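The test suite itself isn't reproduced here, but a route test presumably looks something like the sketch below; the `from app import app` import is an assumption (the project may use an application factory instead).

```python
# tests/test_routes.py (illustrative sketch, not the project's actual tests)
import pytest
from app import app  # assumes app.py exposes a module-level Flask instance

@pytest.fixture
def client():
    app.config["TESTING"] = True
    with app.test_client() as client:
        yield client

def test_index_served(client):
    # GET / should render the chat UI template.
    assert client.get("/").status_code == 200

def test_models_listed(client):
    # GET /api/models should return JSON describing the configured models.
    response = client.get("/api/models")
    assert response.status_code == 200
    assert response.is_json
```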
Licensed under the MIT License.