A clean, efficient web interface for interacting with language models running locally via Ollama. No cloud services, no accounts required. Just you and your local AI models.
Built with:
- Vite + React
- TypeScript
- Tailwind
- HeroUI
Focused on efficiency in a local environment, with customizable system prompts and no required setup beyond Ollama itself.
- CHAT: Multiple conversations with auto-generated titles, streaming responses, and live reasoning
- MODELS: Browse local Ollama models with parameter size, quantization, and loaded status
- IMAGES: Send images to multimodal models
- SETTINGS: Custom system message, initial greeting, and context window override
- STATS: Live token counts, tokens/sec, and context usage on every message (see the streaming sketch after this list)
- HISTORY: Persistent storage in your browser. Why overcomplicate things?
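For a rough sense of what the streaming and stats features consume, here is a minimal TypeScript sketch against Ollama's /api/chat endpoint. It is not this app's actual code: the `streamChat` helper and model name are placeholders. Ollama streams newline-delimited JSON chunks, and the final chunk carries `eval_count` and `eval_duration` (in nanoseconds), which is enough to derive tokens/sec:

```typescript
// Hypothetical helper, not this project's source: stream a chat reply
// from local Ollama and report tokens/sec from the final stats chunk.

interface ChatChunk {
  message?: { role: string; content: string };
  done: boolean;
  eval_count?: number;    // generated tokens (present on the final chunk)
  eval_duration?: number; // generation time in nanoseconds (final chunk)
}

async function streamChat(
  model: string,
  content: string,
  onToken: (t: string) => void,
): Promise<void> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content }],
      stream: true, // Ollama replies with newline-delimited JSON
    }),
  });

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // Each complete line in the buffer is one JSON chunk.
    let nl: number;
    while ((nl = buffer.indexOf("\n")) >= 0) {
      const line = buffer.slice(0, nl).trim();
      buffer = buffer.slice(nl + 1);
      if (!line) continue;

      const chunk: ChatChunk = JSON.parse(line);
      if (chunk.message) onToken(chunk.message.content);
      if (chunk.done && chunk.eval_count && chunk.eval_duration) {
        const tps = chunk.eval_count / (chunk.eval_duration / 1e9);
        console.log(`\n${chunk.eval_count} tokens, ${tps.toFixed(1)} tok/s`);
      }
    }
  }
}

streamChat("llama3.2", "Why is the sky blue?", (t) => console.log(t));
```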
The recommended setup is self-hosting. A hosted version is also available if you'd prefer not to run another local service, though it needs an Ollama CORS tweak to connect to your local instance.
All chat data lives in your browser's localStorage. Nothing is sent anywhere except your local Ollama instance.
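Persistence along these lines only takes a few lines of TypeScript. The storage key and conversation shape below are assumptions for illustration, not the app's actual schema:

```typescript
// Illustrative only: the key name and Conversation shape are guesses.
interface Conversation {
  id: string;
  title: string;
  messages: { role: "user" | "assistant"; content: string }[];
}

const STORAGE_KEY = "ollama-webui-conversations"; // hypothetical key

function saveConversations(conversations: Conversation[]): void {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(conversations));
}

function loadConversations(): Conversation[] {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as Conversation[]) : [];
}
```

One consequence of this design: clearing the browser's site data deletes all chat history.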
Requirements:
- Node.js v20+
- Ollama running locally
- pnpm
Quick start:
pnpm install
pnpm build
pnpm start

Application runs at http://localhost:3000. Ollama allows localhost origins by default, so no extra CORS configuration is needed.
You can check out the hosted version at https://webui.kerajarvi.com/. Note: by default it cannot connect to your local Ollama instance, because Ollama only allows localhost origins. To allow the connection, start Ollama with the OLLAMA_ORIGINS environment variable set to include the hosted origin:
OLLAMA_ORIGINS=https://webui.kerajarvi.com ollama serve

This step is only needed for the hosted version. Self-hosters can skip it.
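To confirm the tweak worked, one option is a quick check from the hosted page's browser console. `/api/version` is a standard Ollama endpoint, and a CORS failure will surface as a rejected fetch:

```typescript
// Sanity check: succeeds only if the browser is allowed to reach
// the local Ollama server from the current origin.
async function checkOllama(): Promise<void> {
  try {
    const res = await fetch("http://localhost:11434/api/version");
    const { version } = await res.json();
    console.log(`Connected to Ollama ${version}`);
  } catch (err) {
    console.error("Cannot reach local Ollama (CORS blocked or not running):", err);
  }
}

checkOllama();
```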
Development scripts:
pnpm dev # development server
pnpm typecheck # TypeScript checks
pnpm build # production build
pnpm start # production server (requires build first)
pnpm format # format with Prettier
pnpm format:check # check formatting

Licensed under the GNU AGPL v3. Feel free to contribute or fork.
