A modern Windows desktop chat application for interacting with Ollama LLM models. Built with .NET 8 and WPF, featuring a clean interface inspired by Claude and ChatGPT.
- Modern Dark Theme UI - Clean, responsive interface similar to ChatGPT/Claude
- Multiple Chat Support - Create and manage multiple conversations
- Project Organization - Organize chats into projects and subdirectories
- File Attachments - Upload and attach files to your messages (including images for vision models)
- Artifact Storage - Automatically extracts and stores code blocks from responses
- Configurable Ollama Connection - Connect to local or remote Ollama servers
- Streaming Responses - Real-time streaming of model responses (see the sketch after this list)
- Model Selection - Choose from available models on your Ollama server
- Persistent Storage - All chats and artifacts stored locally in SQLite
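
For reference, streaming works by reading newline-delimited JSON chunks from Ollama's `/api/chat` endpoint. A minimal C# sketch of consuming such a stream (illustrative only, not the app's actual service code):

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Text.Json;

using var http = new HttpClient { BaseAddress = new Uri("http://localhost:11434") };

var payload = JsonSerializer.Serialize(new
{
    model = "llama3.2",
    messages = new[] { new { role = "user", content = "Hello!" } },
    stream = true
});

using var request = new HttpRequestMessage(HttpMethod.Post, "/api/chat")
{
    Content = new StringContent(payload, Encoding.UTF8, "application/json")
};

// Read headers first, then consume the body as it streams in.
using var response = await http.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
using var reader = new StreamReader(await response.Content.ReadAsStreamAsync());

// Each non-empty line is one JSON chunk carrying a fragment of the reply.
while (await reader.ReadLineAsync() is { } line)
{
    if (string.IsNullOrWhiteSpace(line)) continue;
    using var chunk = JsonDocument.Parse(line);
    if (chunk.RootElement.TryGetProperty("message", out var msg))
        Console.Write(msg.GetProperty("content").GetString());
}
```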
- Windows 10/11
- .NET 8 SDK or later
- Ollama running locally or on a remote server
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd chat-interface
  ```

- Build the application:

  ```bash
  dotnet build OllamaChat.sln -c Release
  ```

- Run the application:

  ```bash
  dotnet run --project OllamaChat/OllamaChat.csproj
  ```

To create a self-contained executable:

```bash
dotnet publish OllamaChat/OllamaChat.csproj -c Release -r win-x64 --self-contained true -o ./publish
```

The executable will be in the `./publish` folder.
The application can be configured via `appsettings.json` located in the application directory:
```json
{
  "Ollama": {
    "BaseUrl": "http://localhost:11434",
    "DefaultModel": "llama3.2",
    "TimeoutSeconds": 300,
    "StreamResponses": true,
    "DefaultOptions": {
      "Temperature": 0.7,
      "NumCtx": 4096,
      "TopP": 0.9,
      "TopK": 40,
      "RepeatPenalty": 1.1
    }
  }
}
```

| Option | Description | Default |
|---|---|---|
| `BaseUrl` | URL of your Ollama server | `http://localhost:11434` |
| `DefaultModel` | Default model for new chats | `llama3.2` |
| `TimeoutSeconds` | Request timeout in seconds | `300` |
| `StreamResponses` | Enable streaming responses | `true` |
| `Temperature` | Model creativity (0.0-2.0) | `0.7` |
| `NumCtx` | Context window size (tokens) | `4096` |
| `TopP` | Nucleus sampling | `0.9` |
| `TopK` | Top-K sampling | `40` |
| `RepeatPenalty` | Repetition penalty | `1.1` |
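
For illustration, these settings can be loaded with the standard .NET configuration binder; `OllamaOptions` below is a hypothetical type mirroring the JSON keys, not necessarily the app's actual class (requires the `Microsoft.Extensions.Configuration.Json` and `Microsoft.Extensions.Configuration.Binder` packages):

```csharp
using System;
using Microsoft.Extensions.Configuration;

var config = new ConfigurationBuilder()
    .SetBasePath(AppContext.BaseDirectory)
    .AddJsonFile("appsettings.json", optional: true)
    .Build();

// Bind the "Ollama" section onto a plain options object.
var options = config.GetSection("Ollama").Get<OllamaOptions>();
Console.WriteLine($"{options?.DefaultModel} @ {options?.BaseUrl}");

// Hypothetical options type; property names mirror the JSON keys above.
public sealed class OllamaOptions
{
    public string BaseUrl { get; set; } = "http://localhost:11434";
    public string DefaultModel { get; set; } = "llama3.2";
    public int TimeoutSeconds { get; set; } = 300;
    public bool StreamResponses { get; set; } = true;
}
```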
To connect to a remote Ollama server:
- Click the ⚙ (Settings) button in the sidebar
- Enter the remote server URL (e.g., `http://192.168.1.100:11434`)
- Click Save
Or edit `appsettings.json`:

```json
{
  "Ollama": {
    "BaseUrl": "http://your-server-ip:11434"
  }
}
```

Note: Ensure your Ollama server is configured to accept remote connections by setting `OLLAMA_HOST=0.0.0.0` on the server.
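
For example, to start Ollama listening on all interfaces (Linux/macOS shell shown; on Windows, set the variable in the server's environment instead):

```bash
OLLAMA_HOST=0.0.0.0 ollama serve
```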
- Click the + New Chat button in the sidebar
- Select a model from the dropdown in the header
- Type your message and press Enter or click Send
- Projects: Create projects to organize related chats
- Search: Use the search box to find chats by title or content
- Recent Chats: Recently accessed chats appear in the sidebar
- Click the 📎 (paperclip) button next to the input
- Select one or more files
- Files will be attached to your next message
Supported for Vision Models:

- Images: `.png`, `.jpg`, `.jpeg`, `.gif`, `.webp`
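
How an image reaches the model: Ollama's `/api/chat` endpoint accepts base64-encoded images in an `images` array on a message for vision-capable models. A minimal sketch (the file name and model are placeholders, not app defaults):

```csharp
using System;
using System.IO;
using System.Text.Json;

// Read the attachment and base64-encode it for the request body.
string imageB64 = Convert.ToBase64String(await File.ReadAllBytesAsync("screenshot.png"));

var payload = JsonSerializer.Serialize(new
{
    model = "llava",  // example vision-capable model
    messages = new[]
    {
        new { role = "user", content = "Describe this image.", images = new[] { imageB64 } }
    }
});
// POST this payload to <BaseUrl>/api/chat as with any other chat request.
```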
The application automatically extracts code blocks from assistant responses:
- View artifacts in the right panel (click 📄 to toggle)
- Copy code with the 📋 button
- Save to file with the 💾 button
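
One way the extraction step could be implemented with Markdig (already a project dependency); this is a sketch, not necessarily the app's actual `Services` logic:

```csharp
using System;
using System.Collections.Generic;
using Markdig;
using Markdig.Syntax;

// Tilde fences parse the same as backtick fences in CommonMark, so the
// sample avoids literal backticks here.
string sample = "Here is code:\n\n~~~csharp\nConsole.WriteLine(\"hi\");\n~~~\n";

foreach (var (lang, code) in ExtractCodeBlocks(sample))
    Console.WriteLine($"[{lang}] {code}");

// Walk the Markdown AST and yield each fenced block's language tag and body.
static IEnumerable<(string Language, string Code)> ExtractCodeBlocks(string markdown)
{
    var document = Markdown.Parse(markdown);
    foreach (var block in document.Descendants<FencedCodeBlock>())
        yield return (block.Info ?? string.Empty, block.Lines.ToString());
}
```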
All data is stored locally in:
- Windows: `%LOCALAPPDATA%\OllamaChat\`

Contents:

- `ollama_chat.db` - SQLite database with chats and messages
- `Uploads/` - Uploaded file attachments
- `Artifacts/` - Saved artifacts
The application follows the MVVM pattern:
```
OllamaChat/
├── Models/       # Data models
├── ViewModels/   # View models (MVVM)
├── Views/        # WPF views (XAML)
├── Services/     # Business logic services
├── Data/         # Database context
├── Converters/   # WPF value converters
└── Resources/    # Styles and colors
```
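
For orientation, here is what a ViewModel looks like with `CommunityToolkit.Mvvm` source generators; `ExampleChatViewModel` is illustrative, not one of the app's actual classes:

```csharp
using System.Threading.Tasks;
using CommunityToolkit.Mvvm.ComponentModel;
using CommunityToolkit.Mvvm.Input;

public partial class ExampleChatViewModel : ObservableObject
{
    // [ObservableProperty] generates a public MessageText property with
    // change notification for the bound TextBox.
    [ObservableProperty]
    private string messageText = string.Empty;

    // [RelayCommand] generates a SendCommand that XAML can bind to.
    [RelayCommand]
    private async Task SendAsync()
    {
        // Hypothetical: hand MessageText to the chat service, then clear the input.
        await Task.CompletedTask;
        MessageText = string.Empty;
    }
}
```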
- .NET 8 - Application framework
- WPF - Windows UI framework
- Entity Framework Core - Database ORM
- SQLite - Local database
- CommunityToolkit.Mvvm - MVVM framework
- Markdig - Markdown rendering
- Ensure Ollama is running (`ollama serve`)
- Check the server URL in settings
- Click the ↻ button to refresh the connection
- Install models via the Ollama CLI: `ollama pull llama3.2`
- Click refresh to reload available models
- Reduce context length in settings
- Use a smaller model
- Ensure adequate system resources
MIT License - See LICENSE file for details.
Contributions are welcome! Please feel free to submit pull requests.