Patina is a native desktop chat client built in Rust with an egui interface, designed to connect seamlessly with large language models (LLMs) through both cloud and local providers. The current implementation enables direct interaction with OpenAI models, providing a clean, responsive, and fully functional chat experience. While many advanced capabilities — such as local LLM integration and extended provider support — are still in active development, the application is evolving quickly toward a full-featured, independent alternative to proprietary AI clients.
Beyond serving as a chat interface, Patina is also conceived as an experimental platform for rapid AI integration and prototyping. Its modular design and support for the Model Context Protocol (MCP) allow developers to attach new agent models or services without altering the core code. This makes it ideal for fast iteration and experimentation, whether testing local LLMs, exploring new AI workflows, or building decoupled agent systems. As the project expands, Patina aims to remain both a practical everyday tool and a flexible testbed for AI-driven desktop innovation.
```text
patina/
├── app/     # Graphical user interface built with eframe/egui
├── core/    # Shared business logic, state, LLM providers, auth, and MCP
├── tests/   # Unit, integration, and end-to-end style tests
└── xtask/   # Automation helpers (smoke tests, fixtures, CI hooks)
```
Each crate has its own Cargo.toml and uses workspace dependencies declared at the root.
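As a hedged illustration of that layout, the root manifest might look roughly like this (member names come from the tree above; the `resolver` setting and shared dependencies are assumptions, not the actual file):

```toml
# Root Cargo.toml (illustrative sketch, not the actual file)
[workspace]
members = ["app", "core", "tests", "xtask"]
resolver = "2"

[workspace.dependencies]
# Member crates reference these with `serde = { workspace = true }`
serde = { version = "1", features = ["derive"] }
```

Each member crate then opts into a shared dependency with `workspace = true`, keeping versions consistent across the workspace.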
- Chat experience: Markdown-rendered conversations with syntax highlighting for code blocks via `egui_commonmark` and `syntect`.
- LLM provider abstraction: Unified driver for OpenAI, Azure OpenAI, and a mock provider used by tests. Streaming responses are planned but not yet implemented.
- Authentication orchestration: Handles server- and client-managed OAuth modes with persisted secrets ready for reuse.
- MCP integration scaffolding: JSON-RPC ready client registry capable of simulating tool invocations and auth handshakes.
- Persistent history: Conversations are stored as JSON Lines files and reloaded on startup.
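JSON Lines stores one JSON object per line, so new messages can be appended without rewriting the whole file. A hypothetical transcript fragment (the actual field names in Patina's store are not documented here):

```json
{"role": "user", "content": "Hello", "timestamp": "2025-01-15T10:03:00Z"}
{"role": "assistant", "content": "Hi! How can I help?", "timestamp": "2025-01-15T10:03:02Z"}
```

Note that the file as a whole is not a single JSON document; each line is parsed independently, which also makes partial recovery after a crash straightforward.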
- Automation: An `xtask smoke` command exercises the core logic without launching the UI.
- Rust 1.76 or newer with `cargo`
- A recent graphics driver capable of running `egui`/`eframe`
Optional environment variables configure LLM providers:
```sh
LLM_PROVIDER=openai          # or azure_openai, mock
OPENAI_API_KEY=...           # required for OpenAI provider
OPENAI_MODEL=gpt-4o-mini
AZURE_OPENAI_ENDPOINT=https://example.openai.azure.com/
AZURE_OPENAI_API_KEY=...
AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4o
```
```sh
cargo run -p patina
```
The first launch creates a data directory under your platform’s application data folder (for example, ~/Library/Application Support/Patina on macOS). Conversations persist between sessions.
Patina allows you to configure and fine-tune AI behavior directly through the Settings window — without editing configuration files manually.
Patina Settings UI
The App Settings panel defines global parameters that apply across all projects:
- Theme — choose between System, Light, or Dark mode
- LLM Provider — select your preferred provider (currently OpenAI; others planned)
- Provider Details — enter API key, endpoint, API version, and deployment name
- Available Model Names — provide a comma- or semicolon-separated list of model names
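As a sketch of how the "Available Model Names" field could be parsed (the function name and exact trimming behavior are assumptions; the README only states that commas and semicolons both work as separators):

```rust
/// Splits a comma- or semicolon-separated list of model names,
/// trimming whitespace and dropping empty entries.
fn parse_model_names(input: &str) -> Vec<String> {
    input
        .split(|c| c == ',' || c == ';')
        .map(str::trim)
        .filter(|s| !s.is_empty())
        .map(str::to_string)
        .collect()
}

fn main() {
    let models = parse_model_names("gpt-4o-mini, gpt-4o; o3-mini");
    println!("{:?}", models);
}
```
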
These preferences are stored automatically in the user configuration directory:
- Linux: `~/.config/patina/ui_settings.json`
- macOS: `~/Library/Application Support/Patina/ui_settings.json`
- Windows: `%APPDATA%\Patina\ui_settings.json`
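A hypothetical `ui_settings.json` (the actual schema is not documented here; the keys shown are illustrative, derived from the settings listed above):

```json
{
  "theme": "Dark",
  "provider": "openai",
  "available_models": "gpt-4o-mini, gpt-4o"
}
```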
Each project can either inherit the global settings or define its own configuration. Within the Project Settings section you can:
- Enable Inherit system settings to reuse global configuration
- Disable it to specify project-specific provider details and model lists
Project-specific configuration files are stored inside the project folder:
```text
<project>/.patina/ui_settings.json
<project>/.patina/patina.yaml
```
- The list of available models is loaded from `patina.yaml`
- The current selection (model, temperature, and theme) is stored in `ui_settings.json`
- Any change in the Settings UI is applied immediately and persists between sessions
- No environment variables or `.env` files are used — configuration is entirely file-based
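An illustrative `patina.yaml` holding the model list (the real key names are not documented here and may differ):

```yaml
# Illustrative patina.yaml; actual keys are an assumption
models:
  - gpt-4o-mini
  - gpt-4o
```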
```sh
cargo test --workspace
```
To execute the smoke test provided by the automation crate:
```sh
cargo run -p xtask -- smoke
```

Tagging a commit with `v*` (for example, `git tag v0.3.0 && git push --tags`) triggers `.github/workflows/release.yml`. The workflow builds single-file binaries named `patina` (`patina.exe` on Windows) with embedded assets for Linux, macOS, and Windows, strips the symbols, and uploads the artifacts as workflow outputs.
Patina organizes your conversations into projects — self-contained directories that store all chat history and settings. Each project is independent and portable, making it easy to organize different workstreams, share conversations, or back up your data.
Each Patina project follows a simple directory layout:
```text
MyProject/
├── MyProject.pat              # Project manifest (TOML format)
├── (your files and folders)   # Optional user content
└── .patina/                   # Hidden project data
    └── conversations/         # Chat history (JSONL format)
        └── 2025/
            ├── conv1.jsonl
            └── conv2.jsonl
```
- `ProjectName.pat`: A TOML manifest file containing project metadata (name, creation date, internal paths)
- `.patina/conversations/`: Contains all conversation history in JSONL format, organized by year
- The project directory can contain any additional files or folders you need
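An illustrative `.pat` manifest (the actual keys are not documented here; everything below is an assumption based on the metadata listed above):

```toml
# Illustrative MyProject.pat; actual keys may differ
name = "MyProject"
created = "2025-01-15"
conversations_dir = ".patina/conversations"
```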
- Launch Patina
- Choose File → New Project from the menu
- Select a location and enter a project name
- Patina creates the project directory and opens it automatically
```sh
# Create a new project directory
patina --new /path/to/MyProject --name "MyProject"

# Or specify just the .pat file location
patina --new /path/to/MyProject.pat --name "MyProject"
```

- Choose File → Open Project from the menu
- Navigate to either:
  - The project directory (e.g., `MyProject/`)
  - The `.pat` manifest file (e.g., `MyProject.pat`)
- Patina loads the project and displays all conversations
```sh
# Open by project directory
patina --project /path/to/MyProject/

# Or open by .pat file
patina --project /path/to/MyProject/MyProject.pat
```

Export creates a ZIP archive containing the entire project directory:
```sh
# Command line export
patina export --project /path/to/MyProject --out /path/to/backup.zip
```

The exported ZIP contains:
- The project manifest (`.pat` file)
- All conversation history
- Any additional files in the project directory
Import extracts a project ZIP archive to a new location:
```sh
# Command line import
patina import --zip /path/to/backup.zip --into /path/to/destination/
```

After import, you can open the project normally. The imported project retains all conversations and settings.
Patina remembers recently opened projects for quick access. Recent projects appear in:
- The welcome screen when no project is open
- File → Open Recent menu (if implemented in UI)
Each Patina project is completely self-contained:
- No global settings: Each project stores its own conversation history and preferences
- IDE independent: Projects don't interfere with VS Code workspaces or other development tools
- Portable: Copy or move project directories freely between machines
- Isolated: Different projects can use different LLM providers or settings
- Organize by purpose: Create separate projects for different work areas (e.g., "WebDev", "Research", "Personal")
- Regular exports: Export important projects periodically for backup
- Meaningful names: Use descriptive project names that reflect their purpose
- Keep it simple: The project directory can contain additional files, but avoid complex nested structures
- "Project directory is not empty": When creating a project, ensure the target directory doesn't exist or is completely empty
- "Project manifest not found": Verify the `.pat` file exists and matches the directory name exactly
- Import fails: Ensure the destination directory is empty or doesn't exist yet
Implements the eframe application. It renders the conversation list, message view, and composer. Background tasks spawn on a dedicated Tokio runtime and synchronize with the UI using unbounded channels. Streaming UI updates are planned, but the current client displays each response once it has fully completed.
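The worker-to-UI handoff described above can be sketched with the standard library alone (Patina uses a Tokio runtime; the message type, channel choice, and worker body here are illustrative assumptions, not the actual code):

```rust
use std::sync::mpsc;
use std::thread;

/// Hypothetical event sent from a background worker to the UI thread.
#[derive(Debug, PartialEq)]
enum UiEvent {
    ResponseComplete(String),
}

/// Spawns a worker that produces a finished response and sends it
/// over an unbounded channel, mirroring the pattern described above.
fn spawn_worker(tx: mpsc::Sender<UiEvent>) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        // In Patina this would be an async LLM call on the Tokio runtime.
        let reply = String::from("hello from the mock provider");
        tx.send(UiEvent::ResponseComplete(reply)).ok();
    })
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let handle = spawn_worker(tx);
    // The UI repaint loop would poll `rx` each frame; here we block once.
    let event = rx.recv().expect("worker dropped the channel");
    handle.join().unwrap();
    println!("{:?}", event);
}
```

Because the channel is unbounded, the worker never blocks on a full buffer; the UI thread drains pending events whenever it repaints.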
Holds the domain logic:
- `state.rs` – application state machine, conversation management, persistence hooks.
- `llm.rs` – provider abstractions for OpenAI, Azure OpenAI, and a mock driver used by tests.
- `mcp.rs` – lightweight MCP client and registry with auth-aware handshake scaffolding.
- `auth.rs` – server/client OAuth coordination that persists refreshed tokens alongside transcripts.
- `store.rs` – JSONL transcript storage and secret persistence.
- `telemetry.rs` – idempotent tracing initialization for binaries and tools.
Hosts unit, integration, and end-to-end tests. The initial suite validates conversation persistence and response generation using the mock LLM driver. Additional tests can be added under `unit/`, `integration/`, and `e2e/`.
Provides automation entry points. `cargo run -p xtask -- smoke` spins up the core logic with the mock LLM driver and logs the resulting conversation metadata, suitable for CI smoke checks.
High-level and contributor-oriented documentation lives in docs/README.md. It explains the overall architecture, coding expectations, and how the reference material inside docs/ is organized so you can navigate deeper without this file needing constant updates.
- Fork and clone the repository.
- Run `cargo fmt` before committing.
- Add tests for any new functionality in the appropriate crate.
- Use `cargo run -p xtask -- smoke` to validate end-to-end behavior.
Everyone is invited and welcome to contribute: open issues, propose pull requests, share ideas, or help improve documentation. Participation is open to all, regardless of background or viewpoint.
This project follows the FOSS Pluralism Manifesto, which affirms respect for people, freedom to critique ideas, and space for diverse perspectives.
Copyright (c) 2025, Iwan van der Kleijn
This project is licensed under the MIT License. See the LICENSE file for details.



