An integrated AI chat assistant plugin for DankMaterialShell with support for multiple AI providers, streaming responses, and markdown rendering.
- Multiple AI Provider Support: OpenAI, Anthropic, Google Gemini, and custom OpenAI-compatible APIs
- Streaming Responses: Real-time streaming of AI responses with proper cancellation support
- Markdown Rendering: Full markdown support with syntax highlighting for code blocks
- Persistent Chat History: Conversations are saved and restored across sessions
- Flexible Configuration: Per-provider settings for model, temperature, max tokens, and more
- API Key Management: Store API keys securely or use environment variables
- Session-based Keys: Option to use in-memory API keys that don't persist to disk
- Monospace Font Option: Toggle monospace rendering for technical discussions
| Chat Interface | Settings Panel |
|---|---|
| ![]() | ![]() |
- DankMaterialShell: Latest version with plugin toggle support
- Qt/QML: Qt 6.x with QtQuick support (provided by Quickshell)
- Dependencies: `curl` for HTTP requests, `wl-copy` for clipboard operations
1. Clone this repository to your DMS plugins directory:

   ```bash
   mkdir -p ~/.config/DankMaterialShell/plugins
   cd ~/.config/DankMaterialShell/plugins
   git clone https://github.com/devnullvoid/dms-ai-assistant.git AIAssistant
   ```

2. Restart DankMaterialShell:

   ```bash
   dms restart
   ```

3. Enable the plugin:
   - Open DMS Settings → Plugins
   - Find "AI Assistant" in the list
   - Toggle it to enabled
The plugin supports multiple AI providers. Configure your preferred provider in the settings panel:
Note: Model names change frequently as providers release new versions. Check official provider documentation for the latest available models:
OpenAI:

- Provider: `openai`
- Base URL: `https://api.openai.com`
- Model: `gpt-5.2` (or `gpt-5.2-chat-latest`, `gpt-5.2-2025-12-11`, `gpt-5.2-codex`, etc.)
- API Key: Your OpenAI API key
Anthropic:

- Provider: `anthropic`
- Base URL: `https://api.anthropic.com`
- Model: `claude-sonnet-4-5-20250929` (or `claude-haiku-4-5-20251001`, etc.)
- API Key: Your Anthropic API key
Google Gemini:

- Provider: `gemini`
- Base URL: `https://generativelanguage.googleapis.com`
- Model: `gemini-2.5-flash` (production) or `gemini-3-flash-preview` (experimental)
- API Key: Your Google API key
For any OpenAI-compatible API (LocalAI, Ollama, LM Studio, etc.):
- Provider: `custom`
- Base URL: `http://localhost:1234/v1` (your API endpoint)
- Model: Your model name
- API Key: Optional (leave empty for local APIs)
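Any endpoint that accepts the OpenAI chat-completions request shape should work with the `custom` provider. As a rough sketch of the request such an endpoint must accept (the `buildChatRequest` helper is hypothetical and not part of the plugin; field names follow the OpenAI API):

```javascript
// Build an OpenAI-compatible chat request from plugin-style settings.
// Illustrative sketch only; the plugin's actual adapter code may differ.
function buildChatRequest(settings, messages) {
    const headers = { "Content-Type": "application/json" };
    if (settings.apiKey) {
        // Omitted entirely for local APIs with no key configured.
        headers["Authorization"] = "Bearer " + settings.apiKey;
    }
    return {
        url: settings.baseUrl.replace(/\/$/, "") + "/chat/completions",
        headers: headers,
        body: {
            model: settings.model,
            messages: messages,
            temperature: settings.temperature,
            max_tokens: settings.maxTokens,
            stream: true
        }
    };
}

const req = buildChatRequest(
    { baseUrl: "http://localhost:1234/v1", model: "my-model",
      temperature: 0.7, maxTokens: 4096, apiKey: "" },
    [{ role: "user", content: "Hello" }]
);
// req.url === "http://localhost:1234/v1/chat/completions";
// no Authorization header is sent when the API key is empty.
```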
1. Store in Settings (Remember API Key toggle ON)
   - API key is saved to `~/.config/DankMaterialShell/plugin_settings.json`
   - Persists across restarts
   - More convenient but stored on disk

2. Environment Variable (Recommended for security)
   - Set the API Key Env Var field (e.g., `OPENAI_API_KEY`)
   - API key is read from the environment variable
   - Not stored in settings files
   - More secure

3. Session-only (Remember API Key toggle OFF)
   - Enter the API key each session
   - Stored in memory only
   - Cleared on restart
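The three options can be thought of as a lookup order. A minimal sketch, assuming the environment variable takes precedence when set (`resolveApiKey` is a hypothetical helper; the plugin's actual logic lives in its QML service):

```javascript
// Hypothetical helper illustrating the three key sources described above.
// Precedence (env var first) is an assumption, not the plugin's documented order.
function resolveApiKey(settings, env, sessionKey) {
    if (settings.apiKeyEnvVar && env[settings.apiKeyEnvVar]) {
        return env[settings.apiKeyEnvVar];      // environment variable
    }
    if (settings.saveApiKey && settings.apiKey) {
        return settings.apiKey;                 // stored in settings file
    }
    return sessionKey || "";                    // session-only (in memory)
}

// With OPENAI_API_KEY set in the environment, the stored key is ignored:
const key = resolveApiKey(
    { apiKeyEnvVar: "OPENAI_API_KEY", saveApiKey: true, apiKey: "sk-stored" },
    { OPENAI_API_KEY: "sk-env" },
    ""
);
// key === "sk-env"
```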
- Temperature (0.0 - 2.0): Controls randomness in responses
  - Lower (0.0-0.5): More focused and deterministic
  - Higher (1.0-2.0): More creative and varied
- Max Tokens (128 - 32768): Maximum response length
  - Adjust based on your needs and model limits
  - Higher values = longer responses but more API cost
The AI Assistant can be triggered via:
- IPC Command:

  ```bash
  dms ipc call plugins toggle aiAssistant
  ```

- Keybind: Configure in your compositor configuration

  ```
  # Niri example:
  Mod+A { spawn "dms" "ipc" "call" "plugins" "toggle" "aiAssistant"; }

  # Hyprland example:
  bind = SUPER, A, exec, dms ipc call plugins toggle aiAssistant
  ```
- Send Message: Type your message and press `Ctrl+Enter` or click "Send"
- Stop Generation: Click "Stop" while streaming
- Clear History: Click trash icon to clear conversation
- Copy Response: Use overflow menu → "Copy last reply"
- Retry: If a request fails, use overflow menu → "Retry"
- `Ctrl+Enter`: Send message
- `Escape`: Close assistant

All settings are stored in `~/.config/DankMaterialShell/plugin_settings.json` under the `aiAssistant` key.
Example configuration:
```json
{
  "aiAssistant": {
    "enabled": true,
    "provider": "custom",
    "baseUrl": "https://api.example.com/v1",
    "model": "glm-4.7",
    "apiKeyEnvVar": "PROVIDER_API_KEY",
    "saveApiKey": false,
    "useMonospace": true,
    "temperature": 0.7,
    "maxTokens": 4096
  }
}
```

Chat history is saved to `~/.local/state/DankMaterialShell/plugins/aiAssistant/session.json`:
- Automatically saved after each message
- Limited to last 50 messages (configurable in code)
- Cleared when chat history is manually cleared
- Invalidated when provider settings change
If settings don't persist after restart:
- Check that `~/.config/DankMaterialShell/plugin_settings.json` exists and is writable
- Ensure DMS has write permissions to the config directory
- 401 Unauthorized: Check API key is correct
- 404 Not Found: Verify base URL and model name
- Connection Failed: Check internet connection and API endpoint
- Timeout: Increase timeout setting or check network latency
- Ensure code blocks use triple backticks with language specifier
- Check if markdown2html.js is present in plugin directory
```
AIAssistant/
├── plugin.json             # Plugin manifest
├── AIAssistantDaemon.qml   # Main daemon/slideout controller
├── AIAssistant.qml         # Chat interface UI
├── AIAssistantService.qml  # Backend service (API calls, state)
├── AIAssistantSettings.qml # Settings panel UI
├── AIApiAdapters.js        # Provider-specific API adapters
├── markdown2html.js        # Markdown to HTML conversion
├── MessageBubble.qml       # Individual message component
└── MessageList.qml         # Message list container
```
To add a new provider, edit `AIApiAdapters.js`:

1. Add provider configuration to `PROVIDER_CONFIGS`
2. Implement request formatting in `formatRequest()`
3. Implement response parsing in `parseStreamChunk()`
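Steps 2 and 3 might look roughly like the following for an OpenAI-style streaming API (the signatures here are assumptions; check the existing adapters in `AIApiAdapters.js` for the exact shapes):

```javascript
// Hypothetical adapter hooks; signatures are assumed, not taken from the plugin.

// Step 2: serialize the conversation into the provider's request body.
function formatRequest(messages, settings) {
    return JSON.stringify({
        model: settings.model,
        messages: messages,
        stream: true
    });
}

// Step 3: extract the text delta from one SSE line of the response stream.
function parseStreamChunk(line) {
    if (!line.startsWith("data: ") || line === "data: [DONE]") return "";
    const payload = JSON.parse(line.slice(6));
    const delta = payload.choices && payload.choices[0] && payload.choices[0].delta;
    return (delta && delta.content) || "";
}

// parseStreamChunk('data: {"choices":[{"delta":{"content":"Hi"}}]}') returns "Hi"
```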
Example:
```javascript
PROVIDER_CONFIGS: {
    myProvider: {
        name: "My Provider",
        streamEndpoint: "/v1/chat/completions",
        authHeader: "Authorization",
        authPrefix: "Bearer ",
        supportsStreaming: true
    }
}
```

Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly with DMS
- Submit a pull request
MIT License - see LICENSE file for details
- Author: Jon Rogers - devnullvoid
- DankMaterialShell: DankMaterialShell Project
- QML/Qt: Qt Project
For issues, questions, or feature requests:
- Open an issue on GitHub
- Multi-turn conversation context management
- Conversation branching/forking
- Export conversations to markdown
- Custom system prompts
- Conversation templates/presets

