A Matrix chatbot written in Rust that provides AI-powered responses using any OpenAI-compatible API (Ollama Cloud, Gemini, OpenAI, etc.). Features end-to-end encryption, SAS emoji verification, mention-only responses, and admin-controlled configuration.
Requires Rust (install via rustup):
```
cargo build --release
```

The binary will be at `target/release/llmbot`.
For development (faster compilation, slower runtime):
```
cargo build
```

Copy the example config and edit it:
```
cp config.example.toml config.toml
```

| Key | Description |
|---|---|
| `homeserver` | Your Matrix homeserver URL (e.g. `https://matrix.example.com`) |
| `username` | Bot account username |
| `password` | Bot account password |
| `data_dir` | Directory for session data and crypto keys (default: `data`) |
| Key | Description |
|---|---|
| `api_url` | OpenAI-compatible API endpoint (e.g. `https://ollama.com/v1`) |
| `api_key` | API key (leave empty if not required) |
| `model` | Model name (e.g. `llama3`, `gemini-pro`, `gpt-4o`) |
| `system_message` | Default system prompt for all rooms |
| `max_tokens` | Max tokens for the AI response (default: 4096) |
| `temperature` | Response creativity, 0.0–1.0 (default: 0.7) |
| Key | Description |
|---|---|
| `admin` | Admin Matrix user ID, the only user who can run `!llmbot` commands (e.g. `@you:server.com`) |
| `display_name` | Bot display name in Matrix (default: `LLMBot`) |
| `allowed_users` | List of Matrix user IDs allowed to chat (not configure) |
| `allowed_rooms` | List of room IDs where anyone can chat |
| `allowed_servers` | List of server names whose users can all chat |
| Key | Description |
|---|---|
| `path` | SQLite database file path (default: `llmbot.db`) |
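Putting the keys above together, a complete `config.toml` might look like the sketch below. Only the `[bot]` section name is implied by the `bot.admin` reference in this README; the other section names and all values here are illustrative assumptions — treat `config.example.toml` in the repo as authoritative.

```toml
# Hypothetical layout: section names other than [bot] are guesses;
# copy config.example.toml for the real structure.
[matrix]
homeserver = "https://matrix.example.com"
username = "llmbot"
password = "secret"
data_dir = "data"

[ai]
api_url = "https://ollama.com/v1"
api_key = ""          # leave empty if the endpoint needs no key
model = "llama3"
system_message = "You are a helpful assistant."
max_tokens = 4096
temperature = 0.7

[bot]
admin = "@you:server.com"
display_name = "LLMBot"
allowed_users = ["@friend:server.com"]
allowed_rooms = []
allowed_servers = []

[database]
path = "llmbot.db"
```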
There are three access levels:
- **Admin** (the single user set in `bot.admin`): can use `!llmbot` commands and chat with the bot.
- **Allowed** (users/rooms/servers lists): can chat with the bot but cannot use `!llmbot` commands.
- **Everyone else**: gets "Sorry, you can't message me."
```
# With default config.toml in current directory
cargo run --release

# With a specific config file
cargo run --release -- /path/to/config.toml
```

On first launch, the bot will:
- Log in with your username/password
- Save the session to `data/session.json`
- Join any rooms it's invited to
On subsequent launches, it restores the saved session — no new device is created, no re-verification needed.
After the first run, verify the bot from another Matrix client (e.g. Element):
- Go to Settings > Sessions (or Security & Privacy)
- Find the LLMBot session
- Click Verify and choose emoji verification
- The bot auto-accepts and auto-confirms — check the terminal output for the emojis
- Confirm the emojis match on your client
This only needs to be done once. The verification persists across restarts.
If you need to start fresh (new device, new verification):
```
rm -rf data/
```

Then run again. You will need to re-verify.
The bot only responds when mentioned — use @botusername or the display name in your message.
```
@llmbot is the earth flat?
```
Each message to the bot is independent — it has no conversation memory on its own. However, it follows reply chains. When you reply to one of the bot's messages, it walks up the chain of replies to reconstruct the conversation, so it knows what was said before.
This means you can have a back-and-forth conversation:
- Mention the bot with a question — it replies.
- Reply to its reply — it reads the chain and responds with context.
- Reply to that reply — the chain grows, and so does the context.
As long as you keep replying to its messages (rather than sending a new standalone message), the bot sees the full thread. A new message with no reply is a fresh start with no history.
You can also reply to any message in a room and mention the bot — it will read that message as context:
> The earth is 6,000 years old
@llmbot is this accurate?
All commands are admin-only and use the !llmbot prefix.
Shows all available commands.
Displays bot configuration: display name, AI model, API URL, version.
Sends a message directly to the AI (without needing to @mention).
```
!llmbot chat what is the speed of light?
```
View or set the system message for the current room.
```
# View current system message
!llmbot systemmessage

# Set a new system message for this room
!llmbot systemmessage You are a pirate. Answer everything like a pirate.
```
Room-specific system messages override the global one from config.toml.
View or change per-room settings.
```
# View all settings for this room
!llmbot roomsettings

# View a specific setting
!llmbot roomsettings always_reply

# Change a setting
!llmbot roomsettings always_reply false
```
Available settings:
| Setting | Type | Default | Description |
|---|---|---|---|
| `system_message` | string | global default | System prompt for this room |
| `always_reply` | bool | `true` | Reply to all mentions |
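The bot can also run under Docker Compose. A minimal sketch of a `docker-compose.yml`, assuming the repo ships a Dockerfile — the service name, build context, and container paths below are assumptions, not taken from the project:

```yaml
# Hypothetical compose file; adjust paths to match the project's Dockerfile.
services:
  llmbot:
    build: .
    restart: unless-stopped
    volumes:
      - ./config.toml:/app/config.toml:ro   # bot configuration
      - ./data:/app/data                    # session data and crypto keys
```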
```
docker compose up -d
```

By default, the bot logs at the `info` level. For more detail:
```
RUST_LOG=debug cargo run
```

| Path | Description |
|---|---|
| `config.toml` | Your configuration (not committed to git) |
| `config.example.toml` | Example configuration template |
| `data/` | Session data and crypto keys (not committed to git) |
| `data/session.json` | Saved Matrix session |
| `llmbot.db` | SQLite database for room settings |