A private, self-hosted AI agent you talk to through SimpleX Chat. No accounts, no corporate servers, no metadata. Built on Mistral Vibe with persistent memory that actually remembers who you are across conversations.
Why "Mistress"? A playful nod to Mistral, but the name draws on the original meaning of the word — a woman of authority, power, or skill. Intended with the utmost respect.
Most AI agents today require either a corporate chat platform (Telegram, Discord, Slack) or a terminal. Mistress is different:
- Fully private — SimpleX Chat means no user IDs, no phone numbers, no central server. Messages travel over two layers of end-to-end encryption; the relay sees nothing.
- Remembers everything — 3-tier memory system: core facts always in context, daily session logs, and semantic vector search. The agent extracts and curates its own memories after each conversation.
- Sandboxed and safe — the agent can write scripts and install skills, but cannot touch its own config, memory, or anything outside its workspace. Every bash command is filtered, every file write is policy-checked.
- EU-hosted AI — built on Mistral Vibe specifically because Mistral is EU-based and operates under the EU AI Act, ensuring stricter privacy and compliance guarantees than other providers.
- Self-hosted — runs in Docker on your machine. Your data never leaves your network (except the LLM API call to Mistral, which is your own subscription).
- Docker with Compose plugin
- A Mistral Le Chat Pro or Team subscription
- Linux or macOS
One command:
```bash
curl -LsSf https://raw.githubusercontent.com/mistress-agent/mistress/main/install.sh | bash
```

The installer walks you through:
- Mistral authentication (browser sign-in with your Le Chat Pro/Team subscription)
- SimpleX Chat setup (auto-generates a connection link — scan it from your phone)
- Deployment as a Docker container
Supports multiple instances on the same machine — run the installer again to create another.
```
You (SimpleX App) ──E2E encrypted──> simplex-chat CLI ──WebSocket──> Mistress
                                                                   (in Docker)
                                                                        │
                                                                 MistressRuntime
                                                                        │
                                                                  Vibe AgentLoop
                                                                  (Mistral LLM)
```
You message the bot through SimpleX. The message flows through rate limiting, ACL checks, and audit logging, then hits Vibe's AgentLoop which calls the Mistral LLM. Every tool call the LLM makes is checked against a sandbox policy before execution. After the response is sent, memories are extracted in the background.
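The inbound flow described above can be sketched as a minimal pipeline. Everything here (the role table, `acl_permits`, `handle_message`, the echo reply) is an illustrative stand-in under assumed rules, not the project's actual API:

```python
# Toy stand-ins for the real rate limiter, ACL, audit log, and AgentLoop.
allowed_roles = {"alice": "admin", "bob": "readonly"}
audit_log: list[dict] = []  # real version writes JSONL to disk

def acl_permits(user: str, action: str) -> bool:
    # Assumed toy rule: unknown users and readonly users may not chat.
    role = allowed_roles.get(user)
    return role is not None and not (role == "readonly" and action == "chat")

def handle_message(user: str, text: str) -> str:
    if not acl_permits(user, "chat"):
        return "not authorized"
    audit_log.append({"user": user, "text": text})  # audit before execution
    return f"echo: {text}"                          # stand-in for the LLM reply

assert handle_message("alice", "hi") == "echo: hi"
assert handle_message("bob", "hi") == "not authorized"
```

The real pipeline additionally rate-limits before the ACL check, policy-checks each tool call inside the agent loop, and extracts memories in a background task after the reply is sent.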
See ARCHITECTURE.md for the full routing scheme, permissions matrix, and security layers.
- SimpleX Chat — zero-metadata E2E encrypted messaging
- No accounts, no phone numbers, no bot tokens
- Auto-accept contacts, connection link printed on start
- Core Memory — persistent facts per user (`MEMORY.md`), always in the system prompt
- Session Logs — daily timestamped notes, today + yesterday auto-loaded
- Vector Search — sqlite-vec + FTS5 hybrid retrieval (70% semantic / 30% keyword), local embeddings via fastembed (no API calls)
- LLM auto-extracts facts after each conversation exchange
- Human-readable Markdown files — browse and edit your own memories
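The 70% semantic / 30% keyword hybrid ranking amounts to a weighted score fusion. This is a rough sketch with invented document IDs and scores; in the real system the two score sets would come from sqlite-vec and FTS5 respectively:

```python
def fuse(semantic: dict[str, float], keyword: dict[str, float],
         w_sem: float = 0.7, w_kw: float = 0.3) -> list[tuple[str, float]]:
    """Combine per-document scores from two retrievers into one ranking."""
    ids = semantic.keys() | keyword.keys()
    scored = {i: w_sem * semantic.get(i, 0.0) + w_kw * keyword.get(i, 0.0)
              for i in ids}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# "a" wins on semantic similarity alone (~0.63), beating "b" (~0.58).
ranked = fuse({"a": 0.9, "b": 0.4}, {"b": 1.0, "c": 0.8})
assert ranked[0][0] == "a"
```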
- No `auto_approve` — every tool call goes through a custom approval callback
- Sandbox policy — config, .env, memory, and database are protected; workspace, skills, mcp, cron are writable
- Bash command filter — denies `sudo`, `pip install`, `rm` on protected paths, `curl | bash`, etc.
- Rate limiting — per-user token bucket
- ACL — admin/user/readonly roles
- Audit logging — all tool executions logged to JSONL
- Secrets vault — Fernet-encrypted store for API keys; agent can write but never read back values
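A per-user token bucket of the kind mentioned above fits in a few lines. This is a generic sketch; the `rate` and `capacity` values are arbitrary examples, not the project's defaults:

```python
import time

class TokenBucket:
    """Refill at `rate` tokens/second up to `capacity`; one bucket per user."""
    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=5)
assert all(bucket.allow() for _ in range(5))  # burst up to capacity
assert not bucket.allow()                     # sixth call is throttled
```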
- Skills — drop a `SKILL.md` into `skills/` and Vibe's SkillManager auto-discovers it
- MCP Tools — define servers in `mcp/servers.json` and they're auto-connected
- Cron Scheduler — agent can create scheduled tasks that run scripts from `workspace/scripts/`
- Workspace — dedicated folder where the agent can write scripts and output files
- Docker deployment with healthcheck and graceful shutdown
- FastAPI gateway with `/health` and `/sessions` endpoints
- SQLite persistence for users, sessions, and memory vectors
- Multi-instance support (multiple bots on one machine)
Each instance gets a `mistress-ctl` script:

```bash
~/.mistress/instances/mistress/mistress-ctl start    # Start the bot
~/.mistress/instances/mistress/mistress-ctl stop     # Stop
~/.mistress/instances/mistress/mistress-ctl restart  # Restart
~/.mistress/instances/mistress/mistress-ctl logs     # Tail logs
~/.mistress/instances/mistress/mistress-ctl status   # Check status
~/.mistress/instances/mistress/mistress-ctl update   # Pull latest + restart
```

Each instance lives in its own directory:

```
~/.mistress/instances/{name}/
├── config.toml        # Protected — agent cannot modify
├── .env               # Protected — API keys
├── data/
│   ├── mistress.db    # Protected — session/user persistence
│   └── memory/        # Protected — only MemoryManager writes here
├── workspace/         # Agent sandbox — scripts, output
│   ├── scripts/
│   └── output/
├── skills/            # Vibe SkillManager auto-discovers
├── mcp/               # MCP server configs
└── cron/              # Scheduled task definitions
```
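The writable/protected split in the tree above reduces to a path check: a write is allowed only if it resolves inside one of the writable top-level folders. This is an illustrative sketch, not the actual sandbox policy code:

```python
from pathlib import Path

# Assumed writable roots, per the directory tree; everything else is denied.
WRITABLE = {"workspace", "skills", "mcp", "cron"}

def may_write(instance_root: Path, target: Path) -> bool:
    """True only if `target` falls under a writable top-level folder."""
    try:
        rel = target.resolve().relative_to(instance_root.resolve())
    except ValueError:
        return False  # outside the instance directory entirely
    return rel.parts[0] in WRITABLE if rel.parts else False

root = Path("/home/user/.mistress/instances/mistress")
assert may_write(root, root / "workspace" / "scripts" / "job.py")
assert not may_write(root, root / ".env")          # protected file
assert not may_write(root, Path("/etc/passwd"))    # escape attempt
```

Note that resolving the path first also defeats `../` traversal tricks, since the check runs on the normalized absolute path.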
```bash
git clone https://github.com/mistress-agent/mistress.git
cd mistress
python3 -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"

# Authenticate with Mistral
mistress auth

# Local chat (no SimpleX needed)
mistress chat

# Run tests
pytest tests/ -v
```

- Mistral Vibe — agent runtime (AgentLoop, tools, skills, LLM backend)
- SimpleX Chat — private messaging (CLI WebSocket API)
- FastAPI — REST API gateway
- SQLAlchemy + aiosqlite — async persistence
- sqlite-vec — vector search
- fastembed — local embeddings (bge-small-en-v1.5, ONNX)
- Python 3.12+, fully async