A rationalist social network you run locally from your own desktop — and, underneath, a personal relationship manager (PRM), a lightweight CRM, and a JIRA-style task tracker for your own life. QueryKey uses a local AI agent to help you keep up with the people and commitments in your life, while respecting your privacy and the privacy of everyone you talk to.
The name. "QueryKey" comes from the Q / K / V (query, key, value) projections in a transformer's attention mechanism. Your day, relationships, and tasks are the values; the local agent attends over them by matching queries computed from your current intent against keys built from your notes, chat logs, and prior conversations.
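The metaphor maps onto the standard scaled dot-product attention formula, where queries are scored against keys to decide which values get surfaced:

```math
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right)V
```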
🌐 Website: https://querykey.emmaleonhart.com
Status: early. The QueryKey vision below is the target; the codebase is an in-progress engine being built and reoriented toward it. See Status for what is real today versus planned.
QueryKey runs on your machine. It watches the messy, unstructured streams of how you actually communicate — chat logs, pasted conversations, screenshots, voice notes — and uses local AI to build a private model of the people and commitments in your life. It then helps you, proactively and quietly, keep those relationships in good standing.
Principles:
- Local-first for privacy. The server runs on your own machine. Nothing has to leave your desktop. The privacy that matters is not just yours — it's the privacy of the people you talk about too.
- The tool serves you. You never reformat your life to fit a form. You communicate the way you already do; the system meets you there.
- AI does the hard work, and admits when it's unsure. Extraction carries confidence scores. When the system isn't sure, it asks instead of guessing silently. Everything it records is visible and auditable — nothing hidden.
- Markdown on your disk is the source of truth. Tasks, events, people, and notes live as markdown files (YAML frontmatter for structured fields + freeform body — Obsidian-style, useful even without QueryKey installed), tracked in a git repository. The AI operates on those files; it does not lock your life inside an app database.
- The knowledge graph is derived, not canonical. People, commitments, and their links are projected out of the markdown into an embedded graph (Loca / SutraDB — see Architecture) for fast structured queries. The graph is a secondary index you can always rebuild from the files.
- Agent-agnostic via MCP. QueryKey exposes itself as an MCP server, so any agent (local Gemma by default — cheap and private — or Claude/GPT if you choose) can attend over your graph and act on your files.
- Rationalist by disposition. Confidence scores, "I'm not sure, want me to ask?", and an auditable record are the central UX, not a footnote.
- Optionally peer-to-peer — and private by default. You can use it 100% solo. But each person can also broadcast a card (a markdown file: what they're offering and looking for — their key and query). Cards sync directly peer-to-peer (no central server; GitHub bootstraps identity). Your own card is git-tracked (so you can revert); other people's cards are git-ignored on your machine (use them in the moment, don't archive their history). Card changes propagate on a 24-hour delay — a drunk mistake at 11pm is fixable by morning and no one ever saw it. The network inverts the usual model: absence of history is the default; persistence takes deliberate effort by an observer.
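As a concrete illustration of the markdown-on-disk principle above, a person file might look like the following. Every field name here is a hypothetical example, not the actual schema (that lives in `docs/markdown-schema.md`):

```markdown
---
type: person
name: Alex Rivera
last_contact: 2025-06-14
extraction_confidence: 0.82
---

Met at the Berlin meetup. Prefers async messages.
Promised to send the reading list by Friday.
```

The file stays useful in any markdown editor (Obsidian included) even with QueryKey uninstalled, which is the point.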
It is PRM + lightweight CRM + JIRA-style task tracker in one, for one person — and then, by selectively surfacing nodes you've already built, an opt-in positive-sum social network. Sequencing: the private PRM is built first (useful solo, zero network effects, and it builds the graph the cards are later a window into); the peer-to-peer card layer comes second; the MCP server is present from day one. See Status for what is real today versus planned.
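The 24-hour propagation valve on cards can be sketched in a few lines. This is an illustration in Python, not QueryKey's implementation; the type and field names are assumptions:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Assumed constant, matching the "24h propagation delay" described above.
PROPAGATION_DELAY = timedelta(hours=24)

@dataclass
class CardEdit:
    edited_at: datetime
    reverted: bool = False

def ready_to_propagate(edit: CardEdit, now: datetime) -> bool:
    """A card edit reaches peers only after surviving, unreverted, for a full day."""
    if edit.reverted:
        return False  # reverted before the deadline: no peer ever sees it
    return now - edit.edited_at >= PROPAGATION_DELAY

now = datetime(2025, 6, 15, 9, 0)
late_night_mistake = CardEdit(edited_at=datetime(2025, 6, 14, 23, 0))
old_edit = CardEdit(edited_at=datetime(2025, 6, 13, 9, 0))
print(ready_to_propagate(late_night_mistake, now))  # False: still private, revertable
print(ready_to_propagate(old_edit, now))            # True: survived 24h, goes out
```

The asymmetry is what makes "absence of history is the default" cheap: the observer must take deliberate action to persist anything, while the author gets a free revert window.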
| Component | Stack | Where |
|---|---|---|
| Desktop/mobile app | Flutter (Dart SDK ^3.10.8); `provider`, `web_socket_channel`, `http`, `uuid`, `intl` | `app/` |
| Server | Rust (crate `querykey-server`: axum, tokio, reqwest) — compiles & runs; structural port with TODOs | `server/` |
| Source of truth | Markdown files + git — implemented; YAML frontmatter + body; the graph is derived and rebuilt from it | `server/src/vault/` → `$VAULT_DIR` |
| AI engine | Model-agnostic via an MCP server (default agent: local Gemma — cheap & private; Claude/GPT optional). Today's implementation: OpenClaw via a local WSL gateway (port 18789) | `server/src/openclaw/` |
| Knowledge graph | Loca (formerly SutraDB) — the author's embedded Rust graph-vector-time DB; the graph is derived from the markdown, not canonical. Wired via `loka-core` behind `--features loca`; in-memory fallback otherwise. Fuseki is not used (removed with the Go server) | `server/src/graph/` + `../SutraDB` |
| Ingest surface | Local markdown + pasted text / screenshots / voice notes (Discord deprioritized — `todo.md` Phase Z) | `server/src/ingest.rs` |
| Identity / sync | GitHub (usernames as identity, repo as sync) — a thin, swappable abstraction | (planned) |
| Peer-to-peer | Card exchange — pure P2P, no central server, 24h propagation delay | (planned) |
| Real-time | WebSocket hub | `server/src/ws.rs` |
Local endpoints when running: server `http://127.0.0.1:8000`, health `/health`, WebSocket `ws://127.0.0.1:8000/ws/chat`, OpenClaw gateway `http://127.0.0.1:18789`.
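The "derived, not canonical" graph row can be made concrete with a toy rebuild pass. This Python sketch is not QueryKey's code (the real vault is Rust, the parser handles full YAML, and field names here are assumptions); it only shows the shape of the idea — walk the vault, parse frontmatter, emit triples:

```python
import re

def parse_frontmatter(text: str) -> dict:
    """Naive YAML-ish frontmatter parser: only flat `key: value` pairs."""
    m = re.match(r"^---\n(.*?)\n---\n?", text, re.DOTALL)
    fields = {}
    if m:
        for line in m.group(1).splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                fields[key.strip()] = value.strip()
    return fields

def rebuild_graph(vault: dict[str, str]) -> set[tuple[str, str, str]]:
    """Project (subject, predicate, object) triples out of a {filename: markdown} vault."""
    triples = set()
    for name, text in vault.items():
        for key, value in parse_frontmatter(text).items():
            triples.add((name, key, value))
    return triples

vault = {
    "people/alex.md": "---\ntype: person\nname: Alex\n---\nMet at the meetup.",
    "tasks/send-list.md": "---\ntype: task\nstatus: open\nassignee: Alex\n---\nSend reading list.",
}
graph = rebuild_graph(vault)
print(("people/alex.md", "type", "person") in graph)  # True
```

Because the graph is a pure function of the files, deleting it loses nothing; that is the property the "secondary index you can always rebuild" principle relies on.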
This is early. Roughly: planning and data models are complete; the AI bridge is functional; most product behavior is scaffolding.
Working / functional
- Canonical markdown vault (`server/src/vault/`, `$VAULT_DIR`): the store of record. API + ingest write people/tasks/events markdown (YAML frontmatter + body) first; the Loca graph is a derived index rebuilt from the vault on startup. `update_task` mutates the markdown; reads are full-fidelity (the lossy-graph / epoch-timestamp problem is gone). Round-trip is lossless (unit-tested) and survives restarts.
- Rust server (`server/`) is the only server — Go fully ported then deleted (recoverable from git history). Compiles in all three configs (`cargo build`, `--features loca`, `--features discord`) with zero warnings; boots, detects the OpenClaw gateway, opens a Loca `.sdb` store, and serves the HTTP API + `/health` + WebSocket + SPARQL passthrough + an MCP endpoint (`/mcp`).
- OpenClaw bridge: gateway detection, incremental SSE streaming, analyze, supervised retry + health-check gateway lifecycle, graceful stop.
- Data models: full entity set ported to Rust, JSON contract preserved.
- Loca/SutraDB derived graph (`--features loca`): person/task/message/conflict persisted with full fields; SPARQL query bridge works; typed read-back of persons & tasks (smoke-verified); `insert_triples` via N-Triples.
- Ingest pipeline: relaxed-schema parse → typed models → store + typed GraphDiff broadcast over the WebSocket hub.
- MCP server (`/mcp`): JSON-RPC `initialize` / `tools/list` / `tools/call`.
Honest limitations / not yet built
- Conflict/OpenQuestion/FollowUp on-disk forms — DONE (Round 6): canonical markdown + vault-first wiring; `resolve_conflict`, `resolve_question`, `create_followup` are real markdown mutations (no more `not_implemented`).
- Semantic wikilinks — DONE (Round 8): `[[Target]]` / `[[property:Target]]` (single-colon typed triples) in any entity body become derived edges with explicit resolution precedence + dangling handling; `GET /api/links` + per-entity backlinks live from the vault.
- Status-workflow enforcement — DONE (Round 9): Task/Conflict/Question state machines enforced at the API mutation boundary (a resolved conflict can't be un-resolved; `done` can't rewind to `extracted`) — hand-edited markdown stays legal.
- Full canonical entity set on disk — DONE (Round 10): Instruction + VoiceProfile vault forms added; nothing is graph-only or unimplemented anymore. Instruction is written by ingest; both have read/upsert API.
- Calendar — DONE (Round 11): optional Event `recurrence` (RFC 5545 subset) + `GET /api/calendar?from&to` merged agenda (event occurrences + deadlined tasks, movable-vs-fixed), live from the vault.
- Agent-drafted card — DONE (Round 12): `POST /api/card/draft` drafts your key/query from a model-agnostic PRM digest within the editable `agents.md` envelope; deterministic humble heuristic when offline; never saved (approve via `PUT /api/card`). Still parked: the P2P transport (+ discovery) — explicit user steering.
- Peer-to-peer card layer — format + local layer DONE (Round 7): card format/parse, the `.gitignore` asymmetry, the 24h propagation safety valve + revert-before-propagation, read-only `peers/`, `/api/card|identity|peers`, swappable GitHub identity abstraction (`docs/card-format.md`). Still open: the P2P transport itself (what actually moves a card between peers) + discovery — the format deliberately does not assume it; this is now the gating question.
- MCP stdio/SSE transports + `agents.md`-governed write tools.
- The follow-up engine, conflict resolution, daily check-ins; calendar/scheduling; audio/voice pipeline; external tool sync.
- Discord is deprioritized (feature-gated serenity skeleton only) — see `todo.md` Phase Z.
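The semantic wikilink extraction noted above can be sketched roughly as follows. This is Python for illustration only: the real resolution precedence and dangling-link handling are omitted, and the default `related` predicate is an assumption:

```python
import re

# [[Target]] or [[property:Target]] — single colon separates the typed predicate.
WIKILINK = re.compile(r"\[\[(?:([^\[\]:]+):)?([^\[\]:]+)\]\]")

def extract_links(source: str, body: str) -> list[tuple[str, str, str]]:
    """Turn body wikilinks into derived (subject, predicate, target) edges."""
    edges = []
    for prop, target in WIKILINK.findall(body):
        edges.append((source, prop or "related", target.strip()))
    return edges

body = "Owes [[Alex]] a reply; see [[blocked_by:Fix the vault parser]]."
print(extract_links("tasks/reply.md", body))
# [('tasks/reply.md', 'related', 'Alex'),
#  ('tasks/reply.md', 'blocked_by', 'Fix the vault parser')]
```

Since the edges are recomputed from entity bodies, hand-editing a wikilink in Obsidian updates the graph on the next rebuild with no separate migration step.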
See `queue.md` for the authoritative near-term plan and `todo.md` for the full phased roadmap.
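To illustrate the "RFC 5545 subset" idea from the calendar item above: recurrence expansion takes a rule string and a start date and yields occurrences. This toy expander is not QueryKey's implementation and handles only `FREQ=DAILY|WEEKLY` with `COUNT`:

```python
from datetime import date, timedelta

def expand_rrule(start: date, rrule: str) -> list[date]:
    """Expand a tiny RFC 5545 RRULE subset (FREQ=DAILY|WEEKLY;COUNT=n) into occurrences."""
    parts = dict(p.split("=") for p in rrule.split(";"))
    step = timedelta(days=7 if parts["FREQ"] == "WEEKLY" else 1)
    count = int(parts.get("COUNT", 1))
    return [start + step * i for i in range(count)]

occurrences = expand_rrule(date(2025, 6, 2), "FREQ=WEEKLY;COUNT=3")
print(occurrences)  # three Mondays: June 2, 9, and 16
```

A merged agenda endpoint then only has to union expanded occurrences with deadlined tasks inside the `?from&to` window.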
Windows + WSL is the current target. Prerequisites:
- Rust (stable, via rustup) — `cargo` on `PATH`
- Flutter (Dart SDK 3.10.8+) on `PATH`
- WSL Ubuntu with OpenClaw installed (for AI features; the server runs without it, but AI chat/extraction needs the gateway)
- Optional: the sibling `../SutraDB` checkout for the Loca graph store (`--features loca`); without it the server uses an in-memory graph
Then, from the repo root:

```
!run.bat
```

That script builds the Rust server (`cargo build --features loca`, falling back to the in-memory build), runs `flutter pub get`, starts the OpenClaw gateway in WSL, launches the server, and runs the Flutter app on Windows (`flutter run -d windows`). Closing the app window tears everything back down.

`!runClaude.bat` just opens Claude Code at the repo root.
| Path | What it is |
|---|---|
| `app/` | Flutter app (Dart) — desktop-first; Chat / Tasks / Ingest screens |
| `server/` | Rust server (`querykey-server`) — the only server: ingest, agent bridge, WebSocket, MCP, Loca graph store |
| `docs/` | `architecture.md`, `data-model.md`, `markdown-schema.md`, `card-format.md`, `versions-comparison.md`, `why-go.md` |
| `chat/` | Vision corpus (chat-log exports); gitignored except its README — private context, not a spec |
| `dev_scheduling/` | Dev-time agent data (`receipts/discord/`), committed so CI can write to it |
| `queue.md` | Authoritative near-term plan / recovery dump |
| `todo.md` | Full phased roadmap |
| `CLAUDE.md` | Workflow rules and architecture decisions for working in this repo |
| `!run.bat`, `!runClaude.bat` | Windows run scripts |