Your AI finally remembers.
Website · Docs · Dashboard · GitHub
Memory Crystal is a persistent cognitive memory layer for AI assistants. It captures every conversation, extracts what matters, stores it in a vector-indexed knowledge graph, and injects the right memories before each response. Your AI stops forgetting between sessions.
Ships as an OpenClaw plugin, an MCP server for any compatible host, a Next.js dashboard, and a Convex-backed multi-tenant cloud.
This isn't a vector database with a chat wrapper. The Context Engine is an active memory system that runs before every AI response.
```
               User message arrives
                        │
                        ▼
┌────────────────────────────────────────────────┐
│                 CONTEXT ENGINE                 │
│                                                │
│ 1. Time-ordered recent window (last ~30 msgs,  │
│    7k char budget)                             │
│ 2. Semantic search across STM + LTM            │
│ 3. Knowledge graph boost — connected           │
│    memories ranked higher                      │
│ 4. Adaptive recall mode (general/focused/deep) │
│ 5. Inject top memories + recent context into   │
│    model context                               │
│                                                │
└────────────────────────────────────────────────┘
                        │
                        ▼
          AI responds with full context
                        │
                        ▼
┌────────────────────────────────────────────────┐
│                MEMORY EXTRACTION               │
│                                                │
│ 1. Capture raw message → STM                   │
│ 2. LLM extracts durable memories → LTM         │
│ 3. Async graph enrichment connects memories    │
│                                                │
└────────────────────────────────────────────────┘
```
Every response is informed by what came before. Every conversation feeds the next one.
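The loop above can be sketched in a few lines of JavaScript. Every helper on `deps` is a hypothetical stand-in, not the actual Memory Crystal API; the real pipeline lives in `convex/crystal/`:

```javascript
// Minimal sketch of the per-turn memory loop. All names on `deps` are
// illustrative stand-ins for the real backend functions.
function render(recent, memories) {
  return `Recent:\n${recent.join("\n")}\n\nMemories:\n${memories.join("\n")}`;
}

async function handleTurn(userMessage, deps) {
  // Steps 1-4: recent window plus semantic recall (graph-boosted, mode auto-picked)
  const recent = await deps.recentMessages({ limit: 30, charBudget: 7000 });
  const memories = await deps.recall(userMessage, { mode: "auto" });

  // Step 5: inject the assembled context, then generate the reply
  const reply = await deps.generate({ system: render(recent, memories), user: userMessage });

  // Capture the raw turn into STM; extraction and graph enrichment run async
  await deps.captureSTM(userMessage, reply);
  deps.enqueueExtraction({ userMessage, reply }); // LLM → LTM, then graph links
  return reply;
}
```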
| Layer | What it stores | Retention |
|---|---|---|
| Short-term (STM) | Raw messages, verbatim | Rolling window (7–90 days by tier) |
| Long-term (LTM) | Extracted facts, decisions, lessons, people, rules | Forever, vector-indexed |
STM gives your AI perfect short-term recall. LTM gives it permanent knowledge. Both are searched together, every turn.
Memories don't exist in isolation. An async background job connects related memories into a graph — decisions link to the lessons that informed them, people link to the projects they worked on, rules link to the events that created them.
When the Context Engine searches, memories with strong graph connections to the current topic get ranked higher. Your AI doesn't just remember facts — it understands relationships.
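One way to picture the boost (an assumed scoring formula; the actual weights are internal): each candidate's vector-similarity score gets a bonus per graph edge it shares with other candidates for the same query.

```javascript
// Illustrative graph-boosted ranking, NOT the actual Memory Crystal scoring:
// similarity plus a fixed bonus for every edge into another candidate.
function rankMemories(candidates, { boost = 0.15 } = {}) {
  const matched = new Set(candidates.map((c) => c.id));
  return candidates
    .map((c) => {
      // Count graph edges that point at other memories in the candidate set
      const links = c.edges.filter((id) => matched.has(id)).length;
      return { ...c, score: c.similarity + boost * links };
    })
    .sort((a, b) => b.score - a.score);
}
```

With this scheme a moderately similar but well-connected memory can outrank a slightly more similar isolated one, which is the behavior the paragraph above describes.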
| Store | Purpose | Example |
|---|---|---|
| `sensory` | Raw observations and signals | "Andy sounds frustrated about the deploy" |
| `episodic` | Events and experiences | "We shipped v2 on March 15" |
| `semantic` | Facts and knowledge | "The API uses Convex for the backend" |
| `procedural` | How-to and workflows | "Deploy with `npm run convex:deploy`" |
| `prospective` | Plans and future intentions | "Need to add billing webhooks next sprint" |
Each store has different retention rules and search weights. The Context Engine knows which stores matter for which questions.
decision · lesson · person · rule · event · fact · goal · workflow · conversation
Memories are tagged on extraction so recall is precise. Ask "why did we choose Convex?" and you get decisions. Ask "how do I deploy?" and you get procedures.
Three recall modes, automatically selected based on context:
- General — broad semantic search, good for open-ended questions
- Focused — narrow search with high relevance threshold, good for specific lookups
- Deep — multi-pass search with graph traversal, good for complex reasoning
The Context Engine picks the right mode. You don't configure anything.
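A rough idea of what such a selector could look like. This is a hypothetical heuristic for illustration only; the shipped selection logic is internal and may differ entirely:

```javascript
// Hypothetical recall-mode selector (illustrative, not the shipped logic).
function pickRecallMode(query) {
  const q = query.toLowerCase();
  // Causal or comparative questions benefit from multi-pass graph traversal
  if (/\b(why|explain|compare)\b/.test(q)) return "deep";
  // Short, targeted questions: narrow search with a high relevance threshold
  if (q.trim().split(/\s+/).length <= 6) return "focused";
  // Everything else: broad semantic search
  return "general";
}
```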
```bash
curl -fsSL https://memorycrystal.ai/crystal | bash
```

This installs the OpenClaw plugin and sets up your memory backend. Choose during install:
- Cloud — hosted at memorycrystal.ai, zero config
- Self-hosted — your own Convex deployment, full data sovereignty
- Local — SQLite only, no cloud, context engine only
After install, your AI has memory. Every conversation is captured, extracted, and searchable.
14 tools exposed via MCP and the OpenClaw plugin:
| Tool | What it does |
|---|---|
| `crystal_remember` | Store a memory manually — decisions, facts, lessons, anything worth keeping |
| `crystal_recall` | Semantic search across all long-term memory |
| `crystal_what_do_i_know` | Snapshot of everything known about a topic |
| `crystal_why_did_we` | Decision archaeology — understand why a past decision was made |
| `crystal_checkpoint` | Save a memory snapshot at a milestone |
| `crystal_search_messages` | Search verbatim conversation history (STM) |
| `crystal_preflight` | Pre-flight check before risky actions — returns relevant rules and lessons |
| `crystal_forget` | Archive a memory |
| `crystal_wake` | Session startup — loads briefing and guardrails |
| `crystal_recent` | Fetch recent messages for short-term context |
| `crystal_stats` | Memory and usage statistics |
| `crystal_who_owns` | Find who owns a file, module, or area |
| `crystal_explain_connection` | Explain the relationship between two concepts |
| `crystal_dependency_chain` | Trace dependency chains between entities |
These tools work in any MCP-compatible host (Claude Desktop, Cursor, Windsurf, etc.) or automatically within OpenClaw.
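From any MCP client, invoking one of these tools boils down to a standard JSON-RPC 2.0 `tools/call` request. The sketch below shows the payload shape against the streamable HTTP server variant; the endpoint URL is a placeholder, not a documented route:

```javascript
// Build a standard MCP JSON-RPC 2.0 "tools/call" payload.
function buildToolCall(name, args, id = 1) {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

// Hypothetical usage against the HTTP MCP server; the URL is a placeholder
// and the API key is read from the environment.
async function crystalRecall(query) {
  const res = await fetch("https://memorycrystal.ai/mcp", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.MEMORY_CRYSTAL_API_KEY}`,
    },
    body: JSON.stringify(buildToolCall("crystal_recall", { query })),
  });
  return res.json();
}
```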
```
memorycrystal/
├── plugin/              OpenClaw plugin (crystal-memory)
│   ├── index.js         Plugin entry, hooks into conversation lifecycle
│   └── store/           Local SQLite store (offline fallback)
├── mcp-server/          MCP server (@memorycrystal/mcp-server)
│   └── src/index.ts     Exposes crystal_* tools over MCP protocol
├── packages/
│   └── mcp-server/      Streamable HTTP MCP server variant
├── apps/
│   └── web/             Next.js 15 dashboard (React 19, Tailwind 4)
│       ├── Memories viewer, session browser, API key management
│       └── Device flow auth (RFC 8628-style)
├── convex/              Backend (Convex)
│   ├── schema.ts        Multi-tenant schema
│   └── crystal/         Capture, recall, sessions, graph enrichment
└── scripts/             Install, bootstrap, doctor, enable/disable
```
Unit tests (`convex/crystal/__tests__/`) — 5 test files using Vitest + convex-test:

| File | Covers |
|---|---|
| `message-search.test.ts` | Message vector search |
| `messageEmbeddings.test.ts` | Embedding generation and storage |
| `messageTurns.test.ts` | Multi-turn message handling |
| `multitenancy.test.ts` | Cross-tenant isolation |
| `recall-ranking.test.ts` | Recall result ranking and scoring |
Integration tests (packages/mcp-server/test/) — end-to-end tests against the MCP server HTTP API.
```bash
# Run unit tests
npx vitest            # all unit tests (watch mode)
npx vitest run        # single run (CI)

# Run integration tests (requires MEMORY_CRYSTAL_API_KEY env var)
node packages/mcp-server/test/integration.test.js

# Smoke test (plugin health check)
npm run test:smoke

# Capture end-to-end test
npm run test:capture-e2e
```

- Multi-tenant isolation — each user's memories are fully isolated at the database level; owner checks on every memory retrieval
- API keys — SHA-256 hashed at rest; plaintext keys are never stored; transient device-flow tokens cleared after retrieval
- Bearer auth — all API and MCP endpoints require `Authorization: Bearer <key>`
- Per-key rate limiting — rate limits enforced per API key on all endpoints
- Audit logging — all API actions (admin, impersonation, data access) are logged to `crystalAuditLog`
- Prompt injection mitigation — recalled memories are injected as informational context only; wake briefings include a security header instructing the model to treat recalled content as non-directive
- Auto-updater integrity — `plugin/update.sh` verifies SHA-256 checksums against `checksums.txt` when available; update aborts on mismatch
- Device flow auth — RFC 8628-style device code flow for CLI key provisioning
- Local mode — SQLite fallback, your data never leaves your machine
Run everything on your own infrastructure. You need a Convex project and an OpenAI API key (see the env vars in the steps below).

Important: The default config points to the hosted Convex deployment. To self-host, you must deploy your own Convex backend and set the `CONVEX_DEPLOYMENT` environment variable so all data stays on your infrastructure.
```bash
# Clone the repo
git clone https://github.com/illumin8ca/memorycrystal.git
cd memorycrystal
npm install

# 1. Create a Convex project at https://dashboard.convex.dev and note
#    your deployment name (e.g. "your-project-123")

# 2. Deploy the schema and functions to YOUR Convex backend
CONVEX_DEPLOYMENT=prod:your-project-123 npx convex deploy

# 3. Set env vars for the plugin / MCP server
#    In mcp-server/.env (and your shell):
#    CONVEX_URL=https://your-project-123.convex.cloud
#    OPENAI_API_KEY=sk-...

# 4. Enable the plugin and verify
npm run crystal:enable
npm run crystal:doctor
```

If you skip setting `CONVEX_DEPLOYMENT` / `CONVEX_URL`, the system falls back to the hosted cloud backend at `<your-deployment>.convex.cloud`, which is not self-hosting.
Full guide: docs/02-setup-guides/INSTALL.md
| Plan | Price | Memories | Support |
|---|---|---|---|
| Free | $0 | Self-hosted, unlimited | Community |
| Pro | $29/mo | 25,000 managed | |
| Ultra | $79/mo | Unlimited managed | Priority |
| Enterprise | Custom | Custom limits, SLAs | Dedicated |
Free is the full product — same code, same features, your Convex, your data. Paid plans are managed cloud at memorycrystal.ai so you don't have to run anything.
Memory Crystal is MIT open source. PRs welcome.
```bash
git clone https://github.com/illumin8ca/memorycrystal.git
cd memorycrystal
npm install
npm run dev
```

MIT — do whatever you want with it.
The hosted service at memorycrystal.ai is operated by Illumin8 Inc. The "Memory Crystal" name and brand are trademarks of Illumin8 Inc.