Local graph-based memory plugin for OpenClaw — inspired by Supermemory. Runs entirely on your machine with no cloud dependencies.
Disclaimer: This is an independent project. It is not affiliated with, endorsed by, or maintained by the Supermemory team. The name reflects architectural inspiration, not a partnership.
- LLM Fact Extraction — Extracts discrete, entity-centric facts from each conversation turn via an LLM subagent, matching Supermemory's cloud approach locally.
- Graph Memory — Automatic entity extraction, relationship tracking (Updates / Extends / Derives), memory versioning with `parent_memory_id` chains.
- User Profiles — Static long-term facts + dynamic recent context, automatically maintained and injected into the system prompt. Static memories (`is_static`) are protected from decay.
- Automatic Forgetting — Temporal expiration for time-bound facts (including absolute dates like "January 15"), decay for low-importance unused memories, contradiction resolution.
- Hybrid Search — BM25 keyword (FTS5) + graph-augmented multi-hop retrieval with MMR diversity re-ranking. Superseded memories are filtered at the query level. Vector similarity (sqlite-vec) used when available.
- Auto-Recall — Injects relevant memories + user profile before every AI turn via the `before_prompt_build` hook.
- OpenClaw Runtime Integration — Registers memory tools, a built-in memory search manager, and a pre-compaction memory flush plan when the host API supports them.
```mermaid
flowchart LR
    subgraph input ["💬 Conversation"]
        A[User message] --> B[AI response]
    end

    subgraph extract ["🧠 Memory Engine"]
        C[Extract discrete facts via LLM]
        C --> D[Deduplicate]
        D --> E[Classify & embed]
    end

    subgraph graph ["🔗 Knowledge Graph"]
        F["Link entities\n(people, projects)"]
        F --> G{Relationship detection}
        G --> H["🔄 Updates — new fact\nsupersedes old"]
        G --> I["➕ Extends — enriches\nexisting fact"]
        G --> J["🔮 Derives — inferred\nconnection"]
    end

    subgraph recall ["🔎 Recall"]
        K["User Profile\n(static + dynamic facts)"]
        L["Hybrid Search\n(vector + keyword + graph)"]
        K --> M[Inject into next AI turn]
        L --> M
    end

    B --> C
    E --> F
    J --> K
    H --> K
    I --> K
```
- You talk to your AI normally. Share preferences, mention projects, discuss problems.
- Auto-capture uses your configured LLM to extract discrete facts from the last conversation turn (both user and assistant messages).
- Graph engine links each extracted fact to entities and detects relationships:
- Updates — "Iván moved to Copenhagen" supersedes "Iván lives in Madrid"
- Extends — "Iván leads a research team of 4" enriches "Iván is an AI Scientist at Santander"
- Derives — Inferred connections from shared entities
- Auto-recall injects your user profile + relevant memories before each AI turn.
- Automatic forgetting cleans up expired time-bound facts and decays unused low-importance memories.
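The "Updates" relationship described above can be sketched in a few lines. This is a minimal illustration of the superseding mechanic, not the plugin's actual implementation; `Memory`, `applyUpdate`, and `sharesAttribute` are hypothetical names:

```typescript
interface Memory {
  id: number;
  text: string;
  entities: string[];
  parentMemoryId?: number;
  superseded?: boolean;
}

// Minimal sketch: a new fact supersedes an old one when both mention the
// same entity and a caller-supplied predicate decides they describe the
// same attribute (e.g. where someone lives). The old memory is kept but
// marked superseded; the new one chains back to it for versioning.
function applyUpdate(
  existing: Memory[],
  incoming: Memory,
  sharesAttribute: (a: Memory, b: Memory) => boolean
): Memory {
  for (const old of existing) {
    const sharedEntity = old.entities.some((e) => incoming.entities.includes(e));
    if (!old.superseded && sharedEntity && sharesAttribute(old, incoming)) {
      old.superseded = true;            // filtered out of future recall
      incoming.parentMemoryId = old.id; // version chain
    }
  }
  return incoming;
}
```

Superseded memories stay in the database for the version history but are excluded from search results.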
```shell
openclaw plugins install openclaw-memory-supermemory
```

Edit `~/.openclaw/openclaw.json` and add both the memory slot and the plugin entry:

```jsonc
{
  plugins: {
    // REQUIRED: Assign this plugin to the memory slot
    slots: {
      memory: "openclaw-memory-supermemory"
    },
    // RECOMMENDED: Suppress the auto-load security warning
    allow: ["openclaw-memory-supermemory"],
    // Plugin configuration
    entries: {
      "openclaw-memory-supermemory": {
        enabled: true,
        config: {
          embedding: {
            provider: "openai",
            model: "text-embedding-3-small",
            apiKey: "${OPENAI_API_KEY}" // reads from env var
          },
          autoRecall: true,
          autoCapture: true
        }
      }
    }
  }
}
```

Important: The `slots.memory` line is required — without it, OpenClaw won't use the plugin even if it's installed.
Restart the OpenClaw gateway for the plugin to load.
```shell
openclaw supermemory stats
```

You should see output like:

```
Total memories: 0
Active memories: 0
Superseded memories: 0
Entities: 0
Relationships: 0
Vector search: unavailable
```

Zero counts are normal on first run. `Vector search: unavailable` is expected — see Vector Search below.
You need an embedding provider for semantic search. Choose one:
```jsonc
embedding: {
  provider: "openai",
  model: "text-embedding-3-small",
  apiKey: "${OPENAI_API_KEY}"
}
```

Set the environment variable before starting OpenClaw:

```shell
export OPENAI_API_KEY="sk-..."
```

Install Ollama and pull a model:

```shell
ollama pull nomic-embed-text
```

```jsonc
embedding: {
  provider: "ollama",
  model: "nomic-embed-text"
}
```

Any provider with an OpenAI-compatible `/v1/embeddings` endpoint works:

```jsonc
embedding: {
  provider: "openai",
  model: "your-model-name",
  apiKey: "${YOUR_API_KEY}",
  baseUrl: "https://your-provider.com/v1"
}
```

| Model | Provider | Dimensions |
|---|---|---|
| `nomic-embed-text` | Ollama | 768 |
| `text-embedding-3-small` | OpenAI | 1536 |
| `text-embedding-3-large` | OpenAI | 3072 |
| `mxbai-embed-large` | Ollama | 1024 |
| `all-minilm` | Ollama | 384 |
| `snowflake-arctic-embed` | Ollama | 1024 |
For other models, set `embedding.dimensions` explicitly.
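For example, a hypothetical self-hosted model that emits 1024-dimensional vectors would be configured like this (the model name, URL, and dimension here are placeholders):

```jsonc
embedding: {
  provider: "openai",
  model: "your-model-name",
  baseUrl: "https://your-provider.com/v1",
  dimensions: 1024 // must match the model's actual output size
}
```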
The AI uses these tools autonomously:
| Tool | Description |
|---|---|
| `memory_search` | Hybrid search across all memories (vector + keyword + graph) |
| `memory_store` | Save information with automatic entity extraction, relationship detection, and optional `isStatic` flag for permanent facts |
| `memory_forget` | Delete memories by ID or search query |
| `memory_profile` | View/rebuild the automatically maintained user profile |
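As an illustration, a `memory_store` invocation from the model might carry arguments like the following. The field names are indicative only, not a verified tool schema:

```jsonc
{
  "tool": "memory_store",
  "arguments": {
    "text": "Iván prefers dark mode in all editors",
    "isStatic": true // permanent fact, protected from decay
  }
}
```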
```shell
openclaw supermemory stats                   # Show memory statistics
openclaw supermemory search <query>          # Search memories
openclaw supermemory search "rust" --limit 5
openclaw supermemory profile                 # View user profile
openclaw supermemory profile --rebuild       # Force rebuild profile
openclaw supermemory wipe --confirm          # Delete all memories
```

After chatting with the AI, you can verify memories are being captured:
```shell
# Check memory counts increased
openclaw supermemory stats

# Search for something you mentioned
openclaw supermemory search "your topic"

# View your auto-built profile
openclaw supermemory profile
```

The plugin uses FTS5 keyword search + graph traversal by default. Vector similarity search requires sqlite-vec, which is bundled with OpenClaw's built-in memory system but not automatically available to external plugins.
If your OpenClaw build includes sqlite-vec, the plugin will detect and use it automatically.
Suppress it by adding:

```jsonc
plugins: {
  allow: ["openclaw-memory-supermemory"]
}
```

| Option | Type | Default | Description |
|---|---|---|---|
| `embedding.provider` | string | `"ollama"` | Embedding provider (`ollama`, `openai`, etc.) |
| `embedding.model` | string | `"nomic-embed-text"` | Embedding model name |
| `embedding.apiKey` | string | — | API key (cloud providers only, supports `${ENV_VAR}` syntax) |
| `embedding.baseUrl` | string | — | Custom API base URL |
| `embedding.dimensions` | number | auto | Vector dimensions (auto-detected for known models) |
| `autoCapture` | boolean | `true` | Auto-capture memories from conversations |
| `captureMode` | string | `"extract"` | `"extract"` (LLM fact extraction) or `"off"` (disable auto-capture) |
| `autoRecall` | boolean | `true` | Auto-inject memories + profile into context |
| `profileFrequency` | number | `50` | Rebuild user profile every N interactions |
| `entityExtraction` | string | `"pattern"` | Current implementation is pattern-based; `"llm"` is reserved and currently behaves the same as `"pattern"` |
| `forgetExpiredIntervalMinutes` | number | `60` | Minutes between forgetting cleanup runs |
| `temporalDecayDays` | number | `90` | Days before low-importance unused memories decay |
| `maxRecallResults` | number | `10` | Max memories injected per auto-recall |
| `vectorWeight` | number | `0.5` | Weight for vector similarity in hybrid search |
| `textWeight` | number | `0.3` | Weight for BM25 keyword search |
| `graphWeight` | number | `0.2` | Weight for graph-augmented retrieval |
| `dbPath` | string | `~/.openclaw/memory/supermemory.db` | SQLite database path |
| `captureMaxChars` | number | `2000` | Max message length for auto-capture |
| `debug` | boolean | `false` | Enable verbose logging |
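The three weight options combine per-source scores into one ranking score. A minimal sketch of the weighted blend follows; the normalization of each input to the 0..1 range is an assumption, not the plugin's exact formula:

```typescript
// Hypothetical sketch: blend per-source relevance scores (each assumed
// normalized to 0..1) using the configured weights. Defaults mirror
// vectorWeight / textWeight / graphWeight from the config table.
function hybridScore(
  vectorSim: number,
  bm25Norm: number,
  graphScore: number,
  weights = { vector: 0.5, text: 0.3, graph: 0.2 }
): number {
  return (
    weights.vector * vectorSim +
    weights.text * bm25Norm +
    weights.graph * graphScore
  );
}
```

Because the default weights sum to 1, a memory that scores perfectly on every source gets a combined score of 1.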
By default, the plugin uses your configured LLM to extract discrete, entity-centric facts from each conversation turn.
Input conversation:
"Caught up with Iván today. He's working at Santander as an AI Scientist now, doing research on knowledge graphs. He lives in Madrid and mentioned a deadline next Tuesday for a paper submission."
Extracted memories:
- Iván works at Santander as an AI Scientist
- Iván researches knowledge graphs
- Iván lives in Madrid
- Iván has a paper submission deadline next Tuesday
Each fact is stored as a separate memory with automatic entity linking, relationship detection (Updates/Extends/Derives), and temporal expiration.
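A time-bound fact like the deadline above needs a concrete expiration timestamp. Below is a minimal sketch of resolving a relative phrase such as "next Tuesday"; the plugin's actual temporal parser in `graph-engine.ts` may work differently:

```typescript
// Hypothetical sketch: resolve "next <weekday>" to a concrete future date.
// weekday: 0 = Sunday ... 6 = Saturday, matching Date.prototype.getDay().
function nextWeekday(from: Date, weekday: number): Date {
  const result = new Date(from);
  // "|| 7" pushes same-day matches a full week ahead, so the result is
  // always strictly in the future.
  const daysAhead = (weekday - from.getDay() + 7) % 7 || 7;
  result.setDate(from.getDate() + daysAhead);
  return result;
}
```

Once the resolved date passes, the periodic forgetting pass can sweep the fact as expired.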
Set `captureMode: "off"` to disable auto-capture entirely.
```
openclaw-memory-supermemory/
├── index.ts                   # Plugin entry
├── openclaw.plugin.json       # Plugin manifest (kind: "memory")
├── tests/
│   └── integration/
│       └── longmemeval/
│           ├── fixtures/      # Bundled LongMemEval test artifacts
│           ├── README.md      # Test layout and artifact notes
│           └── run.ts         # Local OpenClaw integration battery / benchmark runner
├── src/
│   ├── config.ts              # Config parsing + defaults
│   ├── db.ts                  # SQLite: memories, entities, relationships, profiles
│   ├── embeddings.ts          # Ollama + OpenAI-compatible embedding providers
│   ├── fact-extractor.ts      # LLM fact extraction via OpenClaw subagent
│   ├── graph-engine.ts        # Entity extraction, relationship detection, temporal parsing
│   ├── memory-text.ts         # Injected/synthetic memory filtering and prompt-safe sanitization
│   ├── search.ts              # Hybrid search (vector + FTS5 + graph)
│   ├── profile-builder.ts     # Static + dynamic user profile
│   ├── forgetting.ts          # Temporal decay, expiration, cleanup
│   ├── tools.ts               # Agent tools (search, store, forget, profile)
│   ├── hooks.ts               # Auto-recall + guarded auto-capture hooks
│   └── cli.ts                 # CLI commands
```
All data stored in a single SQLite database:
- memories — Text, embeddings, importance, category, expiration, access tracking, `is_static`, `parent_memory_id`
- entities — Extracted entities (people, projects, tech, emails, URLs)
- entity_mentions — Links between memories and entities
- relationships — Graph edges (updates / extends / derives)
- profile_cache — Cached static + dynamic user profile
- memories_fts — FTS5 virtual table for keyword search
- memories_vec — sqlite-vec virtual table for vector similarity (when available)
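A rough sketch of what the core tables might look like; apart from `is_static` and `parent_memory_id`, the column names below are guesses for illustration, not the actual schema:

```sql
-- Hypothetical sketch of the core tables described above.
CREATE TABLE memories (
  id               INTEGER PRIMARY KEY,
  text             TEXT NOT NULL,
  importance       REAL,
  category         TEXT,
  expires_at       TEXT,                         -- temporal expiration
  is_static        INTEGER DEFAULT 0,            -- protected from decay
  parent_memory_id INTEGER REFERENCES memories(id) -- version chain
);

CREATE TABLE entities (
  id   INTEGER PRIMARY KEY,
  name TEXT NOT NULL,
  kind TEXT                                      -- person, project, tech, ...
);

CREATE TABLE relationships (
  source_memory_id INTEGER REFERENCES memories(id),
  target_memory_id INTEGER REFERENCES memories(id),
  kind             TEXT                          -- updates / extends / derives
);

-- FTS5 keyword index over the memory text
CREATE VIRTUAL TABLE memories_fts USING fts5(text, content='memories');
```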
The repo includes a LongMemEval runner that evaluates this plugin through a real local OpenClaw agent invocation while keeping benchmark state isolated from your normal ~/.openclaw profile.
```shell
# One example per main LongMemEval category + one abstention case
bun run test:integration:longmemeval

# Run the whole bundled oracle fixture
bun run test:integration:longmemeval --preset full

# Run the official LongMemEval evaluator afterwards
bun run test:integration:longmemeval --run-official-eval --official-repo /tmp/LongMemEval
```

The runner auto-loads repo-root `.env.local` and `.env` before reading env defaults. Start from `.env.sample`. The only supported runner env defaults are `LONGMEMEVAL_SOURCE_STATE_DIR` and `LONGMEMEVAL_OFFICIAL_REPO`.
What the runner does:
- Uses the bundled oracle fixture by default, or a file passed via `--data-file`
- Creates an isolated `~/.openclaw-<profile>` profile
- Copies auth and model metadata from `LONGMEMEVAL_SOURCE_STATE_DIR` (default: `~/.openclaw`)
- Imports each benchmark instance into a fresh plugin DB
- Asks the benchmark question through `openclaw agent --local`
- Writes a `predictions.jsonl` file plus a run summary JSON