A shared memory plugin for OpenClaw that lets multiple agents on the same gateway share knowledge, decisions, and context — while keeping per-agent privacy boundaries.
OpenClaw's multi-agent setup gives each agent full isolation: separate workspace, separate session history, separate memory. That isolation is useful, but it means agents can't learn from each other. When Agent A discovers something relevant to Agent B's domain, B has no way to access it. When the user makes a decision with one agent, every other agent starts from scratch.
The current community workaround is symlinked directories and Google Sheets. This plugin replaces that with a structured, queryable shared memory layer that is native to OpenClaw's architecture — same SQLite foundation, same Markdown-as-source-of-truth philosophy, zero external dependencies.
- **Plugin, not a fork.** Installs via `openclaw plugins install`. No source modifications to OpenClaw required.
- **SQLite only.** No external databases, no daemons, no network hops. One file: `~/.openclaw/shared/memory.db`.
- **Markdown as source of truth.** Every shared memory fragment is written to human-readable Markdown files (`SHARED_MEMORY.md` + daily logs). The SQLite index can always be rebuilt from Markdown.
- **Three-tier access control.** Private (agent-only), shared-read (anyone reads, originator writes), shared-write (anyone reads and writes).
- **Explicit sharing by default.** Nothing is shared automatically unless you configure it. Agents must consciously call `memory_share`, or you opt specific agents into implicit (auto-share) mode.
- **Configurable embedding and extraction providers.** Works with OpenAI, Anthropic, Google, Voyage, or fully local/offline with no API keys required.
```shell
openclaw plugins install openclaw-shared-memory
```

That's it. The plugin initializes its SQLite database automatically the first time the gateway starts.
Open your `openclaw.json` and add the plugin under `plugins.entries`:
```json
{
  "plugins": {
    "entries": {
      "openclaw-shared-memory": {
        "config": {
          "global": {},
          "agents": {}
        }
      }
    }
  }
}
```

All config fields are optional — the plugin works out of the box with sensible defaults.
By default, the plugin uses a local embedding model (`all-MiniLM-L6-v2`, ~23 MB downloaded on first use). No API keys are needed, and agents default to explicit sharing mode.
```json
{
  "plugins": {
    "entries": {
      "openclaw-shared-memory": {
        "config": {
          "global": {
            "sharedStorePath": "~/.openclaw/shared/memory.db",
            "embeddingDimensions": 384,
            "privateBoost": 1.2,
            "recencyDecayDays": 90,
            "maxSharedResults": 5,
            "embeddingProvider": {
              "type": "local",
              "model": "Xenova/all-MiniLM-L6-v2"
            },
            "extractionProvider": {
              "type": "anthropic",
              "apiKey": "sk-ant-..."
            }
          },
          "agents": {
            "agent-name": {
              "mode": "explicit",
              "autoShare": [],
              "neverShare": ["working_note"],
              "readAccess": true,
              "writeAccess": true
            }
          }
        }
      }
    }
  }
}
```

| Field | Default | Description |
|---|---|---|
| `sharedStorePath` | `~/.openclaw/shared/memory.db` | Path to the shared SQLite database |
| `embeddingDimensions` | `384` | Vector size — must match your embedding model |
| `privateBoost` | `1.2` | Score multiplier for private results vs. shared |
| `recencyDecayDays` | `90` | Days until a fragment's recency weight hits its floor (0.1) |
| `maxSharedResults` | `5` | Max shared fragments injected before each agent turn |
| `embeddingProvider` | `{ "type": "local" }` | Embedding model config (see below) |
| `extractionProvider` | (none) | Extraction model config — required for implicit mode (see below) |
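To make the ranking fields above concrete, here is a small TypeScript sketch of how a similarity score, recency decay, and `privateBoost` could combine. The function names and the linear decay shape are illustrative assumptions, not the plugin's actual implementation:

```typescript
// Hypothetical sketch of result scoring; the plugin's real internals may differ.
interface Fragment {
  similarity: number; // cosine similarity from the embedding search, 0..1
  ageDays: number;    // days since the fragment was written
  isPrivate: boolean; // private fragments outrank equally similar shared ones
}

const RECENCY_FLOOR = 0.1; // recency weight never drops below this (see table)

function recencyWeight(ageDays: number, recencyDecayDays: number): number {
  // Linear decay from 1.0 down to the 0.1 floor over recencyDecayDays.
  const w = 1 - (ageDays / recencyDecayDays) * (1 - RECENCY_FLOOR);
  return Math.max(RECENCY_FLOOR, w);
}

function score(
  f: Fragment,
  opts: { privateBoost: number; recencyDecayDays: number }
): number {
  const boost = f.isPrivate ? opts.privateBoost : 1.0;
  return f.similarity * recencyWeight(f.ageDays, opts.recencyDecayDays) * boost;
}
```

With the defaults (`privateBoost: 1.2`, `recencyDecayDays: 90`), a fresh shared fragment at similarity 0.8 scores 0.8, while an equally similar private one scores 0.96; a 90-day-old fragment keeps only 10% of its recency weight.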
| Field | Default | Description |
|---|---|---|
| `mode` | `"explicit"` | `"explicit"`: agent uses the `memory_share` tool consciously. `"implicit"`: system auto-shares after each turn. |
| `autoShare` | `[]` | Fragment types to auto-share in implicit mode. Empty = share all non-`neverShare` types. |
| `neverShare` | `["working_note"]` | Fragment types never auto-shared, even in implicit mode. |
| `readAccess` | `true` | Whether this agent receives shared memory context |
| `writeAccess` | `true` | Whether this agent can write to shared memory |
The plugin needs an embedding model to convert text into vectors for semantic search. Pick the one that fits your setup:
```json
"embeddingProvider": {
  "type": "local",
  "model": "Xenova/all-MiniLM-L6-v2"
}
```

Downloads the model on first use (~23 MB). Set `embeddingDimensions: 384`.
```json
"embeddingProvider": {
  "type": "openai",
  "apiKey": "sk-...",
  "model": "text-embedding-3-small"
}
```

Set `embeddingDimensions: 1536`. Use `text-embedding-3-large` for 3072 dims.
```json
"embeddingProvider": {
  "type": "voyage",
  "apiKey": "pa-...",
  "model": "voyage-3-lite"
}
```

Set `embeddingDimensions: 512` for `voyage-3-lite`, `1024` for `voyage-3`.
```json
"embeddingProvider": {
  "type": "ollama",
  "model": "nomic-embed-text",
  "dimensions": 768
}
```

Requires Ollama running locally. Set `embeddingDimensions` to match your model.
> **Important:** `embeddingDimensions` must match the model you configure. If you change models, run `openclaw shared-memory rebuild` to reindex with the new dimensions.
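One way to see why the dimensions must match: similarity search compares stored vectors against a query vector, and cosine similarity is only defined for vectors of equal length. A minimal sketch of such a check (the helper name is illustrative, not part of the plugin's API):

```typescript
// Cosine similarity with a defensive dimension check. If the stored index was
// built with one model and queries come from another, the lengths won't match.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error(
      `Embedding dimension mismatch: ${a.length} vs ${b.length}. ` +
      "Reindex after changing embedding models."
    );
  }
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```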
Extraction is used in implicit mode — a cheap model reads each conversation turn and automatically extracts memory-worthy fragments, entities, and relationships. If you only use explicit mode, you don't need this.
```json
"extractionProvider": {
  "type": "anthropic",
  "apiKey": "sk-ant-...",
  "model": "claude-haiku-4-5-20251001"
}
```

```json
"extractionProvider": {
  "type": "openai",
  "apiKey": "sk-...",
  "model": "gpt-4o-mini"
}
```

```json
"extractionProvider": {
  "type": "google",
  "apiKey": "AIza...",
  "model": "gemini-2.0-flash-lite"
}
```

```json
"extractionProvider": {
  "type": "ollama",
  "model": "llama3"
}
```

Agents receive shared context automatically before each turn (injected silently), but only share something when they consciously call the `memory_share` tool. The system prompt tells them when to use it.
**Best for:** agents where you want deliberate control over what gets shared.
After each turn, a cheap extraction model reads the conversation and automatically shares qualifying fragments. The agent doesn't need to think about it.
```json
"agents": {
  "my-agent": {
    "mode": "implicit",
    "autoShare": ["decision", "fact"],
    "neverShare": ["working_note"]
  }
}
```

**Best for:** high-volume agents where manual sharing would be friction.
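The `autoShare` and `neverShare` rules compose as described above: `neverShare` always wins, and an empty `autoShare` list means "everything else". A sketch of that decision in TypeScript (the function is a hypothetical helper, not the plugin's actual code):

```typescript
type FragmentType = "decision" | "preference" | "fact" | "working_note";

interface AgentShareConfig {
  mode: "explicit" | "implicit";
  autoShare: FragmentType[];  // empty = all types not in neverShare
  neverShare: FragmentType[]; // always takes precedence over autoShare
}

function shouldAutoShare(cfg: AgentShareConfig, type: FragmentType): boolean {
  if (cfg.mode !== "implicit") return false;       // explicit agents only share via memory_share
  if (cfg.neverShare.includes(type)) return false; // neverShare wins unconditionally
  if (cfg.autoShare.length === 0) return true;     // empty allowlist = share everything else
  return cfg.autoShare.includes(type);
}
```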
When the plugin is active, agents get two new tools:
Share a memory fragment with other agents.
```typescript
memory_share({
  content: "We decided to use PostgreSQL for the main database",
  fragment_type: "decision",
  scope: "shared-read",
  tags: ["database", "architecture"],
  confidence: 0.95,
  supersedes_id: "<old-fragment-id>" // optional: retire a previous memory
})
```
Fragment types:

- `decision` — a choice or conclusion was reached
- `preference` — a user or system preference was expressed
- `fact` — objective information was established
- `working_note` — temporary working context (not worth sharing in most cases)
Scopes:

- `shared-read` — any agent can read, only the originating agent can update
- `shared-write` — any agent can read and write
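The scope rules above, plus the private tier, reduce to two small predicates. This is a sketch of the stated semantics (names are illustrative, not the plugin's internal API):

```typescript
type Scope = "private" | "shared-read" | "shared-write";

// Private fragments are visible only to the agent that wrote them.
function canRead(scope: Scope, originator: string, agent: string): boolean {
  return scope !== "private" || agent === originator;
}

// The originator can always update its own fragments; other agents
// need the fragment to be scoped shared-write.
function canWrite(scope: Scope, originator: string, agent: string): boolean {
  if (agent === originator) return true;
  return scope === "shared-write";
}
```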
Search shared memory explicitly.
```typescript
memory_search_shared({
  query: "what database did we decide on",
  limit: 5,
  fragment_type: "decision", // optional filter
  entity_name: "Alice",      // optional: search by entity
  entity_type: "person"      // optional: narrow entity search
})
```
Before every agent turn, the plugin searches shared memory using the user's message as a query, and injects the most relevant results as context. The agent sees this as a `## Shared Memory Context` block at the top of its context window. This happens automatically — no tool call needed.
```markdown
## Shared Memory Context

**[decision] [database, architecture]** (from agent-petra)
We decided to use PostgreSQL for the main database.
✓ Supported by 2 other fragment(s)

**[preference] [auth]** (from agent-techo)
The user prefers JWT tokens over session cookies for authentication.
```
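The injected block is plain Markdown. A sketch of how fragments could be rendered into that shape, with the format inferred from the example above (the interface and function are illustrative, not the plugin's exact code):

```typescript
interface SharedFragment {
  type: string;          // e.g. "decision"
  tags: string[];        // e.g. ["database", "architecture"]
  fromAgent: string;     // originating agent id
  content: string;
  supportCount?: number; // other fragments corroborating this one, if any
}

function renderSharedContext(fragments: SharedFragment[]): string {
  const lines = ["## Shared Memory Context", ""];
  for (const f of fragments) {
    lines.push(`**[${f.type}] [${f.tags.join(", ")}]** (from ${f.fromAgent})`);
    lines.push(f.content);
    if (f.supportCount) {
      lines.push(`✓ Supported by ${f.supportCount} other fragment(s)`);
    }
    lines.push(""); // blank line between fragments
  }
  return lines.join("\n").trimEnd();
}
```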
```shell
# Overview stats
openclaw shared-memory

# Search shared memory
openclaw shared-memory search "what have we decided about auth"

# List all known entities and their relationships
openclaw shared-memory entities

# Show unresolved contradictions between agents
openclaw shared-memory contradictions

# Rebuild the SQLite index from Markdown files
openclaw shared-memory rebuild
```

The plugin creates one new directory alongside your existing OpenClaw data:
```
~/.openclaw/
├── agents/
│   ├── petra/              # Agent-private (unchanged)
│   └── techo/              # Agent-private (unchanged)
│
└── shared/                 # Created by this plugin
    ├── memory.db           # Shared SQLite index
    ├── SHARED_MEMORY.md    # Human-readable curated memory
    └── memory/
        ├── 2026-03-12.md   # Daily shared memory log
        └── 2026-03-13.md
```
`SHARED_MEMORY.md` and the daily logs are the source of truth. If the SQLite index is ever lost or corrupted, run `openclaw shared-memory rebuild` to reconstruct it from the Markdown files.
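The exact on-disk log schema isn't documented in this README, but the rebuild idea is simple: parse each Markdown log back into fragment rows, then reindex them. Purely for illustration, here is a sketch against a hypothetical entry format (one `### [type] (from agent)` heading per fragment); the plugin's real Markdown layout may differ:

```typescript
// Hypothetical daily-log entry shape, for illustration only:
//   ### [decision] (from agent-petra)
//   We decided to use PostgreSQL for the main database.
interface ParsedFragment {
  type: string;
  fromAgent: string;
  content: string;
}

function parseDailyLog(markdown: string): ParsedFragment[] {
  const fragments: ParsedFragment[] = [];
  let current: ParsedFragment | null = null;
  for (const line of markdown.split("\n")) {
    const m = line.match(/^### \[(\w+)\] \(from ([\w-]+)\)$/);
    if (m) {
      // Start of a new fragment entry.
      current = { type: m[1], fromAgent: m[2], content: "" };
      fragments.push(current);
    } else if (current && line.trim()) {
      // Body lines accumulate into the current fragment's content.
      current.content += (current.content ? "\n" : "") + line.trim();
    }
  }
  return fragments;
}
```

A rebuild would run a parser like this over every file under `shared/memory/`, re-embed each fragment's content, and repopulate the SQLite index.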
```json
{
  "plugins": {
    "entries": {
      "openclaw-shared-memory": {
        "config": {
          "global": {
            "embeddingProvider": { "type": "local" },
            "extractionProvider": {
              "type": "anthropic",
              "apiKey": "sk-ant-..."
            },
            "maxSharedResults": 5
          },
          "agents": {
            "petra": {
              "mode": "explicit"
            },
            "techo": {
              "mode": "implicit",
              "autoShare": ["decision", "fact"],
              "neverShare": ["working_note"]
            }
          }
        }
      }
    }
  }
}
```

In this setup:

- `petra` uses explicit mode — she calls `memory_share` when she wants to share something
- `techo` uses implicit mode — the system automatically extracts and shares decisions and facts from his conversations
- Both agents receive shared context before every turn
- OpenClaw gateway (any recent version)
- Node.js >= 18
- No external services required (local embedding model is the default)