UI screenshots: Memory Browser, Recall Workbench, Recall Analytics, Dream Diary, Settings.
Lore is a long-term memory system for AI agents. It gives an agent a durable memory graph, a fixed startup baseline, per-prompt recall, explicit read tracking, and cautious write tools.
Supported runtimes:
| Runtime | Integration | Notes |
|---|---|---|
| Pi | `pi-extension/` | Best fit. Pi leaves long-term memory to extensions and keeps its system prompt compact, so Lore can act as the primary memory layer with little prompt competition. |
| Claude Code | `claudecode-plugin/` | MCP tools, session-start boot injection, per-prompt recall injection, and guidance rules. |
| Codex | `codex-plugin/` | Local marketplace plugin, MCP config, and optional hooks for boot / recall injection. |
| OpenClaw | `openclaw-plugin/` | Runtime plugin with boot, recall, and Lore tools. |
| Hermes | `hermes-plugin/` | MemoryProvider plugin with Lore tools and recall support. |
| Generic MCP clients | `/api/mcp` | Streamable HTTP MCP endpoint for clients that can connect to remote tools. |
Most agent memory systems stop at retrieval. Lore focuses on the full memory lifecycle:
- Boot baseline — every session starts with stable identity, workflow, user, and runtime memories.
- Recall before reply — the agent receives a small `<recall>` block with relevant candidates before answering.
- Read before trust — recalled candidates are cues; the agent opens the memory node before relying on it.
- URI-first graph — memories live at durable URIs such as `core://agent`, `preferences://user`, and `project://my_project`.
- Disclosure triggers — each memory carries a natural-language condition that explains when it should surface.
- Policy-guided writes — priority budgets, read-before-modify checks, and boot-node protection keep the graph stable.
- Dream maintenance — scheduled review can inspect recall quality, structure, and stale nodes with rollback history.
Lore is built for agents that need continuity across sessions, tools, and runtimes.
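For a feel of the recall-before-reply step, a recall block delivered ahead of a prompt might look something like the sketch below. The exact format is defined by Lore; the entries and scores here are hypothetical, reusing URIs from this document:

```text
<recall>
- preferences://user — "Prefers concise answers with code examples" (relevance 0.82)
- project://my_project — "API server moved to port 18901 last week" (relevance 0.74)
</recall>
```

Each entry is a cue, not ground truth; per the read-before-trust rule, the agent opens the node (for example with `lore_get_node`) before relying on it.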
Create a `docker-compose.yml` with the production images:

```yaml
services:
  postgres:
    image: fffattiger/pgvector-zhparser:pg16
    platform: linux/amd64
    restart: unless-stopped
    environment:
      POSTGRES_DB: lore
      POSTGRES_USER: lore
      POSTGRES_PASSWORD: replace-this
    ports:
      - "5432:5432"
    volumes:
      - lore_postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U lore -d lore"]
      interval: 10s
      timeout: 5s
      retries: 10
  web:
    image: fffattiger/lore:latest
    platform: linux/amd64
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      DATABASE_URL: postgresql://lore:replace-this@postgres:5432/lore
      API_TOKEN: replace-this-if-exposed
      SNAPSHOT_DIR: /app/snapshots
    ports:
      - "18901:18901"
    volumes:
      - lore_snapshots:/app/snapshots

volumes:
  lore_postgres_data:
  lore_snapshots:
```

Start Lore:

```bash
docker compose up -d
```

Check health:

```bash
curl http://127.0.0.1:18901/api/health
```

Open the UI: `http://127.0.0.1:18901`
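In provisioning scripts that install plugins right after `docker compose up`, it can help to wait until the health endpoint answers before continuing. A minimal sketch — the `wait_for_lore` helper name is ours; the route is the `/api/health` endpoint above:

```shell
# Block until Lore's health endpoint responds, or give up after N retries.
# Usage: wait_for_lore <base_url> [retries] [delay_seconds]
wait_for_lore() {
  base_url="$1"
  retries="${2:-30}"
  delay="${3:-2}"
  i=1
  while [ "$i" -le "$retries" ]; do
    if curl -fsS "$base_url/api/health" >/dev/null 2>&1; then
      return 0
    fi
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}

# Example once the stack is starting:
# wait_for_lore http://127.0.0.1:18901 && echo "lore is ready"
```

The helper returns non-zero when the server never becomes healthy, so it composes cleanly with `&&` in install scripts.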
After the server is running, open:
http://127.0.0.1:18901/setup
Complete the setup flow:
- Embedding setup — configure an OpenAI-compatible embedding endpoint.
  - Embedding Base URL, for example `http://host.docker.internal:8090/v1`
  - Embedding API Key
  - Embedding Model, for example `Qwen/Qwen3-Embedding-0.6B`
- View LLM setup — configure the model used by view refinement and Dream.
  - View LLM Base URL
  - View LLM API Key
  - View LLM Model, for example `glm-5.1`
- Global boot memories — review or save defaults for `core://agent`, `core://soul`, and `preferences://user`.
- Channel agent memories — review or save defaults for runtime-specific memories: `core://agent/claudecode`, `core://agent/codex`, `core://agent/openclaw`, `core://agent/hermes`, and `core://agent/pi`.
The Skip button saves the default value for an empty boot node and moves forward.
Open /settings after setup for:
- recall scoring strategy and thresholds
- View LLM for view refinement and Dream
- Dream schedule
- backup settings
- write policy settings
Embedding is required for semantic recall and index rebuilds. View LLM is required during setup so Dream and view refinement are ready when you enable them.
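If you are unsure whether an endpoint really speaks the OpenAI embeddings protocol, a quick probe of its `/embeddings` route can save a failed setup round-trip. A sketch — the `embed_check` helper is ours; the example values mirror the setup flow above:

```shell
# Probe an OpenAI-compatible embeddings endpoint before pointing Lore at it.
# Usage: embed_check <base_url> <api_key> <model>
embed_check() {
  curl -fsS "$1/embeddings" \
    -H "Authorization: Bearer $2" \
    -H "Content-Type: application/json" \
    -d "{\"model\": \"$3\", \"input\": \"hello\"}"
}

# Example with the setup-flow values:
# embed_check http://host.docker.internal:8090/v1 "$EMBEDDING_API_KEY" Qwen/Qwen3-Embedding-0.6B
```

A compatible endpoint returns a JSON body with an embedding vector; a connection error or non-2xx status makes the function fail, which is the signal to fix the Base URL or key before continuing setup.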
For local development or custom builds:
```bash
git clone https://github.com/FFatTiger/lore.git
cd lore
cp .env.example .env
# edit .env first
docker compose up -d --build
```

Set the Lore server URL for plugins:

```bash
export LORE_BASE_URL=http://127.0.0.1:18901
export LORE_API_TOKEN=replace-this-if-you-set-API_TOKEN
```

Lore ships as a Claude Code plugin on the `plugin` branch.
```bash
export LORE_BASE_URL=http://127.0.0.1:18901
claude plugins marketplace add FFatTiger/lore --ref plugin
claude plugins install lore@lore
```

Restart Claude Code after installing.
What it adds:
- MCP tools at `${LORE_BASE_URL}/api/mcp?client_type=claudecode`
- session-start boot injection
- per-prompt recall injection
- Lore guidance rules
```bash
export LORE_BASE_URL=http://127.0.0.1:18901
cd codex-plugin
./scripts/install.sh
```

If the server uses `API_TOKEN`:

```bash
export LORE_API_TOKEN=replace-this
./scripts/install.sh
```

Restart Codex after installing.
What it adds:
- local Codex marketplace entry `lore@lore`
- MCP server at `${LORE_BASE_URL}/api/mcp?client_type=codex`
- optional hooks for boot and recall injection
```bash
export LORE_BASE_URL=http://127.0.0.1:18901
./pi-extension/scripts/install-local.sh
```

Then run `/reload` in Pi or restart Pi.
What it adds:
- Lore tools registered with `pi.registerTool`
- boot and recall context through Pi startup hooks
- API attribution with `client_type=pi`
Copy or symlink openclaw-plugin/ into your OpenClaw plugin path, then configure OpenClaw with the Lore base URL.
Example config shape:

```json
{
  "plugins": {
    "load": { "paths": ["/path/to/openclaw-plugin"] },
    "entries": {
      "lore": {
        "enabled": true,
        "config": {
          "baseUrl": "http://127.0.0.1:18901",
          "apiToken": "replace-this-if-needed",
          "recallEnabled": true,
          "startupHealthcheck": true
        }
      }
    }
  }
}
```

Restart OpenClaw after changing plugin config.
Install the Hermes memory provider plugin and set environment variables:
```bash
export LORE_BASE_URL=http://127.0.0.1:18901
export LORE_API_TOKEN=replace-this-if-needed
```

Symlink or copy `hermes-plugin/lore_memory` into the Hermes plugin path configured on your machine. Hermes loads Lore as a MemoryProvider and exposes Lore memory tools to the agent.
Lore exposes a Streamable HTTP MCP endpoint:
http://127.0.0.1:18901/api/mcp
Use a client-specific query parameter when possible:
http://127.0.0.1:18901/api/mcp?client_type=mcp
If API_TOKEN is configured, pass it as a bearer token.
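Client configuration varies, but many MCP clients accept a JSON entry along these lines. This is a sketch following the common `mcpServers` convention, not a config taken from this project; field names may differ in your client:

```json
{
  "mcpServers": {
    "lore": {
      "type": "http",
      "url": "http://127.0.0.1:18901/api/mcp?client_type=mcp",
      "headers": {
        "Authorization": "Bearer replace-this-if-exposed"
      }
    }
  }
}
```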
Once connected, the agent workflow is:
- load boot memories at session start
- receive `<recall>` candidates before user prompts
- open relevant nodes with `lore_get_node`
lore_get_node - create or update durable memories when something should survive the session
- use the Web UI to inspect recall quality, memory history, settings, backup, and Dream maintenance
Useful UI pages:
- `/memory` — browse and edit the memory graph
- `/recall` — inspect retrieval stages and scoring
- `/dream` — run structural maintenance
- `/settings` — configure runtime behavior
```bash
cd web
cp .env.local.example .env.local
npm install
npm run dev
```

Requires Node.js 20+ and PostgreSQL with the `vector` extension.




