# Lore Context

**The control plane for AI-agent memory, eval, and governance.**

Know what every agent remembered, used, and should forget — before memory becomes production risk.


Getting Started · API Reference · Architecture · Integrations · Deployment · Changelog

🌐 Read this in your language: English · 简体中文 · 繁體中文 · 日本語 · 한국어 · Tiếng Việt · Español · Português · Русский · Türkçe · Deutsch · Français · Italiano · Ελληνικά · Polski · Українська · Bahasa Indonesia


## What is Lore Context

Lore Context is an open-core control plane for AI-agent memory: it composes context across memory, search, and tool traces; evaluates retrieval quality on your own datasets; routes governance review for sensitive content; and exports memory as a portable interchange format you can move between backends.

It does not try to be another memory database. Its unique value is what sits on top of memory:

- **Context Query** — a single endpoint composes memory + web + repo + tool traces and returns a graded context block with provenance.
- **Memory Eval** — runs Recall@K, Precision@K, MRR, stale-hit-rate, and p95 latency on datasets you own; persists runs and diffs them for regression detection.
- **Governance Review** — six-state lifecycle (candidate / active / flagged / redacted / superseded / deleted), risk-tag scanning, poisoning heuristics, and an immutable audit log.
- **MIF-like Portability** — JSON + Markdown export/import preserving provenance / validity / confidence / source_refs / supersedes / contradicts. Works as a migration format between memory backends.
- **Multi-Agent Adapter** — first-class agentmemory integration with a version probe and degraded-mode fallback; a clean adapter contract for additional runtimes.
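To make the portability idea concrete, here is a minimal sketch of exporting and structurally validating one record. The field names follow the list above, but the exact envelope, value shapes, and required-field set are illustrative assumptions, not the published MIF v0.2 schema (and this is Python, not the repo's TypeScript).

```python
# Sketch only: field names follow the README's portability list
# (provenance / validity / confidence / source_refs / supersedes /
# contradicts); the real MIF v0.2 envelope may differ.
import json

record = {
    "content": "Use Postgres pgvector for Lore Context production storage.",
    "memory_type": "project_rule",
    "provenance": {"source": "api", "project_id": "demo"},  # hypothetical shape
    "validity": {"from": "2026-01-01T00:00:00Z", "until": None},
    "confidence": 0.9,
    "source_refs": [],
    "supersedes": [],
    "contradicts": [],
}

exported = json.dumps(record, indent=2)  # a real export would batch many records

# Import side: a minimal structural check before accepting a record.
REQUIRED = {"content", "provenance", "confidence"}

def is_importable(rec: dict) -> bool:
    return REQUIRED.issubset(rec)

print(is_importable(json.loads(exported)))  # True
```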

## When to use it

| Use Lore Context when... | Use a memory database (agentmemory, Mem0, Supermemory) when... |
| --- | --- |
| You need to prove what your agent remembered, why, and whether it was used | You just need raw memory storage |
| You run multiple agents (Claude Code, Cursor, Qwen, Hermes, Dify) and want shared, trustable context | You're building a single agent and are OK with a vendor-locked memory tier |
| You require local or private deployment for compliance | You prefer a hosted SaaS |
| You need eval on your own datasets, not vendor benchmarks | Vendor benchmarks are sufficient signal |
| You want to migrate memory between systems | You don't plan to ever switch backends |

## Quick Start

```bash
# 1. Clone + install
git clone https://github.com/Lore-Context/lore-context.git
cd lore-context && pnpm install

# 2. Generate a real API key (do not use placeholders in any environment beyond local-only dev)
export LORE_API_KEY=$(openssl rand -hex 32)

# 3. Start the API (file-backed, no Postgres required)
pnpm build && PORT=3000 LORE_STORE_PATH=./data/lore-store.json pnpm start:api

# 4. Write a memory
curl -H "Authorization: Bearer $LORE_API_KEY" -H "Content-Type: application/json" \
  -X POST http://127.0.0.1:3000/v1/memory/write \
  -d '{"content":"Use Postgres pgvector for Lore Context production storage.","memory_type":"project_rule","project_id":"demo"}'

# 5. Query context
curl -H "Authorization: Bearer $LORE_API_KEY" -H "Content-Type: application/json" \
  -X POST http://127.0.0.1:3000/v1/context/query \
  -d '{"query":"production storage","project_id":"demo","token_budget":1200}'
```

For full setup (Postgres, Docker Compose, Dashboard, MCP integration), see docs/getting-started.md.

## Architecture

```
                       ┌─────────────────────────────────────────────┐
   MCP clients ──────► │ apps/api  (REST + auth + rate limit + logs) │
   (Claude Code,       │   ├── context router (memory/web/repo/tool) │
    Cursor, Qwen,      │   ├── context composer                      │
    Dify, Hermes...)   │   ├── governance + audit                    │
                       │   ├── eval runner                           │
                       │   └── MIF import/export                     │
                       └────────┬────────────────────────────────────┘
                                │
                  ┌─────────────┼──────────────────────────┐
                  ▼             ▼                          ▼
           Postgres+pgvector   agentmemory adapter     packages/search
           (incremental        (version-probed,        (BM25 / hybrid
            persistence)        degraded-mode safe)     pluggable)
                                                                 ▲
                       ┌─────────────────────────────┐           │
                       │ apps/dashboard  (Next.js)   │ ──────────┘
                       │   protected by Basic Auth   │
                       │   memory · traces · eval    │
                       │   governance review queue   │
                       └─────────────────────────────┘
```

For details, see docs/architecture.md.

## What's in v0.4.0-alpha

| Capability | Status | Where |
| --- | --- | --- |
| REST API with API-key auth (reader/writer/admin) | ✅ Production | apps/api |
| MCP stdio server (legacy + official SDK transport) | ✅ Production | apps/mcp-server |
| Next.js dashboard with HTTP Basic Auth gating | ✅ Production | apps/dashboard |
| Postgres + pgvector incremental persistence | ✅ Optional | apps/api/src/db/ |
| Governance state machine + audit log | ✅ Production | packages/governance |
| Eval runner (Recall@K / Precision@K / MRR / staleHit / p95) | ✅ Production | packages/eval |
| MIF v0.2 import/export with supersedes + contradicts | ✅ Production | packages/mif |
| agentmemory adapter with version probe + degraded mode | ✅ Production | packages/agentmemory-adapter |
| Rate limiting (per-IP + per-key with backoff) | ✅ Production | apps/api |
| Structured JSON logging with sensitive-field redaction | ✅ Production | apps/api/src/logger.ts |
| Docker Compose private deployment | ✅ Production | docker-compose.yml |
| Demo dataset + smoke tests + Playwright UI test | ✅ Production | examples/, scripts/ |
| Hosted multi-tenant cloud sync | ⏳ Roadmap | |
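For reference, the eval runner's retrieval metrics have standard definitions. Here is a minimal per-query sketch of Recall@K, Precision@K, and reciprocal rank (MRR is the mean of the latter across queries); this is illustrative Python, not the repo's TypeScript implementation.

```python
# Per-query retrieval metrics: `retrieved` is the ranked result list,
# `relevant` is the gold set from your own dataset.
def recall_at_k(retrieved, relevant, k):
    hits = len(set(retrieved[:k]) & set(relevant))
    return hits / len(relevant) if relevant else 0.0

def precision_at_k(retrieved, relevant, k):
    hits = len(set(retrieved[:k]) & set(relevant))
    return hits / k

def reciprocal_rank(retrieved, relevant):
    # MRR averages this value across all queries in the dataset.
    for rank, doc in enumerate(retrieved, start=1):
        if doc in relevant:
            return 1.0 / rank
    return 0.0

retrieved = ["m3", "m1", "m7", "m2"]
relevant = {"m1", "m2"}
print(recall_at_k(retrieved, relevant, 3))     # 0.5 (one of two relevant in top-3)
print(reciprocal_rank(retrieved, relevant))    # 0.5 (first hit at rank 2)
```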

See CHANGELOG.md for the full v0.4.0-alpha release notes.

## Integrations

Lore Context speaks MCP and REST and integrates with most agent IDEs and chat frontends:

| Tool | Setup guide |
| --- | --- |
| Claude Code | docs/integrations/claude-code.md |
| Cursor | docs/integrations/cursor.md |
| Qwen Code | docs/integrations/qwen-code.md |
| OpenClaw | docs/integrations/openclaw.md |
| Hermes | docs/integrations/hermes.md |
| Dify | docs/integrations/dify.md |
| FastGPT | docs/integrations/fastgpt.md |
| Cherry Studio | docs/integrations/cherry-studio.md |
| Roo Code | docs/integrations/roo-code.md |
| OpenWebUI | docs/integrations/openwebui.md |
| Other / generic MCP | docs/integrations/README.md |

## Deployment

| Mode | Use when | Doc |
| --- | --- | --- |
| Local file-backed | Solo dev, prototype, smoke testing | This README, Quick Start above |
| Local Postgres+pgvector | Production-grade single node, semantic search at scale | docs/deployment/README.md |
| Docker Compose private | Self-hosted team deployment, isolated network | docs/deployment/compose.private-demo.yml |
| Cloud-managed | Coming in v0.6 | |

All deployment paths require explicit secrets: `POSTGRES_PASSWORD`, `LORE_API_KEYS`, and `DASHBOARD_BASIC_AUTH_USER`/`PASS`. The `scripts/check-env.mjs` script refuses to start in production if any value matches a placeholder pattern.
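The placeholder check can be pictured roughly as follows. The variable list matches the secrets named above, but the actual patterns and logic in scripts/check-env.mjs (a Node script) may differ; this is a hedged Python sketch.

```python
# Rough sketch of placeholder rejection as described for scripts/check-env.mjs;
# the real script's pattern list and variable set are assumptions here.
import re

REQUIRED = ["POSTGRES_PASSWORD", "LORE_API_KEYS", "DASHBOARD_BASIC_AUTH_USER"]
PLACEHOLDER = re.compile(r"^(changeme|password|secret|example|placeholder|todo)$", re.I)

def check_env(env: dict) -> list:
    """Return a list of problems; an empty list means safe to start."""
    problems = []
    for name in REQUIRED:
        value = env.get(name, "")
        if not value:
            problems.append(f"{name} is not set")
        elif PLACEHOLDER.match(value):
            problems.append(f"{name} looks like a placeholder")
    return problems

print(check_env({
    "POSTGRES_PASSWORD": "changeme",       # rejected: matches a placeholder pattern
    "LORE_API_KEYS": "0f3a-real-key",      # accepted
    "DASHBOARD_BASIC_AUTH_USER": "admin",  # accepted
}))  # ['POSTGRES_PASSWORD looks like a placeholder']
```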

## Security

v0.4.0-alpha implements a defense-in-depth posture appropriate for non-public alpha deployments:

- **Authentication**: API-key bearer tokens with role separation (reader/writer/admin) and per-project scoping. Empty-keys mode fails closed in production.
- **Rate limiting**: per-IP + per-key dual bucket with auth-failure backoff (429 after 5 failures in 60s, 30s lockout).
- **Dashboard**: HTTP Basic Auth middleware. Refuses to start in production without DASHBOARD_BASIC_AUTH_USER/PASS.
- **Containers**: all Dockerfiles run as the non-root node user; HEALTHCHECK on api + dashboard.
- **Secrets**: zero hardcoded credentials; all defaults are required-or-fail variables. scripts/check-env.mjs rejects placeholder values in production.
- **Governance**: PII / API key / JWT / private-key regex scanning on writes; risk-tagged content is auto-routed to the review queue; immutable audit log on every state transition.
- **Memory poisoning**: heuristic detection on consensus + imperative-verb patterns.
- **MCP**: zod schema validation on every tool input; mutating tools require a reason (≥8 chars) and surface destructiveHint: true; upstream errors are sanitized before being returned to the client.
- **Logging**: structured JSON with auto-redaction of content, query, memory, value, password, secret, token, and key fields.
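The redaction rule in the last bullet can be sketched as a recursive key filter. The field list comes from this README; the function shape and the `[REDACTED]` marker are illustrative assumptions, not the behavior of apps/api/src/logger.ts.

```python
# Sketch of sensitive-field auto-redaction for structured JSON logs,
# using the field names listed above. Illustrative only.
import json

SENSITIVE = {"content", "query", "memory", "value", "password", "secret", "token", "key"}

def redact(event):
    """Recursively replace sensitive field values before the event is logged."""
    if isinstance(event, dict):
        return {
            k: "[REDACTED]" if k.lower() in SENSITIVE else redact(v)
            for k, v in event.items()
        }
    if isinstance(event, list):
        return [redact(item) for item in event]
    return event

log_line = json.dumps(redact({
    "route": "/v1/memory/write",
    "status": 200,
    "body": {"content": "Use Postgres pgvector...", "project_id": "demo"},
}))
print(log_line)  # "content" is redacted; "route", "status", "project_id" pass through
```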

Vulnerability disclosures: SECURITY.md.

## Project structure

```
apps/
  api/                # REST API + Postgres + governance + eval (TypeScript)
  dashboard/          # Next.js 16 dashboard with Basic Auth middleware
  mcp-server/         # MCP stdio server (legacy + official SDK transports)
  web/                # Server-side HTML renderer (no-JS fallback UI)
  website/            # Marketing site (handled separately)
packages/
  shared/             # Shared types, errors, ID/token utilities
  agentmemory-adapter # Bridge to upstream agentmemory + version probe
  search/             # Pluggable search providers (BM25 / hybrid)
  mif/                # Memory Interchange Format (v0.2)
  eval/               # EvalRunner + metric primitives
  governance/         # State machine + risk scan + poisoning + audit
docs/
  i18n/<lang>/        # Localized README in 17 languages
  integrations/       # 11 agent-IDE integration guides
  deployment/         # Local + Postgres + Docker Compose
  legal/              # Privacy / Terms / Cookies (Singapore law)
scripts/
  check-env.mjs       # Production-mode env validation
  smoke-*.mjs         # End-to-end smoke tests
  apply-postgres-schema.mjs
```

## Requirements

- Node.js >= 22
- pnpm 10.30.1
- (Optional) Postgres 16 with pgvector for semantic-search-grade memory

## Contributing

Contributions are welcome. Please read CONTRIBUTING.md for the development workflow, commit message protocol, and review expectations.

For documentation translations, see the i18n contributor guide.

## Operated by

Lore Context is operated by REDLAND PTE. LTD. (Singapore, UEN 202304648K). Company profile, legal terms, and data handling are documented under docs/legal/.

## License

The Lore Context repository is licensed under Apache License 2.0. Individual packages under packages/* declare MIT to enable downstream consumption. See NOTICE for upstream attribution.

## Acknowledgments

Lore Context builds on top of agentmemory as a local memory runtime. Upstream contract details and version-compatibility policy are documented in UPSTREAM.md.
