LLM-optimized knowledge graph for AI-coding teams. Session notes auto-extracted into durable knowledge, repo-scoped context injected at session start, pluggable team briefings. No vector DB needed for small vaults; a full hybrid search + MCP server for larger ones.
Status: 0.1 alpha. Core linter + schema + migration in place. Skills, search, MCP, and curator are being implemented. Track progress in PLAN.md (soon).
When you work with an AI agent, the decisions, reasoning, and open threads live in a chat window that disappears. PRs capture the diff; nothing captures why. Lore closes the loop:
```
Session with AI → /lore:session → extracted concept / decision note
                                → briefing published to team sink
                                → surfaces again next session, scoped to the repo you're in
```
The flagship is the session-note pipeline. Everything else (search, MCP, curator) serves it.
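To make the pipeline's output concrete, here is an illustrative extracted note. The field names and values are assumptions for the sake of the example, not Lore's actual schema (only `schema_version: 1` appears elsewhere in this README):

```markdown
---
type: decision                 # hypothetical field
schema_version: 1
repo: myorg/payments-api       # hypothetical repo scope
---
# Decision: retry with idempotency keys server-side

Context, reasoning, and open threads captured from the session,
instead of disappearing with the chat window.
```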
```
$LORE_ROOT/          # default ~/lore (or set LORE_ROOT=...)
├── sessions/        # personal logs (optional)
├── inbox/           # personal triage inbox (optional)
├── drafts/          # WIP notes (optional)
├── templates/       # note templates (optional)
└── wiki/            # always present — ≥1 mounted wiki
    └── <name>/      # symlink to a wiki git repo (or inline dir)
```
Each wiki is an independent git repo. Access control, shipping, and history stay at the repo boundary; Obsidian sees one unified graph via symlinks.
One command, coexists with anything you already have:
```shell
git clone https://github.com/buchbend/lore.git
cd lore && ./install.sh --with-hooks
```

What the installer does:
- Installs the `lore` CLI (via `pipx`, `uv tool`, or `pip --user`, in that order of preference)
- Symlinks every skill into `~/.claude/skills/lore:*` — Claude Code picks them up automatically. Existing skills are left alone.
- If you passed `--with-hooks`: merges SessionStart / PreCompact / Stop entries into `~/.claude/settings.json` (idempotent — re-running is a no-op).
Uninstall / disable hooks is just the inverse: delete the symlinks in `~/.claude/skills/lore:*` and the hooks entries in `settings.json`.
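As a minimal sketch, the symlink half of that cleanup could look like the following. It runs against a temporary directory standing in for `~/.claude/skills` so it is safe to execute as-is; the `lore:*` naming comes from the installer:

```shell
#!/bin/sh
# Temp dir standing in for ~/.claude/skills
SKILLS_DIR="$(mktemp -d)"
mkdir -p "$SKILLS_DIR/other-skill"        # an unrelated skill that must survive
TARGET="$(mktemp -d)"
ln -s "$TARGET" "$SKILLS_DIR/lore:session"
ln -s "$TARGET" "$SKILLS_DIR/lore:curator"

# Remove only the lore:* symlinks; everything else stays.
find "$SKILLS_DIR" -maxdepth 1 -name 'lore:*' -type l -delete

ls "$SKILLS_DIR"    # → other-skill
```

In real use, point `SKILLS_DIR` at `~/.claude/skills` instead of the temp dir.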
The repo is a self-describing marketplace. Once published, you can do:
```
/plugin marketplace add buchbend/lore
/plugin install lore@lore
```
This gives you the skills but not the Python CLI or hooks — the `install.sh` path above remains the most complete install.
You have multiple knowledge domains (work, research, personal). Mount them all under one root:
```shell
mkdir -p ~/lore/wiki
cd ~/lore/wiki
ln -s ~/git/myorg/team-knowledge team
ln -s ~/git/research/knowledge research
# personal wiki lives inline at ~/lore/wiki/personal/
```
Then run `/lore:init` to write the root `CLAUDE.md` and you're set.
You just want the team vault and its skills:
```shell
mkdir -p ~/lore/wiki
ln -s ~/git/myorg/team-knowledge ~/lore/wiki/team
```
All `/lore:*` commands work with a single mount; no routing prompts.
The curator (flags stale notes, detects superseded decisions, keeps `_index.md` fresh) can run in several ways. The README picks no default for you; choose your trade-off:
| Pattern | Cost | Cadence | For |
|---|---|---|---|
| `/schedule /lore:curator <wiki>` on laptop | free | any | individuals |
| cron + `claude -p "/lore:curator <wiki>"` | free | any | power users without `/schedule` |
| GitHub Actions, on push to a wiki repo | API $ | per-push, incremental | shared team wikis |
| GitHub Actions, cron | API $ | nightly | always-on, no laptop |
| Home server + cron | free | any | users with an always-on box |
Reference workflows live in `examples/`. Every LLM invocation costs tokens; no default forces a cost on you.
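For the cron + headless-Claude row, a crontab entry might look like this. The wiki name `team`, the schedule, and the log path are placeholders, and it assumes the `claude` CLI is on cron's `PATH`:

```
# nightly at 03:00: let the curator review the "team" wiki
0 3 * * * claude -p "/lore:curator team" >> ~/lore/curator.log 2>&1
```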
Point `LORE_ROOT` at your vault (anything matching the canonical shape — a directory with a `wiki/` subfolder containing at least one mounted wiki) and add `schema_version: 1` to existing notes:
```shell
LORE_ROOT=/path/to/your/vault lore migrate --add-schema-version
# review the dry-run diff, then:
LORE_ROOT=/path/to/your/vault lore migrate --add-schema-version --apply
```
No files move. If your vault does not yet match the canonical shape, `lore init` scaffolds it without touching your notes.
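For illustration, a migrated note's frontmatter simply gains the version key. The title here is hypothetical; only the `schema_version` field comes from the migration:

```markdown
---
title: Adopt symlinked wikis
schema_version: 1   # added by `lore migrate --add-schema-version --apply`
---
```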
- Markdown + git stay authoritative. No database the vault can't be rebuilt from.
- Cheap context is automatic; expensive context is explicit. Inject bounded, deterministic context at SessionStart and PreCompact (reading cached files the linter regenerates). Invoke the LLM only at judgment points: session extraction, contradiction checks, import enrichment, curator review, briefing prose.
- Compose, don't replace. Skills orchestrate; MCP and CLI tools provide retrieval primitives; peer knowledge tools layer alongside.
- No PreToolUse auto-enrichment. Auto-injecting vault content on every tool call burns tokens and risks misleading the agent when the vault is stale. Lore is token-preserving by default: deterministic context is injected once at session start; the agent pulls more via MCP when it decides retrieval would help.
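As a sketch of what "inject once at session start" looks like in practice, a merged hook entry in `~/.claude/settings.json` might resemble the fragment below. The exact shape Lore writes and the `lore context` subcommand are assumptions, not the tool's documented interface:

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          { "type": "command", "command": "lore context --repo \"$PWD\"" }
        ]
      }
    ]
  }
}
```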
MIT. See LICENSE.