DAG-based Lossless Context Management for Claude Code.
Every message preserved forever. Summaries cascade, never delete. Full recall across sessions.
Getting Started · MCP Server · Commands · Terminal UI · How It Works · Configuration · Contributing
Claude Code forgets. claude-mem remembers fragments. lossless-code remembers everything.
```shell
/plugin marketplace add GodsBoy/lossless-code
/plugin install lossless-code
```
That's it. Start a new session and search your history:
```shell
lcc_grep "database migration"
```
Claude Code forgets everything between sessions. Memory tools like ClawMem, context-memory, and claude-mem use flat retrieval: keyword search over snippets, no structure, no hierarchy, no way to trace a summary back to its source conversation.
When a project spans weeks and hundreds of sessions, flat search fails. You get fragments without lineage.
lossless-code uses DAG-based lossless preservation, the same approach pioneered by lossless-claw for OpenClaw:
- Nothing is ever deleted. Every message stays in `vault.db` forever.
- Summaries form a directed acyclic graph. Messages cascade to depth-0 summaries, which roll up to depth-1, depth-2, and beyond.
- Full drill-down. `lcc_expand` traces any summary back to the exact messages that created it.
- Automatic. Claude Code hooks capture every turn and trigger summarisation transparently. Zero manual effort.
- Cross-session recall. Start a new session and your full project history is immediately searchable.
```
                  ┌───────────────┐
                  │  Claude Code  │
                  │    Session    │
                  └───────┬───────┘
                          │
      ┌───────────────────┼─────────────────┬───────────────┐
      │                   │                 │               │
┌─────▼───────┐   ┌───────▼───────┐   ┌─────▼──────┐   ┌────▼──────┐
│    Hooks    │   │    Skills     │   │    CLI     │   │    MCP    │
│   (write)   │   │   (shell)     │   │   Tools    │   │  Server   │
│             │   │               │   │            │   │  (stdio)  │
│ SessionStart│   │ lcc_grep      │   │ lcc_status │   │           │
│ Stop        │   │ lcc_expand    │   │            │   │  6 tools  │
│ PreCompact  │   │ lcc_context   │   │            │   │ read-only │
│ PostCompact │   │ lcc_sessions  │   │            │   │           │
│ UserPrompt  │   │ lcc_handoff   │   │            │   │           │
└─────┬───────┘   └───────┬───────┘   └─────┬──────┘   └────┬──────┘
      │                   │                 │               │
      └─────────┬─────────┴─────────────────┴───────────────┘
                │
   ┌────────────▼────────────────┐
   │          vault.db           │
   │          (SQLite)           │
   │                             │
   │  messages      summaries    │
   │  summary_sources  sessions  │
   │  FTS5 indexes               │
   └─────────────────────────────┘
```
| | lossless-code | ClawMem | context-memory | claude-mem |
|---|---|---|---|---|
| Storage | SQLite with FTS5 | SQLite + vector DB | Markdown files | SQLite + Chroma |
| Structure | DAG (summaries cascade) | Flat RAG retrieval | Flat retrieval | Flat retrieval |
| Drill-down | Full (summary to source messages) | None | None | None |
| Auto-capture | Hooks (zero manual effort) | Hooks + watcher | Manual | Hooks + worker |
| Cross-session | Yes (vault persists) | Yes | Yes | Yes |
| Summarisation | Cascading DAG (depth-N) | Single-level | None | Single-level |
| Search | FTS5 full-text | Hybrid (BM25 + vector + reranker) | Keyword | Hybrid (BM25 + vector) |
| MCP tools | 6 | 28 | 0 | 10+ |
| Background services | None | watcher + embed timer + GPU servers | None | Worker on port 37777 |
| Runtime | Python (stdlib) | Bun + llama.cpp (optional) | None | Bun |
| Models required | None (optional for summarisation) | 2GB+ GGUF (embed + reranker) | None | Chroma embeddings |
| Idle cost | Zero | CPU/RAM for services + embedding sweeps | Zero | Worker process |
Memory tools that inject context on every prompt are silently expensive. Here's why lossless-code's design saves tokens:
ClawMem injects relevant memory into 90% of prompts automatically (their stated design). claude-mem injects a context index on every SessionStart. Both approaches front-load tokens whether or not the agent needs that context.
lossless-code injects nothing by default. Context surfaces only when the agent explicitly calls an MCP tool or the PreCompact hook fires. Most coding turns (writing code, running tests, reading files) don't need historical context at all. You pay for recall only when recall matters.
Every MCP tool registered in `~/.claude.json` has its schema injected into every single API call's list of available tools. Claude Code's own docs warn: "Prefer CLI tools when available... they don't add persistent tool definitions."
- ClawMem: 28 MCP tools (query, intent_search, find_causal_links, timeline, similar, etc.)
- claude-mem: 10+ search endpoints via worker service
- lossless-code: 6 MCP tools (grep, expand, context, sessions, handoff, status)
Over a 200-turn session, that difference in tool schema overhead compounds significantly.
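As a back-of-envelope illustration of that compounding, here is a sketch in Python. The per-schema token count is an assumption for the sake of arithmetic, not a measured figure:

```python
# Back-of-envelope sketch: assumes ~150 tokens per MCP tool schema,
# re-sent with every API call. Numbers are illustrative, not measured.
TOKENS_PER_SCHEMA = 150  # assumed average size of one tool definition
TURNS = 200              # length of a long session

def schema_overhead(num_tools: int) -> int:
    """Total tokens spent on tool definitions across a session."""
    return num_tools * TOKENS_PER_SCHEMA * TURNS

clawmem = schema_overhead(28)       # 28 MCP tools
lossless_code = schema_overhead(6)  # 6 MCP tools
print(clawmem - lossless_code)      # tokens saved: 660000
```

Even under conservative assumptions, trimming the tool count saves hundreds of thousands of tokens per long session.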
ClawMem runs a watcher service (re-indexes on file changes) and an embed timer (daily embedding sweep across all collections). These require GGUF models (~2GB minimum) and consume CPU/GPU continuously. claude-mem runs a persistent worker service on port 37777.
lossless-code has zero background processes. Hooks fire only during Claude Code events. The vault is pure SQLite with FTS5 (built into SQLite, no external models). Nothing runs between sessions.
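That FTS5 layer needs nothing beyond Python's bundled `sqlite3` module (assuming your SQLite build ships FTS5, as most do). A minimal sketch of the kind of index the vault relies on; the table name mirrors the vault schema, but this is illustrative, not the actual lossless-code source:

```python
import sqlite3

# In-memory FTS5 index over message content: no vector DB, no
# embedding models, just SQLite's built-in full-text engine.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE messages_fts USING fts5(content)")
db.executemany(
    "INSERT INTO messages_fts (content) VALUES (?)",
    [("ran the database migration for auth tables",),
     ("refactored the TUI layout",)],
)
# MATCH performs tokenised full-text search.
rows = db.execute(
    "SELECT content FROM messages_fts WHERE messages_fts MATCH ?",
    ("migration",),
).fetchall()
print(rows)  # [('ran the database migration for auth tables',)]
```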
When Claude Code hits its context limit, it compacts: summarising earlier context to make room. With flat memory systems, compaction loses fidelity and the agent may re-explore territory it forgot, costing more tokens ("debugging in circles").
lossless-code's DAG captures the full conversation before compaction happens (PreCompact hook). After compaction, the PostCompact hook re-injects only the top-level summaries. The agent can drill down via lcc_expand if it needs detail, but the DAG ensures nothing is truly lost. This means:
- Fewer repeated explorations after compaction
- One long session is cheaper than multiple short sessions covering the same ground
- Context survives compaction without paying to re-read everything
| Dependency | lossless-code | ClawMem | claude-mem |
|---|---|---|---|
| Python 3.10+ | Yes (usually pre-installed) | No | No |
| Bun | No | Required | Required |
| llama.cpp / GGUF models | No | Optional (2GB+) | No |
| Chroma / vector DB | No | No | Required |
| systemd services | No | Recommended | No |
| `mcp` Python SDK | Yes (pip install) | No (TypeScript) | No |
Fewer dependencies means less to maintain, fewer failure modes, and lower resource consumption.
```shell
/plugin marketplace add GodsBoy/lossless-code
/plugin install lossless-code
```
Hooks, MCP server, and skill are activated automatically. No manual setup needed.
```shell
git clone https://github.com/GodsBoy/lossless-code.git
cd lossless-code
bash install.sh
```

The installer:
- Creates `~/.lossless-code/` with `vault.db` and scripts
- Configures Claude Code hooks in `~/.claude/settings.json`
- Installs the skill to `~/.claude/skills/lossless-code/`
- Adds CLI tools to PATH
Idempotent: safe to run again to upgrade.
- Python 3.10+
- SQLite 3.35+ (for FTS5)
- Claude Code CLI
Optional: the `anthropic` Python package for AI-powered summarisation (falls back to extractive summaries without it).
lossless-code includes an MCP (Model Context Protocol) server so Claude Code can access the vault as native tools without shelling out to CLI commands.
The installer (install.sh) automatically:
- Copies the MCP server to `~/.lossless-code/mcp/server.py`
- Installs the `mcp` Python SDK
- Registers the server in `~/.claude.json`
After installation, every new Claude Code session auto-discovers 6 MCP tools:
| Tool | Description |
|---|---|
| `lcc_grep` | Full-text search across messages and summaries |
| `lcc_expand` | Expand a summary back to source messages (DAG traversal) |
| `lcc_context` | Get relevant context for a query |
| `lcc_sessions` | List sessions with metadata |
| `lcc_handoff` | Generate session handoff documents |
| `lcc_status` | Vault statistics (sessions, messages, DAG depth, DB size) |
If you need to register the MCP server manually:
```jsonc
// ~/.claude.json
{
  "mcpServers": {
    "lossless-code": {
      "command": "python3",
      "args": ["~/.lossless-code/mcp/server.py"]
    }
  }
}
```

```
Claude Code ──stdio──▶ MCP Server ──read-only──▶ vault.db
                       (server.py)
                         6 tools
```
The MCP server is read-only. All writes to the vault happen through hooks (SessionStart, Stop, UserPromptSubmit, PreCompact, PostCompact). The MCP server imports `db.py` directly for SQLite access.
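One way to enforce read-only access at the database layer is to open SQLite via a read-only URI; a sketch of that pattern (not necessarily how `server.py` implements it, and the path is illustrative):

```python
import sqlite3

def open_vault_readonly(path: str) -> sqlite3.Connection:
    """Open a SQLite database read-only: any write attempt raises
    sqlite3.OperationalError instead of mutating the vault."""
    return sqlite3.connect(f"file:{path}?mode=ro", uri=True)
```

With `mode=ro` the connection cannot write even if application code has a bug, which is a cheap extra guarantee for a server whose contract is "reads only".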
Full-text search across all messages and summaries.

```shell
lcc_grep "database migration"
lcc_grep "auth refactor"
```

Expand a summary node back to its source messages.

```shell
lcc_expand sum_abc123def456
lcc_expand sum_abc123def456 --full
```

Surface relevant DAG nodes for a query. Without a query, returns the highest-depth summaries.

```shell
lcc_context "auth system"
lcc_context --limit 10
```

List recorded sessions with timestamps and handoff status.

```shell
lcc_sessions
lcc_sessions --limit 5
```

Show or generate a session handoff.

```shell
lcc_handoff
lcc_handoff --generate --session "$CLAUDE_SESSION_ID"
```

Show vault statistics: message count, summary count, DAG depth, and FTS index health.

```shell
lcc_status
```

`lcc-tui` is a terminal-based browser for your vault. Built with Textual.

```shell
lcc-tui
```

| Tab | Key | Description |
|---|---|---|
| Sessions | `1` | Browse all sessions; select to view messages |
| Search | `2` | Full-text search across messages and summaries |
| Summaries | `3` | Browse DAG summaries by depth; select to expand |
| Stats | `4` | Dashboard: sessions, messages, summaries, vault size |
- `1` to `4`: switch tabs
- `/`: open search modal from any view
- `Enter`: drill into selected session or summary
- `Esc`: go back
- `q`: quit
Full reference: docs/tui.md
| Hook | Event | Purpose |
|---|---|---|
| `session_start.sh` | SessionStart | Register session, inject handoff + summaries |
| `stop.sh` | Stop | Persist each turn to `vault.db` |
| `user_prompt_submit.sh` | UserPromptSubmit | Surface relevant context for the prompt |
| `pre_compact.sh` | PreCompact | Run DAG summarisation before compaction |
| `post_compact.sh` | PostCompact | Record compaction, re-inject top summaries |
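For reference, Claude Code hooks live in `~/.claude/settings.json`. A hand-written registration for one of the hooks above might look like the following; the installer writes the real entries, and the exact fields may differ from this sketch:

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          { "type": "command", "command": "~/.lossless-code/hooks/session_start.sh" }
        ]
      }
    ]
  }
}
```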
- Collect unsummarised messages, chunk into groups of ~20
- Summarise each chunk (via Claude API or extractive fallback)
- Write summary nodes to the `summaries` table (depth=0)
- Link to sources in `summary_sources`
- Mark source messages as summarised
- If depth-N exceeds threshold: cascade to depth-N+1
- Repeat until under threshold at every depth
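The chunk-and-cascade loop above can be sketched in a few lines of Python. Helper names here are hypothetical (the real implementation lives in the vault's `scripts/` modules, and writes nodes to SQLite rather than returning them):

```python
# Illustrative sketch of cascading summarisation.
# `summarise(texts)` stands in for the Claude API call or the
# extractive fallback.
CHUNK_SIZE = 20       # messages per chunk (config: chunkSize)
DEPTH_THRESHOLD = 10  # max nodes per depth before cascading

def summarise(texts: list[str]) -> str:
    # Extractive stand-in: keep the first line of each text.
    return " | ".join(t.splitlines()[0] for t in texts)

def cascade(messages: list[str]) -> dict[int, list[str]]:
    """Roll messages up into depth-0 summaries, then cascade upward
    until every depth is under the threshold."""
    levels: dict[int, list[str]] = {}
    # Depth 0: chunk raw messages and summarise each chunk.
    levels[0] = [
        summarise(messages[i:i + CHUNK_SIZE])
        for i in range(0, len(messages), CHUNK_SIZE)
    ]
    # Cascade: whenever a depth overflows, roll it up to depth+1.
    depth = 0
    while len(levels[depth]) > DEPTH_THRESHOLD:
        nodes = levels[depth]
        levels[depth + 1] = [
            summarise(nodes[i:i + CHUNK_SIZE])
            for i in range(0, len(nodes), CHUNK_SIZE)
        ]
        depth += 1
    return levels

levels = cascade([f"message {n}" for n in range(600)])
print({d: len(nodes) for d, nodes in levels.items()})  # {0: 30, 1: 2}
```

Because each level shrinks by roughly the chunk size, the DAG stays shallow: 600 messages collapse into 30 depth-0 nodes and just 2 depth-1 nodes.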
```
~/.lossless-code/
  vault.db      # SQLite: all messages, summaries, DAG, sessions
  config.json   # Settings (summary model, thresholds)
  scripts/      # Python modules and CLI tools
  hooks/        # Shell scripts called by Claude Code hooks
```
`~/.lossless-code/config.json`:

```json
{
  "summaryModel": "claude-haiku-4-5-20251001",
  "summaryProvider": "anthropic",
  "chunkSize": 20,
  "depthThreshold": 10,
  "incrementalMaxDepth": -1,
  "workingDirFilter": null
}
```

| Key | Default | Description |
|---|---|---|
| `summaryModel` | `claude-haiku-4-5-20251001` | Model for compactions |
| `summaryProvider` | `anthropic` | LLM provider: `anthropic` or `openai` |
| `chunkSize` | `20` | Messages per compaction chunk |
| `depthThreshold` | `10` | Max nodes at any depth before cascading |
| `incrementalMaxDepth` | `-1` | Max cascade depth (-1 = unlimited) |
| `workingDirFilter` | `null` | Only capture messages from this directory |
lossless-code supports multiple LLM providers for compactions. Configure your provider in ~/.lossless-code/config.json:
```json
{
  "summaryModel": "gpt-4.1-mini",
  "summaryProvider": "openai",
  "chunkSize": 20,
  "depthThreshold": 10
}
```

Anthropic (default)
Authentication is resolved automatically from multiple sources (in priority order):
- `ANTHROPIC_API_KEY` environment variable (standard API key)
- Claude Code OAuth token from `~/.claude/.credentials.json` (setup-token)
- `CLAUDE_CODE_OAUTH_TOKEN` environment variable
> Note: OAuth/setup-tokens require `ANTHROPIC_BASE_URL` pointing to a compatible proxy (e.g. OpenClaw), since `api.anthropic.com` does not accept OAuth tokens directly. If you only have a setup-token and no proxy, use the OpenAI provider instead.
```json
{ "summaryProvider": "anthropic", "summaryModel": "claude-haiku-4-5-20251001" }
```

Model examples: `claude-haiku-4-5-20251001`, `claude-sonnet-4-20250514`
OpenAI
Set OPENAI_API_KEY in your environment.
```json
{ "summaryProvider": "openai", "summaryModel": "gpt-4.1-mini" }
```

Model examples: `gpt-4.1-mini`, `gpt-4.1-nano`, `gpt-4o-mini`
You can use any model your provider supports. These are just common choices.
| Model | Input cost (per 1M tokens) |
|---|---|
| `gpt-4.1-nano` | ~$0.10 |
| `gpt-4o-mini` | ~$0.15 |
| `gpt-4.1-mini` | ~$0.40 |
| `claude-haiku-4-5-20251001` | ~$0.80 |
| `claude-sonnet-4-20250514` | ~$3.00 |
For typical compaction workloads using `gpt-4.1-mini`:
| Usage | Estimated cost |
|---|---|
| Light (1-2 sessions/day) | $1-3/month |
| Moderate (3-5 sessions/day) | $3-7/month |
| Heavy (10+ sessions/day) | $7-15/month |
Compactions are triggered automatically before context compaction (PreCompact hook) and at session end (Stop hook). The extractive fallback runs automatically when no API key is configured: no hard dependency on any LLM provider.
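An extractive fallback can be as simple as scoring sentences by word frequency and keeping the top few in their original order. This toy sketch illustrates the idea; it is not the actual fallback code, and all names are hypothetical:

```python
import re
from collections import Counter

def extractive_summary(text: str, max_sentences: int = 2) -> str:
    """Toy extractive summariser: keep the sentences whose words
    occur most often across the whole text. Illustrative only."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(w.lower() for w in re.findall(r"\w+", text))
    # Score each sentence by the total corpus frequency of its words.
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w.lower()] for w in re.findall(r"\w+", s)),
        reverse=True,
    )
    kept = set(scored[:max_sentences])
    # Re-emit the kept sentences in their original order.
    return " ".join(s for s in sentences if s in kept)
```

No model, no network: the fallback trades fidelity for zero dependencies, which is why the plugin can run without any API key configured.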
The lcc CLI provides direct access to vault operations.
```shell
# Run compaction manually
lcc summarise --run

# Run compaction for a specific session
lcc summarise --run --session <session-id>

# Check vault status
lcc status

# Search all messages and summaries
lcc grep "auth refactor"

# Show handoff from last session
lcc handoff

# Generate and save a handoff for current session
lcc handoff --generate --session "$CLAUDE_SESSION_ID"

# List recent sessions
lcc sessions

# Expand a summary node
lcc expand sum_abc123def456
```

```
sessions         -- session_id, working_dir, started_at, last_active, handoff_text
messages         -- id, session_id, turn_id, role, content, tool_name, working_dir, timestamp, summarised
summaries        -- id, session_id, content, depth, token_count, created_at
summary_sources  -- summary_id, source_type, source_id
messages_fts     -- FTS5 index on messages.content
summaries_fts    -- FTS5 index on summaries.content
```

```shell
rm -rf ~/.lossless-code
# Remove hooks from ~/.claude/settings.json manually
# Remove skill: rm -rf ~/.claude/skills/lossless-code
```

lossless-code is a Claude Code adaptation of the Lossless Context Management (LCM) architecture created by Jeff Lehman and the Martian Engineering team. Their lossless-claw plugin for OpenClaw proved that DAG-based context preservation eliminates the information loss problem in long-running AI sessions. lossless-code brings that same architecture to Claude Code.
Additional references:
- ClawMem by yoloshii (hooks architecture patterns)
- Voltropy LCM paper (theoretical foundation)
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch (`git checkout -b feat/your-feature`)
- Write tests for new functionality
- Ensure tests pass
- Open a pull request
lossless-code currently supports Claude Code natively. The hook and plugin ecosystem across coding agents is converging fast, and we're tracking compatibility:
| Agent | Hook Support | MCP | Status | Notes |
|---|---|---|---|---|
| Claude Code | 20+ lifecycle events | ✅ | ✅ Supported | Full plugin with hooks, MCP, skills |
| Copilot CLI | Claude Code format | ✅ | 🟢 Next | Reads `hooks.json` natively; lowest adaptation effort |
| Codex CLI | SessionStart, Stop, UserPromptSubmit | ✅ | 🟡 Planned | Experimental hooks engine (v0.114.0+); MCP works today |
| Gemini CLI | BeforeTool, AfterTool, lifecycle | ✅ | 🟡 Planned | Different event names; needs thin adapter layer |
| OpenCode | session.compacting + plugin hooks | ✅ | 🔵 Researching | Plugin architecture differs; compacting hook maps to PreCompact |
> MCP works everywhere today. Any agent that supports MCP servers can already use `lcc_grep`, `lcc_expand`, `lcc_context`, `lcc_sessions`, `lcc_handoff`, and `lcc_status` for manual recall. The roadmap above tracks automatic capture via hooks.
Contributions welcome for any of the planned integrations.
MIT
If lossless-code helps your workflow, consider giving it a ⭐

