The name comes from the French lier — to connect, to bind.
A portable external brain for local AI agents — one file, structured by relationships.
```
pip install liel
liel-demo
```

Runs fully local. No API keys required (LLM optional).
liel is a single-file graph memory layer for people using local AI agents while coding. One .liel file stores decisions, tasks, sources, files, facts, and the relationships between them, so tools can recall why decisions were made, not just what was said.
The core is a small Rust property graph engine with Python (PyO3) bindings and optional MCP tools. No server, no cloud, no daemon.
- Your code stays on your machine. No API keys, no telemetry, no cloud round-trips.
- Works with any LLM. Local (Ollama, LM Studio) or cloud (Claude, GPT) — only memory stays local.
- Offline-friendly. Memory persists across sessions without network access.
- One file, no lock-in. Copy, commit, archive, and open with any tool that speaks .liel.
Use liel as project memory through MCP:
pip install "liel[mcp]"Configure your LLM client to start the liel MCP server. In Claude Code, edit
.mcp.json in the project root like this:
```json
{
  "mcpServers": {
    "liel": {
      "type": "stdio",
      "command": "/absolute/path/to/liel-mcp",
      "args": ["--path", "/absolute/path/to/agent-memory.liel"]
    }
  }
}
```

Use the installed liel-mcp executable for command, and set --path to the .liel file the AI should use as durable memory. For other LLM/MCP clients, use the equivalent MCP server setting with the same command and args.

Do not put mcpServers in .claude/settings.json; that file is for Claude Code settings such as permissions and environment variables.
For first-time setup, --path is the clearest option. If the file does not
exist yet, liel creates it on first open. Without --path, the server checks
only the startup directory: if no *.liel file exists there, it uses
./memory.liel; if one exists, it uses that file; if multiple files exist, it
prints the candidates and asks you to register the intended file with --path
instead of choosing one silently.
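For illustration, here is a minimal Python sketch of that selection logic. It mirrors the behavior described above and is not the actual server code; the function name is hypothetical.

```python
import sys
from pathlib import Path

def resolve_memory_path(explicit_path: str | None) -> Path:
    """Pick the .liel file to open, following the rules described above."""
    if explicit_path is not None:
        # --path always wins; liel creates the file on first open if it is missing.
        return Path(explicit_path)
    candidates = sorted(Path.cwd().glob("*.liel"))
    if not candidates:
        return Path("memory.liel")   # nothing in the startup directory: use the default name
    if len(candidates) == 1:
        return candidates[0]         # exactly one file: use it
    # Multiple files: print the candidates and ask for --path instead of guessing.
    print("Multiple .liel files found; register the intended one with --path:")
    for candidate in candidates:
        print(f"  {candidate}")
    sys.exit(1)
```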
Then add a memory policy to the agent's project instructions. Start with the AI memory playbook, or use the sample CLAUDE.md as a longer Claude template.
When using liel as project memory:
- Always check existing memory before asking the user to repeat context.
- Save only durable, high-signal information: decisions, preferences, tasks, sources, and important project facts.
- Do not store temporary reasoning, speculative notes, noisy logs, or every tool result.
- Write at meaningful checkpoints, not every turn.
- Use nodes for entities and edges for relationships.
```python
import liel

with liel.open("agent-memory.liel") as db:
    # Entities become labeled nodes with properties.
    task = db.add_node(
        ["Task"],
        description="Migrate auth from JWT to server-side sessions",
    )
    question = db.add_node(
        ["OpenQuestion"],
        content="Use Redis or PostgreSQL for the session store?",
    )
    rejected = db.add_node(
        ["RejectedOption"],
        option="Redis",
        reason="Adds another infrastructure dependency",
    )
    decision = db.add_node(
        ["Decision"],
        content="Use a PostgreSQL session table",
    )
    source = db.add_node(["Source"], title="Auth migration notes")

    # Relationships become explicit, labeled edges.
    db.add_edge(task, "RAISED", question)
    db.add_edge(question, "REJECTED", rejected)
    db.add_edge(question, "RESOLVED_BY", decision)
    db.add_edge(decision, "SUPPORTED_BY", source)
    db.commit()

    # Traverse the graph to recall how the question was resolved.
    for node in db.neighbors(question, edge_label="RESOLVED_BY"):
        print(node["content"])
```

liel is intentionally lower-level and local-first. It ships as a single .liel file with no server, no API keys, and no required vector index. Relationships are explicit edges you write and traverse, not only facts inferred from chat history.
Mem0, Letta, and Zep may be a better fit when you want a hosted service, a full agent runtime, automatic memory extraction, temporal graph intelligence, dashboards, or production-scale context assembly. liel is the smaller substrate: local coding agents and project-adjacent tools that need durable, inspectable graph memory they can copy, commit, archive, and open from Python or MCP.
- One file, any place.
- No server, no waiting.
- Minimal dependencies, simple environments.
- Start small, stay local.
- Why liel - what it solves and what it does not
- Quickstart - demo, Python, and MCP paths
- AI memory playbook - recommended LLM memory pattern
- Sample CLAUDE.md - Claude project-instructions template
- Architecture - system layers and the Mermaid diagram
- Python guide - API, transactions, traversal
- MCP guide - Claude and other MCP-capable tools
- Feature list - what is provided at a glance
- Reliability - commit semantics, crash recovery, repair
- Format spec - byte-level .liel file format
- Product trade-offs - what liel does not do, and why
liel is currently in beta. The supported contract is the Python-first API plus the single-writer, single-file reliability model. There is no semantic or vector search in core, and commit() defines the crash-safe boundary. Breaking changes before 1.0 are tracked in the changelog.
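As a rough sketch of what that boundary means in practice, using the quickstart API shown above (the file name and node contents are illustrative only):

```python
import liel

# Illustrative only: shows where the crash-safe boundary sits,
# not how recovery works internally.
with liel.open("agent-memory.liel") as db:
    fact = db.add_node(["Fact"], content="Session table lives in the auth schema")
    source = db.add_node(["Source"], title="Migration notes")
    db.add_edge(fact, "SUPPORTED_BY", source)
    # commit() defines the crash-safe boundary: the writes above become durable here.
    db.commit()
```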
Pull requests and issues are welcome. A good first step is to run liel-demo and note anything confusing about the output, memory model, or docs.
See CONTRIBUTING.md.
Built by Hayato under hy-token, a personal namespace for small local-first tools and AI infrastructure experiments.