The first thing a new engineer needs is context. The same is true for your AI coding agent. Borg compiles it automatically from every prior session — one Postgres, no SDKs, no re-explaining.
```sh
curl -fsSL https://raw.githubusercontent.com/villanub/borgmemory/main/install.sh | sh
borg init
```

That's it. The installer detects Docker, sets up the stack with your OpenAI API key, and starts the engine. `borg init` detects which AI tools you use (Claude Code, Copilot, Codex, Kiro) and writes the right project settings file so your agent starts recording and retrieving automatically.
1. Learns automatically via CLAUDE.md.
Drop one line in your project's CLAUDE.md and your AI coding agent calls borg_learn after significant decisions and discoveries. No SDK import. No wrapper code. Borg rides the existing tool's hook system.
2. Extracts structure.
In the background, Borg processes each episode: it generates embeddings, extracts entities and facts via LLM, resolves them against the existing graph, and tracks which facts supersede earlier ones. Stale decisions don't contaminate future answers.
3. Compiles context.
When your agent calls borg_think, Borg classifies the query intent, selects retrieval profiles, ranks candidates across four dimensions (relevance, recency, stability, provenance), and delivers a formatted context package. Not raw search results — compiled ranked context.
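The four-dimension ranking step can be sketched as a weighted score over candidate memories. This is an illustrative model only — the weights, field names, and `Candidate` shape below are assumptions for the sketch, not Borg's actual scoring code:

```python
from dataclasses import dataclass

# Hypothetical weights -- Borg's real values may differ.
WEIGHTS = {"relevance": 0.4, "recency": 0.25, "stability": 0.2, "provenance": 0.15}

@dataclass
class Candidate:
    text: str
    relevance: float   # e.g. cosine similarity to the query embedding
    recency: float     # decays with the age of the source episode
    stability: float   # how long the fact has survived unsuperseded
    provenance: float  # trust in the source (explicit decision > inference)

def score(c: Candidate) -> float:
    # Weighted combination across the four dimensions.
    return sum(WEIGHTS[dim] * getattr(c, dim) for dim in WEIGHTS)

def compile_context(candidates: list[Candidate], k: int = 3) -> list[str]:
    # Rank all candidates and keep only the top-k for the context package.
    ranked = sorted(candidates, key=score, reverse=True)
    return [c.text for c in ranked[:k]]
```

The point of the weighting is that a highly similar but long-superseded fact can lose to a slightly less similar fact that is recent, stable, and well-sourced — which is what distinguishes compiled context from raw vector search.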
| | Mem0 | Zep | Cognee | Borg |
|---|---|---|---|---|
| Databases required | 2 (Qdrant + Neo4j) | 2 (Neo4j + vectors) | 3+ | 1 (Postgres) |
| Integration | Python SDK | Python SDK | Python SDK | One line in CLAUDE.md |
| Retrieval | Raw results | Raw results | Raw results | Compiled ranked context |
| Stale facts | No tracking | Temporal (Neo4j) | No tracking | Temporal validity (valid_until) |
Every competitor returns raw search results and requires 2-3 databases. Borg is a single Postgres with pgvector and recursive CTEs, and it compiles context rather than dumping matches.
The benchmark covers 10 tasks modeled on real engineering work, run under three conditions: A (no memory), B (simple top-10 vector retrieval), and C (full Borg compilation).
| Condition | Task Success | Retrieval Precision | Stale Fact Rate |
|---|---|---|---|
| A — No Memory | 0 / 10 | 0.060 | 0.000 |
| B — Simple RAG | 8 / 10 | 0.810 | 0.115 |
| C — Borg Compiled | 10 / 10 | 0.913 | 0.025 |
Borg compiled context solved every task, raised retrieval precision 10.3 points over vector RAG (a 12.7% relative gain), and cut the stale fact rate by 78%.
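Those deltas come straight from the table; a quick sanity check of the arithmetic:

```python
precision_rag, precision_borg = 0.810, 0.913
stale_rag, stale_borg = 0.115, 0.025

absolute_gain = precision_borg - precision_rag          # 0.103 -> 10.3 points
relative_gain = absolute_gain / precision_rag           # ~12.7% relative gain
stale_reduction = (stale_rag - stale_borg) / stale_rag  # ~78% fewer stale facts
```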
```text
Developer works in Claude Code / Copilot / Codex / Kiro
        |
        | CLAUDE.md triggers borg_learn automatically
        v
borg-engine receives episode
        |
        | Background worker:
        |   - embed + extract entities and facts via LLM
        |   - resolve against existing knowledge graph
        |   - supersede contradicting facts (valid_until = now)
        v
PostgreSQL knowledge graph
(episodes, entities, facts, predicates)
        |
        | Next session: borg_think called before a complex task
        v
Compiler pipeline
  - classify intent
  - select retrieval profiles (vector + graph + recency)
  - rank by relevance x recency x stability x provenance
  - format for target model
        |
        v
Compiled context injected into the assistant prompt
        |
        v
Better answers on the first attempt
```
Borg is open source under the Apache 2.0 license. Single-user, local, no auth — run it on your workstation or a VM you control.
- Getting Started — Local single-user setup in 3 commands
- Architecture — Engine design, compiler pipeline, offline worker
- Benchmark Details — Full per-task results and evaluation reasoning
Apache 2.0. See LICENSE for details.
Independent open-source project. All trademarks and product names are the property of their respective owners.