villanub/borgmemory

Borg — A memory stronghold for your AI coding agent.

The first thing a new engineer needs is context. The same is true for your AI coding agent. Borg compiles it automatically from every prior session — one Postgres, no SDKs, no re-explaining.


Quick Start

curl -fsSL https://raw.githubusercontent.com/villanub/borgmemory/main/install.sh | sh
borg init

That's it. The installer detects Docker, sets up the stack with your OpenAI API key, and starts the engine. borg init detects which AI tools you use (Claude Code, Copilot, Codex, Kiro) and writes the right project settings file so your agent starts recording and retrieving automatically.


What It Does

1. Learns automatically via CLAUDE.md. Drop one line in your project's CLAUDE.md and your AI coding agent calls borg_learn after significant decisions and discoveries. No SDK import. No wrapper code. Borg rides the existing tool's hook system.
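
The README doesn't spell out what that one line says, so the wording below is hypothetical — an illustration of the kind of standing instruction a project's CLAUDE.md could carry:

```
# CLAUDE.md  (illustrative — exact wording is up to you)
After any significant decision, discovery, or bug fix, call the
borg_learn tool with a one-paragraph summary of what was learned
and why it matters for future work.
```

Because the agent already re-reads CLAUDE.md each session, this instruction persists without any wrapper code.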

2. Extracts structure. In the background, Borg processes each episode: generates embeddings, extracts entities and facts via LLM, resolves them against the existing graph, and tracks which facts supersede earlier ones. Stale decisions don't contaminate future answers.
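
The supersession step can be sketched in plain Python. This is a minimal illustration, not Borg's implementation — the `Fact` shape and the same-subject/same-predicate contradiction rule are assumptions for the sketch:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Fact:
    subject: str
    predicate: str
    value: str
    valid_until: Optional[datetime] = None  # None = currently valid

def supersede(graph: list[Fact], new_fact: Fact) -> list[Fact]:
    """Close out any currently-valid fact the new one contradicts
    (same subject and predicate, different value), then append it."""
    now = datetime.now(timezone.utc)
    for fact in graph:
        if (fact.valid_until is None
                and fact.subject == new_fact.subject
                and fact.predicate == new_fact.predicate
                and fact.value != new_fact.value):
            fact.valid_until = now  # stale from this moment on
    graph.append(new_fact)
    return graph

graph = [Fact("api", "auth_method", "session cookies")]
supersede(graph, Fact("api", "auth_method", "JWT bearer tokens"))
current = [f for f in graph if f.valid_until is None]
```

The old fact stays in the graph with a closed validity window, so history is queryable but never surfaces as current truth.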

3. Compiles context. When your agent calls borg_think, Borg classifies the query intent, selects retrieval profiles, ranks candidates across four dimensions (relevance, recency, stability, provenance), and delivers a formatted context package. Not raw search results — compiled ranked context.
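
A toy version of that four-dimensional ranking, with made-up weights and field names (Borg's actual scoring function is not documented here):

```python
import math, time

def score(candidate: dict, weights=(0.4, 0.3, 0.2, 0.1)) -> float:
    """Fold the four ranking dimensions into one score.
    Weights and the 30-day decay scale are illustrative only."""
    w_rel, w_rec, w_stab, w_prov = weights
    age_days = (time.time() - candidate["created_at"]) / 86400
    recency = math.exp(-age_days / 30)          # exponential decay on a 30-day scale
    return (w_rel * candidate["relevance"]      # vector similarity to the query
            + w_rec * recency
            + w_stab * candidate["stability"]   # 0 if superseded, 1 if still valid
            + w_prov * candidate["provenance"]) # trust in the episode's source

now = time.time()
candidates = [
    {"id": "stale", "relevance": 0.9, "created_at": now - 90 * 86400,
     "stability": 0.0, "provenance": 1.0},
    {"id": "fresh", "relevance": 0.8, "created_at": now - 1 * 86400,
     "stability": 1.0, "provenance": 1.0},
]
ranked = sorted(candidates, key=score, reverse=True)
```

Note how the slightly-less-relevant but current fact outranks the more-similar superseded one — exactly the behavior a top-10 vector search cannot give you.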


Why Not Mem0 / Zep / Cognee?

                     Mem0                 Zep                  Cognee       Borg
Databases required   2 (Qdrant + Neo4j)   2 (Neo4j + vectors)  3+           1 (Postgres)
Integration          Python SDK           Python SDK           Python SDK   One line in CLAUDE.md
Retrieval            Raw results          Raw results          Raw results  Compiled ranked context
Stale facts          No tracking          Temporal (Neo4j)     No tracking  Temporal validity (valid_until)

Every competitor returns raw search results and requires at least two databases. Borg is a single Postgres with pgvector and recursive CTEs, and it compiles context rather than dumping matches.


Benchmark Results

10 tasks modeled on real engineering work patterns. Three conditions: A (no memory), B (simple top-10 vector retrieval), C (full Borg compilation).

Condition           Task Success   Retrieval Precision   Stale Fact Rate
A — No Memory       0 / 10         0.060                 0.000
B — Simple RAG      8 / 10         0.810                 0.115
C — Borg Compiled   10 / 10        0.913                 0.025

Borg compiled context solved every task, raised precision by 10.3 points over vector RAG (0.810 → 0.913), and cut the stale fact rate by 78%.


How It Works

Developer works in Claude Code / Copilot / Codex / Kiro
        |
        | CLAUDE.md triggers borg_learn automatically
        v
 borg-engine receives episode
        |
        | Background worker
        | - Embed + extract entities and facts via LLM
        | - Resolve against existing knowledge graph
        | - Supersede contradicting facts (valid_until = now)
        v
 PostgreSQL knowledge graph
 (episodes, entities, facts, predicates)
        |
        | Next session: borg_think called before complex task
        v
 Compiler pipeline
 - Classify intent
 - Select retrieval profiles (vector + graph + recency)
 - Rank by relevance x recency x stability x provenance
 - Format for target model
        |
        v
 Compiled context injected into assistant prompt
        |
        v
 Better answers on the first attempt
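
The compiler's first step — classify intent, select profiles — can be illustrated with a toy stand-in. Borg's real classifier is not documented here; this keyword heuristic and the profile names are assumptions made purely to show the shape of the step:

```python
# Hypothetical intent → retrieval-profile mapping, for illustration only.
PROFILES = {
    "debug":  ["vector", "recency"],   # recent errors matter most
    "design": ["vector", "graph"],     # pull in related entities too
    "recall": ["vector"],              # plain similarity lookup
}

def classify_intent(query: str) -> str:
    """Crude keyword classifier standing in for whatever Borg actually uses."""
    q = query.lower()
    if any(w in q for w in ("error", "bug", "fail", "crash")):
        return "debug"
    if any(w in q for w in ("design", "architecture", "trade-off")):
        return "design"
    return "recall"

def select_profiles(query: str) -> list[str]:
    return PROFILES[classify_intent(query)]
```

Whatever the real classifier looks like, the downstream contract is the same: an intent picks the retrieval profiles, and the ranker merges their candidates.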

License

Borg is open source under the Apache 2.0 license; see LICENSE for details. It is single-user, local, and unauthenticated — run it on your workstation or a VM you control.




Disclaimer

Independent open-source project. All trademarks and product names are the property of their respective owners.
