Persistent, searchable memory for AI agents. Index your codebase, git history, documents, and any custom data into a single SQLite file — then search it all with hybrid vector + keyword retrieval.
BrainBank gives LLMs a long-term memory that persists between sessions.
- Pluggable — `.use()` only what you need: code, git, docs, or custom
- Hybrid search — vector + BM25 fused with Reciprocal Rank Fusion
- Dynamic collections — `brain.collection('errors')` for any structured data
- Pluggable embeddings — local WASM (free), OpenAI, or Perplexity
- Portable — single `.brainbank/brainbank.db` SQLite file
- Modular — lightweight core + optional `@brainbank/*` packages
```sh
npm i -g brainbank @brainbank/code @brainbank/git @brainbank/docs
```

If you get `ERESOLVE` errors, use `npm i --legacy-peer-deps` — tree-sitter grammars have overlapping peer dep ranges.
```sh
brainbank index .                           # scans repo → interactive select → index
brainbank index . --yes                     # skip prompts, auto-select all
brainbank hsearch "rate limiting"           # hybrid search
brainbank kv add decisions "Use Redis..."   # store a memory
brainbank kv search decisions "caching"     # recall it
```

```ts
import { BrainBank } from 'brainbank';
import { code } from '@brainbank/code';
import { git } from '@brainbank/git';

const brain = new BrainBank({ repoPath: '.' })
  .use(code())
  .use(git());

await brain.index();
const results = await brain.hybridSearch('authentication middleware');

const log = brain.collection('decisions');
await log.add('Switched to argon2id for password hashing', { tags: ['security'] });

brain.close();
```

`brainbank` is the core framework. Plugins are separate `@brainbank/*` packages — install only what you need:
Data sources that feed into BrainBank's hybrid search engine.
| Package | Description | Install |
|---|---|---|
| `@brainbank/code` | AST chunking, import graph, symbol index (20 languages) | `npm i @brainbank/code` |
| `@brainbank/git` | Git history indexing + co-edit analysis | `npm i @brainbank/git` |
| `@brainbank/docs` | Document collection search with smart chunking | `npm i @brainbank/docs` |
Extensions that connect BrainBank to external tools and workflows.
| Package | Description | Install |
|---|---|---|
| `@brainbank/memory` | Fact extraction + entity graph for conversations | `npm i @brainbank/memory` |
| `@brainbank/mcp` | MCP server for Antigravity, Claude, Cursor | `npm i @brainbank/mcp` |
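All plugins follow the same pattern: a package exports a factory whose return value the core consumes via `.use()`. The sketch below illustrates the general shape with hypothetical types — `SourcePlugin` and `Chunk` are assumptions for this example, not BrainBank's real interface (see the Custom Plugins guide for that):

```typescript
// Hypothetical shapes for illustration — BrainBank's actual plugin
// interface is documented in the Custom Plugins guide.
interface Chunk {
  id: string;      // stable identifier, useful for incremental re-indexing
  text: string;    // content that would be embedded and BM25-indexed
  meta?: Record<string, unknown>;
}

interface SourcePlugin {
  name: string;
  // produce chunks for the core to index
  scan(repoPath: string): Promise<Chunk[]>;
}

// A toy source plugin in the spirit of the notes-plugin example:
function notes(): SourcePlugin {
  return {
    name: 'notes',
    async scan(repoPath: string): Promise<Chunk[]> {
      return [{ id: `${repoPath}/note-1`, text: 'Remember to rotate API keys.' }];
    },
  };
}
```

Under this assumed shape, `brain.use(notes())` would feed those chunks into the same hybrid index as code and git data.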
| Guide | Description |
|---|---|
| Getting Started | Installation, quick start, first search |
| CLI Reference | Complete command reference |
| Plugins | Built-in plugins overview + configuration |
| Collections | Dynamic KV store with semantic search |
| Search | Hybrid search, scoped queries, context generation |
| Custom Plugins | Build plugins + publish as npm packages |
| Configuration | .brainbank/config.json, env vars |
| Embeddings & Reranker | Providers, benchmarks, per-plugin overrides |
| Multi-Repo | Index multiple repositories into one DB |
| MCP Server | AI tool integration (stdio) |
| Memory | Agent patterns + @brainbank/memory |
| Indexing | Code graph, incremental indexing, re-embedding |
| Architecture | System internals, data flows, design patterns |
| Example | Description |
|---|---|
| notes-plugin | Programmatic plugin — reads .txt files |
| custom-plugin | CLI auto-discovery plugin |
| custom-package | Standalone npm package scaffold |
| collection | Collections, search, tags, metadata |
| rag | RAG chatbot — docs retrieval + generation ¹ |
| memory | Memory chatbot — fact extraction + entity graph ¹ |
¹ Requires `OPENAI_API_KEY`. RAG also requires `PERPLEXITY_API_KEY`.
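To run those two examples, export the keys first (placeholder values shown):

```shell
export OPENAI_API_KEY=sk-...        # needed by both the rag and memory examples
export PERPLEXITY_API_KEY=pplx-...  # additionally needed by the rag example
```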
Early benchmarks on Apple Silicon — single SQLite file, no external vector DB.
| Benchmark | Corpus | Metric | Score |
|---|---|---|---|
| BEIR SciFact | 5,183 scientific abstracts, 300 queries | NDCG@10 | 0.761 |
| Custom RAG eval | 127 Pinecall.io docs, 20 queries — 1 miss | R@5 | 83% |
Pipeline progression — each stage's impact on the custom eval:
| Stage | R@5 | Δ |
|---|---|---|
| Vector-only (HNSW) | 57% | — |
| + BM25 → RRF | 78% | +21pp |
| + Qwen3 reranker | 83% | +5pp |
More benchmarks (code+graph retrieval, large-scale stress tests, multi-provider comparisons) are in progress. Full methodology and reproduction commands → docs/benchmarks.md
See CONTRIBUTING.md for development setup and guidelines.
