# lxDIG MCP

Dynamic Intelligence Graph · Agent Memory · Multi-Agent Coordination
A Dynamic Intelligence Graph (DIG) MCP server that gives AI coding assistants persistent memory,
structural code understanding, and safe multi-agent coordination — beyond static RAG and GraphRAG.
Works with: VS Code Copilot · Claude Code · Claude Desktop · Cursor · any MCP-compatible AI assistant
lxDIG MCP (lexic Dynamic Intelligence Graph) is an open-source Model Context Protocol (MCP) server that adds a persistent code intelligence layer to AI coding assistants. Unlike static RAG or batch-oriented GraphRAG, lxDIG is a live, incrementally updated intelligence graph that turns any repository into a queryable knowledge graph — so agents can answer architectural questions, track decisions across sessions, coordinate safely in multi-agent workflows, and run only the tests actually affected by a change, without re-reading the entire codebase on every turn.
It is purpose-built for the agentic coding loop: the cycle of understand → plan → implement → verify → remember that AI agents (Claude, Copilot, Cursor) repeat continuously.
The core problem it solves: most AI coding assistants are stateless and architecturally blind. They re-read unchanged files on every session, miss cross-file relationships, forget past decisions, and collide when multiple agents work in parallel. lxDIG is the memory and structure layer that fixes all four.
- Why lxDIG?
- Key capabilities
- How it works
- Quick start
- 39 MCP tools — at a glance
- Use cases
- Comparison with alternatives
- Performance
- Roadmap
- Contributing
- Support the project
- License
## Why lxDIG?

Most code intelligence tools solve one of these problems. lxDIG solves all of them together:
| Problem | Without lxDIG | With lxDIG |
|---|---|---|
| Context loss between sessions | Agent re-reads everything on restart | Persistent episode + decision memory survives restarts |
| Architecturally blind retrieval | Embeddings miss cross-file relationships | Graph traversal finds structural dependencies |
| Probabilistic search misses | Semantic search returns nearest chunks, not facts | Hybrid graph + vector + BM25 fused with RRF |
| Multi-agent collisions | Two agents edit the same file simultaneously | Claims/release protocol with conflict detection |
| Wasted CI time | Full test suite on every change | Impact-scoped test selection — only affected tests run |
| Stale architecture knowledge | Agent guesses at layer boundaries | Graph-validated architecture rules + placement suggestions |
| Queries eat context budget | Raw file dumps, hundreds of tokens per answer | Cross-file answers in compact, budget-aware responses |
## Key capabilities

Turn your repository into a queryable property graph of files, functions, classes, imports, and their relationships. Ask questions in plain English or Cypher.
- Natural-language + Cypher graph queries (`graph_query`)
- Symbol-level explanation with full dependency context (`code_explain`)
- Pattern detection and architecture rule validation (`find_pattern`, `arch_validate`)
- Architecture placement suggestions for new code (`arch_suggest`)
- Semantic code slicing — targeted line ranges from a natural query (`semantic_slice`)
- Find duplicate or similar code across the codebase (`find_similar_code`, `code_clusters`)
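For example, once the graph is built, an agent can ask a structural question through the standard MCP tool-call shape. The `query` argument name below is illustrative — check the schema that `tools_list` exposes for the exact contract:

```json
{
  "name": "graph_query",
  "arguments": {
    "query": "Which functions call AuthService.login, and from which files?"
  }
}
```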
Your agent remembers what it decided, what it changed, what broke, and what it observed — even after a VS Code restart or a Claude Desktop session ends.
- Episode memory: observations, decisions, edits, test results, errors, learnings (`episode_add`, `episode_recall`)
- Decision log with semantic query (`decision_query`)
- Reflection synthesis from recent episodes (`reflect`)
- Temporal graph model: query any past code state with `asOf`, compare drift with `diff_since`
Run multiple AI agents in parallel on the same repository without conflicts.
- Claim/release protocol for file, function, or task ownership (`agent_claim`, `agent_release`)
- Fleet-wide coordination view — see what every agent is doing (`coordination_overview`, `agent_status`)
- Context packs that assemble high-signal task briefings under strict token budgets (`context_pack`)
- Blocker detection across agents and tasks (`blocking_issues`)
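The claim/release semantics can be pictured with a minimal in-memory sketch. This is illustrative only — lxDIG's coordination engine persists claims in the graph, and the `ClaimRegistry` name here is invented:

```typescript
// Sketch of a claim/release protocol with conflict detection.
// A "resource" is a file, function, or task identifier.
class ClaimRegistry {
  private claims = new Map<string, string>(); // resource -> owning agentId

  claim(agentId: string, resource: string): { ok: boolean; heldBy?: string } {
    const holder = this.claims.get(resource);
    if (holder && holder !== agentId) {
      return { ok: false, heldBy: holder }; // conflict: another agent owns it
    }
    this.claims.set(resource, agentId); // re-claiming your own resource is a no-op
    return { ok: true };
  }

  release(agentId: string, resource: string): boolean {
    if (this.claims.get(resource) !== agentId) return false; // only the owner may release
    this.claims.delete(resource);
    return true;
  }
}

const reg = new ClaimRegistry();
console.log(reg.claim("planner", "src/auth.ts").ok);     // true
console.log(reg.claim("implementer", "src/auth.ts"));    // { ok: false, heldBy: "planner" }
reg.release("planner", "src/auth.ts");
console.log(reg.claim("implementer", "src/auth.ts").ok); // true
```

The second agent is told *who* holds the claim, so it can wait, negotiate via `coordination_overview`, or pick a different task instead of silently clobbering edits.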
Stop running your full test suite on every change. Know exactly what's affected.
- Change impact analysis — blast radius of modified files (`impact_analyze`)
- Selective test execution — only the tests that can fail (`test_select`, `test_run`)
- Test categorization for parallelization and prioritization (`test_categorize`, `suggest_tests`)
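Conceptually, impact-scoped selection walks the import/call graph from each test and keeps only the tests whose transitive dependencies touch a changed file. A toy sketch of that idea (not lxDIG's actual implementation, which runs over the Memgraph graph):

```typescript
// deps: module -> modules it imports. Test files are modules too.
type Graph = Map<string, string[]>;

// Keep only the tests whose transitive imports include a changed file.
function selectTests(deps: Graph, tests: string[], changed: Set<string>): string[] {
  const reaches = (mod: string, seen = new Set<string>()): boolean => {
    if (changed.has(mod)) return true;
    if (seen.has(mod)) return false; // already explored (also breaks cycles)
    seen.add(mod);
    return (deps.get(mod) ?? []).some((d) => reaches(d, seen));
  };
  return tests.filter((t) => reaches(t));
}

const deps: Graph = new Map([
  ["auth.test.ts", ["auth.ts"]],
  ["billing.test.ts", ["billing.ts"]],
  ["auth.ts", ["db.ts"]],
  ["billing.ts", []],
]);
// Only auth.test.ts transitively depends on the changed db.ts.
console.log(selectTests(deps, ["auth.test.ts", "billing.test.ts"], new Set(["db.ts"])));
// → ["auth.test.ts"]
```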
Your READMEs, ADRs, and changelogs become searchable graph nodes, linked to the code they describe.
- Index all markdown docs in one call (`index_docs`)
- Full-text BM25 search across headings and content (`search_docs?query=...`)
- Symbol-linked lookup — every doc that references a class or function (`search_docs?symbol=MyClass`)
- Incremental re-index: only changed files are re-parsed
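BM25 ranks documents by term frequency, saturated and normalized against document length. A compact reference implementation of the textbook scoring formula — the docs engine's actual tokenizer and parameter choices may differ:

```typescript
// BM25 score of a query against each tokenized document.
// k1 controls term-frequency saturation, b controls length normalization.
function bm25(docs: string[][], query: string[], k1 = 1.2, b = 0.75): number[] {
  const N = docs.length;
  const avgdl = docs.reduce((s, d) => s + d.length, 0) / N;
  const df = new Map<string, number>(); // document frequency per query term
  for (const term of new Set(query)) {
    df.set(term, docs.filter((d) => d.includes(term)).length);
  }
  return docs.map((d) => {
    let score = 0;
    for (const term of query) {
      const f = d.filter((t) => t === term).length; // term frequency in this doc
      if (f === 0) continue;
      const idf = Math.log(1 + (N - df.get(term)! + 0.5) / (df.get(term)! + 0.5));
      score += (idf * (f * (k1 + 1))) / (f + k1 * (1 - b + (b * d.length) / avgdl));
    }
    return score;
  });
}

const docs = [
  ["auth", "service", "login", "token"],
  ["billing", "invoice", "pdf"],
];
const scores = bm25(docs, ["auth", "login"]);
console.log(scores[0] > scores[1]); // true: doc 0 matches both query terms
```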
Enforce architectural boundaries automatically and get placement guidance for new code.
- Layer/boundary rule validation (`arch_validate`)
- Graph-topology-aware placement suggestions (`arch_suggest`)
- Circular dependency and unused-code detection (`find_pattern`)
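Circular-dependency detection is a cycle search over the import graph. A minimal sketch of the underlying check — lxDIG runs this kind of analysis against the graph plane, and the function below is illustrative:

```typescript
type ImportGraph = Map<string, string[]>;

// Detect whether the import graph contains a cycle (visiting/done DFS coloring).
function hasCycle(graph: ImportGraph): boolean {
  const state = new Map<string, "visiting" | "done">();
  const visit = (node: string): boolean => {
    if (state.get(node) === "visiting") return true; // back edge found -> cycle
    if (state.get(node) === "done") return false;    // already fully explored
    state.set(node, "visiting");
    const cyclic = (graph.get(node) ?? []).some((n) => visit(n));
    state.set(node, "done");
    return cyclic;
  };
  return [...graph.keys()].some((n) => visit(n));
}

console.log(hasCycle(new Map([["a", ["b"]], ["b", ["a"]]]))); // true
console.log(hasCycle(new Map([["a", ["b"]], ["b", []]])));    // false
```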
Go from a fresh clone to a fully wired AI assistant in one tool call.
- `init_project_setup` — sets workspace, rebuilds graph, generates Copilot instructions
- `setup_copilot_instructions` — generates `.github/copilot-instructions.md` from your repo's topology
- Works with VS Code Copilot, Claude Code, Claude Desktop, and any MCP-compatible client
## How it works

lxDIG runs as an MCP server over stdio or HTTP and coordinates three data planes behind a single tool interface:
```
┌───────────────────────────────────────────────────────┐
│              MCP Tool Surface (39 tools)              │
│ stdio transport (local) │HTTP transport (remote/fleet)│
└───────────┬─────────────┴─────────────────┬───────────┘
            │                               │
┌───────────▼──────────┐      ┌─────────────▼───────────┐
│     Graph Plane      │      │      Vector Plane       │
│    Memgraph (Bolt)   │      │         Qdrant          │
│  ─────────────────   │      │  ─────────────────────  │
│  FILE · FUNC · CLASS │      │  Semantic embeddings    │
│  IMPORT · CALL edges │      │  Nearest-neighbor search│
│  Temporal tx history │      │  Natural-language code  │
└───────────┬──────────┘      └─────────────┬───────────┘
            │                               │
┌───────────▼───────────────────────────────▼───────────┐
│             Hybrid Retrieval (RRF fusion)             │
│  Graph expansion + Vector similarity + BM25 lexical   │
└───────────────────────────────────────────────────────┘
```
When you call `graph_query` in natural-language mode, retrieval runs as hybrid fusion:
- Vector similarity search (semantic concepts)
- BM25 lexical search (keyword matches)
- Graph expansion from seed nodes (structural relationships)
- Reciprocal Rank Fusion (RRF) merges all three signals into a single ranked result
The result: structurally accurate, semantically relevant answers — not just the closest embedding match.
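The fusion step itself is simple: each retriever contributes `1 / (k + rank)` per result, and the summed scores are re-ranked. A sketch of that step (`k = 60` is the conventional RRF constant; lxDIG's internals may weight the three signals differently):

```typescript
// Fuse several ranked result lists with Reciprocal Rank Fusion.
// Each input list is ordered best-first; k dampens the dominance of top ranks.
function rrfFuse(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, rank) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}

const vector = ["login.ts", "auth.ts", "token.ts"]; // semantic neighbors
const lexical = ["auth.ts", "session.ts"];          // BM25 keyword hits
const graph = ["auth.ts", "login.ts"];              // structural expansion
console.log(rrfFuse([vector, lexical, graph])[0]);  // "auth.ts" — ranked high by all three
```

A result that every signal agrees on outranks one that a single retriever happened to score well, which is exactly why fused answers beat the closest embedding match.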
## Quick start

Recommended setup: Memgraph + Qdrant in Docker, MCP server on your host via stdio. Your editor spawns and owns the process — no HTTP ports, no session headers.
| Requirement | Version |
|---|---|
| Node.js | 24+ |
| Docker Engine + Docker Compose | 24+ (Compose v2) |
```shell
git clone https://github.com/lexCoder2/lxDIG-MCP.git
cd lxDIG-MCP
npm install && npm run build
```

```shell
docker compose up -d memgraph qdrant
docker compose ps   # wait for "healthy" (~30 s)
```

**VS Code** — add to `.vscode/mcp.json`:

```json
{
  "servers": {
    "lxdig": {
      "type": "stdio",
      "command": "node",
      "args": ["/absolute/path/to/lxDIG-MCP/dist/server.js"],
      "env": {
        "MCP_TRANSPORT": "stdio",
        "MEMGRAPH_HOST": "localhost",
        "MEMGRAPH_PORT": "7687",
        "QDRANT_HOST": "localhost",
        "QDRANT_PORT": "6333"
      }
    }
  }
}
```

**Claude Desktop** — add to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "lxdig": {
      "command": "node",
      "args": ["/absolute/path/to/lxDIG-MCP/dist/server.js"],
      "env": {
        "MCP_TRANSPORT": "stdio",
        "MEMGRAPH_HOST": "localhost",
        "MEMGRAPH_PORT": "7687",
        "QDRANT_HOST": "localhost",
        "QDRANT_PORT": "6333"
      }
    }
  }
}
```

Then, from your AI assistant, run the one-shot setup call:

```json
{
  "name": "init_project_setup",
  "arguments": {
    "workspaceRoot": "/absolute/path/to/your-project",
    "sourceDir": "src",
    "projectId": "my-repo"
  }
}
```

This single call sets the workspace context, rebuilds the code graph, and generates `.github/copilot-instructions.md` for your project. Your agent is ready to query.
Total setup time: ~5 minutes. See QUICK_START.md for the full guide including Docker, Claude Desktop, and HTTP transport.
## 39 MCP tools — at a glance
| Category | Tools | What they do |
|---|---|---|
| Graph / querying | `graph_set_workspace` `graph_rebuild` `graph_health` `graph_query` | Index and query the code graph |
| Code intelligence | `code_explain` `find_pattern` `semantic_slice` `context_pack` `diff_since` | Understand structure and change |
| Architecture | `arch_validate` `arch_suggest` | Enforce boundaries, guide placement |
| Semantic / similarity | `semantic_search` `find_similar_code` `code_clusters` `semantic_diff` | Find related code by meaning |
| Test intelligence | `test_select` `test_categorize` `impact_analyze` `test_run` `suggest_tests` | Run only what matters |
| Progress / ops | `progress_query` `task_update` `feature_status` `blocking_issues` | Track delivery and blockers |
| Agent memory | `episode_add` `episode_recall` `decision_query` `reflect` | Persist and retrieve agent knowledge |
| Coordination | `agent_claim` `agent_release` `agent_status` `coordination_overview` | Safe multi-agent parallelism |
| Documentation | `index_docs` `search_docs` | Search your READMEs and ADRs like code |
| Reference | `ref_query` | Query a sibling repo for patterns and examples |
| Setup | `init_project_setup` `setup_copilot_instructions` `contract_validate` `tools_list` | One-shot onboarding |
## Use cases

- Ask "what calls `AuthService.login` across the whole repo?" and get a graph answer, not a file dump
- Resume a refactoring task after a VS Code restart — your agent remembers every decision
- Run `impact_analyze` before committing — know exactly which tests to run
- Use `arch_validate` to catch layer violations before they become bugs
- Run a planning agent and an implementation agent in parallel without file conflicts
- Use `coordination_overview` to see what every agent is working on
- `context_pack` hands off a high-signal task briefing between agents in one call
- Persistent decision memory means the second agent doesn't repeat work the first already did
- `graph_health` as a startup readiness gate
- `test_select` + `test_run` for impact-scoped CI that's 5–10x faster than a full suite
- `arch_validate` as an automated architecture compliance check on every PR
- `init_project_setup` on a new codebase — graph + copilot instructions in ~30 seconds
- `code_explain` to understand unfamiliar subsystems with full dependency context
- `setup_copilot_instructions` generates AI assistant instructions tailored to your repo's topology
## Comparison with alternatives

| Feature | lxDIG MCP | Plain RAG / embeddings | GitHub Copilot (built-in) | Custom LangChain agent |
|---|---|---|---|---|
| Cross-file structural reasoning | ✅ Graph edges | ❌ Chunks only | | |
| Persistent agent memory | ✅ Episodes + decisions | ❌ Stateless | ❌ Stateless | |
| Multi-agent coordination | ✅ Claims/releases | ❌ None | ❌ None | ❌ Custom setup |
| Temporal code model | ✅ `asOf` + `diff_since` | ❌ | ❌ | ❌ |
| Impact-scoped test selection | ✅ Built-in | ❌ | ❌ | ❌ |
| Architecture validation | ✅ Rule-based | ❌ | ❌ | ❌ |
| MCP-native (any AI client) | ✅ 39 tools | ❌ | ❌ | ❌ |
| Open source / self-hosted | ✅ MIT | | ❌ Closed | ✅ |
| Setup complexity | Medium (Docker) | Low | None | High |
## Performance

Benchmarks run against a synthetic 20-scenario agent task suite (`benchmarks/`):
| Metric | Result |
|---|---|
| Scenarios where lxDIG was faster than baseline | 15 / 20 |
| MCP-only successful scenarios (baseline could not complete) | 4 / 20 |
| vs Grep / manual file reads | 9x–6000x faster, <1% false positives |
| vs pure vector RAG | 5x token savings, 10x more relevant results |
Benchmarks are workload-dependent. Run `npm run benchmark:check-regression` against your own repository for accurate numbers.
Every feature below is production-ready today:
- ✅ Hybrid retrieval for `graph_query` — vector + BM25 + graph expansion fused with RRF
- ✅ AST-accurate parsers via tree-sitter for TypeScript, TSX, JS/MJS/CJS, JSX, Python, Go, Rust, Java
- ✅ Watcher-driven incremental rebuilds — graph stays fresh without manual intervention (requires `LXDIG_ENABLE_WATCHER=true`)
- ✅ Temporal code model — `asOf` queries any past graph state; `diff_since` shows what changed
- ✅ Indexing-time symbol summaries — compact-profile answers stay useful in tight token budgets
- ✅ Leiden community detection + PageRank PPR with JS fallbacks for non-MAGE environments
- ✅ SCIP IDs on all FILE, FUNCTION, and CLASS nodes for precise cross-tool symbol references
- ✅ Episode memory, agent coordination, context packs, and response budget shaping
- ✅ Docs & ADR indexing — markdown parsed into graph nodes; queried by text or symbol association
- ✅ 402 tests across parsers, builders, engines, and tool handlers — all green
| Mode | Best for | Command |
|---|---|---|
| stdio ✅ recommended | VS Code Copilot, Claude Code, Claude Desktop, Cursor | `npm run start` |
| HTTP | Remote agents, multi-client fleets, CI pipelines | `npm run start:http` |
```shell
npm run start                        # stdio server (recommended)
npm run start:http                   # HTTP supervisor (multi-session)
npm run build                        # compile TypeScript
npm test                             # run all 402 tests
npm run benchmark:check-regression   # check latency/token regressions
```

| Path | What's inside |
|---|---|
| `src/server.ts`, `src/mcp-server.ts` | MCP + HTTP transport surfaces |
| `src/tools/` | Tool handlers, registry, all 39 tool implementations |
| `src/graph/` | Graph client, orchestrator, hybrid retriever, watcher, docs builder |
| `src/engines/` | Architecture, test, progress, coordination, episode, docs engines |
| `src/parsers/` | AST + markdown parsers (tree-sitter + regex fallback) |
| `src/response/` | Response shaping, profile budgets, summarization |
| `docs/GRAPH_EXPERT_AGENT.md` | Full agent runbook — tool priority, path rules, response shaping |
| `docs/MCP_INTEGRATION_GUIDE.md` | Deep-dive integration guide |
| `QUICK_START.md` | Step-by-step deployment + editor wiring (~5 min) |
- Start every session with `graph_set_workspace` → `graph_rebuild` (or configure `init_project_setup` to run automatically)
- Prefer `graph_query` over file reads for discovery — far fewer tokens, cross-file context included
- Use `profile: compact` in autonomous loops; switch to `balanced` or `debug` when you need detail
- Rebuild incrementally after meaningful edits; the file watcher handles this automatically during active sessions
- Run `impact_analyze` before tests so your agent only executes what's actually affected
## Roadmap

lxDIG is open source and self-hosted today. Planned work ahead — see ROADMAP.md for the full prioritized backlog with detail on each item.

- Language Server Protocol (LSP) integration for deeper symbol resolution
- Go, Rust, Java parser improvements
- MCP `resources` surface (expose graph nodes as MCP resources)
- Webhook-triggered graph rebuilds for CI environments
- Plugin API for custom tool registration
- Real-time transparent graph sync — continuous file-watching with live graph and vector index updates surfaced as observable events, so agents and users always know when the graph is current without polling `graph_health` or triggering manual rebuilds
- Domain knowledge layer — attach external knowledge sources (documentation, standards, specs, research articles) directly to code symbols as graph nodes; a `calculateBMI` function links to CDC/WHO references, a payment function links to PCI-DSS rules, a GDPR-scoped model links to regulation articles — giving agents real-world context alongside structural context
- Multi-user coordination — shared agent memory, task ownership, and conflict detection across multiple developers on the same repository
- lxDIG Cloud — hosted, zero-infrastructure version for individuals and teams
## Contributing

Pull requests are welcome. Whether it's a new parser, a tool improvement, a bug fix, or better docs — contributions of all sizes move this project forward.

- Bugs / features — open an issue first to align on scope
- New tools — follow the handler + registration pattern in `src/tools/`; include tests
- New language parsers — add tree-sitter grammar + tests in `src/parsers/`
- Docs — typos, clarifications, and examples are always appreciated
→ Open a pull request · → Browse open issues
## Support the project
lxDIG MCP is built and maintained in personal time — researching graph retrieval techniques, designing the tool surface, writing tests, and keeping everything working across MCP protocol updates. If it saves you time or makes your AI-assisted workflows meaningfully better, consider supporting the work:
- GitHub Sponsors → github.com/sponsors/lexCoder2
- Buy Me a Coffee → buymeacoffee.com/hi8g
## FAQ

**Q: Does lxDIG require a cloud service or API key?**
No. lxDIG runs entirely on your machine. Memgraph and Qdrant run in Docker containers you control. No data leaves your environment.

**Q: Does it work with Cursor?**
Yes. Any MCP-compatible client works. Add the stdio config to Cursor's MCP settings the same way as VS Code.

**Q: How large a codebase can it handle?**
The graph plane (Memgraph) scales to millions of nodes. For very large monorepos, use `sourceDir` to scope indexing to the relevant subdirectory. Incremental rebuilds keep the graph fresh without re-indexing everything.

**Q: Do I need to run Qdrant?**
Qdrant is optional but recommended for large codebases. Without it, `semantic_search` and `find_similar_code` are unavailable; all other tools continue to work via graph-only or BM25 retrieval.

**Q: Can multiple developers on a team share one lxDIG instance?**
Yes, via HTTP transport. One running instance handles multiple independent sessions. Team-level shared memory is on the lxDIG Cloud roadmap.

**Q: Is this production-ready?**
The core tools are stable and tested (402 tests, all green). Treat it as beta — APIs may change before a 1.0 release. Pin your version and watch the changelog.
## License

MIT — free to use, modify, and distribute.