Type /graphify in your AI coding assistant and it maps your entire project — code, docs, PDFs, images, videos — into a knowledge graph you can query instead of grepping through files.
Works in Claude Code, Codex, OpenCode, Cursor, Gemini CLI, GitHub Copilot CLI, VS Code Copilot Chat, Aider, OpenClaw, Factory Droid, Trae, Hermes, Kimi Code, Kiro, Pi, and Google Antigravity.
```
/graphify .
```
That's it. You get three files:
```
graphify-out/
├── graph.html        # open in any browser — click nodes, filter, search
├── GRAPH_REPORT.md   # the highlights: key concepts, surprising connections, suggested questions
└── graph.json        # the full graph — query it anytime without re-reading your files
```
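If you want to poke at graph.json outside your assistant, a few lines of Python go a long way. This is a sketch that assumes a typical node/edge JSON shape (`nodes` and `edges` arrays with `id`, `source`, `target` fields) — check the real file for the exact schema:

```python
import json
from collections import Counter

def top_nodes(graph: dict, n: int = 5) -> list[tuple[str, int]]:
    """Rank nodes by edge count (degree) -- a rough cut at the 'god nodes'."""
    degree = Counter()
    for edge in graph.get("edges", []):
        degree[edge["source"]] += 1
        degree[edge["target"]] += 1
    return degree.most_common(n)

# Toy graph standing in for the real file; load that instead with:
#   graph = json.load(open("graphify-out/graph.json"))
graph = {
    "nodes": [{"id": "auth"}, {"id": "db"}, {"id": "api"}],
    "edges": [
        {"source": "api", "target": "auth"},
        {"source": "auth", "target": "db"},
        {"source": "api", "target": "db"},
    ],
}
print(top_nodes(graph, 2))  # every node in this toy graph touches two edges
```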
For a readable architecture page with Mermaid call-flow diagrams, run:
```
graphify export callflow-html
```

| Requirement | Minimum | Check | Install |
|---|---|---|---|
| Python | 3.10+ | `python --version` | python.org |
| uv (recommended) | any | `uv --version` | `curl -LsSf https://astral.sh/uv/install.sh \| sh` |
| pipx (alternative) | any | `pipx --version` | `pip install pipx` |
macOS quick install (Homebrew):

```
brew install python@3.12 uv
```

Windows quick install:

```
winget install astral-sh.uv
```

Ubuntu/Debian:

```
sudo apt install python3.12 python3-pip pipx
# or install uv:
curl -LsSf https://astral.sh/uv/install.sh | sh
```

**Official package:** The PyPI package is `graphifyy` (double-y). Other `graphify*` packages on PyPI are not affiliated. The CLI command is still `graphify`.
Step 1 — install the package:

```
# Recommended (uv puts graphify on PATH automatically):
uv tool install graphifyy

# Alternatives:
pipx install graphifyy
pip install graphifyy
```

Step 2 — register the skill with your AI assistant:

```
graphify install
```

That's it. Open your AI assistant and type `/graphify .`
**PowerShell note:** Use `graphify .`, not `/graphify .` — the leading slash is a path separator in PowerShell.

`graphify: command not found`? Use `uv tool install graphifyy` or `pipx install graphifyy` — both put the CLI on PATH automatically. With plain `pip`, add `~/.local/bin` (Linux) or `~/Library/Python/3.x/bin` (Mac) to your PATH, or run `python -m graphify`.
| Platform | Install command |
|---|---|
| Claude Code (Linux/Mac) | graphify install |
| Claude Code (Windows) | graphify install --platform windows |
| Codex | graphify install --platform codex |
| OpenCode | graphify install --platform opencode |
| GitHub Copilot CLI | graphify install --platform copilot |
| VS Code Copilot Chat | graphify vscode install |
| Aider | graphify install --platform aider |
| OpenClaw | graphify install --platform claw |
| Factory Droid | graphify install --platform droid |
| Trae | graphify install --platform trae |
| Trae CN | graphify install --platform trae-cn |
| Gemini CLI | graphify install --platform gemini |
| Hermes | graphify install --platform hermes |
| Kimi Code | graphify install --platform kimi |
| Kiro IDE/CLI | graphify kiro install |
| Pi coding agent | graphify install --platform pi |
| Cursor | graphify cursor install |
| Google Antigravity | graphify antigravity install |
**Codex users:** also add `multi_agent = true` under `[features]` in `~/.codex/config.toml`. Codex uses `$graphify` instead of `/graphify`.
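Based on the note above, the relevant fragment of `~/.codex/config.toml` would look something like this (a sketch — any other keys already in your config stay as they are):

```toml
[features]
multi_agent = true
```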
Install only what you need:

| Extra | What it adds | Install |
|---|---|---|
| `pdf` | PDF extraction | `pip install "graphifyy[pdf]"` |
| `office` | .docx and .xlsx support | `pip install "graphifyy[office]"` |
| `google` | Google Sheets rendering | `pip install "graphifyy[google]"` |
| `video` | Video/audio transcription (faster-whisper + yt-dlp) | `pip install "graphifyy[video]"` |
| `mcp` | MCP stdio server | `pip install "graphifyy[mcp]"` |
| `neo4j` | Neo4j push support | `pip install "graphifyy[neo4j]"` |
| `svg` | SVG graph export | `pip install "graphifyy[svg]"` |
| `leiden` | Leiden community detection (Python < 3.13 only) | `pip install "graphifyy[leiden]"` |
| `ollama` | Ollama local inference | `pip install "graphifyy[ollama]"` |
| `openai` | OpenAI / OpenAI-compatible APIs | `pip install "graphifyy[openai]"` |
| `gemini` | Google Gemini API | `pip install "graphifyy[gemini]"` |
| `bedrock` | AWS Bedrock (uses IAM, no API key) | `pip install "graphifyy[bedrock]"` |
| `sql` | SQL schema extraction | `pip install "graphifyy[sql]"` |
| `all` | Everything above | `pip install "graphifyy[all]"` |
Run this once in your project after building a graph:
| Platform | Command |
|---|---|
| Claude Code | graphify claude install |
| Codex | graphify codex install |
| OpenCode | graphify opencode install |
| GitHub Copilot CLI | graphify copilot install |
| VS Code Copilot Chat | graphify vscode install |
| Aider | graphify aider install |
| OpenClaw | graphify claw install |
| Factory Droid | graphify droid install |
| Trae | graphify trae install |
| Trae CN | graphify trae-cn install |
| Cursor | graphify cursor install |
| Gemini CLI | graphify gemini install |
| Hermes | graphify hermes install |
| Kimi Code | graphify install --platform kimi |
| Kiro IDE/CLI | graphify kiro install |
| Pi coding agent | graphify pi install |
| Google Antigravity | graphify antigravity install |
This writes a small config file that tells your assistant to read GRAPH_REPORT.md before answering questions about your codebase. On platforms that support hooks (Claude Code, Codex, Gemini CLI), a hook fires automatically before every file-read call — your assistant navigates by the graph instead of grepping through everything.
To remove graphify from all platforms at once: `graphify uninstall` (add `--purge` to also delete `graphify-out/`). Or use the per-platform command (e.g. `graphify claude uninstall`).
- God nodes — the most-connected concepts in your project. Everything flows through these.
- Surprising connections — links between things that live in different files or modules. Ranked by how unexpected they are.
- The "why" — inline comments (`# NOTE:`, `# WHY:`, `# HACK:`), docstrings, and design rationale from docs are extracted as separate nodes linked to the code they explain.
- Suggested questions — 4–5 questions the graph is uniquely positioned to answer.
- Confidence tags — every inferred relationship is marked `EXTRACTED`, `INFERRED`, or `AMBIGUOUS`. You always know what was found vs. guessed.
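The confidence tags make it easy to review guessed links separately from extracted ones. A sketch of how you might bucket them — this assumes edges in graph.json carry the tag in a field named `confidence`, which you should verify against the actual file:

```python
def split_by_confidence(edges: list[dict]) -> dict[str, list[dict]]:
    """Bucket edges by confidence tag so inferred links can be reviewed separately."""
    buckets = {"EXTRACTED": [], "INFERRED": [], "AMBIGUOUS": []}
    for edge in edges:
        # Treat an unknown or missing tag as AMBIGUOUS rather than dropping the edge.
        buckets.setdefault(edge.get("confidence", "AMBIGUOUS"), []).append(edge)
    return buckets

edges = [
    {"source": "auth", "target": "db", "confidence": "EXTRACTED"},
    {"source": "auth", "target": "cache", "confidence": "INFERRED"},
]
buckets = split_by_confidence(edges)
print(len(buckets["EXTRACTED"]), len(buckets["INFERRED"]))  # 1 1
```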
| Type | Extensions |
|---|---|
| Code (29 languages) | .py .ts .js .jsx .tsx .mjs .go .rs .java .c .cpp .h .hpp .rb .cs .kt .scala .php .swift .lua .luau .zig .ps1 .ex .exs .m .mm .jl .vue .svelte .astro .groovy .gradle .dart .v .sv .sql .f .f90 .f95 .f03 .f08 .pas .pp .dpr .dpk .lpr .inc .dfm .lfm .lpk |
| Docs | .md .mdx .qmd .html .txt .rst .yaml .yml |
| Office | .docx .xlsx (requires pip install graphifyy[office]) |
| Google Workspace | .gdoc .gsheet .gslides (opt-in; requires gws auth and --google-workspace; Sheets need pip install graphifyy[google]) |
| PDFs | .pdf |
| Images | .png .jpg .webp .gif |
| Video / Audio | .mp4 .mov .mp3 .wav and more (requires pip install graphifyy[video]) |
| YouTube / URLs | any video URL (requires pip install graphifyy[video]) |
Code is extracted locally with no API calls (AST via tree-sitter). Everything else goes through your AI assistant's model API.
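graphify's code pass uses tree-sitter grammars, but the underlying idea — walk the syntax tree locally and emit nodes for definitions, edges for calls — can be illustrated with Python's stdlib `ast` module. This is an analogy, not graphify's actual extractor:

```python
import ast

source = """
def fetch(url):
    return parse(url)

def parse(url):
    return url.strip()
"""

tree = ast.parse(source)
nodes, edges = [], []
for fn in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
    nodes.append(fn.name)  # one graph node per function definition
    for call in [c for c in ast.walk(fn) if isinstance(c, ast.Call)]:
        if isinstance(call.func, ast.Name):  # plain calls only; skips methods like url.strip()
            edges.append((fn.name, call.func.id))  # caller -> callee edge

print(nodes)   # ['fetch', 'parse']
print(edges)   # [('fetch', 'parse')]
```

Nothing here touches the network — the same property the tree-sitter pass has.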
**Google Drive for desktop:** .gdoc, .gsheet, and .gslides files are shortcut pointers, not document content. To include native Google Docs, Sheets, and Slides in a headless extraction, install and authenticate the `gws` CLI, then run:

```
pip install "graphifyy[google]"   # needed for Google Sheets table rendering
gws auth login -s drive
graphify extract ./docs --google-workspace
```

You can also set `GRAPHIFY_GOOGLE_WORKSPACE=1`. Graphify exports shortcuts into `graphify-out/converted/` as Markdown sidecars, then extracts those files.
```
/graphify .                      # build graph for current folder
/graphify ./docs --update        # re-extract only changed files
/graphify . --cluster-only       # rerun clustering without re-extracting
/graphify . --no-viz             # skip the HTML, just the report + JSON
/graphify . --wiki               # build a markdown wiki from the graph
graphify export callflow-html    # architecture/call-flow HTML from graphify-out/

/graphify query "what connects auth to the database?"
/graphify path "UserService" "DatabasePool"
/graphify explain "RateLimiter"
/graphify add https://arxiv.org/abs/1706.03762   # fetch a paper and add it
/graphify add <youtube-url>                      # transcribe and add a video

graphify hook install                 # auto-rebuild on git commit
graphify merge-graphs a.json b.json   # combine two graphs
```

See the full command reference below.
Create a `.graphifyignore` in your project root — same syntax as `.gitignore`, including `!` negation:

```
# .graphifyignore
node_modules/
dist/
*.generated.py

# only index src/, ignore everything else
*
!src/
!src/**
```
`graphify-out/` is meant to be committed to git so everyone on the team starts with a map.

Recommended `.gitignore` additions:

```
graphify-out/manifest.json   # mtime-based, breaks after git clone
graphify-out/cost.json       # local only
# graphify-out/cache/        # optional: commit for speed, skip to keep repo small
```
Workflow:

- One person runs `/graphify .` and commits `graphify-out/`.
- Everyone pulls — their assistant reads the graph immediately.
- Run `graphify hook install` to auto-rebuild after each commit (AST only, no API cost). This also sets up a git merge driver so `graph.json` is never left with conflict markers — two devs committing in parallel get their graphs union-merged automatically.
- When docs or papers change, run `/graphify --update` to refresh those nodes.
```
# query the graph from the terminal
graphify query "show the auth flow"
graphify query "what connects DigestAuth to Response?" --graph graphify-out/graph.json

# expose the graph as an MCP server (for repeated tool-call access)
python -m graphify.serve graphify-out/graph.json

# register with Kimi Code:
kimi mcp add --transport stdio graphify -- python -m graphify.serve graphify-out/graph.json
```

The MCP server gives your assistant structured access: `query_graph`, `get_node`, `get_neighbors`, `shortest_path`.
**WSL / Linux note:** Ubuntu ships `python3`, not `python`. Use a venv to avoid conflicts: `python3 -m venv .venv && .venv/bin/pip install "graphifyy[mcp]"`
These are only needed for headless / CI extraction (graphify extract). When running via the /graphify skill inside your IDE, the model API is provided by your IDE session — no extra keys needed.
| Variable | Used for | When required |
|---|---|---|
| `ANTHROPIC_API_KEY` | Claude (Anthropic) backend | `--backend claude` |
| `GEMINI_API_KEY` or `GOOGLE_API_KEY` | Google Gemini backend | `--backend gemini` |
| `OPENAI_API_KEY` | OpenAI or OpenAI-compatible APIs | `--backend openai` |
| `MOONSHOT_API_KEY` | Kimi Code backend | `--backend kimi` |
| `OLLAMA_BASE_URL` | Ollama local inference URL | `--backend ollama` (default: http://localhost:11434) |
| `OLLAMA_MODEL` | Ollama model name | `--backend ollama` (default: auto-detect) |
| `GRAPHIFY_OLLAMA_NUM_CTX` | Override Ollama KV-cache window size | optional — auto-sized by default |
| `GRAPHIFY_OLLAMA_KEEP_ALIVE` | Minutes to keep Ollama model loaded | optional — set 0 to unload after each chunk |
| `AWS_*` / `~/.aws/credentials` | AWS Bedrock — standard credential chain | `--backend bedrock` (no API key, uses IAM) |
| `GRAPHIFY_MAX_WORKERS` | AST parallelism thread count | optional — also `--max-workers` flag |
| `GRAPHIFY_MAX_OUTPUT_TOKENS` | Raise output cap for dense corpora | optional — e.g. 32768 for large files |
| `GRAPHIFY_API_TIMEOUT` | HTTP timeout in seconds (default: 600) | optional — also `--api-timeout` flag |
| `GRAPHIFY_FORCE` | Force graph rebuild even with fewer nodes | optional — also `--force` flag |
| `GRAPHIFY_GOOGLE_WORKSPACE` | Auto-enable Google Workspace export | optional — set to 1 |
- Code files — processed locally via tree-sitter. Nothing leaves your machine.
- Video / audio — transcribed locally with faster-whisper. Nothing leaves your machine.
- Docs, PDFs, images — sent to your AI assistant for semantic extraction (via the `/graphify` skill, using whatever model your IDE session runs). Headless `graphify extract` requires `GEMINI_API_KEY`/`GOOGLE_API_KEY` (Gemini), `MOONSHOT_API_KEY` (Kimi), `ANTHROPIC_API_KEY` (Claude), `OPENAI_API_KEY` (OpenAI), a running Ollama instance (`OLLAMA_BASE_URL`), AWS credentials via the standard provider chain (Bedrock — no API key needed, uses IAM), or the `claude` CLI binary (Claude Code — no API key needed, uses your Claude subscription). The `--dedup-llm` flag uses the same key.
- No telemetry, no usage tracking, no analytics.
**`graphify: command not found` after `pip install graphifyy`**
pip installs scripts to a user bin directory that may not be on your PATH. Fix:
- macOS: add `~/Library/Python/3.x/bin` to your PATH in `~/.zshrc`
- Linux: add `~/.local/bin` to your PATH in `~/.bashrc`
- Or use `uv tool install graphifyy` / `pipx install graphifyy` — both manage PATH automatically.

**`python -m graphify` works but the `graphify` command doesn't**
Your shell's PATH doesn't include the Python scripts directory. Use uv or pipx instead of plain pip.

**`/graphify .` causes "path not recognized" in PowerShell**
PowerShell treats a leading `/` as a path separator. Use `graphify .` (no slash) on Windows.
**Graph has fewer nodes after `--update` or rebuild**
If a refactor deleted files, the old nodes linger. Pass `--force` (or set `GRAPHIFY_FORCE=1`) to overwrite even when the rebuild has fewer nodes.

**Graph has duplicate nodes for the same entity (ghost duplicates)**
This happens when semantic and AST extraction disagreed on the node ID format. Run a full re-extract to clean up:

```
graphify extract . --force
```

**Ollama runs out of VRAM / context window exceeded**
The KV-cache window is auto-sized but may be too large for your GPU. Reduce it:

```
GRAPHIFY_OLLAMA_NUM_CTX=8192 graphify extract ./docs --backend ollama --token-budget 4000
```

**Graph HTML is too large to open in a browser (>5000 nodes)**
Skip HTML generation and use the JSON directly:

```
graphify cluster-only ./my-project --no-viz
graphify query "..."
```

**graph.json has conflict markers after two devs commit at once**
Run `graphify hook install` — it sets up a git merge driver that union-merges `graph.json` automatically so conflicts never happen.
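A union merge of two graphs can be pictured in a few lines: dedupe nodes by id and edges by (source, target) pair. This is a simplified illustration assuming a `{"nodes": [...], "edges": [...]}` shape, not the merge driver's actual implementation:

```python
def union_merge(a: dict, b: dict) -> dict:
    """Union two graphs: nodes deduped by id, edges by (source, target)."""
    nodes = {n["id"]: n for n in a["nodes"]}
    nodes.update({n["id"]: n for n in b["nodes"]})  # b wins on duplicate ids
    edges = {(e["source"], e["target"]): e for e in a["edges"]}
    edges.update({(e["source"], e["target"]): e for e in b["edges"]})
    return {"nodes": list(nodes.values()), "edges": list(edges.values())}

# Two devs extracted overlapping graphs in parallel:
a = {"nodes": [{"id": "auth"}], "edges": [{"source": "auth", "target": "db"}]}
b = {"nodes": [{"id": "auth"}, {"id": "db"}], "edges": [{"source": "auth", "target": "db"}]}
merged = union_merge(a, b)
print(len(merged["nodes"]), len(merged["edges"]))  # 2 1
```

Because the result is a union, neither side's work is lost — which is why the merge never needs conflict markers.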
**Extraction returns empty nodes/edges for docs or PDFs**
Docs and PDFs require an LLM call. Check that your API key is set and the backend is correct:

```
ANTHROPIC_API_KEY=sk-... graphify extract ./docs --backend claude
```

**Skill version mismatch warning in your IDE**
Your installed graphify version is different from the skill file. Update:

```
uv tool upgrade graphifyy
graphify install   # overwrites the skill file
```

```
/graphify                        # run on current directory
```
```
/graphify ./raw                  # run on a specific folder
/graphify ./raw --mode deep      # more aggressive relationship extraction
/graphify ./raw --update         # re-extract only changed files
/graphify ./raw --directed       # preserve edge direction
/graphify ./raw --cluster-only   # rerun clustering on existing graph
/graphify ./raw --no-viz         # skip HTML visualization
/graphify ./raw --obsidian       # generate Obsidian vault
/graphify ./raw --wiki           # build agent-crawlable markdown wiki
/graphify ./raw --svg            # export graph.svg
/graphify ./raw --graphml        # export for Gephi / yEd
/graphify ./raw --neo4j          # generate cypher.txt for Neo4j
/graphify ./raw --neo4j-push bolt://localhost:7687
/graphify ./raw --watch          # auto-sync as files change
/graphify ./raw --mcp            # start MCP stdio server

/graphify add https://arxiv.org/abs/1706.03762
/graphify add <video-url>
/graphify add https://... --author "Name" --contributor "Name"

/graphify query "what connects attention to the optimizer?"
/graphify query "..." --dfs --budget 1500
/graphify path "DigestAuth" "Response"
/graphify explain "SwinTransformer"

graphify uninstall           # remove from all platforms in one shot
graphify uninstall --purge   # also delete graphify-out/

graphify hook install        # post-commit + post-checkout hooks
graphify hook uninstall
graphify hook status
```
```
graphify claude install / uninstall
graphify codex install / uninstall
graphify opencode install
graphify cursor install / uninstall
graphify gemini install / uninstall
graphify copilot install / uninstall
graphify aider install / uninstall
graphify claw install / uninstall
graphify droid install / uninstall
graphify trae install / uninstall
graphify trae-cn install / uninstall
graphify hermes install / uninstall
graphify kiro install / uninstall
graphify antigravity install / uninstall

graphify extract ./docs                      # headless LLM extraction for CI (no IDE needed)
graphify extract ./docs --backend gemini     # explicit backend: gemini, kimi, claude, openai, ollama, bedrock, or claude-cli
graphify extract ./docs --backend gemini --model gemini-3.1-pro-preview
graphify extract ./docs --backend ollama     # local Ollama (set OLLAMA_BASE_URL / OLLAMA_MODEL) - no API key needed for loopback
GRAPHIFY_OLLAMA_NUM_CTX=32768 graphify extract ./docs --backend ollama   # override KV-cache window (auto-sized by default)
GRAPHIFY_OLLAMA_KEEP_ALIVE=0 graphify extract ./docs --backend ollama    # unload model after each chunk (saves VRAM on small GPUs)
graphify extract ./docs --backend bedrock    # AWS Bedrock via IAM - no API key, uses AWS credential chain
graphify extract ./docs --backend claude-cli # route through Claude Code CLI - no API key, uses your Claude subscription
graphify extract ./docs --max-workers 16     # AST parallelism (also GRAPHIFY_MAX_WORKERS)
graphify extract ./docs --token-budget 30000 # smaller semantic chunks for local/small models
graphify extract ./docs --max-concurrency 2  # fewer parallel LLM calls (useful for local inference)
graphify extract ./docs --api-timeout 900    # longer HTTP timeout for slow local models (default 600s)
graphify extract ./docs --google-workspace   # export .gdoc/.gsheet/.gslides via gws before extraction
graphify extract ./docs --no-cluster         # raw extraction only, skip clustering
graphify extract ./docs --force              # overwrite graph.json even if new graph has fewer nodes (use after refactors or to clear ghost duplicates)
graphify extract ./docs --dedup-llm          # LLM tiebreaker for ambiguous entity pairs (uses same API key)
graphify extract ./docs --global --as myrepo # extract and register into the cross-project global graph
GRAPHIFY_MAX_OUTPUT_TOKENS=32768 graphify extract ./docs --backend claude   # raise output cap for dense corpora

graphify export callflow-html                  # graphify-out/<project>-callflow.html
graphify export callflow-html --max-sections 8 # cap generated architecture sections
graphify export callflow-html --output docs/arch.html
graphify export callflow-html ./some-repo/graphify-out

graphify global add graphify-out/graph.json myrepo   # register a project graph into ~/.graphify/global.json
graphify global remove myrepo                        # remove a project from the global graph
graphify global list                                 # show all registered repos + node/edge counts
graphify global path                                 # print path to the global graph file

graphify clone https://github.com/karpathy/nanoGPT
graphify merge-graphs a.json b.json --out merged.json
graphify --version                 # print installed version

graphify watch ./src
graphify check-update ./src
graphify update ./src
graphify update ./src --no-cluster # skip reclustering, write raw AST graph only
graphify update ./src --force      # overwrite even if new graph has fewer nodes
graphify cluster-only ./my-project
graphify cluster-only ./my-project --graph path/to/graph.json   # custom graph location
```
- How it works — the extraction pipeline, community detection, confidence scoring, benchmarks
- ARCHITECTURE.md — module breakdown, how to add a language
- Optional integrations — Docker MCP Toolkit + SQLite
Penpax is the always-on layer built on top of graphify — it applies the same graph approach to your entire working life: meetings, browser history, emails, files, and code, updating continuously in the background.
Built for people whose work lives across hundreds of conversations and documents they can never fully reconstruct. No cloud, fully on-device.
Free trial launching soon. Join the waitlist →
Contributing
Clone the repo and install in editable mode:
```
git clone https://github.com/safishamsi/graphify.git
cd graphify
git checkout v7   # active development branch

# Create a virtual environment (Python 3.10+ required):
python3 -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate

# Install in editable mode with all optional extras:
pip install -e ".[all]"
```

Verify the editable install:

```
graphify --version
python -c "import graphify; print(graphify.__file__)"
```

Run the test suite:

```
pip install pytest
pytest tests/ -q                  # run the full suite
pytest tests/test_extract.py -q   # one module
pytest tests/ -q -k "python"      # filter by name
```

**macOS note:** the test suite includes both `sample.f90` and `sample.F90` fixtures. These collide on case-insensitive HFS+ / APFS file systems. Run on Linux or in a Docker container if you need to test both Fortran variants simultaneously.
- Active development happens on the `v7` branch.
- Commit style: `fix: <description>` / `feat: <description>` / `docs: <description>`
- Before opening a PR, run `pytest tests/ -q` and confirm it passes.
- Add a fixture file to `tests/fixtures/` and tests to `tests/test_languages.py` for any new language extractor.
Worked examples are the most useful contribution. Run `/graphify` on a real corpus, save the output to `worked/{slug}/`, write an honest `review.md` covering what the graph got right and wrong, and open a PR.

Extraction bugs — open an issue with the input file, the cache entry (`graphify-out/cache/`), and what was missed or wrong.
See ARCHITECTURE.md for module responsibilities and how to add a language.
