A lightweight MCP server that builds a Python call graph for a project and lets you query, visualize, and summarize architecture — directly from Claude Desktop via tools.
It supports:
- Building a project graph (function-level)
- Querying callers/callees/dependencies/paths
- Exporting a Mermaid / DOT call graph snippet
- Optional Gemini-powered "call certainty" classification: for a given function, Gemini decides which callees are always vs. conditional, based on the function's source code plus the graph-extracted callees list.
- "Who calls this function?"
- "What does this function call?"
- "Show me a call chain from A → B"
- "Give me a focused call graph around this function"
- Export subgraphs with focus + direction + depth.
For a specific function:
- Extract its source code
- Extract the callees list from the call graph
- Ask Gemini to classify each callee:
  - `always` (definitely called)
  - `conditional` (called only in certain branches)
  - `unlikely`/`unknown` (optional categories, depending on prompt/schema)
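Since the AI tool returns a structured classification, the reply needs to be validated against the schema. A minimal sketch of that step, assuming a reply shape like `{"callees": [{"name": ..., "certainty": ...}]}` (the helper name and exact schema here are hypothetical, not taken from the repo):

```python
import json

# Labels the prompt/schema allows (per the list above)
ALLOWED = {"always", "conditional", "unlikely", "unknown"}

def parse_certainty_reply(raw: str) -> dict[str, str]:
    """Map each callee name to a validated certainty label.

    Hypothetical helper: any label outside the schema degrades to "unknown"
    instead of raising, since LLM output can drift from the requested format.
    """
    data = json.loads(raw)
    result = {}
    for entry in data.get("callees", []):
        label = entry.get("certainty", "unknown")
        result[entry["name"]] = label if label in ALLOWED else "unknown"
    return result

reply = '{"callees": [{"name": "helper", "certainty": "always"}, {"name": "log", "certainty": "maybe"}]}'
print(parse_certainty_reply(reply))  # {'helper': 'always', 'log': 'unknown'}
```

Degrading out-of-schema labels to `unknown` keeps the tool's output predictable even when the model improvises a category.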
- Python 3.10+ (recommended)
- Claude Desktop (to run MCP tools)
- (Optional) Gemini API key for the AI tool
```
python -m venv .venv
# Windows (PowerShell)
.venv\Scripts\Activate.ps1
# macOS/Linux
source .venv/bin/activate
pip install -r requirements.txt
```

Create a `.env` file in the project root:

```
GEMINI_API_KEY=your_key_here
```

Run the server:

```
python server.py
```

In Claude Desktop, add an MCP server entry (the exact structure depends on your Claude config). You generally point Claude to run:
- Command: your venv Python executable
- Args:
server.py
Example (conceptual):

```json
{
  "mcpServers": {
    "debug_graph_mcp": {
      "command": "C:/path/to/project/.venv/Scripts/python.exe",
      "args": ["C:/path/to/project/server.py"]
    }
  }
}
```

Restart Claude Desktop, open a chat, and verify the tools appear.
These tools are registered in `src/mcp/tools_graph.py`:
- `build_graph(root_path, granularity="function", resolve_calls="jedi" | "fallback_only", ...)`
- `graph_overview(graph_id)`
- `search_nodes(graph_id, query)`
- `query_graph(graph_id, query_type, target, path_target?)`
- `export_call_graph(graph_id, focus?, depth, direction, format="mermaid"|"dot")`
- `list_cached_graphs()`
- `clear_graph_cache(graph_id?)`
- `call_certainty_gemini(graph_id, target, model, api_key?, ...)`
This one sends:
- Function source code
- Graph-derived callees list
…and returns a structured JSON classification from Gemini.
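Before the request goes out, those two inputs have to be assembled into a single prompt. A rough sketch of what that assembly could look like (the function name, wording, and reply format here are illustrative assumptions, not the repo's actual prompt):

```python
import json
import textwrap

def build_certainty_prompt(func_source: str, callees: list[str]) -> str:
    """Hypothetical sketch: combine the target function's source with the
    graph-derived callee list, and pin down a JSON-only reply format."""
    return textwrap.dedent(f"""\
        Classify each callee of the function below as "always" or "conditional".
        Reply with JSON only: {{"callees": [{{"name": ..., "certainty": ...}}]}}

        Callees: {json.dumps(callees)}

        Source:
        {func_source}
        """)

prompt = build_certainty_prompt(
    "def process(x):\n    if x:\n        helper(x)", ["helper"]
)
print(prompt)
```

Pinning the reply format in the prompt is what makes the structured-JSON parsing on the response side feasible.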
Use a clean path format (avoid hidden control chars like TAB).
```python
build_graph(
    root_path="C:/Users/.../test_project",
    granularity="function",
    resolve_calls="jedi"
)
```
```python
search_nodes(graph_id="...", query="b.py:process")
```

You'll get something like:

```
func:b.py:process
```
```python
query_graph(
    graph_id="...",
    query_type="callees",
    target="func:b.py:process"
)
```
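Conceptually, the callers/callees queries are just filters over the graph's edge list. A minimal sketch, assuming the serialized graph stores edges as `(caller_id, callee_id)` pairs (the edge data here is made up for illustration):

```python
# Toy edge list in the (caller, callee) shape assumed above
edges = [
    ("func:a.py:main", "func:b.py:process"),
    ("func:b.py:process", "func:b.py:helper"),
    ("func:b.py:process", "func:c.py:save"),
]

def callees(target: str) -> list[str]:
    """Everything the target calls directly."""
    return sorted(dst for src, dst in edges if src == target)

def callers(target: str) -> list[str]:
    """Everything that calls the target directly."""
    return sorted(src for src, dst in edges if dst == target)

print(callees("func:b.py:process"))  # ['func:b.py:helper', 'func:c.py:save']
print(callers("func:b.py:process"))  # ['func:a.py:main']
```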
```python
export_call_graph(
    graph_id="...",
    focus="func:b.py:process",
    depth=3,
    direction="out",
    format="mermaid"
)
```
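The focus + depth + direction combination boils down to a bounded BFS over the edge list before emitting Mermaid lines. An illustrative sketch under the same `(caller, callee)` edge-list assumption as above (not the repo's actual exporter):

```python
from collections import deque

def export_mermaid(edges, focus, depth=2, direction="out"):
    """Sketch: BFS from `focus` up to `depth` hops, following outgoing
    ("out") or incoming ("in") edges, then emit a Mermaid snippet."""
    frontier, seen, kept = deque([(focus, 0)]), {focus}, []
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue  # depth budget exhausted for this branch
        for src, dst in edges:
            nxt = dst if (direction == "out" and src == node) else \
                  src if (direction == "in" and dst == node) else None
            if nxt is None:
                continue
            kept.append((src, dst))
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    lines = ["graph TD"] + [f'    "{s}" --> "{d}"' for s, d in kept]
    return "\n".join(lines)

edges = [("func:a.py:main", "func:b.py:process"),
         ("func:b.py:process", "func:b.py:helper")]
print(export_mermaid(edges, "func:b.py:process", depth=1))
```

Note how `direction="out"` with a small depth keeps the subgraph focused: the caller `func:a.py:main` never appears in the output.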
```python
call_certainty_gemini(
    graph_id="...",
    target="func:b.py:process"
)
```
`jedi`:
- More accurate cross-file resolution
- Slower on large repos

`fallback_only`:
- Much faster
- Less accurate with dynamic code patterns
- Still useful for:
  - high-level exploration
  - hotspots
  - quick call-graph sketches

Recommended: use `fallback_only` for huge repos, and switch to `jedi` when you need correctness in a specific area.
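To see why fallback mode is fast but approximate: a pure-AST pass can only collect the names at call sites, with no cross-file resolution of what those names refer to. A minimal sketch of that style of extraction (illustrative, not the repo's actual fallback code):

```python
import ast

def fallback_calls(source: str) -> set[str]:
    """Collect call-site names from the AST: plain `name(...)` and
    `obj.attr(...)`. No import following, no type inference -- which is
    exactly the speed/accuracy trade-off of a fallback-only resolver."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            if isinstance(node.func, ast.Name):
                names.add(node.func.id)
            elif isinstance(node.func, ast.Attribute):
                names.add(node.func.attr)  # keeps only the attribute name
    return names

src = "def process(x):\n    if x:\n        helper(x)\n    db.save(x)"
print(sorted(fallback_calls(src)))  # ['helper', 'save']
```

Here `db.save` collapses to just `save`; resolving which module or class `save` belongs to is the part that needs Jedi.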
```
.
├── server.py                     # Entry point (creates FastMCP + registers tools)
├── src/
│   ├── mcp/
│   │   ├── tools_graph.py        # Tool layer: thin wrappers calling GraphService
│   │   ├── graph_service.py      # Service layer: orchestration + cache usage
│   │   └── graph_inputs.py       # Input normalization helpers
│   └── analysis/
│       ├── graph_builder.py      # Builds graph from Python source (AST/Jedi/fallback)
│       ├── graph_cache.py        # GraphCache (signature + LRU)
│       ├── graph_queries.py      # callers/callees/deps/path logic
│       ├── node_resolver.py      # resolves "b.py:process" → "func:b.py:process"
│       ├── graph_viz.py          # Mermaid/DOT export with focus+depth
│       ├── graph_stats.py        # graph_overview stats (entrypoints/leaves/hotspots)
│       └── call_certainty_gemini.py  # Gemini prompt + request/parse (AI feature)
└── docs/
    └── image.png                 # screenshots
```
This screenshot shows a real end-to-end flow in Claude Desktop using this MCP server:
1. Claude calls `build_graph` to analyze the project and build a function-level call graph.
2. Claude calls `export_call_graph` with `focus="func:b.py:process"` (plus `depth`/`direction`) to export a focused Mermaid subgraph.
3. Claude renders/summarizes the result: direct callees and deeper nested calls.
What this demonstrates
- Natural language → tool calls → structured graph output
- A focused call graph around a specific function (not the entire repo)
- A visual, shareable diagram that's easier to reason about than raw code navigation
This screenshot demonstrates the AI add-on: the tool sends the target function source code + the graph-extracted callees list to Gemini, and Gemini classifies each callee as always or conditional.
What this demonstrates
- Combines static graph extraction with LLM reasoning
- Classifies callees as always vs conditional (best-effort, no runtime tracing)
- Produces structured JSON that's easy to read and screenshot in a report
```python
def build_project_graph(
    root: str,
    granularity: str = "function",
    include_external: bool = False,
    resolve_calls: str = "jedi",  # "jedi" | "fallback_only"
) -> dict:
    if granularity == "file":
        g = build_file_graph(root, include_external)
    else:
        g = build_function_graph(root, include_external, resolve_calls=resolve_calls)
    return serialize_graph(g)
```

Screenshot: Gemini Call Certainty result
Use forward slashes in Inspector:
- ✅ `C:/Users/.../test_project`
- ❌ paths that accidentally include a TAB or weird copy/paste characters
Run `build_graph` first and use the returned `graph_id`.
Use `search_nodes` first and copy the returned id exactly:
- `func:...`
- `class:...`
- `file:...`
Ensure `.env` exists with:

```
GEMINI_API_KEY=...
```

(or pass `api_key` directly to the tool).
- Call graphs are an approximation: Python's dynamic features can hide or alter call relationships.
- `fallback_only` mode favors speed over perfect resolution.
- AI outputs can vary; treat Gemini classifications as "best effort" analysis, not a compiler guarantee.

