docs: competitive deep-dive — codegraph vs narsil-mcp (#262)
Conversation
Comprehensive feature-by-feature analysis of narsil-mcp (postrv/narsil-mcp), the closest head-to-head competitor to codegraph. Covers all 8 FOUNDATION.md principles, 9 feature comparison sections with 130+ features, gap analysis, and competitive positioning.
Claude finished @carlos-alm's task — PR Review: Competitive Deep-dive — codegraph vs narsil-mcp
**Root Cause Analysis**

**Problem Being Solved:** Creating a comprehensive competitive deep-dive document comparing codegraph against narsil-mcp, following the same structure as the competitive analysis framework. This serves to position codegraph clearly against its closest head-to-head competitor.

**Why This Approach:** The document follows the established template from the competitive analysis ranking system, where narsil-mcp is ranked #2 (tied with Joern at 4.5 points). Unlike other competitors that target different audiences, narsil-mcp directly competes with codegraph on the same dimensions (tree-sitter, MCP, AI agents, local-first).

**Risk Assessment:** This is a documentation-only PR with no code changes. The risk is minimal—primarily around factual accuracy and positioning claims.

**Document Structure and Quality Assessment**

✅ **Follows Established Template**

The document perfectly mirrors the COMPETITIVE_ANALYSIS.md structure:
✅ **Comprehensive Feature Coverage**

The analysis covers 130+ features across 9 well-organized sections:
✅ **Balanced and Fair Analysis**

The document presents a balanced view:
**Greptile Summary**

This PR adds a comprehensive competitive deep-dive document comparing codegraph to narsil-mcp, ranked #2 in the competitive analysis. All previous review feedback has been properly addressed in a follow-up commit.
The document is well-structured and comprehensive, covering:
No issues found. The document follows the same structure as the Joern deep-dive and provides valuable competitive intelligence.

**Confidence Score: 5/5**
Important Files Changed
Last reviewed commit: 2d599b5
generated/competitive/narsil-mcp.md
| 7 | **Security-conscious defaults** — multi-repo opt-in | Single-repo MCP default. `apiKeyCommand` for secrets. `--multi-repo` opt-in | Multi-repo by default (`list_repos`, `discover_repos` always exposed). `--remote` flag enables cloning external repos. No credential isolation model | **Codegraph wins.** Single-repo default vs. multi-repo default. narsil-mcp's `discover_repos` and `add_remote_repo` tools are exposed without opt-in |
| 8 | **Honest about what we're not** | Code intelligence engine. Not an app, not a coding tool, not an agent | "Comprehensive code intelligence" — tries to be everything: search engine, security scanner, type checker, SBOM generator, license auditor, knowledge graph, visualization server | **Codegraph wins.** Codegraph has a clear boundary. narsil-mcp's 90-tool surface area spans security, compliance, visualization, type checking, and more — a breadth that risks being shallow everywhere |
> **Score: Codegraph 6, narsil-mcp 0, Tie 2** — against codegraph's own principles, codegraph wins on every differentiating dimension. This is expected: the principles were designed around codegraph's value proposition. The feature comparison below examines where narsil-mcp's breadth creates genuine advantages.
Math error in scoring. The table above shows 7 Codegraph wins, 0 narsil-mcp wins, and 1 Tie (Principle 2). Should be "Codegraph 7, narsil-mcp 0, Tie 1" not "Codegraph 6, narsil-mcp 0, Tie 2".
Suggested change:
- **Score: Codegraph 6, narsil-mcp 0, Tie 2** — against codegraph's own principles, codegraph wins on every differentiating dimension. This is expected: the principles were designed around codegraph's value proposition. The feature comparison below examines where narsil-mcp's breadth creates genuine advantages.
+ **Score: Codegraph 7, narsil-mcp 0, Tie 1** — against codegraph's own principles, codegraph wins on every differentiating dimension. This is expected: the principles were designed around codegraph's value proposition. The feature comparison below examines where narsil-mcp's breadth creates genuine advantages.
Fixed in 2d599b5 — corrected to 7-0-1 to match the actual table counts.
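Tally mismatches like the 6-0-2 vs 7-0-1 error are easy to re-check mechanically. A minimal sketch of such a check (the `tally_verdicts` helper and the two-row `sample` are illustrative assumptions, not part of the repo's actual tooling; it assumes each table row's last cell starts with one of the three verdict labels, as in the quoted table):

```python
from collections import Counter

def tally_verdicts(markdown: str) -> Counter:
    """Count verdict cells in a Markdown comparison table.

    Assumes each data row's last cell begins with 'Codegraph wins',
    'narsil-mcp wins', or 'Tie', possibly wrapped in ** bold markers.
    """
    counts = Counter()
    for line in markdown.splitlines():
        row = line.strip()
        if not row.startswith("|"):
            continue  # not a table row
        cells = [c.strip() for c in row.strip("|").split("|")]
        verdict = cells[-1].lstrip("*")  # drop leading bold markers
        for label in ("Codegraph wins", "narsil-mcp wins", "Tie"):
            if verdict.startswith(label):
                counts[label] += 1
                break
    return counts

# Two rows shaped like the quoted principles table:
sample = """\
| 7 | Security-conscious defaults | ... | ... | **Codegraph wins.** Single-repo default |
| 2 | A shared principle | ... | ... | Tie |
"""
print(tally_verdicts(sample))
```

Running a helper like this over the full principles table would have caught the mismatch before review: the printed counts can be compared directly against the score line in the prose.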
generated/competitive/narsil-mcp.md
| Interactive visualization | Export only (DOT/Mermaid) | Embedded web frontend | narsil-mcp |
| Community maturity | Company-backed, small team | 3 contributors, 120 stars | Tie |
> **Final score against FOUNDATION.md principles: Codegraph 6, narsil-mcp 0, Tie 2.**
Same math error repeated. Should be "Codegraph 7, narsil-mcp 0, Tie 1" to match the actual table counts from the Problem Alignment section.
Suggested change:
- **Final score against FOUNDATION.md principles: Codegraph 6, narsil-mcp 0, Tie 2.**
+ **Final score against FOUNDATION.md principles: Codegraph 7, narsil-mcp 0, Tie 1.**
Fixed in 2d599b5 — updated to match the corrected 7-0-1 count.
generated/competitive/narsil-mcp.md
> **Date:** 2026-03-02
> **Competitors:** `@optave/codegraph` v0.x (Apache-2.0) vs `postrv/narsil-mcp` v1.6.x (Apache-2.0 / MIT)
> **Context:** narsil-mcp is ranked #2 in our [competitive analysis](./COMPETITIVE_ANALYSIS.md) with a score of 4.5, tied with Joern at #1. Unlike Joern (which targets security researchers), narsil-mcp competes head-to-head with codegraph — same parsing technology (tree-sitter), same delivery mechanism (MCP), same target audience (AI agents), same local-first philosophy.
Incorrect relative path. This file is in `generated/competitive/` and COMPETITIVE_ANALYSIS.md is in `generated/`, so the link should use `../`, not `./`.
Suggested change:
- **Context:** narsil-mcp is ranked #2 in our [competitive analysis](./COMPETITIVE_ANALYSIS.md) with a score of 4.5, tied with Joern at #1. Unlike Joern (which targets security researchers), narsil-mcp competes head-to-head with codegraph — same parsing technology (tree-sitter), same delivery mechanism (MCP), same target audience (AI agents), same local-first philosophy.
+ **Context:** narsil-mcp is ranked #2 in our [competitive analysis](../COMPETITIVE_ANALYSIS.md) with a score of 4.5, tied with Joern at #1. Unlike Joern (which targets security researchers), narsil-mcp competes head-to-head with codegraph — same parsing technology (tree-sitter), same delivery mechanism (MCP), same target audience (AI agents), same local-first philosophy.
Fixed in 2d599b5 — changed ./COMPETITIVE_ANALYSIS.md to ../COMPETITIVE_ANALYSIS.md since this file lives in the generated/competitive/ subdirectory.
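The path reasoning in this thread can be sanity-checked with a quick relative-path computation. A minimal sketch (the script itself is illustrative and not part of the PR; the repo layout is the one stated in the comment):

```python
import posixpath
from pathlib import PurePosixPath

# Repo layout described in the review comment:
#   generated/COMPETITIVE_ANALYSIS.md
#   generated/competitive/narsil-mcp.md
doc = PurePosixPath("generated/competitive/narsil-mcp.md")
target = PurePosixPath("generated/COMPETITIVE_ANALYSIS.md")

# A Markdown link resolves relative to the directory of the file
# that contains it, so compute the link from the doc's parent dir.
link = posixpath.relpath(str(target), start=str(doc.parent))
print(link)  # ../COMPETITIVE_ANALYSIS.md

# The original "./COMPETITIVE_ANALYSIS.md" would instead resolve to
# a path that does not exist in this layout:
broken = str(doc.parent / "COMPETITIVE_ANALYSIS.md")
print(broken)  # generated/competitive/COMPETITIVE_ANALYSIS.md
```

This confirms the suggested fix: from `generated/competitive/`, reaching a file in `generated/` requires one `../` step.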
- Fix principle scoring from 6-0-2 to 7-0-1 (correct count from table)
- Fix relative link to COMPETITIVE_ANALYSIS.md (`../` not `./`)
Claude finished @carlos-alm's task — PR Review: Competitive Deep-dive — codegraph vs narsil-mcp
**Phase 1: Root Cause Analysis**

**Problem Being Solved:** Creating a comprehensive competitive deep-dive document comparing codegraph against narsil-mcp, following the same structured framework established in the competitive analysis system. This addresses a strategic need to position codegraph clearly against its closest head-to-head competitor.

**Why This Approach:** The document follows the established template where narsil-mcp is ranked #2 (tied with Joern at 4.5 points). Unlike other competitors that target different audiences, narsil-mcp directly competes on the same dimensions (tree-sitter, MCP, AI agents, local-first). This head-to-head analysis is necessary for strategic positioning.

**Risk Assessment:** This is a documentation-only PR with no code changes. The risk is minimal—primarily around factual accuracy and competitive positioning claims.

**Phase 2: Configuration Change Detection** ✅ CLEAR: this is a documentation-only PR adding a single file.

**Phase 3: Test Integrity** ✅ CLEAR: no test files modified, no test expectations changed, no test coverage impacted. This is pure documentation with no behavioral changes.

**Phase 4: Code Quality Assessment**

✅ **Document Structure and Completeness**

The document is exceptionally well-structured and comprehensive:
✅ **Factual Accuracy Cross-Check**

I verified the claims against the competitive analysis rankings:
Summary
- `generated/competitive/narsil-mcp.md`: a comprehensive competitive deep-dive comparing codegraph against narsil-mcp (postrv/narsil-mcp), our closest head-to-head competitor
- Follows the same structure as `joern.md` (added in PR "docs: competitive deep-dive vs Joern" #260)

Document Structure
Key Findings
Test plan