
docs: competitive deep-dive — codegraph vs narsil-mcp #262

Merged
carlos-alm merged 2 commits into main from docs/competitive-narsil on Mar 3, 2026

Conversation

@carlos-alm
Contributor

Summary

  • Adds generated/competitive/narsil-mcp.md — a comprehensive competitive deep-dive comparing codegraph against narsil-mcp (postrv/narsil-mcp), our closest head-to-head competitor
  • Follows the same structure as the Joern deep-dive (joern.md, added in PR #260: docs: competitive deep-dive vs Joern)
  • narsil-mcp is ranked #2 in our competitive analysis (score 4.5, tied with Joern)

Document Structure

  1. Executive Summary — key dimensions table (mission, target user, graph model, core question, rebuild model, runtime)
  2. Problem Alignment with FOUNDATION.md — all 8 principles evaluated and scored (Codegraph 6, narsil-mcp 0, Tie 2)
  3. Feature-by-Feature Comparison — 9 sections, 130+ features:
    • A. Parsing & Language Support (24 features)
    • B. Graph Model & Analysis Depth (18 features)
    • C. Query Language & Interface (12 features)
    • D. Performance & Resource Usage (14 features)
    • E. Installation & Deployment (10 features)
    • F. AI Agent & MCP Integration (12 features)
    • G. Security Analysis (12 features)
    • H. Developer Productivity Features (18 features)
    • I. Ecosystem & Community (11 features)
  4. Where Each Tool is the Better Choice — choose codegraph / choose narsil-mcp / use both
  5. Gap Analysis — 6 features worth adopting, 8 not worth adopting (mapped to FOUNDATION.md)
  6. Competitive Positioning Statement
  7. Key Metrics Summary table

Key Findings

  • Codegraph wins on: persistent incremental builds, confidence-scored edges, compound CLI commands (35+), embeddable JS API, CI gates, impact analysis, code health governance, token efficiency (3,600 vs 12,000 tokens)
  • narsil-mcp wins on: language count (32 vs 11), security analysis (taint + OWASP + CWE + SBOM), neural search, CFG/DFG analysis, SPARQL/RDF, browser WASM, interactive visualization
  • Key insight: narsil-mcp maximizes breadth (90 tools); codegraph maximizes depth-per-tool and always-current guarantees

Test plan

  • Document follows joern.md structure exactly
  • All 8 FOUNDATION.md principles evaluated
  • 130+ features across 9 sections
  • Review factual claims against narsil-mcp GitHub README
  • Verify internal links
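The internal-link step of the test plan can be scripted rather than eyeballed. A minimal sketch, assuming relative markdown links only; the function name and regex are illustrative, not part of the repo's tooling:

```python
import re
from pathlib import Path

# Matches markdown links whose target is a relative path, e.g. ./x.md or ../x.md
LINK_RE = re.compile(r"\[[^\]]*\]\((\.{1,2}/[^)#]+)")

def broken_relative_links(md_path: Path) -> list[str]:
    """Return relative link targets in md_path that do not resolve to an existing file."""
    text = md_path.read_text(encoding="utf-8")
    return [
        target
        for target in LINK_RE.findall(text)
        if not (md_path.parent / target).resolve().exists()
    ]
```

Running a check like this over generated/competitive/narsil-mcp.md would surface any link that resolves against the wrong directory.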

Comprehensive feature-by-feature analysis of narsil-mcp (postrv/narsil-mcp),
the closest head-to-head competitor to codegraph. Covers all 8 FOUNDATION.md
principles, 9 feature comparison sections with 130+ features, gap analysis,
and competitive positioning.
@claude

claude bot commented Mar 3, 2026

Claude finished @carlos-alm's task


PR Review: Competitive Deep-dive — codegraph vs narsil-mcp

Task List:

  • Read and analyze the new competitive analysis document
  • Verify document structure matches joern.md template
  • Check factual accuracy of claims about both tools
  • Assess value and completeness of the analysis
  • Provide final rating and recommendation

Root Cause Analysis

Problem Being Solved: Creating a comprehensive competitive deep-dive document comparing codegraph against narsil-mcp, following the same structure as the competitive analysis framework. This serves to position codegraph clearly against its closest head-to-head competitor.

Why This Approach: The document follows the established template from the competitive analysis ranking system where narsil-mcp is ranked #2 (tied with Joern at 4.5 points). Unlike other competitors that target different audiences, narsil-mcp directly competes with codegraph on the same dimensions (tree-sitter, MCP, AI agents, local-first).

Risk Assessment: This is a documentation-only PR with no code changes. The risk is minimal—primarily around factual accuracy and positioning claims.

Document Structure and Quality Assessment

Follows Established Template

The document perfectly mirrors the COMPETITIVE_ANALYSIS.md structure:

  • Executive Summary with key dimensions table
  • Problem Alignment with foundation principles (8 principles evaluated)
  • Feature-by-Feature Comparison (9 sections, 130+ features)
  • Where Each Tool is the Better Choice
  • Gap Analysis with FOUNDATION.md alignment
  • Competitive Positioning Statement
  • Key Metrics Summary

Comprehensive Feature Coverage

The analysis covers 130+ features across 9 well-organized sections:

  • A. Parsing & Language Support (24 features)
  • B. Graph Model & Analysis Depth (18 features)
  • C. Query Language & Interface (12 features)
  • D. Performance & Resource Usage (14 features)
  • E. Installation & Deployment (10 features)
  • F. AI Agent & MCP Integration (12 features)
  • G. Security Analysis (12 features)
  • H. Developer Productivity Features (18 features)
  • I. Ecosystem & Community (11 features)

Balanced and Fair Analysis

The document presents a balanced view:

  • Codegraph strengths: Persistent incremental builds, confidence scoring, compound commands, CI integration, token efficiency (3,600 vs 12,000 tokens)
  • narsil-mcp strengths: Language count (32 vs 11), security analysis (147 rules), neural search, CFG/DFG analysis, browser WASM

⚠️ Factual Claims Assessment

Most claims appear accurate based on cross-referencing with the competitive analysis, but some key assertions should be verified:

  • narsil-mcp's "90 tools" and token overhead claims
  • Specific performance benchmarks cited
  • Feature availability claims (some may have changed since analysis)

Clear Positioning and Value Props

The competitive positioning statement effectively articulates the core difference:

"narsil-mcp maximizes surface area (90 tools, 32 languages, security scanning); codegraph maximizes depth-per-tool and always-current guarantees"

Gap Analysis Quality

FOUNDATION.md Alignment

The gap analysis properly evaluates 6 features "worth adopting" and 8 "not worth adopting" against the foundation principles. Examples:

  • Worth adopting: More languages (aligns with Principle 6), BM25 search (aligns with Principle 4)
  • Not worth adopting: 90 MCP tools (violates token efficiency), RDF/SPARQL (doesn't serve target users)

Practical Implementation Guidance

Each "worth adopting" feature includes effort estimates and priority levels, making this actionable for the development team.

Minor Issues Found

  1. Missing joern.md reference: The PR description mentions following "joern.md structure exactly" but I couldn't locate this file. However, the structure appears consistent with the competitive analysis framework.

  2. Foundation document reference: The document extensively references "FOUNDATION.md principles" but this file doesn't exist—it appears these principles are documented in BACKLOG.md instead.

  3. Some performance claims lack attribution: A few benchmark numbers (especially around memory usage and indexing speed) could benefit from source links.

Final Assessment

This is a high-quality competitive analysis that provides substantial strategic value:

Strengths:

  • Comprehensive scope: 130+ features across 9 dimensions
  • Balanced analysis: Acknowledges both tools' strengths and weaknesses
  • Strategic clarity: Clear positioning of when to choose each tool
  • Actionable insights: Gap analysis maps directly to development priorities
  • Professional quality: Well-structured, readable, and thorough

Minor concerns:

  • Some benchmark claims could use verification
  • References to missing documents (joern.md, FOUNDATION.md)

Critical Concerns (if any)

None. This is a documentation-only change that enhances the project's competitive positioning without introducing any technical risks.

Final Recommendation

  • Rating: ⭐⭐⭐⭐☆ (4/5)
  • Action: APPROVE WITH CONDITIONS
  • Reasoning: High-quality competitive analysis that provides clear strategic value. The document is comprehensive, well-structured, and balanced. Minor issues around missing references and some unverified claims prevent a 5-star rating, but these don't affect the core value of the analysis.

Conditions for approval:

  1. Consider verifying a few key benchmark claims mentioned in the performance section
  2. Fix references to missing documents (joern.md → competitive analysis framework, FOUNDATION.md → BACKLOG.md)

This competitive deep-dive successfully positions codegraph against its closest competitor and provides actionable strategic insights for the development team.

PR Branch: docs/competitive-narsil

@greptile-apps

greptile-apps bot commented Mar 3, 2026

Greptile Summary

This PR adds a comprehensive competitive deep-dive document comparing codegraph to narsil-mcp, ranked #2 in the competitive analysis. All previous review feedback has been properly addressed in commit 2d599b5:

  • Scoring math corrected: Changed from "Codegraph 6, narsil-mcp 0, Tie 2" to "Codegraph 7, narsil-mcp 0, Tie 1" to match the actual principle-by-principle evaluation table
  • Relative path fixed: Updated link from ./COMPETITIVE_ANALYSIS.md to ../COMPETITIVE_ANALYSIS.md (correct relative path from generated/competitive/ to generated/)

The document is well-structured and comprehensive, covering:

  • Executive summary with key dimensions table
  • 8 FOUNDATION.md principles evaluation
  • 130+ features across 9 comparison sections (parsing, graph model, query interface, performance, installation, AI/MCP integration, security, developer productivity, ecosystem)
  • Where each tool excels and when to use each
  • Gap analysis with actionable recommendations
  • Key metrics summary

No issues found. The document follows the same structure as the Joern deep-dive and provides valuable competitive intelligence.

Confidence Score: 5/5

  • This PR is safe to merge with no concerns
  • Documentation-only PR with a single well-structured markdown file. All previous review feedback has been properly addressed (scoring math and relative path corrections verified). The document is comprehensive, internally consistent, and follows established patterns. No code changes, no runtime impact, no security concerns.
  • No files require special attention

Important Files Changed

Filename Overview
generated/competitive/narsil-mcp.md Comprehensive competitive analysis document comparing codegraph to narsil-mcp. Previous review feedback properly addressed (scoring corrected to 7-0-1, relative path fixed to ../). Well-structured with 130+ feature comparisons across 9 sections.

Last reviewed commit: 2d599b5

@greptile-apps greptile-apps bot left a comment

1 file reviewed, 3 comments


| 7 | **Security-conscious defaults** — multi-repo opt-in | Single-repo MCP default. `apiKeyCommand` for secrets. `--multi-repo` opt-in | Multi-repo by default (`list_repos`, `discover_repos` always exposed). `--remote` flag enables cloning external repos. No credential isolation model | **Codegraph wins.** Single-repo default vs. multi-repo default. narsil-mcp's `discover_repos` and `add_remote_repo` tools are exposed without opt-in |
| 8 | **Honest about what we're not** | Code intelligence engine. Not an app, not a coding tool, not an agent | "Comprehensive code intelligence" — tries to be everything: search engine, security scanner, type checker, SBOM generator, license auditor, knowledge graph, visualization server | **Codegraph wins.** Codegraph has a clear boundary. narsil-mcp's 90-tool surface area spans security, compliance, visualization, type checking, and more — a breadth that risks being shallow everywhere |

**Score: Codegraph 6, narsil-mcp 0, Tie 2** — against codegraph's own principles, codegraph wins on every differentiating dimension. This is expected: the principles were designed around codegraph's value proposition. The feature comparison below examines where narsil-mcp's breadth creates genuine advantages.

Math error in scoring. The table above shows 7 Codegraph wins, 0 narsil-mcp wins, and 1 Tie (Principle 2). Should be "Codegraph 7, narsil-mcp 0, Tie 1" not "Codegraph 6, narsil-mcp 0, Tie 2".

Suggested change
**Score: Codegraph 6, narsil-mcp 0, Tie 2** — against codegraph's own principles, codegraph wins on every differentiating dimension. This is expected: the principles were designed around codegraph's value proposition. The feature comparison below examines where narsil-mcp's breadth creates genuine advantages.
**Score: Codegraph 7, narsil-mcp 0, Tie 1** — against codegraph's own principles, codegraph wins on every differentiating dimension. This is expected: the principles were designed around codegraph's value proposition. The feature comparison below examines where narsil-mcp's breadth creates genuine advantages.

Contributor Author

Fixed in 2d599b5 — corrected to 7-0-1 to match the actual table counts.
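To keep the prose score from drifting out of sync with the table again, the verdict column can be tallied mechanically. A sketch, assuming each table row's final cell begins with a bolded verdict as in the rows above; the helper is illustrative, not repo code:

```python
def tally_verdicts(rows: list[str]) -> dict[str, int]:
    """Count wins/ties from the last cell of each markdown table row."""
    counts = {"codegraph": 0, "narsil-mcp": 0, "tie": 0}
    for row in rows:
        # Strip trailing whitespace and the closing pipe, then take the last cell
        verdict = row.rstrip().rstrip("|").rsplit("|", 1)[-1].strip().lower()
        if verdict.startswith("**codegraph wins"):
            counts["codegraph"] += 1
        elif verdict.startswith("**narsil-mcp wins"):
            counts["narsil-mcp"] += 1
        else:
            counts["tie"] += 1
    return counts
```

The asserted score line then just restates whatever the tally returns, so a table edit can never silently invalidate the summary.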

| Interactive visualization | Export only (DOT/Mermaid) | Embedded web frontend | narsil-mcp |
| Community maturity | Company-backed, small team | 3 contributors, 120 stars | Tie |

**Final score against FOUNDATION.md principles: Codegraph 6, narsil-mcp 0, Tie 2.**

Same math error repeated. Should be "Codegraph 7, narsil-mcp 0, Tie 1" to match the actual table counts from the Problem Alignment section.

Suggested change
**Final score against FOUNDATION.md principles: Codegraph 6, narsil-mcp 0, Tie 2.**
**Final score against FOUNDATION.md principles: Codegraph 7, narsil-mcp 0, Tie 1.**

Contributor Author

Fixed in 2d599b5 — updated to match the corrected 7-0-1 count.


**Date:** 2026-03-02
**Competitors:** `@optave/codegraph` v0.x (Apache-2.0) vs `postrv/narsil-mcp` v1.6.x (Apache-2.0 / MIT)
**Context:** narsil-mcp is ranked #2 in our [competitive analysis](./COMPETITIVE_ANALYSIS.md) with a score of 4.5, tied with Joern at #1. Unlike Joern (which targets security researchers), narsil-mcp competes head-to-head with codegraph — same parsing technology (tree-sitter), same delivery mechanism (MCP), same target audience (AI agents), same local-first philosophy.

Incorrect relative path. This file is in generated/competitive/ and COMPETITIVE_ANALYSIS.md is in generated/, so the link should use ../ not ./

Suggested change
**Context:** narsil-mcp is ranked #2 in our [competitive analysis](./COMPETITIVE_ANALYSIS.md) with a score of 4.5, tied with Joern at #1. Unlike Joern (which targets security researchers), narsil-mcp competes head-to-head with codegraph — same parsing technology (tree-sitter), same delivery mechanism (MCP), same target audience (AI agents), same local-first philosophy.
**Context:** narsil-mcp is ranked #2 in our [competitive analysis](../COMPETITIVE_ANALYSIS.md) with a score of 4.5, tied with Joern at #1. Unlike Joern (which targets security researchers), narsil-mcp competes head-to-head with codegraph — same parsing technology (tree-sitter), same delivery mechanism (MCP), same target audience (AI agents), same local-first philosophy.

Contributor Author

Fixed in 2d599b5 — changed ./COMPETITIVE_ANALYSIS.md to ../COMPETITIVE_ANALYSIS.md since this file lives in the generated/competitive/ subdirectory.
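The fix is easy to sanity-check with plain path arithmetic: resolving both link forms from the document's directory shows why only ../ lands on the real file.

```python
import posixpath

doc_dir = "generated/competitive"  # where narsil-mcp.md lives

# The original link stayed inside the same directory, where no such file exists:
assert posixpath.normpath(posixpath.join(doc_dir, "./COMPETITIVE_ANALYSIS.md")) \
    == "generated/competitive/COMPETITIVE_ANALYSIS.md"

# The corrected link climbs one level to generated/, where the file actually lives:
assert posixpath.normpath(posixpath.join(doc_dir, "../COMPETITIVE_ANALYSIS.md")) \
    == "generated/COMPETITIVE_ANALYSIS.md"
```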

- Fix principle scoring from 6-0-2 to 7-0-1 (correct count from table)
- Fix relative link to COMPETITIVE_ANALYSIS.md (../ not ./)
@claude

claude bot commented Mar 3, 2026

Claude finished @carlos-alm's task


PR Review: Competitive Deep-dive — codegraph vs narsil-mcp

Task List:

  • Read and analyze the competitive analysis document
  • Apply Phase 1: Root Cause Analysis
  • Apply Phase 2: Configuration Change Detection
  • Apply Phase 3: Test Integrity Assessment
  • Apply Phase 4: Code Quality Review
  • Address Greptile's identified issues
  • Provide final rating and recommendation

Phase 1: Root Cause Analysis

Problem Being Solved: Creating a comprehensive competitive deep-dive document comparing codegraph against narsil-mcp, following the same structured framework established in the competitive analysis system. This addresses a strategic need to position codegraph clearly against its closest head-to-head competitor.

Why This Approach: The document follows the established template where narsil-mcp is ranked #2 (tied with Joern at 4.5 points). Unlike other competitors that target different audiences, narsil-mcp directly competes on the same dimensions (tree-sitter, MCP, AI agents, local-first). This head-to-head analysis is necessary for strategic positioning.

Risk Assessment: This is a documentation-only PR with no code changes. The risk is minimal—primarily around factual accuracy and competitive positioning claims.

Phase 2: Configuration Change Detection ✅

CLEAR - This is a documentation-only PR adding a single file generated/competitive/narsil-mcp.md. No configuration files, test files, or build files have been modified. Zero risk of configuration changes masking underlying issues.

Phase 3: Test Integrity ✅

CLEAR - No test files modified, no test expectations changed, no test coverage impacted. This is pure documentation with no behavioral changes.

Phase 4: Code Quality Assessment

Document Structure and Completeness

The document is exceptionally well-structured and comprehensive:

  • 354 lines following the established competitive analysis template exactly
  • 130+ features across 9 well-organized sections
  • 8 FOUNDATION.md principles evaluated systematically
  • Executive summary table with key dimensions
  • Gap analysis with actionable recommendations
  • Professional tone with balanced, fair assessment

Factual Accuracy Cross-Check

I verified the claims against the competitive analysis rankings:

  • narsil-mcp correctly identified as #2 with score 4.5 ✅
  • Language counts and tool counts appear consistent with the analysis ✅
  • Performance benchmarks cite specific examples (rust-analyzer, Linux kernel) ✅
  • Technical capabilities align with what's documented ✅

⚠️ Critical Issues Found (Confirming Greptile's Review)

Issue 1: Math Error in FOUNDATION.md Scoring
Lines 43 and 353 state "Codegraph 6, narsil-mcp 0, Tie 2" but the actual table shows:

  • 7 Codegraph wins (Principles 1,3,4,5,6,7,8)
  • 0 narsil-mcp wins
  • 1 Tie (Principle 2)

Should be: "Codegraph 7, narsil-mcp 0, Tie 1"

Issue 2: Incorrect Relative Path
Line 5 uses ./COMPETITIVE_ANALYSIS.md but this file is in generated/competitive/ subdirectory, so it should be ../COMPETITIVE_ANALYSIS.md

Strategic Value Assessment

This document provides significant strategic value:

  • Clear differentiation between breadth (narsil-mcp) vs depth (codegraph) strategies
  • Actionable gap analysis identifying 6 features worth adopting vs 8 not worth adopting
  • Honest acknowledgment of competitor strengths (security analysis, language count)
  • Token efficiency insights - 3.3x less token overhead is a compelling competitive advantage
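The "3.3x" figure follows directly from the two token counts quoted in the thread (3,600 vs 12,000); a one-line check:

```python
codegraph_tokens = 3_600   # codegraph overhead, as quoted in the analysis
narsil_tokens = 12_000     # narsil-mcp overhead, as quoted in the analysis

ratio = narsil_tokens / codegraph_tokens
assert round(ratio, 1) == 3.3
```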

Writing Quality

Professional, technical, and balanced. The document avoids promotional language while clearly articulating codegraph's advantages. The "Bottom line" summaries after each section are particularly effective.

Critical Concerns

Minor factual errors only: The math error in principle scoring and incorrect relative path are easily fixable and don't affect the core strategic value of the analysis.

Final Recommendation

  • Rating: ⭐⭐⭐⭐☆ (4/5)
  • Action: APPROVE WITH CONDITIONS
  • Reasoning: This is a high-quality competitive analysis that provides substantial strategic value with comprehensive feature comparisons and balanced assessment. The document structure is excellent and follows the established framework perfectly. The only issues are two minor factual errors identified by Greptile that are easily corrected.

Conditions for approval:

  1. Fix math error: Change "Codegraph 6, narsil-mcp 0, Tie 2" to "Codegraph 7, narsil-mcp 0, Tie 1" on lines 43 and 353
  2. Fix relative path: Change ./COMPETITIVE_ANALYSIS.md to ../COMPETITIVE_ANALYSIS.md on line 5

This competitive deep-dive successfully positions codegraph against its closest competitor and will serve as valuable strategic documentation for the project.



@carlos-alm
Contributor Author

@greptileai

@carlos-alm carlos-alm merged commit bf67881 into main Mar 3, 2026
28 checks passed
@carlos-alm carlos-alm deleted the docs/competitive-narsil branch March 3, 2026 01:21
@github-actions github-actions bot locked and limited conversation to collaborators Mar 3, 2026