docs: add demo video to README hero (#68)
Conversation
📝 Walkthrough

Adds a README demo link, a new auth-flow compare benchmark README with a reproducer and headline Anthropic metrics, and an executable verify.sh that parses the compare report.json.

Changes: Documentation update & verification script
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant verify_sh
    participant jq
    participant NodeJS
    participant report_json
    User->>verify_sh: run ./verify.sh
    verify_sh->>jq: ensure jq present
    verify_sh->>NodeJS: execute embedded parser
    NodeJS->>report_json: load & parse report.json
    NodeJS->>NodeJS: extract baseline & graphify metrics
    NodeJS->>verify_sh: print reduction ratios and usage dumps
```
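The ratio math the embedded parser performs can be sketched in plain JavaScript. The report shape below (`baseline`/`graphify` objects with `turns` and `usage.input_tokens`) is a hypothetical stand-in for illustration, not the actual report.json schema:

```javascript
// Minimal sketch of the reduction math verify.sh's embedded parser prints.
// The report shape here is assumed; the real report.json fields may differ.
const report = {
  baseline: { turns: 31, usage: { input_tokens: 2_811_682 } },
  graphify: { turns: 14, usage: { input_tokens: 532_021 } },
};

// A reduction ratio is simply the baseline metric divided by the graphify metric.
function reduction(get) {
  return (get(report.baseline) / get(report.graphify)).toFixed(2) + "x";
}

console.log("input tokens:", reduction((r) => r.usage.input_tokens)); // → 5.28x
console.log("turns:", reduction((r) => r.turns)); // → 2.21x
```

These are the same headline ratios the PR's caption cites (5.28x tokens, 2.21x turns).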
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 4 | ❌ 1

❌ Failed checks (1 inconclusive)
✅ Passed checks (4 passed)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@README.md`:
- Line 15: Change the "See it in action" heading from h3 to h2 to match the
document hierarchy: replace the leading "###" with "##" for the heading text
"See it in action" so it follows the existing h1 → h2 structure and resolves the
MD001 heading-increment warning.
> graphify-ts indexes a TypeScript/Node workspace (and PR diffs) into a local knowledge graph, then compiles that graph into the **smallest verifiable context pack** the agent actually needs for the task at hand. No cloud upload, no API key for indexing, no SaaS dashboard — just a local subprocess your agent talks to over MCP.
>
> ### See it in action
Fix heading level to match document structure.
The heading uses h3 (###) but should use h2 (##) to maintain consistent document hierarchy. All other major sections in the README use h2, and markdown best practices require headings to increment by only one level at a time (h1 → h2, not h1 → h3).
📝 Proposed fix

```diff
-### See it in action
+## See it in action
```

As per static analysis guidelines, this resolves the MD001 heading-increment warning.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```markdown
## See it in action
```
🧰 Tools
🪛 markdownlint-cli2 (0.22.1)
[warning] 15-15: Heading levels should only increment by one level at a time
Expected: h2; Actual: h3
(MD001, heading-increment)
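What MD001 enforces is simple enough to sketch: walk the ATX headings in document order and flag any level that jumps by more than one. The function below is an illustrative toy reimplementation, not markdownlint's actual code:

```javascript
// Toy MD001 check: heading levels may only increment one step at a time.
function headingIncrementViolations(markdown) {
  const violations = [];
  let prev = 0; // level of the previous heading seen (0 = none yet)
  markdown.split("\n").forEach((line, i) => {
    const m = /^(#{1,6})\s/.exec(line);
    if (!m) return;
    const level = m[1].length;
    if (prev > 0 && level > prev + 1) {
      violations.push({ line: i + 1, expected: prev + 1, actual: level });
    }
    prev = level;
  });
  return violations;
}

// An h1 followed directly by "### See it in action" jumps h1 -> h3.
const bad = "# graphify-ts\n\nintro\n\n### See it in action\n";
console.log(headingIncrementViolations(bad)); // one violation: expected h2, actual h3
```

Demoting the heading to `## See it in action`, as the review suggests, yields no violations.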
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@README.md` at line 15, Change the "See it in action" heading from h3 to h2 to
match the document hierarchy: replace the leading "###" with "##" for the
heading text "See it in action" so it follows the existing h1 → h2 structure and
resolves the MD001 heading-increment warning.
- Add a caption under the embedded video so the headline numbers (31 -> 14 turns, 170s -> 107s, 2.81M -> 532K input tokens; all Anthropic-reported) are readable without watching the clip.
- Scaffold docs/benchmarks/2026-05-09-govalidate-auth-e2e/ with a README describing the run setup and a verify.sh that reads the compare report.json and prints the reductions. The actual report.json + paired prompt/answer files still need to be dropped in from graphify-out/compare/2026-05-09T23-21-35/ before the benchmark is independently reproducible.
- Link the new benchmark folder from the README's Public proof section so it's discoverable alongside the existing artifacts.
Adds four assertions to why-graphify-doc.test.ts so the new caption + benchmark folder are covered by the same honesty contract as the existing 2026-04-30 numbers:
- README caption must contain the captured Anthropic-reported reductions (2,811,682 / 532,021 / 5.28x / 2.21x / 1.58x).
- README must link the new docs/benchmarks/2026-05-09-... folder.
- The new benchmark README must mention the Anthropic-reported framing and the --baseline-mode native_agent invocation.
- verify.sh must exit cleanly with a clear message when report.json has not yet been dropped in.

Full suite: 1384/1384 passing.
Actionable comments posted: 1
🧹 Nitpick comments (1)
tests/unit/why-graphify-doc.test.ts (1)
108-108: ⚡ Quick win: Avoid pinning the full second-level timestamp literal.
Line 108 hard-codes an exact run timestamp, which will cause noisy test churn on benchmark refreshes. Prefer matching stable structure (date/prefix) with a regex.
Proposed assertion softening

```diff
-    expect(verify).toContain('graphify-out/compare/2026-05-09T23-21-35')
+    expect(verify).toMatch(/graphify-out\/compare\/2026-05-09T\d{2}-\d{2}-\d{2}/)
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/unit/why-graphify-doc.test.ts` at line 108, The test currently pins a full timestamp in the assertion by using expect(verify).toContain('graphify-out/compare/2026-05-09T23-21-35'); change this to assert the stable prefix and a pattern instead of the exact second-level timestamp: update the assertion on the verify variable (the expect call) to use a regex-based match (or toMatch) that checks for "graphify-out/compare/" followed by a YYYY-MM-DD and time at least to hour-minute (omit or make seconds optional) so the test no longer fails when the run-second changes.
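The softened pattern can be checked in isolation. The second path below is a made-up later refresh, used only to show why the regex survives a re-run while the exact literal would not:

```javascript
// The exact-literal assertion breaks whenever the run-second changes;
// the regex form accepts any same-day timestamp under the date-prefixed path.
const softened = /graphify-out\/compare\/2026-05-09T\d{2}-\d{2}-\d{2}/;

const currentRun = "graphify-out/compare/2026-05-09T23-21-35";
const refreshedRun = "graphify-out/compare/2026-05-09T09-00-12"; // hypothetical re-run

console.log(softened.test(currentRun));   // true
console.log(softened.test(refreshedRun)); // true
console.log(softened.test("graphify-out/compare/not-a-timestamp")); // false
```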
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@tests/unit/why-graphify-doc.test.ts`:
- Around line 104-109: The test in why-graphify-doc.test.ts currently checks the
generated verify script for several strings but doesn't assert the "exits
cleanly" behavior; update the failing test case that inspects the verify
variable (inside the it(...) block that mentions "exits cleanly") to include an
assertion that the missing-file branch contains an explicit clean exit by adding
expect(verify).toContain('exit 0') so the test will fail if the script no longer
exits cleanly.
---
Nitpick comments:
In `@tests/unit/why-graphify-doc.test.ts`:
- Line 108: The test currently pins a full timestamp in the assertion by using
expect(verify).toContain('graphify-out/compare/2026-05-09T23-21-35'); change
this to assert the stable prefix and a pattern instead of the exact second-level
timestamp: update the assertion on the verify variable (the expect call) to use
a regex-based match (or toMatch) that checks for "graphify-out/compare/"
followed by a YYYY-MM-DD and time at least to hour-minute (omit or make seconds
optional) so the test no longer fails when the run-second changes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: bc317a4d-9229-44a8-8a59-dff7fa9fc061
📒 Files selected for processing (1)
tests/unit/why-graphify-doc.test.ts
```ts
it('ships a verify.sh reproducer that reads report.json and exits cleanly when missing', () => {
  expect(verify).toContain('#!/usr/bin/env bash')
  expect(verify).toContain('report.json')
  expect(verify).toContain('not found')
  expect(verify).toContain('graphify-out/compare/2026-05-09T23-21-35')
})
```
The “exits cleanly” claim is not actually asserted.
On Line 104, the test says missing report.json exits cleanly, but Lines 105-109 only validate string presence. Add an assertion that the missing-file branch includes an explicit clean exit (exit 0) so this test fails on behavior regressions.
Proposed test tightening

```diff
 it('ships a verify.sh reproducer that reads report.json and exits cleanly when missing', () => {
   expect(verify).toContain('#!/usr/bin/env bash')
   expect(verify).toContain('report.json')
-  expect(verify).toContain('not found')
+  expect(verify).toMatch(/report\.json[\s\S]*not found[\s\S]*exit\s+0/i)
   expect(verify).toContain('graphify-out/compare/2026-05-09T23-21-35')
 })
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
it('ships a verify.sh reproducer that reads report.json and exits cleanly when missing', () => {
  expect(verify).toContain('#!/usr/bin/env bash')
  expect(verify).toContain('report.json')
  expect(verify).toMatch(/report\.json[\s\S]*not found[\s\S]*exit\s+0/i)
  expect(verify).toContain('graphify-out/compare/2026-05-09T23-21-35')
})
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@tests/unit/why-graphify-doc.test.ts` around lines 104 - 109, The test in
why-graphify-doc.test.ts currently checks the generated verify script for
several strings but doesn't assert the "exits cleanly" behavior; update the
failing test case that inspects the verify variable (inside the it(...) block
that mentions "exits cleanly") to include an assertion that the missing-file
branch contains an explicit clean exit by adding expect(verify).toContain('exit
0') so the test will fail if the script no longer exits cleanly.
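The tightened pattern can be exercised against a stand-in script body. The bash snippet inside the string is a hypothetical sketch of a missing-file guard, not verify.sh's actual contents:

```javascript
// Regex from the review: "report.json" ... "not found" ... "exit 0", in that order.
const tightened = /report\.json[\s\S]*not found[\s\S]*exit\s+0/i;

// Hypothetical missing-file guard a verify.sh might contain.
const guarded = `
if [[ ! -f report.json ]]; then
  echo "report.json not found - drop it in from the compare run first"
  exit 0
fi
`;

// The same guard with the clean exit removed - the regression the test should catch.
const regressed = guarded.replace("exit 0", "exit 1");

console.log(tightened.test(guarded));   // true
console.log(tightened.test(regressed)); // false
```

A plain `toContain('not found')` passes on both strings, which is exactly the gap the review comment flags.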
Summary
Adds a short demo video right under the lead paragraph (above the Why graphify-ts? section) so anyone landing on the README sees the tool in action before scrolling.
The video is hosted on GitHub user-attachments (~75 MB), so nothing gets committed to the repo and the npm tarball stays unchanged. Embedded as a bare URL on its own line — that's the canonical pattern that GitHub renders as an inline player.
Test plan
`npx vitest run tests/unit/why-graphify-doc.test.ts tests/unit/package-metadata.test.ts` — 31/31 pass (doc-honesty assertions still satisfied).

Summary by CodeRabbit
Documentation
Tests