/rɪˈsiːts/ — pronounced "receipts". The spelling is intentional.
Korean version: README.ko.md
Every claim needs leceipts.
A lightweight toolkit that forces code-change work to end with structured, verifiable reports. Prevents "it's done" lies by making verification a required section that cannot be silently skipped.
- Language-agnostic (works with any project, any stack)
- AI-optional (helps humans too, but especially good for LLM coding agents)
- Zero runtime dependencies (tsx + typescript for the checker, nothing else)
- Does not touch your code — just adds a workflow layer on top
Every team with AI coding assistants (Claude, Cursor, Copilot, Codex) has hit this wall:
```
You: "please fix the login bug"
AI:  "Done! I've fixed the login bug. Everything is working correctly now."

(You run it. It still crashes.)
```
The AI isn't lying out of malice. It's lying because there is no structural slot where it has to prove verification happened. If you tell it "be honest", it says yes and keeps lying. If you give it a response format with a mandatory "Verification" section, the lie becomes structurally visible — an empty verification box is obviously empty.
The same failure mode happens with humans: "Looks good, shipping it" PRs where nobody actually ran the tests.
This kit solves both with one mechanism: a 5-section response format that every code change must conform to, plus a standalone checker that enforces the structure on persisted report files.
1. Root cause — what actually went wrong and why
2. Change — what files/lines were modified
3. Recurrence prevention — how this specific bug cannot recur silently
4. Verification — commands run + their actual results
5. Remaining risk — regression risks and unverified areas (NOT wishlist)
That's it. No schemas, no YAML front-matter, no tooling required to write one. Any AI or human can produce one in plain markdown, and the checker verifies the structure on every commit.
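To make the format concrete, here is a hypothetical well-formed report (ticket ID, file paths, and results are invented for illustration):

```markdown
# TASK-042 verification report

## 1. Root cause
The login handler read `session.user` before the session middleware ran.

## 2. Change
`src/auth/login.ts` — moved the session guard above the handler (1 line).

## 3. Recurrence prevention
Added a regression test that hits `/login` with no session cookie.

## 4. Verification
`npm test` → ✅ all passed. Manual login against the dev server → ✅ no crash.

## 5. Remaining risk
The SSO login path was not re-tested; only password login was verified.
```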
| Section | What it prevents |
|---|---|
| 1. Root cause | "I changed it and it works now" — without understanding why it broke, the fix is probably a symptom patch |
| 2. Change | Silent scope creep — forces an explicit list of what was touched |
| 3. Recurrence prevention | Hotfixes dressed up as real fixes — forces a guardrail, test, or explicit "intentional omission" |
| 4. Verification | The core lie: "I tested it" with nothing to show for it — forces command + result |
| 5. Remaining risk | Overconfidence — forces an honest disclosure of what you didn't test |
Section 5 has a counter-intuitive rule: it is NOT a wishlist. It is only for regression risks and unverified areas. Wishlist items ("could be stronger", "follow-up improvement") would otherwise accumulate in section 5 until every report looks unfinished, so the checker flags wishlist phrasing as a violation.
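For example (hypothetical section bodies):

```
5. Remaining risk
- Error copy could be improved as a follow-up          ← wishlist: flagged

5. Remaining risk
- Session-expiry path not re-tested; only fresh logins verified   ← legitimate
```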
Yes if any of these apply:
- You use AI coding assistants and get burned by false completion reports
- You run a team where code review finds "missing verification" regularly
- You run CI that occasionally surfaces bugs a human already marked "tested"
- You want to make "it works on my machine" structurally impossible
- You are building your own AI agent and want a quality gate on its output
No if:
- Your workflow is purely exploratory (no code changes, just research)
- You have zero review overhead and are comfortable with the status quo
- You prefer freeform PR descriptions over structure
The kit adds maybe 30 seconds to each code-change report once the format is a habit. The cost is small; the thing it catches is large.
```
┌────────────────────────────────────────┐
│ 1. AI/human writes a 5-section report  │
│    in a markdown file or PR body       │
└─────────────────┬──────────────────────┘
                  │
                  ▼
┌────────────────────────────────────────┐
│ 2. If persisted: file lives in         │
│    plans/<ticket-id>-verification-     │
│    report.md                           │
└─────────────────┬──────────────────────┘
                  │
                  ▼
┌────────────────────────────────────────┐
│ 3. check-reports.ts runs on CI:        │
│    - structural: all 5 sections exist  │
│    - content: sections 3/4 not empty   │
│    - content: section 5 not a wishlist │
└─────────────────┬──────────────────────┘
                  │
                  ▼
┌────────────────────────────────────────┐
│ 4. Violations block merge.             │
│    Clean reports allow merge.          │
└────────────────────────────────────────┘
```
Three pieces:
- Prompt layer (`docs/working-rules.md`) — the rules your AI/humans read and follow. Drop into `CLAUDE.md` or equivalent.
- Artifact layer (`templates/verification-report-template.md`) — the file format persisted per ticket/change.
- Enforcement layer (`scripts/check-reports.ts`) — the automated check that runs in CI and blocks merges on violations.
You can adopt any subset. The prompt layer alone raises response quality immediately. Adding the artifact layer makes reports searchable and auditable. Adding the enforcement layer makes the whole thing cheat-proof.
```bash
npm install --save-dev leceipts tsx
```

The `tsx` devDependency is required because leceipts ships as TypeScript source and runs via `tsx`. Both are dev-only; neither touches your runtime bundle.

Verify the install with:

```bash
npx tsx node_modules/leceipts/scripts/check-reports.ts --help 2>&1 || true
```

Package page: npmjs.com/package/leceipts
In your project root, create `verification-kit.config.json`:

```json
{
  "reportsDir": "plans",
  "reportPattern": "-verification-report\\.md$",
  "legacyMarker": "<!-- legacy-verification-report: pre-5-section format -->",
  "baseBranch": "main"
}
```

- `reportsDir` — folder that will hold verification reports
- `reportPattern` — filename regex for the checker
- `legacyMarker` — HTML comment that grandfather-exempts old reports
- `baseBranch` — used by the default "check only new reports" mode
```bash
mkdir -p plans
cp node_modules/leceipts/templates/verification-report-template.md \
   plans/verification-report-template.md
```

Read `docs/claude-md-snippet.md` and paste the appropriate snippet into your project's `CLAUDE.md` / `AGENTS.md` / `.cursorrules` file. This makes your AI coding assistant use the 5-section format automatically.

For the full rules document, copy `docs/working-rules.md` into your project at the same path and reference it from `CLAUDE.md`.
```bash
npx tsx node_modules/leceipts/scripts/check-reports.ts \
  --config ./verification-kit.config.json
```

On a fresh install you should see "no reports to check" (no reports exist yet). Add this command to your CI pipeline:

```yaml
# .github/workflows/ci.yml
- name: Check verification reports
  run: |
    npx tsx node_modules/leceipts/scripts/check-reports.ts \
      --config ./verification-kit.config.json
```

Add an npm script so everyone runs the same command:

```json
{
  "scripts": {
    "check:reports": "tsx node_modules/leceipts/scripts/check-reports.ts --config ./verification-kit.config.json"
  }
}
```

Now `npm run check:reports` is the single command humans and CI both run.
If you already have verification reports in an older format, prepend this line to each one so the checker skips them:

```html
<!-- legacy-verification-report: pre-5-section format -->
```

Or use a one-liner:

```bash
for f in plans/*-verification-report.md; do
  if ! grep -q "legacy-verification-report" "$f"; then
    { echo "<!-- legacy-verification-report: pre-5-section format -->"; cat "$f"; } > "$f.tmp" && mv "$f.tmp" "$f"
  fi
done
```

The checker only enforces the structure on reports that do NOT have this marker, so you don't need to migrate history.
The checker runs two passes against each report:
Every report must have all 5 section headings (Korean or English accepted).
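In sketch form, the structural pass amounts to a per-section pattern match. The shapes and patterns below are illustrative assumptions, not the exact code in `scripts/check-reports.ts` (in particular, the Korean headings for sections 1 and 2 are guesses):

```typescript
// Illustrative sketch of the structural pass: a report fails if any of
// the five section headings (English or Korean) is absent.
const REQUIRED_SECTIONS: { label: string; pattern: RegExp }[] = [
  { label: "1. Root cause / 원인", pattern: /root cause|원인/i },
  { label: "2. Change / 변경", pattern: /change|변경/i },
  { label: "3. Recurrence prevention / 재발 방지", pattern: /recurrence prevention|재발 방지/i },
  { label: "4. Verification / 검증", pattern: /verification|검증/i },
  { label: "5. Remaining risk / 남은 리스크", pattern: /remaining risk|남은 리스크/i },
];

// Returns the labels of sections missing from a report body.
function missingSections(reportBody: string): string[] {
  return REQUIRED_SECTIONS.filter(({ pattern }) => !pattern.test(reportBody)).map(
    ({ label }) => label,
  );
}
```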
Example failure output:

```
[leceipts] FAIL — 1 report(s) have violations:
  plans/TASK-042-verification-report.md
    - 3. Recurrence prevention / 재발 방지
    - 5. Remaining risk / 남은 리스크
```
Sections that exist must not be vacuous. Specific checks:
- Section 3 — body must not be empty or placeholder-only
- Section 4 — body must not be empty AND must contain verification evidence (a command in backticks, a result marker like ✅/❌, or an explicit "unverifiable" / "검증 불가" statement)
- Section 5 — body must not contain wishlist phrasing ("follow-up", "nice to have", "could be improved", 후속 개선, 더 강화, etc.)
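The checks for sections 4 and 5 can be sketched as two small predicates. The patterns here are simplified assumptions, not the exact ones `scripts/check-reports.ts` uses:

```typescript
// Section 4: must contain verification evidence — a backticked command,
// a result marker, or an explicit "unverifiable" statement.
function hasVerificationEvidence(sectionBody: string): boolean {
  return (
    /`[^`]+`/.test(sectionBody) ||             // command in backticks
    /[✅❌]/.test(sectionBody) ||               // result marker
    /unverifiable|검증 불가/.test(sectionBody)  // explicit disclosure
  );
}

// Section 5: must not contain wishlist phrasing.
const WISHLIST_PATTERNS = [/follow-up/i, /nice to have/i, /could be improved/i, /후속 개선/, /더 강화/];

// Returns the wishlist phrases found in a section 5 body (empty = clean).
function wishlistHits(sectionBody: string): string[] {
  return WISHLIST_PATTERNS.filter((p) => p.test(sectionBody)).map((p) => p.source);
}
```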
Example failure output:

```
[leceipts] FAIL — 1 report(s) have violations:
  plans/TASK-042-verification-report.md
    - 3. Recurrence prevention — empty or placeholder-only body
    - 4. Verification — no command, result marker, or 'unverifiable' statement found
    - 5. Remaining risk — wishlist phrasing detected: follow-up, nice to have (see working-rules.md §7)
```
```bash
tsx scripts/check-reports.ts                  # check only new reports (default)
tsx scripts/check-reports.ts --all            # check every report
tsx scripts/check-reports.ts --file <path>    # check one specific file
tsx scripts/check-reports.ts --config <path>  # use a non-default config location
```
The default "new reports only" mode uses `git ls-tree <baseBranch>` to find reports that do not yet exist on the base branch. This lets CI enforce the structure on new work without flagging historical reports.
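The selection itself is a simple set difference. In this sketch the base-branch file list is passed in as an array, where the real checker would obtain it from `git ls-tree` (function and parameter names are illustrative):

```typescript
// Reports present in the working tree but absent from the base branch
// are "new" and get checked; everything else is historical.
function selectNewReports(workingTreeReports: string[], baseBranchFiles: string[]): string[] {
  const onBase = new Set(baseBranchFiles);
  return workingTreeReports.filter((path) => !onBase.has(path));
}
```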
```bash
git clone <this repo>
cd leceipts
npm install

# Check the example report — should pass
npm run example:check
# → [leceipts] OK — 1 report(s) checked, all passed

# Check with --all mode
npm run example:check:all
```

Then look at `example/plans/TASK-001-example-verification-report.md` to see what a well-formed report looks like.
GitHub Actions:

```yaml
- name: Check verification reports
  run: |
    npx tsx node_modules/leceipts/scripts/check-reports.ts \
      --config ./verification-kit.config.json
```

Git pre-commit hook:

```bash
#!/usr/bin/env bash
# .git/hooks/pre-commit
set -e
npx tsx node_modules/leceipts/scripts/check-reports.ts \
  --config ./verification-kit.config.json
```

If you already have a `scripts/verify-build.sh` or equivalent:
```bash
#!/usr/bin/env bash
set -euo pipefail
npm run typecheck
npm test
npm run build
# Add this line:
npx tsx node_modules/leceipts/scripts/check-reports.ts \
  --config ./verification-kit.config.json
```

By default the checker looks for files matching `-verification-report\.md$`. Change `reportPattern` in the config:
```json
{
  "reportsDir": "docs/verify",
  "reportPattern": "^VR-\\d+\\.md$"
}
```

To check new reports against a different base branch:

```json
{ "baseBranch": "develop" }
```

The checker accepts both Korean and English heading text by default. If you want to support a third language, fork `REQUIRED_SECTIONS` in `scripts/check-reports.ts` and add your patterns. PRs welcome.
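Assuming each `REQUIRED_SECTIONS` entry carries a heading regex (the real shape in the script may differ), widening a pattern is enough. A hypothetical sketch adding Japanese for the verification heading:

```typescript
// Hypothetical: accept the Japanese heading 検証 alongside the
// English and Korean ones for section 4.
const verificationHeading = /verification|검증|検証/i;

function matchesVerificationHeading(line: string): boolean {
  return verificationHeading.test(line);
}
```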
If you only want the structural check (Pass 1) and not the content checks (Pass 2), fork the script and remove the Pass 2 block. The structural check alone is already a meaningful quality gate.
On purpose, to keep it small:
- Does not define what a "ticket" is — works with GitHub issues, Linear, Jira, plain numbered files, or no tickets at all
- Does not manage report file naming (you pick, or use the convention)
- Does not run your tests (use your existing test runner)
- Does not provide a web UI (it's a CLI)
- Does not fabricate reports for you — the AI or human still has to write the 5 sections; the kit only enforces they exist
If you want any of those, the script is ~300 lines and easy to wrap.
This kit is not AI-specific. It works for:
- Pure human teams
- AI-assisted teams
- Fully autonomous AI agent pipelines
It is focused strictly on code-change verification. It does not try to solve generated-content quality, prompt management, or any other adjacent problem.
Q: Does this replace PR descriptions? A: No — it complements them. The 5-section format can live IN the PR description or in a separate file. Most teams do both: the file is the canonical record, the PR description is a summary.
Q: What if my change is tiny (one-line typo fix)? A: Section 4 still applies — "Typecheck: passed, tests: passed". Sections 1-3 can be one line each. Section 5 is "None". The overhead is ~30 seconds and still catches real regressions.
Q: Can I use this without persisted report files? A: Yes. The 5-section format can live purely in chat replies or PR descriptions. The checker becomes optional in that case — its job is to enforce the structure on persisted files.
Q: My AI assistant keeps forgetting the format. What do I do? A: Two things. First, put the rules directly in `CLAUDE.md` / `AGENTS.md` / `.cursorrules` — top-level, not buried in subdocs. Second, add the checker to CI so the AI gets immediate feedback when it skips sections. Within a few iterations the format becomes automatic.
Q: Does this work with TypeScript, Python, Rust, Go? A: Yes — the kit's scripts are TypeScript (via tsx) but the workflow is language-agnostic. Your project can be in any language; only the checker script needs Node 18+ to run. If you don't want Node in CI, rewrite the checker in your language of choice — it's under 300 lines and uses only filesystem + regex.
Q: How does this compare to Conventional Commits / Semantic PRs? A: Different axis. Conventional Commits standardizes the subject line; this kit standardizes the verification narrative. They compose well.
Q: Is there a Python version of the checker? A: Not yet. The report file format is language-agnostic (plain markdown), so a Python port is straightforward. PRs welcome.
MIT — see LICENSE.
Built by ivan (@0oooooooo0).