```
 ______    __        ______    __     __     ______    __     ______   __  __
/\  ___\  /\ \      /\  __ \  /\ \  _ \ \   /\  == \  /\ \   /\__  _\ /\ \_\ \
\ \ \____ \ \ \____ \ \  __ \ \ \ \/ ".\ \  \ \  __<  \ \ \  \/_/\ \/ \ \____ \
 \ \_____\ \ \_____\ \ \_\ \_\ \ \__/".~\_\  \ \_\ \_\ \ \_\    \ \_\  \/\_____\
  \/_____/  \/_____/  \/_/\/_/  \/_/   \/_/   \/_/ /_/  \/_/     \/_/   \/_____/
```
See what Claude Code actually did.
Clawrity is a local-first reviewer for Claude Code sessions. It turns a finished session into a short plain-English summary so you can answer:
- what changed?
- what actually matters?
- what should I review first?
Clawrity is not a dashboard, tracing sink, or generic observability platform.
It is a reviewer.
Claude Code users already have access to more raw information than they can usually use in the moment:
- transcripts
- diffs
- hook logs
- dashboards
- traces
That helps if you want to reconstruct everything.
Most of the time, though, the real question is smaller and more practical:
what actually deserves my attention before I trust this session and move on?
That is the gap Clawrity is trying to fill.
There are already ways to collect or forward Claude Code session data.
Those tools are useful if you want:
- telemetry
- replay
- debugging infrastructure
- team-wide monitoring
- cost or usage visibility
Clawrity is for a different moment.
It is for the person who just finished a Claude Code session and wants a fast review artifact that says:
- where the risky file is
- where the core behavior change probably lives
- whether the session wandered into a surprising area
- whether commands retried, stalled, or failed to complete
So the wedge is not:
- more logs
- more traces
- more observability
The wedge is:
interpretation and prioritization for real Claude Code sessions.
That is why Clawrity is intentionally local-first, short, and review-shaped.
After a Claude Code session, Clawrity can summarize:
- files Claude touched
- the file that looks riskiest to review first
- the file where the core behavior change likely lives
- shell commands, repeated commands, and incomplete commands
- blocked or suspicious actions
- a short "review first" list you can scan in seconds
```
Claude touched 5 files, 1 command.

what matters
- review src/auth/session.ts first because it looks highest-risk by path heuristics
- core behavior change likely lives in src/billing/form.ts
- touches span package metadata, app code, auth, runtime, docs

review first
1. src/auth/session.ts
2. src/billing/form.ts
```
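The "review first" ordering above comes from path heuristics. Clawrity's actual scoring is internal, but the core idea can be sketched in a few lines; the `RISK_HINTS` patterns, weights, and function names below are invented for illustration, not Clawrity's real implementation:

```javascript
// Illustrative path-risk heuristic: rank touched files by how risky
// their paths look. Patterns and weights here are made up for this sketch.
const RISK_HINTS = [
  [/auth|session|token/, 5], // credential and session handling
  [/billing|payment/, 4],    // money-touching code
  [/config|settings/, 2],    // runtime behavior switches
  [/^docs\/|\.md$/, -3],     // docs rarely need first review
];

function riskScore(path) {
  return RISK_HINTS.reduce(
    (score, [pattern, weight]) => (pattern.test(path) ? score + weight : score),
    0
  );
}

function reviewFirst(paths) {
  return [...paths].sort((a, b) => riskScore(b) - riskScore(a));
}

console.log(reviewFirst([
  'docs/intro.md',
  'src/billing/form.ts',
  'src/auth/session.ts',
]));
// src/auth/session.ts sorts ahead of src/billing/form.ts; docs land last
```

Heuristics like this explain both the strength and the limits described later: path-based scoring is fast and usually points at the right file, but it can over-prioritize a path that merely looks sensitive.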
Inside a repo where you want Claude Code sessions reviewed:
```
clawrity install --root <repo>
```

That writes local Clawrity config plus Claude Code hook config into:

- `.claude/clawrity.json`
- `.claude/settings.local.json`
- `.clawrity/install.json`
- `.clawrity/hooks/claude-code-hooks.json`
The install stays local to the repo and does not make network calls.
Use a throwaway branch or repo copy for the first run.
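For orientation, Claude Code hook config in `.claude/settings.local.json` generally follows the hooks settings shape shown below. The command string here is a placeholder, not what Clawrity actually registers; inspect the files the install writes to see the real entries:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "<clawrity finalize command>" }
        ]
      }
    ]
  }
}
```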
1. Install Clawrity:

   ```
   clawrity install --root <repo>
   ```

2. Run Claude Code on a task you can sanity-check by hand.

3. After the session ends, review the newest session:

   ```
   clawrity review --root <repo>
   ```

4. If you want a shareable Markdown version:

   ```
   clawrity review --root <repo> --markdown .clawrity/summaries/latest.md
   ```
The current product flow is:

1. Claude Code runs in a repo with Clawrity installed.
2. Local hooks collect session events during the run.
3. `Stop` or `StopFailure` finalizes the session into `.clawrity/sessions/<session-id>/finalized-input.json`.
4. `clawrity review` turns that finalized bundle into terminal and Markdown output.
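The review step is essentially a reduction over the finalized bundle. As a hypothetical sketch only, with an invented event shape (Clawrity's real `finalized-input.json` schema may differ), producing the one-line header might look like:

```javascript
// Hypothetical sketch of the review step: reduce a finalized session
// bundle to a one-line header. The event shape below is invented for
// illustration and is not Clawrity's real schema.
function headline(bundle) {
  const files = new Set(
    bundle.events.filter(e => e.type === 'file_edit').map(e => e.path)
  );
  const commands = bundle.events.filter(e => e.type === 'shell_command');
  const s = commands.length === 1 ? '' : 's';
  return `Claude touched ${files.size} files, ${commands.length} command${s}.`;
}

const bundle = {
  events: [
    { type: 'file_edit', path: 'src/auth/session.ts' },
    { type: 'file_edit', path: 'src/billing/form.ts' },
    { type: 'shell_command', command: 'npm test' },
  ],
};

console.log(headline(bundle)); // Claude touched 2 files, 1 command.
```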
If you want to package a session for debugging or feedback:
```
clawrity report \
  --root <repo> \
  --session <session-id> \
  --output .clawrity/reports/<session-id> \
  --force
```

That creates a local report bundle with:

- `finalized-input.json`
- `events.jsonl`
- `review-summary.md`
- `report.json`
- `issue-report.md`
If you want to keep a session as a durable local fixture:
```
clawrity export \
  --root <repo> \
  --session <session-id> \
  --output local-fixtures/clawrity/<name>.json \
  --force
```

The full command surface:

```
clawrity install [--root <path>] [--force]
clawrity uninstall [--root <path>]
clawrity review [--root <path>] [--bundle <path>] [--session <id>] [--markdown <path>] [--no-terminal]
clawrity report [--root <path>] [--bundle <path>] [--session <id>] [--output <path>] [--force] [--no-terminal]
clawrity export [--root <path>] [--bundle <path>] [--session <id>] [--output <path>] [--force]
clawrity --version
```
The repo includes both synthetic fixtures and realistic local tryout paths.
Useful test commands:
```
node --test src/summarizer/summarizer.test.js
node --test tests/integration/install-collect-finalize.test.mjs
node --test tests/docs/tryout-docs.test.mjs
```

Useful examples:

- `examples/report-bundle.md`
- `examples/fixture-export.md`
- `examples/review-summary.md`
- `examples/review-terminal.txt`
- `examples/evaluation-rubric.md`
Clawrity is for people using Claude Code on real code who want a fast post-session review without reading the whole transcript or diff first.
The current wedge is:
- Claude Code only
- local-first
- review-oriented, not monitoring-oriented
- strongest for “it worked, but I still want to know what actually happened”
More specifically, Clawrity is strongest when the session:
- touched more than one file
- mixed code edits with shell commands
- felt a little broader or stranger than expected
- technically worked, but still made you want a second look
Clawrity is still heuristic.
That means:
- it can over-prioritize path risk in some sessions
- it is better at review guidance than deep semantic understanding
- it works best when paired with a real diff and normal human judgment
If it can’t say something honestly, it should stay short rather than fake certainty.