ethan10clay/clawrity

clawrity

 ______     __         ______     __     __     ______     __     ______   __  __
/\  ___\   /\ \       /\  __ \   /\ \  _ \ \   /\  == \   /\ \   /\__  _\ /\ \_\ \
\ \ \____  \ \ \____  \ \  __ \  \ \ \/ ".\ \  \ \  __<   \ \ \  \/_/\ \/ \ \____ \
 \ \_____\  \ \_____\  \ \_\ \_\  \ \__/".~\_\  \ \_\ \_\  \ \_\    \ \_\  \/\_____\
  \/_____/   \/_____/   \/_/\/_/   \/_/   \/_/   \/_/ /_/   \/_/     \/_/   \/_____/

See what Claude Code actually did.

Clawrity is a local-first reviewer for Claude Code sessions. It turns a finished session into a short plain-English summary so you can answer:

  • what changed?
  • what actually matters?
  • what should I review first?

Clawrity is not a dashboard, tracing sink, or generic observability platform.
It is a reviewer.

Why This Exists

Claude Code users already have access to more raw information than they can usually act on in the moment:

  • transcripts
  • diffs
  • hook logs
  • dashboards
  • traces

That helps if you want to reconstruct everything.

Most of the time, though, the real question is smaller and more practical:

what actually deserves my attention before I trust this session and move on?

That is the gap Clawrity is trying to fill.

Why There Is Room For This

There are already ways to collect or forward Claude Code session data.

Those tools are useful if you want:

  • telemetry
  • replay
  • debugging infrastructure
  • team-wide monitoring
  • cost or usage visibility

Clawrity is for a different moment.

It is for the person who just finished a Claude Code session and wants a fast review artifact that says:

  • where the risky file is
  • where the core behavior change probably lives
  • whether the session wandered into a surprising area
  • whether commands retried, stalled, or failed to complete

So the wedge is not:

  • more logs
  • more traces
  • more observability

The wedge is:

interpretation and prioritization for real Claude Code sessions.

That is why Clawrity is intentionally local-first, short, and review-shaped.

What It Does

After a Claude Code session, Clawrity can summarize:

  • files Claude touched
  • the file that looks riskiest to review first
  • the file where the core behavior change likely lives
  • shell commands, repeated commands, and incomplete commands
  • blocked or suspicious actions
  • a short “review first” list you can scan in seconds

Example Output

Claude touched 5 files, 1 command.

what matters
- review src/auth/session.ts first because it looks highest-risk by path heuristics
- core behavior change likely lives in src/billing/form.ts
- touches span package metadata, app code, auth, runtime, docs

review first
1. src/auth/session.ts
2. src/billing/form.ts
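
The “review first” ordering comes from path heuristics. A toy sketch of what a path-risk heuristic can look like (illustrative only — the weights, patterns, and category choices below are made up for this example and are not Clawrity’s actual scoring code):

```javascript
// Toy path-risk heuristic (illustrative, not Clawrity's real scorer):
// score each touched file by how review-sensitive its path segments look,
// then surface the highest-scoring files first.
const RISK_WEIGHTS = [
  [/(^|\/)auth\//, 5],    // auth code: highest review priority
  [/(^|\/)billing\//, 4], // money paths
  [/(^|\/)src\//, 2],     // general app code
  [/\.md$/, 1],           // docs: lowest
];

function riskScore(file) {
  return RISK_WEIGHTS.reduce(
    (score, [pattern, weight]) => (pattern.test(file) ? score + weight : score),
    0
  );
}

function reviewFirst(files) {
  return [...files].sort((a, b) => riskScore(b) - riskScore(a));
}

// Prints the auth file first, matching the ordering in the example above.
console.log(reviewFirst([
  "README.md",
  "src/billing/form.ts",
  "src/auth/session.ts",
]));
```

A heuristic like this explains both the strength and the limit: it is cheap and usually points at the right file, but it only sees paths, not semantics.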

Install

Inside a repo where you want Claude Code sessions reviewed:

clawrity install --root <repo>

That writes local Clawrity config plus Claude Code hook config into:

  • .claude/clawrity.json
  • .claude/settings.local.json
  • .clawrity/install.json
  • .clawrity/hooks/claude-code-hooks.json

The install stays local to the repo and does not make network calls.

First Tryout

Use a throwaway branch or repo copy for the first run.

  1. Install Clawrity:

     clawrity install --root <repo>

  2. Run Claude Code on a task you can sanity-check by hand.

  3. After the session ends, review the newest session:

     clawrity review --root <repo>

  4. If you want a shareable Markdown version:

     clawrity review --root <repo> --markdown .clawrity/summaries/latest.md

Session Flow

The current product flow is:

  1. Claude Code runs in a repo with Clawrity installed.

  2. Local hooks collect session events during the run.

  3. Stop or StopFailure finalizes the session into:

    .clawrity/sessions/<session-id>/finalized-input.json
    
  4. clawrity review turns that finalized bundle into terminal and Markdown output.

Report And Export

If you want to package a session for debugging or feedback:

clawrity report \
  --root <repo> \
  --session <session-id> \
  --output .clawrity/reports/<session-id> \
  --force

That creates a local report bundle with:

  • finalized-input.json
  • events.jsonl
  • review-summary.md
  • report.json
  • issue-report.md

If you want to keep a session as a durable local fixture:

clawrity export \
  --root <repo> \
  --session <session-id> \
  --output local-fixtures/clawrity/<name>.json \
  --force

CLI

clawrity install [--root <path>] [--force]
clawrity uninstall [--root <path>]
clawrity review [--root <path>] [--bundle <path>] [--session <id>] [--markdown <path>] [--no-terminal]
clawrity report [--root <path>] [--bundle <path>] [--session <id>] [--output <path>] [--force] [--no-terminal]
clawrity export [--root <path>] [--bundle <path>] [--session <id>] [--output <path>] [--force]
clawrity --version

Local Testing

The repo includes both synthetic fixtures and realistic local tryout paths.

Useful test commands:

node --test src/summarizer/summarizer.test.js
node --test tests/integration/install-collect-finalize.test.mjs
node --test tests/docs/tryout-docs.test.mjs

Useful examples:

  • examples/report-bundle.md
  • examples/fixture-export.md
  • examples/review-summary.md
  • examples/review-terminal.txt
  • examples/evaluation-rubric.md

Positioning

Clawrity is for people using Claude Code on real code who want a fast post-session review without reading the whole transcript or diff first.

The current wedge is:

  • Claude Code only
  • local-first
  • review-oriented, not monitoring-oriented
  • strongest for “it worked, but I still want to know what actually happened”

More specifically, Clawrity is strongest when the session:

  • touched more than one file
  • mixed code edits with shell commands
  • felt a little broader or stranger than expected
  • technically worked, but still made you want a second look

Current Limits

Clawrity is still heuristic.

That means:

  • it can over-prioritize path risk in some sessions
  • it is better at review guidance than deep semantic understanding
  • it works best when paired with a real diff and normal human judgment

If it can’t say something honestly, it should stay short rather than fake certainty.
