# Boardroom Code Review

A Claude Code skill that dispatches up to 8 AI agents — each embodying a legendary software figure — to independently review your code, then cross-read and rebut each other's positions.

The disagreements ARE the output. A single reviewer gives you one perspective. The boardroom gives you the debate that surfaces blind spots, trade-offs, and tensions a single review misses.

## The Board

| Reviewer | Role | Philosophy |
| --- | --- | --- |
| Uncle Bob (Robert C. Martin) | The Purist | SOLID principles, clean architecture, craftsmanship |
| Kent Beck | The Test Evangelist | TDD, simple design, YAGNI |
| Linus Torvalds | The Pragmatist | Simplicity, performance, no unnecessary abstraction |
| Sandi Metz | The Clarity Advocate | Readability, small objects, Tell Don't Ask |
| John Carmack | The Performance Hawk | Allocations, data-oriented design, hot paths |
| Martin Fowler | The Pattern Architect | Domain modeling, refactoring, code smells |
| Charity Majors | The Ops Realist | Observability, failure modes, operational readiness |
| Dan Abramov | The Minimalist | Deletability, minimal dependencies, no premature abstraction |

## How It Works

**Round 1:** Each reviewer independently analyzes the code through their lens. Produces: position paper, findings, strengths, vote (YES / NO / CONDITIONAL).

**Round 2:** Each reviewer reads all Round 1 positions, then writes rebuttals. Produces: rebuttals, revised findings, final vote (which may change).

**Synthesis:** The orchestrator analyzes all positions to find:

- Biggest disagreements (where the real trade-offs live)
- Mind changes (which arguments were persuasive)
- Consensus points (what everyone agrees on)
- High-confidence findings (flagged by 2+ reviewers)
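The cross-referencing behind high-confidence findings can be sketched in a few lines. This is an illustrative Python sketch of the idea only, not the skill's actual implementation (which works over the reviewers' markdown position papers); the data shape here is an assumption:

```python
from collections import defaultdict

def high_confidence_findings(reviews, min_reviewers=2):
    """Return findings flagged independently by at least `min_reviewers`.

    `reviews` maps reviewer name -> list of (file, line, issue) tuples.
    Illustrative sketch only; the real skill parses markdown reports.
    """
    flagged_by = defaultdict(set)
    for reviewer, findings in reviews.items():
        for file, line, _issue in findings:
            # Key findings by location so independent reports converge.
            flagged_by[(file, line)].add(reviewer)
    return {
        loc: sorted(names)
        for loc, names in flagged_by.items()
        if len(names) >= min_reviewers
    }
```

When two reviewers with opposite philosophies flag the same `file:line`, it surfaces here regardless of how differently they described the issue.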

Output includes markdown files for each reviewer, a vote tracker, synthesis document, and an interactive HTML report with dark/light theme, filterable findings, and tabbed navigation.

## Installation

### Directory Structure

Create the following structure under your Claude Code skills directory:

```
~/.claude/skills/boardroom-codereview/
  SKILL.md
  resources/
    round1-prompt.md
    round2-prompt.md
    project-briefing-template.md
    report-template.html
    personalities/
      uncle-bob.md
      kent-beck.md
      linus-torvalds.md
      sandi-metz.md
      john-carmack.md
      martin-fowler.md
      charity-majors.md
      dan-abramov.md
```

### Quick Install

```sh
git clone https://github.com/simbrett/boardroom-codereview.git ~/.claude/skills/boardroom-codereview
```

That's it. The repo structure matches the expected skill layout exactly.
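If you want to sanity-check the clone, a minimal sketch that mirrors the expected layout above (the path list is taken from the directory structure shown; nothing here is part of the skill itself):

```python
from pathlib import Path

def verify_skill_layout(skill_dir):
    """Return the expected paths that are missing from a cloned skill."""
    root = Path(skill_dir).expanduser()
    required = [
        root / "SKILL.md",
        root / "resources" / "round1-prompt.md",
        root / "resources" / "round2-prompt.md",
        root / "resources" / "personalities",
    ]
    return [str(p) for p in required if not p.exists()]
```

`verify_skill_layout("~/.claude/skills/boardroom-codereview")` returns an empty list when the layout is complete.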

## Usage

```
/boardroom-codereview <input> [options]
```

### Input (exactly one, required)

| Input | Description |
| --- | --- |
| `branch:<name>` | Review a branch diff against `main` |
| `pr:<number>` | Review a pull request (GitHub or Azure DevOps) |
| `files:<path1>,<path2>,...` | Review specific files |
| `"<question>"` | A design question (no code) |

### Options

| Option | Default | Description |
| --- | --- | --- |
| `--rounds <1\|2>` | `2` | Number of debate rounds |
| `--board <N>` | `8` (all) | Randomly select N reviewers |
| `--reviewers <slug1>,<slug2>` | all | Named selection (overrides `--board`) |
| `--output <path>` | `docs/boardroom-reviews/` | Output directory |
| `--no-html` | off | Skip HTML report generation |

### Examples

```sh
# Full 8-person, 2-round review of a PR
/boardroom-codereview pr:42

# Quick 3-person, 1-round review
/boardroom-codereview pr:42 --board 3 --rounds 1

# Specific reviewers on a branch
/boardroom-codereview branch:feature/auth --reviewers uncle-bob,charity-majors,kent-beck

# Architecture question (no code)
/boardroom-codereview "Should we use event sourcing or CRUD for the order service?"

# Review specific files
/boardroom-codereview files:src/auth/handler.ts,src/auth/middleware.ts
```

## Cost & Time

| Configuration | Agents | Time | Relative Cost |
| --- | --- | --- | --- |
| Full board, 2 rounds | ~17 | 3-5 min | $$$ |
| Full board, 1 round | ~9 | 1-3 min | $$ |
| 3 reviewers, 2 rounds | ~7 | 2-3 min | $ |
| 3 reviewers, 1 round | ~4 | 1-2 min | $ |
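The agent counts above appear to follow a simple formula: one sub-agent per reviewer per round, plus a single synthesis pass. This is an inference from the table, not documented behavior:

```python
def agent_count(reviewers: int, rounds: int) -> int:
    # One sub-agent per reviewer per round, plus one synthesis pass.
    # Assumption inferred from the cost table; not documented by the skill.
    return reviewers * rounds + 1
```

So halving the board or dropping to one round roughly halves the cost, which matches the relative-cost column.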

## Output

By default, output goes to `<repo root>/docs/boardroom-reviews/<YYYY-MM-DD>-<slug>/`:

```
docs/boardroom-reviews/2025-01-15-pr-42/
  report.html          # <-- Start here
  synthesis.md
  vote-tracker.md
  round1/
    uncle-bob.md
    kent-beck.md
    ...
  round2/
    uncle-bob.md
    kent-beck.md
    ...
```

### Where to Look First

`report.html` is the single most valuable output. Open it in a browser and scroll down to the "All Findings" table. It aggregates every issue from every reviewer into one filterable view: filter by severity, confidence threshold, or reviewer, or show only issues flagged by 2+ reviewers (the highest-signal findings).

`synthesis.md` is the narrative companion. The "High-Confidence Findings" section is the most actionable part: it lists issues that multiple reviewers independently flagged, with `file:line` references and severity. When reviewers with opposing philosophies agree something is a problem, it almost certainly is.

Also worth reading in `synthesis.md`:

- **Biggest Fights**: the real trade-off debates where reasonable people disagree
- **Mind Changes**: who changed their vote in Round 2 and what argument convinced them (this is where the boardroom format earns its keep)
- **Conditions for Approval**: the concrete list of what CONDITIONAL voters want before they'd say YES

### HTML Report Features

- Light/dark theme toggle (persists via `localStorage`)
- Board overview cards with reviewer philosophies and votes
- Vote tracker table showing round-by-round vote changes
- Tabbed position papers and rebuttals
- Filterable findings table (by severity, confidence, reviewer, cross-referenced)
- Collapsible synthesis sections
- Print-friendly layout

## Project Context (Optional)

If your project has a `.agents/Knowledge.md` file at the repo root, the boardroom will automatically generate a project briefing and inject it into each reviewer's prompt. This gives the reviewers domain-specific context about your project's conventions, architecture, and known pitfalls.

Without it, reviewers operate on pure software philosophy — still valuable, just less project-aware.

## Customization

### Adding Personalities

Create a new `.md` file in `resources/personalities/` with this frontmatter:

```yaml
---
name: "Display Name"
full_name: "Full Name"
role: "The <Archetype>"
---
```

Then add sections for Philosophy, Review Focus Areas, Key Beliefs, Severity Calibration, What They Praise, What They Attack, Voice & Tone, and Signature Move. See any existing personality file for the format.

### Adjusting for Your VCS

The skill supports GitHub PRs out of the box via `gh pr view`. For Azure DevOps, set a PAT environment variable and the skill will use the ADO REST API. See the `pr:` input handling in `SKILL.md` Step 2 for details.

## License

MIT License. See LICENSE.
