
content(blog): full product team capability map for Claude Code agents#322

Merged
pftg merged 4 commits into master from content/claude-code-xp-team-post on May 4, 2026
Conversation

@pftg (Member) commented May 4, 2026

Summary

Capability-map post for founders considering Claude Code agents. Argues that a solo agent reproduces the dysfunctions of a solo developer - no PM, designer, QA, customer voice, researcher, business strategist, or reflection. Names the twelve roles, five collaborations, and three nested loops we replicated to run continuous discovery + continuous delivery.

Audience: non-technical founder burned by a devshop, considering AI-built MVPs (ICP-E).

Body: 1443 words. Voice audit clean (0 em dashes, "The..." opener at 5% of 79 sentences, mannerism nouns at cap of 5).

Post shape

| Section | Content |
|---|---|
| Hook | Solo agent runs the whole product cycle in one process; the diff compiles, tests are green, the feature solves the wrong problem |
| The roles we replicated | 12-row capability table |
| How the roles collaborate | 5 H3 subsections: goal setting (4 agents), discovery + opportunity mapping (3 agents, opportunity solution trees + impact mapping per Teresa Torres), prioritization + design (PM + Designer + Tech Lead), delivery (XP), reflection |
| Three nested loops | Outer (sprint), inner (development), Ralph loop (Geoffrey Huntley's persistent-iteration technique, shipped as Anthropic's ralph-wiggum plugin) |
| Tools that make this possible | Claude Code, Google Stitch, chrome-devtools MCP, claude-context, skills.sh, CI |
| Team mode beats solo mode | Load-bearing thesis - team-mode collaborative agents avoid the dysfunctions of 90% of orgs |
| Agile and flexibility are non-negotiable for AI | Why rigid rule sets ship yesterday's bias |
| Where to start | Begin with three roles; setup eats one sprint, $4-6 Sonnet / $20-30 Opus per ticket, 30-45 min wall-clock |
| Free audit | 45-min audit CTA |

The twelve roles

Business Strategist (north star) - PM (manages opportunities) - Business Analyst (data signal) - Customer Representative (user voice) - UX Researcher (continuous discovery) - UI/UX Designer (Stitch) - Tech Lead (feasibility + architecture) - Driver/Navigator pair - Critic Panel - QA/Visual Verifier (chrome-devtools MCP) - DevOps Gates (CI) - Reflection/Kaizen (skills.sh).
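The CTA later in this thread offers a `.claude/agents/<role>.md` rule per role. A hedged sketch of what one seat's file could look like, following Claude Code's subagent file format (YAML frontmatter plus a system prompt); the role content is illustrative, not the team's actual rule:

```markdown
---
name: qa-visual-verifier
description: Checks every UI-facing change against the attached design before merge. Use after the driver/navigator pair reports tests green.
tools: Read, Bash
---

You are the QA / Visual Verifier seat. For each ticket:

1. Load the acceptance criteria and the attached design.
2. Render the affected pages via the chrome-devtools MCP at the agreed widths.
3. Reject the diff if any rendered state diverges from the design, and name the divergence precisely.
```

Each of the twelve seats would get its own file of this shape, so the role boundary lives in a rule the agent reads rather than in a human's memory.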

The five collaborations

  1. Goal setting: Strategist + PM + BA + Customer Rep argue from four directions
  2. Discovery + opportunity mapping: PM + UX Researcher + Customer Rep, using opportunity solution trees + impact mapping
  3. Prioritization + design: PM + Designer + Tech Lead, resolved by impact map not by who talks loudest
  4. Delivery: Driver/Nav + Critic Panel + QA + DevOps Gates execute against acceptance criteria with the design attached
  5. Reflection: every role feeds rule updates and skills published to skills.sh

The three loops

  • Outer loop (one to two weeks): sprint cycle - goal setting, discovery, opportunity tree updates
  • Inner loop (minutes to hours): development cycle - Driver/Nav pairing, micro-commits, critic panel after every test pass
  • Ralph loop (seconds to minutes): single agent iterates on its own output until quality bar met (Geoffrey Huntley, ghuntley.com/ralph/, shipped as Anthropic's ralph-wiggum plugin)
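Of the three loops, only the Ralph loop is small enough to capture directly in code. A minimal Python sketch of its shape, assuming hypothetical `generate` and `critique` callables standing in for real agent invocations:

```python
def ralph_loop(generate, critique, max_attempts=10):
    """Re-run a single agent on its own output until a critic accepts it.

    `generate` takes the previous draft (or None) plus critic feedback and
    returns a new draft; `critique` returns (passed, feedback). Both are
    hypothetical stand-ins for real agent calls.
    """
    draft, feedback = None, None
    for attempt in range(1, max_attempts + 1):
        draft = generate(draft, feedback)
        passed, feedback = critique(draft)
        if passed:
            return draft, attempt
    raise RuntimeError(f"quality bar not met after {max_attempts} attempts")
```

The cap on attempts is the important design choice: without it, a critic that can never be satisfied burns tokens forever instead of escalating to the inner loop.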

Why the pivot

Earlier draft was a 6-layer XP workflow guide ("rules → rhythms → critics → gates → reflection"). Reader feedback: "missing PM role, designer, customer voice, business strategist, opportunity mapping, loops" - the workflow guide assumed roles already existed. New shape names the team itself, then names how it works together. The setup is the team-replication, not the workflow.

Internal links (4 verified)

  • /blog/vibe-coding-crisis-ai-code-debt/ (founder pain anchor in lede)
  • /blog/test-driven-development-tdd-in-ruby-step-by-guide-tutorial-bestpractices/ (XP delivery)
  • /blog/refactor-step-tdd-three-line-discipline-ruby/ (micro-commit discipline)
  • /contact-us/ (audit CTA)

External citations

Test plan

  • bin/hugo-build passes (660 pages)
  • Voice audit clean (0 em dashes, "The..." opener density 5%, mannerism nouns at cap)
  • All 4 internal-link slugs verified against docs/blog-post-index.md
  • Cold-eyes critic agent reviewed; fixes applied in place; verdict NEEDS-FIXES → applied → PUBLISH-READY
  • Reviewer to spot-check capability table renders cleanly on mobile (12 rows)
  • Reviewer to verify cost figures ($4-6 Sonnet / $20-30 Opus / 30-45 min wall-clock parallel)
  • Visual regression: pure markdown body changes, no theme/CSS/template edits
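The internal-link check in this plan is mechanical enough to sketch. A hedged illustration, assuming internal links take the `[text](/blog/slug/)` form and that docs/blog-post-index.md mentions each published slug somewhere in its text; the helper name is made up:

```python
import re

def unverified_slugs(post_md: str, index_md: str) -> list[str]:
    """Return internal-link slugs in post_md that do not appear in index_md.

    Only site-internal paths are checked; external URLs are ignored.
    """
    slugs = re.findall(r"\]\((/(?:blog|contact-us)[^)\s]*)\)", post_md)
    return [s for s in slugs if s not in index_md]
```

An empty return list corresponds to the "all 4 internal-link slugs verified" line above.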

Visuals

  • Cover: cover.png (unchanged from prior commits)
  • Pipeline diagram: pipeline.svg (unchanged from prior commits) - the team-replication framing reuses the existing 5-step capability flow

🤖 Generated with Claude Code

Summary by CodeRabbit

  • Documentation
    • Published a new blog post: "How We Run Claude Code Agents Like an XP Team" — introduces a replicated team workflow that models a product org, lists 8 roles, outlines five collaboration flows and three nested loops, documents the tooling chain and phased onboarding, and offers a CTA to submit a one-sentence role gap plus repo URL to get a role rule and one-page staffing recommendation.

@coderabbitai (Bot, Contributor) commented May 4, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds a new non-draft Hugo blog post (2026-05-04) describing a “team-mode” Claude Code agents workflow: defines 8 replicated roles, five collaboration flows, three nested loops (outer sprint, inner dev, and iterative “Ralph loop”), supporting tooling, start-with-three-roles guidance, and a CTA requesting a one-sentence role-gap plus repo URL to receive the corresponding .claude/agents/<role>.md and a one-page staffing recommendation.

Changes

Claude Code XP Team Workflow

All changes land in content/blog/claude-code-xp-team-workflow/index.md:

| Layer | Summary |
|---|---|
| Frontmatter / SEO | Adds full Hugo frontmatter: title, slug, date (2026-05-04), author, canonical URL, cover image + alt, OG metadata, keywords/tags/categories, and related_posts: false. |
| Framing / Premise | Introduces solo-agent vs replicated team-mode framing and the core thesis of rule-enforced collaborative seats. |
| Roles Defined | Enumerates 8 replicated seats: Business Strategist, Product Manager, UI/UX Designer, Tech Lead, Critic Panel, QA / Visual Verifier, DevOps Gates, Reflection / Kaizen, with role responsibilities. |
| Collaboration Flows | Describes five collaboration flows: Pitch & bet, Slicing & design, Delivery (Driver/Navigator micro-steps), QA gates & shipping, and Reflection; ties flows to artifacts and gating behavior. |
| Nested Loops / Cadence | Defines three nested cadences: outer sprint cycle, inner development cycle (micro-commits behind tests + critic runs after test passes), and the "Ralph loop" that re-prompts until a simplicity critic condition is satisfied. |
| Tooling / Integrations | Documents the enforcement chain and tooling: CLAUDE.md and .claude/ role rules, Google Stitch for design constraints, chrome-devtools MCP for visual verification, claude-context semantic search, skills.sh for shared skills, and CI with automated diff review gates. |
| Operational Guidance | Advises phased onboarding (start with strategy + critic panel + reflection), notes overhead/cost and parallel critic timing, and gives ticket/cadence suggestions. |
| CTA / Contact | Adds CTA requesting a one-sentence role-gap plus repo URL to receive the matching .claude/agents/<role>.md rule and a one-page staffing recommendation; links to contact page. |
| Post Notes | Adds inline summary/concluding notes and example visual-verification widths. |

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

Poem

🐰 I hopped through roles and loops tonight,

Eight seats gather round the prompt’s soft light,
Critics peep, the Driver nudges on,
Start with three, then watch the drift be gone,
A tiny audit sends the team to flight.

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title 'content(blog): full product team capability map for Claude Code agents' directly and clearly describes the main changeset: a blog post adding documentation for a team-based capability model for Claude Code agents. It is specific, concise, and accurately reflects the primary addition to the codebase. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |
| Linked Issues check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Out of Scope Changes check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@content/blog/claude-code-xp-team-workflow/index.md`:
- Around line 63-64: The sentence "The cost is real and concrete: nine reviews
plus consolidator passes runs us about $4-6 per post on Sonnet, $20-30 on Opus,
and 45 minutes of wall-clock time when the agents work in parallel." uses an
abstract subject; change it to an agentful construction (e.g., "We track the
cost: our team spends..." or "Our process incurs: we spend...") so the subject
is a person/agent doing the action. Locate that exact sentence in index.md and
replace with a version that applies the "who" test (mentioning "we", "our team",
or "the reviewers") and preserves the numeric cost/time details.
- Around line 21-111: The post currently uses explicit numeric signposting (the
H2s like "## 1. Why solo Claude Code agents ship vibe-coded MVPs", "## 2. The
six layers..." and sections titled "Three contracts", "Three rhythms",
"Multi-reviewer setup via parallel critic agents" with repeated counts like
"three", "nine", "three-line") which violates the reject-rule-of-three and
avoid-signposting rules; update the manuscript by converting the numbered H2s
into unnumbered headings, removing ordinal markers from section titles (e.g.,
change "Three contracts Claude Code reads before drafting" → "Contracts Claude
Code reads before drafting"), replace repeated explicit counts (“three
contracts”, “three rhythms”, “three rounds, three personas”, “nine reviews”,
“three-line refactor”) with non-counted or varied phrasing (e.g., "a small set
of contracts", "multiple review rounds with varied personas", "several reviews",
or specific alternative counts where necessary like "two gates + one contract"
only in the payoff example), and soften signposting language throughout so the
flow reads as layered explanation rather than a numbered list.
- Around line 31-32: The sentence starting with "Six layers feed into each
other" uses an abstract subject and fails the "who test"; rewrite it and any
similar abstract-subject sentences to be agentful (use a person/agent as the
subject). For example, replace "Six layers feed into each other" with an active
construction like "We wire contracts into rhythms," change "the
reflection-back-to-contracts arrow is the one we kept dropping" to "we kept
dropping the reflection-back-to-contracts step for our first three sprints," and
turn "the one that finally lets the next sprint inherit what the last one
learned" into "that finally lets the next sprint inherit what we learned."
Update all analogous sentences to use "we" or another explicit actor so every
sentence passes the who-test.
- Around line 23-24: The sentence fragment "Many of her cohort." is fragmentary
and fails the "who" test; fix it by merging it into the prior sentence or
rewriting it as an agentful sentence so a person performs the action—for example
change "…dropped session tokens for anyone whose email had a plus sign in it.
Many of her cohort." to "…dropped session tokens for anyone whose email had a
plus sign in it, which affected many members of her cohort." or "…dropped
session tokens for anyone whose email had a plus sign in it. Priya discovered it
affected many members of her cohort." Update the line containing the fragment to
ensure each sentence names an actor and completes the idea.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: d4c0c683-341b-4ce2-b400-42995f51a3fb

📥 Commits

Reviewing files that changed from the base of the PR and between 9d8df14 and dbbd0bb.

⛔ Files ignored due to path filters (2)
  • content/blog/claude-code-xp-team-workflow/cover.png is excluded by !**/*.png
  • content/blog/claude-code-xp-team-workflow/pipeline.svg is excluded by !**/*.svg
📒 Files selected for processing (1)
  • content/blog/claude-code-xp-team-workflow/index.md

pftg added a commit that referenced this pull request May 4, 2026
… + who test)

CodeRabbit on PR #322 flagged:
1. (Major, Heavy) Numbered H2 spine "## 1. ... ## 9." plus repeated "Three X"
   counts read as rule-of-three signposting per voice guide.
2. (Major, Quick) L24 "Many of her cohort." fragment fails the who test.
3. (Major, Quick) L32 "Six layers feed into each other..." abstract subject.
4. (Minor, Quick) L64 "The cost is real and concrete..." abstract subject.

Fixes applied:
- Dropped "## N." numeric prefixes from all 9 H2s.
- Softened "## Three contracts ..." → "## Contracts ..." and
  "## Three rhythms ..." → "## Rhythms ...". The body still lists three
  items in each — that's content, not rhetorical structure.
- L24 fragment merged into agentful sentence: "The drop hit many of her
  early users before she noticed."
- L32 §2 opener rewritten with "we" subject and sequential action: "We
  move every post through six layers in order... We feed the reflection
  back into the contracts so the next sprint inherits what we learned.
  Skip one layer and the next one breaks."
- L64 §5 cost sentence rewritten: "We pay real money for this. Nine
  reviews plus consolidator passes costs about $4-6 per post..."

bin/hugo-build: passes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (2)
content/blog/claude-code-xp-team-workflow/index.md (2)

59-61: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Rule-of-three structure is still a primary organizing pattern here

This section still centers repeated triads (“three navigators,” “three rounds,” “three personas”), which keeps the exact pattern the guideline asks to avoid. Please rewrite this flow without recurring “three” framing (describe roles/rounds without count-led scaffolding).

As per coding guidelines, “In blog posts, reject rule-of-three structural patterns, signposting...”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@content/blog/claude-code-xp-team-workflow/index.md` around lines 59 - 61,
Rewrite the passage that currently repeats "three" throughout (lines referencing
"three navigators", "three rounds", "three personas", and the labeled "Round
1/2/3" descriptions) to remove the rule-of-three scaffolding: describe the
workflow as a sequence of iterative review stages with distinct role-focused
reviewers (e.g., a skeptical founder persona, an SEO/probability scorer, a
senior copy editor in the first pass; a technical reviewer, copywriter, and
AI-detector in the next; and later a DevOps reviewer, conversion specialist, and
a final skimmer) but avoid numeric triad framing and explicit counts—use terms
like "initial pass", "subsequent pass", "final pass" and emphasize role
diversity and independent cold checks instead of enumerating "three" at each
step. Ensure the examples of flagged phrases ("blast radius", "suspect list",
"every safe move is one of these") and the finding about the cold AI detector
surfacing additional verbatim LLM phrasings remain, but integrate them into the
narrative without count-based signposting.

81-89: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

Who-test still fails in this reflection block

Several sentences use abstract subjects instead of a person acting (for example, Line 81 and Line 89). Please make these agentive (“we/us/team/member did X”) to satisfy the who-test consistently.

Suggested local rewrite direction
-The most concrete example sits in `docs/90-99-content-strategy/writer-prompt-research-2026-05-04.md`.
+We documented the most concrete example in `docs/90-99-content-strategy/writer-prompt-research-2026-05-04.md`.

-A banned-words list catches none of these.
+We found that our banned-words list catches none of these.

As per coding guidelines, “In blog posts, apply the 'who' test - every sentence needs a person doing something”.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@content/blog/claude-code-xp-team-workflow/index.md` around lines 81 - 89, The
reflection block that begins with "The most concrete example..." uses abstract
subjects and passive framing (e.g., phrases like "the developer", "the team",
and constructions such as "X is the canonical Y"); rewrite those sentences to
pass the who-test by making them agentive (use "we/us/the team/a member" as the
actor) and convert passive/abstract statements into concrete actions (e.g., "we
audited 5,046 words and found five tells" rather than "the audit... turned up
five tells"); update the list items so each describes who did what (e.g., "we
observed mannerism-noun refrain..." or "our audit flagged round-number anchor
density...") and ensure any example phrases ("the developer", "the team", "X is
the canonical Y") are recast to show an explicit agent performing the evaluation
or statement.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@content/blog/claude-code-xp-team-workflow/index.md`:
- Around line 59-61: Rewrite the passage that currently repeats "three"
throughout (lines referencing "three navigators", "three rounds", "three
personas", and the labeled "Round 1/2/3" descriptions) to remove the
rule-of-three scaffolding: describe the workflow as a sequence of iterative
review stages with distinct role-focused reviewers (e.g., a skeptical founder
persona, an SEO/probability scorer, a senior copy editor in the first pass; a
technical reviewer, copywriter, and AI-detector in the next; and later a DevOps
reviewer, conversion specialist, and a final skimmer) but avoid numeric triad
framing and explicit counts—use terms like "initial pass", "subsequent pass",
"final pass" and emphasize role diversity and independent cold checks instead of
enumerating "three" at each step. Ensure the examples of flagged phrases ("blast
radius", "suspect list", "every safe move is one of these") and the finding
about the cold AI detector surfacing additional verbatim LLM phrasings remain,
but integrate them into the narrative without count-based signposting.
- Around line 81-89: The reflection block that begins with "The most concrete
example..." uses abstract subjects and passive framing (e.g., phrases like "the
developer", "the team", and constructions such as "X is the canonical Y");
rewrite those sentences to pass the who-test by making them agentive (use
"we/us/the team/a member" as the actor) and convert passive/abstract statements
into concrete actions (e.g., "we audited 5,046 words and found five tells"
rather than "the audit... turned up five tells"); update the list items so each
describes who did what (e.g., "we observed mannerism-noun refrain..." or "our
audit flagged round-number anchor density...") and ensure any example phrases
("the developer", "the team", "X is the canonical Y") are recast to show an
explicit agent performing the evaluation or statement.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 35d1a1ab-8756-4963-9d6c-fc9cd481a546

📥 Commits

Reviewing files that changed from the base of the PR and between dbbd0bb and 5abca44.

📒 Files selected for processing (1)
  • content/blog/claude-code-xp-team-workflow/index.md

pftg added a commit that referenced this pull request May 4, 2026
… 10 posts

After driving PR #322 score 52→est. 22-28 across 3 cold-pass sweeps, an
empirical audit on the 10 most recent posts surfaced patterns the new
pipeline (PR #319) doesn't catch. Add them as TEACHING (not gates).

Audit findings (10 posts, ~22k words):
- Long paragraphs >3 sentences in 39-58% of TDD/XP cluster (cap unenforced)
- Anaphora pairs (same first-word sentences in same paragraph) in 10/10 posts
- "The..." paragraph-opener density at 18-33% in 6/10 posts (target ≤20%)
- "We..." paragraph-opener stamping in TDD/XP cluster
- Mannerism-noun saturation: 9-18 per post (cap is 3 combined)
  - The post's own architectural labels become the tell once stamped
    repeatedly (`contract`, `gate`, `gatekeeper`, `layer`, `round`,
    `review`, `pass` joined the original mannerism set)
- Architecture enumeration in body restating the H2 structure
- Formulaic founder-anecdote hook (Priya/Marcus/Sarah opener) in last 5
  founder-pain posts - became its own template
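For context, the opener-density figures above boil down to a simple ratio over prose paragraphs. A rough Python sketch of how such a measurement could work (illustrative only; the commit deliberately ships these findings as teaching examples rather than CI gates):

```python
import re

def opener_density(body: str, opener: str = "The") -> float:
    """Fraction of prose paragraphs whose first word is `opener`.

    Paragraphs are blank-line-separated blocks; blocks that look like
    headings, lists, tables, quotes, or code fences are skipped. A rough
    sketch of the voice audit, not the production tool.
    """
    skip = ("#", "-", "*", "|", ">", "```")
    paras = [b.strip() for b in re.split(r"\n\s*\n", body)
             if b.strip() and not b.strip().startswith(skip)]
    if not paras:
        return 0.0
    hits = sum(1 for p in paras if p.split()[0] == opener)
    return hits / len(paras)
```

Against the 20% cap for "The..." openers, a post fails when `opener_density(body, "The") > 0.20`; the same ratio with `opener="We"` covers the 15% cap.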

Voice guide additions (new rows in Banned structural patterns table):
- Anaphora pairs - read aloud, no two sentences per paragraph share
  first word
- "The..." opener density - cap 20% of paragraphs
- "We..." opener density - cap 15% of paragraphs
- Long paragraphs (cap break) - 3 sentences AND 70 words working cap
- Architecture enumeration - let H2s carry structure, don't list pillars
  in body
- Formulaic founder-anecdote hook - the recurring shape is now the tell

Mannerism-noun row extended:
- Cap raised from 3 to 5 (acknowledges some posts genuinely need labels)
- New mannerisms added to the watch list: cadence, rhythm, contract,
  gate, gatekeeper, layer, round, review, pass

Writer prompt additions (STEP 4b cadence quotas):
- Paragraph cap: 3 sentences AND 70 words (was just 3 sentences)
- Anaphora rule
- Opener variety (both "The..." and "We..." capped)
- Mannerism cap raised to 5, watch list extended
- Architecture-enumeration ban
- Hook variety guidance

Writer prompt additions (STEP 4c BAD/GOOD pairs):
- Anaphora pair rewrite
- Architecture enumeration rewrite
- Long-paragraph split example
- Mannerism-noun saturation rewrite
- Formulaic founder-anecdote hook with alternative opener shape
- "We" opener stamping with rewrite

These are TEACHING additions (concrete BAD/GOOD pairs, surface-feature
descriptions). No tool recommendations, no programmatic gates. The LLM
learns from the patterns, not from a CI script.

bin/hugo-build: passes (docs-only).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@pftg pftg force-pushed the content/claude-code-xp-team-post branch from dcff9fd to 51657a1 Compare May 4, 2026 18:15
@pftg pftg force-pushed the content/claude-code-xp-team-post branch from a9a3f55 to 7abeccf Compare May 4, 2026 18:54
@pftg pftg changed the title content(blog): claude code agents as an XP team — first post under new pipeline content(blog): full product team capability map for Claude Code agents May 4, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
content/blog/claude-code-xp-team-workflow/index.md (2)

70-73: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Remove explicit “Three …” structural framing here.

Line 70 and Line 72 still use explicit rule-of-three signposting (“Three nested loops”, “Three cadences…”), which conflicts with the blog-structure rule.

Suggested edit
-## Three nested loops
+## Nested loops in the workflow

-Three cadences run in parallel, each with a different time horizon.
+These cadences run in parallel, and each one operates on a different time horizon.

As per coding guidelines, “In blog posts, reject rule-of-three structural patterns, signposting, bold inline-header lists...”.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@content/blog/claude-code-xp-team-workflow/index.md` around lines 70 - 73,
Remove the explicit "Three ..." rule-of-three signposting: rename or replace the
heading "## Three nested loops" with a neutral descriptive heading (e.g.,
"Nested cadence loops" or "Cadence loops") and rephrase the sentence "Three
cadences run in parallel, each with a different time horizon." to describe the
content without enumerating ("Multiple cadences run in parallel, each with a
different time horizon." or similar). Ensure any other inline mentions in this
section avoid the numeric signposting pattern and instead use descriptive
language.

68-80: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Rewrite abstract-subject sentences to pass the “who” test.

This block still has abstract subjects (for example Line 68 and Line 80). Rephrase with explicit actors (“we”, “the team”, “the agent”) to meet the required style.

Suggested edit
-Next sprint inherits the lesson. Roles that don't feed reflection drift; roles that do compound.
+We carry these lessons into the next sprint. Teams that skip reflection drift, and teams that run it consistently compound progress.

-The loops nest: outer scope informs inner work, and Ralph catches small failures inside the inner loop without escalating to a human.
+We nest these loops so sprint scope guides ticket work, and we use Ralph to catch small failures inside the inner loop before we escalate to a human.

As per coding guidelines, “In blog posts, apply the 'who' test - every sentence needs a person doing something”.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@content/blog/claude-code-xp-team-workflow/index.md` around lines 68 - 80, The
paragraph describing the three nested loops uses abstract subjects; rewrite
sentences under the "Three nested loops" section (the "Outer loop", "Inner
loop", and "Ralph loop" descriptions) to name explicit actors—e.g., "we", "the
team", "the driver and navigator", "the agent"—so every sentence passes the
"who" test; for example change passive/abstract lines like "Cadence: one to two
weeks" and "Cadence: seconds to minutes per attempt" by anchoring them to actors
("The team runs this cadence: one to two weeks", "The agent retries until the
bar is met: seconds to minutes"), and update any other sentences in that block
to explicitly state who performs discovery, updates opportunity trees, pairs on
tickets, runs the critic panel, or iterates with the Ralph loop.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@content/blog/claude-code-xp-team-workflow/index.md`:
- Line 90: The sentence "Dysfunctions in 90% of organizations come from one role
swallowing another's job..." makes an uncited numeric claim; either add a
reliable source citation for the "90%" statistic or change the wording to a
softer observational statement (e.g., "Dysfunctions in many organizations..." or
"In many teams, dysfunction arises when one role swallows another's job") so it
no longer asserts an unsupported percentage—update the sentence in the markdown
(the line containing that exact phrase) accordingly and, if adding a citation,
include a reference link or footnote.

---

Duplicate comments:
In `@content/blog/claude-code-xp-team-workflow/index.md`:
- Around line 70-73: Remove the explicit "Three ..." rule-of-three signposting:
rename or replace the heading "## Three nested loops" with a neutral descriptive
heading (e.g., "Nested cadence loops" or "Cadence loops") and rephrase the
sentence "Three cadences run in parallel, each with a different time horizon."
to describe the content without enumerating ("Multiple cadences run in parallel,
each with a different time horizon." or similar). Ensure any other inline
mentions in this section avoid the numeric signposting pattern and instead use
descriptive language.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 42c5d6fe-000a-4e31-a802-f45b724e7869

📥 Commits

Reviewing files that changed from the base of the PR and between 51657a1 and 4797a06.

⛔ Files ignored due to path filters (2)
  • content/blog/claude-code-xp-team-workflow/cover.png is excluded by !**/*.png
  • content/blog/claude-code-xp-team-workflow/pipeline.svg is excluded by !**/*.svg
📒 Files selected for processing (1)
  • content/blog/claude-code-xp-team-workflow/index.md

Comment thread content/blog/claude-code-xp-team-workflow/index.md Outdated

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
content/blog/claude-code-xp-team-workflow/index.md (1)

66-75: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Agentless subjects in the cadence section still fail the “who” test

Several sentences use abstract subjects (“The sprint cycle runs…”, “The development cycle runs…”, “Each cadence nests…”). Rewrite this block with an explicit actor (“we/team/agents”) so each sentence names who is doing the work.

As per coding guidelines, “In blog posts, apply the 'who' test - every sentence needs a person doing something.”

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@content/blog/claude-code-xp-team-workflow/index.md` around lines 66-75, the
three abstract sentences under "## The cadences we run" (the ones beginning "The
sprint cycle runs…", "The development cycle runs…", and "Each cadence nests
inside the larger one") and the Ralph loop sentence need to be rewritten to name
the actor explicitly (e.g., "we", "the team", or "agents"); replace phrases like
"The sprint cycle runs…" with "Our sprint cycle runs…" or "The team runs the
sprint cycle…", change "The development cycle runs…" to "We run the development
cycle…" or similar, and make "Each cadence nests…" say "Each cadence we run
nests…" so every sentence passes the 'who' test while preserving the original
meaning and cadence details.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@content/blog/claude-code-xp-team-workflow/index.md`:
- Line 102: The cost and time figures currently stated ("$4-6", "$20-30", "30-45
minutes") are presented without measurement context; update the sentence to
include precise benchmarking details: state the date of measurement, the exact
model/version used (e.g., Sonnet vX.Y, Opus vA.B), the slice size assumptions
(one AC item defined as 50–100 lines of diff), run conditions (critics run in
parallel, number of critics/threads), hardware/cloud instance used, whether
Ralph-loop retries were included, and a note on variance/95% CI so readers
understand this is a specific measured example rather than a universal
benchmark. Ensure the revised line replaces the original numeric claims and
includes these qualifiers inline with the cost/time figures.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: ef2e25ce-1d0f-4510-841f-db6710f6c7ff

📥 Commits

Reviewing files that changed from the base of the PR and between 4797a06 and 0696140.

📒 Files selected for processing (1)
  • content/blog/claude-code-xp-team-workflow/index.md


Setup eats one sprint of overhead. Rule files need writing, agent definitions need configuring, and the team needs convincing. From sprint two the time disappears.

Cost on a small slice (one AC item, 50-100 lines of diff) runs $4-6 on Sonnet or $20-30 on Opus, plus 30-45 minutes of wall-clock time when critics run in parallel. Larger bets and Ralph-loop retries push the bill higher, but it's still cheaper than the rework on a vibe-coded PR that ships the wrong feature.
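
As a sketch of how figures like these could be sanity-checked, the back-of-envelope model below estimates per-ticket API cost from token volume, critic count, and a prompt-cache discount. Every number in it (prices, token counts, cache hit rate, discount) is an illustrative assumption, not a measured value from the post.

```python
# Back-of-envelope API cost model for one ticket run with parallel critics.
# All numeric defaults below are illustrative assumptions, not measured rates.

def ticket_cost(input_tokens: int, output_tokens: int,
                critics: int = 4,
                in_price_per_m: float = 3.0,    # $/1M input tokens (assumed)
                out_price_per_m: float = 15.0,  # $/1M output tokens (assumed)
                cached_fraction: float = 0.8,   # share of input served from cache
                cache_discount: float = 0.9) -> float:
    """Estimate dollars for one ticket: a main run plus N parallel critics.

    Prompt caching cuts the effective price of the cached input share,
    which is why several critics re-loading the same project context
    stay cheap relative to cold runs.
    """
    runs = 1 + critics
    effective_in = input_tokens * (1 - cached_fraction * cache_discount)
    in_cost = runs * effective_in / 1e6 * in_price_per_m
    out_cost = runs * output_tokens / 1e6 * out_price_per_m
    return in_cost + out_cost

cost = ticket_cost(input_tokens=150_000, output_tokens=8_000)
```

Turning the cache off in this model (set `cached_fraction=0.0`) shows why the post's qualifier about prompt caching matters: the input side of the bill multiplies with every parallel critic.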
⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Cost figures need concrete measurement context to avoid sounding like universal benchmarks

$4-6 / $20-30 and 30-45 minutes are useful, but they read as broadly applicable. Add scope details (when measured, model/version, slice size assumptions, and run conditions) so this stays practitioner-grade and not a generalized claim.

As per coding guidelines, “In blog posts, avoid … fake authority generalization” and “apply the 'practitioner' test - replace generalized scenarios with specific incidents.”


pftg and others added 3 commits May 5, 2026 00:49
Initial post creation (cover.png, pipeline.svg, index.md) under the new
writer-prompt pipeline (PR #319). First framing was an XP-workflow
narrative anchored in a Priya founder-incident hook with a concrete
ticket walkthrough.

Iterated through 8 review rounds before the framing pivot:

- Initial draft: 2,147 body words, 9-section structure with numbered
  H2s, Priya/Veracode-45% hook, three-contracts + three-rhythms
  + nine-reviews scaffold.
- 4 CodeRabbit findings: dropped numeric H2 prefixes, fixed Priya
  fragment ("Many of her cohort." merged into prior sentence),
  rewrote abstract subjects with agentful "we" constructions.
- AI-tell sweep across 18 patterns: removed "Not X but Y", three-
  fragment closers, definitional opener clusters, slogany closers.
  Score 52 -> est. 22-28.
- Trim to 460 words (per user feedback "remove fluffy"): dropped
  Priya hook entirely, removed architecture-enumeration paragraphs.
- Pivot to code-development walkthrough: real Rails project examples
  (.claude/agents/, .claude/skills/, docs/). +130 words for "What one
  ticket looks like" walkthrough.
- Path rename: .ai-rulez/rules/ -> .claude/rules/ for tool-agnostic
  framing.
- Round 1 (3 reviewers - founder skeptic, AI detector at 72/100,
  cut-test editor): 14 fixes including renamed mannerism-noun H2s,
  cut filler restatements, replaced 3-incident parallel triplet with
  2 anonymized client-anchored examples.
- Round 2 (5 reviewers - AI detector, senior Rails dev, DevOps
  practitioner, AI-skeptical engineer, tired skimmer): 12 fixes
  including replaced fabricated .claude/rules/ with real Claude Code
  conventions (CLAUDE.md + rule files), removed wrong dependent:
  :destroy "framework behavior" example, fixed Rails 8.1 -> Rails 8,
  added cost transparency ($4-6 Sonnet / $20-30 Opus / 45min wall-clock).

State at end of phase: 932 body words, 8-section XP-workflow post.
Voice audit: 0 em dashes, "The..." opener at 10%, mannerism-noun
count 7.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Major restructure: dropped the XP-workflow narrative framing in favor of
a capability-map structure answering "what roles did we replicate, how
do they collaborate, what cadences do they run, why does team-mode beat
solo-mode."

Two passes in this phase:

- Restructure to capability map: 12-role table (Business Strategist,
  Product Manager, Business Analyst, Customer Representative, UX
  Researcher, UI/UX Designer, Tech Lead, Driver/Navigator, Critic Panel,
  QA/Visual Verifier, DevOps Gates, Reflection/Kaizen). Five
  collaborations and three nested loops (sprint outer, development
  inner, Ralph loop). Initially cited Teresa Torres' continuous
  discovery + opportunity solution trees + impact mapping.

- Methodology alignment with elital docs: audit revealed the post's
  methodology claims did not match what elital actually runs. Fixes:
  - Replaced Teresa Torres references with Shape Up
    (basecamp.com/shapeup) + Mikado method - elital runs pitches in
    docs/pitches/, three-lens betting, Mikado decomposition.
  - Dropped opportunity-solution-tree keyword from frontmatter.
  - Renamed "How the roles collaborate" -> "How the team works
    together" (drop signposting count).
  - Renamed "Three nested loops" -> "The cadences we run"
    (CodeRabbit-flagged signposting).
  - Replaced "Dysfunctions in 90% of organizations" fake-authority
    statistic with "The dysfunction we keep seeing in product teams"
    (observational, no fabricated number).
  - Added elital agent names (lead-shaper for PM, critic-discovery for
    UX Researcher) to the "Where it lives" column where real.

State at end of phase: 1,488 body words, 12-role capability map.
Voice audit: 0 em dashes, "The..." opener density 16%.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…quash + attribution)

Three polish rounds collapsed into one phase, all on the capability-map
structure:

- 5-persona reader review: readers Marcus (founder), Elena (senior
  Rails), Tom (Shape-Up practitioner), Priya (skimmer), Sam
  (conversion). 7 consensus fixes:
  1. Dropped Mikado method citation (Mikado is revert-on-failure
     refactoring, not slicing decomposition).
  2. Honest Shape-Up framing: 3-lens betting and 1-2w cadence labelled
     as "Shape-Up-inspired" with explicit cycle-length note.
  3. Replaced "bet portfolio" (not a Shape Up term) with "decides
     which bets to ship".
  4. Cost qualifier scoped: $4-6 Sonnet now scoped to "small slice
     (one AC item, 50-100 lines of diff)".
  5. Simplified Ralph paragraph: dropped Wiggum name detail and
     founder reference, tightened to one sentence + Anthropic plugin
     link.
  6. Renamed "Agile and flexibility are non-negotiable for AI" ->
     "Why the rules need to change every sprint" (less thinkfluencer).
  7. Replaced single repo-URL audit with two-tier CTA: low-friction
     role-file ask as primary entry, repo audit as deeper option.

- Role squash 12 -> 8 per user feedback. Collapsed capabilities that
  real product teams treat as one role:
  - PM + Business Analyst + Customer Representative -> Product Manager
  - UX Researcher + UI/UX Designer -> UI/UX Designer
  - Tech Lead + Driver/Navigator pair -> Tech Lead

- Round 4 fixes (5 fresh-context reviewer agents - skeptical EM, ICP
  solo founder, AI slop hunter, senior staff engineer, cold-eyes
  editor). User-greenlit fixes:
  - Shape Up: canonical Basecamp terms (Problem, Appetite, Solution,
    Rabbit Holes, No-Gos) instead of paraphrased list.
  - Tidy First: credit Kent Beck for the structural-vs-behavioral
    commit discipline (staff-engineer reviewer flagged missing
    attribution).
  - Stripped 2 aphoristic closers ("Disagreement gets resolved by the
    artifact..." / "Roles that don't feed reflection drift...") -
    banned slogany-parallel form per voice guide.
  - Varied 3 "The..." paragraph openers; density 30% -> 16.7% (voice
    guide caps at 20%).
  - Collapsed dual CTA into one with two named deliverables
    (.claude/agents/<role>.md + 1-page note on next two seats).

  Hook fix postponed: anonymized-incident hook archetype is overused
  across recent posts; saved to memory as hook-bank-diversity feedback.

Final state: 1,533 body words, 8-role capability map. Voice audit:
0 em dashes, "The..." opener density 16.7%, mannerism-noun count 7.

bin/hugo-build: passes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
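
The voice-audit figures quoted in these commit messages (em-dash count, "The..." opener density) imply a simple mechanical check. Below is a minimal sketch of such an audit; the metrics are inferred from the commit notes, not taken from the project's real tooling, and the sentence splitter is a simplification.

```python
import re

def voice_audit(body: str) -> dict:
    """Count em dashes and the share of sentences opening with 'The '.

    Mirrors the audit figures quoted in the commit messages; naive
    sentence splitting on ., !, ? followed by whitespace.
    """
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", body.strip()) if s]
    the_openers = sum(1 for s in sentences if s.startswith("The "))
    return {
        "em_dashes": body.count("\u2014"),  # U+2014 EM DASH
        "sentences": len(sentences),
        "the_opener_pct": round(100 * the_openers / max(len(sentences), 1), 1),
    }

report = voice_audit("The loop nests. We ship weekly. Critics run in parallel.")
```

A guide capping "The..." openers at 20% would flag this three-sentence sample, where one of three sentences opens that way.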
@pftg pftg force-pushed the content/claude-code-xp-team-post branch from 0696140 to 90913c2 Compare May 4, 2026 22:50
CodeRabbit asked for benchmark-style qualifiers on the cost line
($4-6 / $20-30 / 30-45 min). Most of the asked-for additions don't
fit blog voice or aren't applicable: hardware/cloud instance is N/A
for API calls, 95% CI requires sampling data we don't have, model
versions ages the post, date is in frontmatter, slice size is
already specified, Ralph-loop retry exclusion is already implied.

Two genuinely missing qualifiers added:
- "with prompt caching on" - the math assumes prompt caching is
  active (4 parallel critics re-loading project context blow past
  these numbers without caching). Staff-engineer reviewer flagged
  this earlier; CodeRabbit missed it.
- "the 4-agent critic panel (PM, Designer, Rails, Simplicity)" -
  names the parallel-run count so the cost claim is falsifiable.
  Matches the critic-panel row in the capability table.

bin/hugo-build: passes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>