content(blog): full product team capability map for Claude Code agents #322
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
📝 Walkthrough
Adds a new non-draft Hugo blog post (2026-05-04) describing a "team-mode" Claude Code agents workflow: it defines 8 replicated roles, five collaboration flows, three nested loops (outer sprint, inner dev, and iterative "Ralph loop"), supporting tooling, start-with-three-roles guidance, and a CTA requesting a one-sentence role-gap plus repo URL to receive the corresponding
Changes: Claude Code XP Team Workflow
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~3 minutes
🚥 Pre-merge checks: ✅ 5 passed
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@content/blog/claude-code-xp-team-workflow/index.md`:
- Around line 63-64: The sentence "The cost is real and concrete: nine reviews
plus consolidator passes runs us about $4-6 per post on Sonnet, $20-30 on Opus,
and 45 minutes of wall-clock time when the agents work in parallel." uses an
abstract subject; change it to an agentful construction (e.g., "We track the
cost: our team spends..." or "Our process incurs: we spend...") so the subject
is a person/agent doing the action. Locate that exact sentence in index.md and
replace with a version that applies the "who" test (mentioning "we", "our team",
or "the reviewers") and preserves the numeric cost/time details.
- Around line 21-111: The post currently uses explicit numeric signposting (the
H2s like "## 1. Why solo Claude Code agents ship vibe-coded MVPs", "## 2. The
six layers..." and sections titled "Three contracts", "Three rhythms",
"Multi-reviewer setup via parallel critic agents" with repeated counts like
"three", "nine", "three-line") which violates the reject-rule-of-three and
avoid-signposting rules; update the manuscript by converting the numbered H2s
into unnumbered headings, removing ordinal markers from section titles (e.g.,
change "Three contracts Claude Code reads before drafting" → "Contracts Claude
Code reads before drafting"), replace repeated explicit counts (“three
contracts”, “three rhythms”, “three rounds, three personas”, “nine reviews”,
“three-line refactor”) with non-counted or varied phrasing (e.g., "a small set
of contracts", "multiple review rounds with varied personas", "several reviews",
or specific alternative counts where necessary like "two gates + one contract"
only in the payoff example), and soften signposting language throughout so the
flow reads as layered explanation rather than a numbered list.
- Around line 31-32: The sentence starting with "Six layers feed into each
other" uses an abstract subject and fails the "who test"; rewrite it and any
similar abstract-subject sentences to be agentful (use a person/agent as the
subject). For example, replace "Six layers feed into each other" with an active
construction like "We wire contracts into rhythms," change "the
reflection-back-to-contracts arrow is the one we kept dropping" to "we kept
dropping the reflection-back-to-contracts step for our first three sprints," and
turn "the one that finally lets the next sprint inherit what the last one
learned" into "that finally lets the next sprint inherit what we learned."
Update all analogous sentences to use "we" or another explicit actor so every
sentence passes the who-test.
- Around line 23-24: The sentence fragment "Many of her cohort." is fragmentary
and fails the "who" test; fix it by merging it into the prior sentence or
rewriting it as an agentful sentence so a person performs the action—for example
change "…dropped session tokens for anyone whose email had a plus sign in it.
Many of her cohort." to "…dropped session tokens for anyone whose email had a
plus sign in it, which affected many members of her cohort." or "…dropped
session tokens for anyone whose email had a plus sign in it. Priya discovered it
affected many members of her cohort." Update the line containing the fragment to
ensure each sentence names an actor and completes the idea.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: d4c0c683-341b-4ce2-b400-42995f51a3fb
⛔ Files ignored due to path filters (2)
content/blog/claude-code-xp-team-workflow/cover.png is excluded by !**/*.png
content/blog/claude-code-xp-team-workflow/pipeline.svg is excluded by !**/*.svg
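The run info above references a repo-local `.coderabbit.yaml` with the CHILL profile and `!**/*.png` / `!**/*.svg` path filters. A minimal sketch of what such a file could look like; treat the exact key names as assumptions to verify against CodeRabbit's configuration reference rather than something taken from this PR:

```yaml
# Hypothetical .coderabbit.yaml sketch (key names assumed, not copied from this PR)
reviews:
  profile: chill          # matches "Review profile: CHILL" in the run info
  path_filters:
    - "!**/*.png"         # would exclude cover.png, as reported above
    - "!**/*.svg"         # would exclude pipeline.svg
```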
📒 Files selected for processing (1)
content/blog/claude-code-xp-team-workflow/index.md
… + who test) CodeRabbit on PR #322 flagged:
1. (Major, Heavy) Numbered H2 spine "## 1. ... ## 9." plus repeated "Three X" counts read as rule-of-three signposting per voice guide.
2. (Major, Quick) L24 "Many of her cohort." fragment fails the who test.
3. (Major, Quick) L32 "Six layers feed into each other..." abstract subject.
4. (Minor, Quick) L64 "The cost is real and concrete..." abstract subject.
Fixes applied:
- Dropped "## N." numeric prefixes from all 9 H2s.
- Softened "## Three contracts ..." → "## Contracts ..." and "## Three rhythms ..." → "## Rhythms ...". The body still lists three items in each; that's content, not rhetorical structure.
- L24 fragment merged into agentful sentence: "The drop hit many of her early users before she noticed."
- L32 §2 opener rewritten with "we" subject and sequential action: "We move every post through six layers in order... We feed the reflection back into the contracts so the next sprint inherits what we learned. Skip one layer and the next one breaks."
- L64 §5 cost sentence rewritten: "We pay real money for this. Nine reviews plus consolidator passes costs about $4-6 per post..."
bin/hugo-build: passes.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
♻️ Duplicate comments (2)
content/blog/claude-code-xp-team-workflow/index.md (2)
Lines 59-61: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Rule-of-three structure is still a primary organizing pattern here
This section still centers repeated triads (“three navigators,” “three rounds,” “three personas”), which keeps the exact pattern the guideline asks to avoid. Please rewrite this flow without recurring “three” framing (describe roles/rounds without count-led scaffolding).
As per coding guidelines, “In blog posts, reject rule-of-three structural patterns, signposting...”
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@content/blog/claude-code-xp-team-workflow/index.md` around lines 59 - 61, Rewrite the passage that currently repeats "three" throughout (lines referencing "three navigators", "three rounds", "three personas", and the labeled "Round 1/2/3" descriptions) to remove the rule-of-three scaffolding: describe the workflow as a sequence of iterative review stages with distinct role-focused reviewers (e.g., a skeptical founder persona, an SEO/probability scorer, a senior copy editor in the first pass; a technical reviewer, copywriter, and AI-detector in the next; and later a DevOps reviewer, conversion specialist, and a final skimmer) but avoid numeric triad framing and explicit counts—use terms like "initial pass", "subsequent pass", "final pass" and emphasize role diversity and independent cold checks instead of enumerating "three" at each step. Ensure the examples of flagged phrases ("blast radius", "suspect list", "every safe move is one of these") and the finding about the cold AI detector surfacing additional verbatim LLM phrasings remain, but integrate them into the narrative without count-based signposting.
Lines 81-89: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift
Who-test still fails in this reflection block
Several sentences use abstract subjects instead of a person acting (for example, Line 81 and Line 89). Please make these agentive (“we/us/team/member did X”) to satisfy the who-test consistently.
Suggested local rewrite direction
-The most concrete example sits in `docs/90-99-content-strategy/writer-prompt-research-2026-05-04.md`.
+We documented the most concrete example in `docs/90-99-content-strategy/writer-prompt-research-2026-05-04.md`.
-A banned-words list catches none of these.
+We found that our banned-words list catches none of these.
As per coding guidelines, "In blog posts, apply the 'who' test - every sentence needs a person doing something".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@content/blog/claude-code-xp-team-workflow/index.md` around lines 81 - 89, The reflection block that begins with "The most concrete example..." uses abstract subjects and passive framing (e.g., phrases like "the developer", "the team", and constructions such as "X is the canonical Y"); rewrite those sentences to pass the who-test by making them agentive (use "we/us/the team/a member" as the actor) and convert passive/abstract statements into concrete actions (e.g., "we audited 5,046 words and found five tells" rather than "the audit... turned up five tells"); update the list items so each describes who did what (e.g., "we observed mannerism-noun refrain..." or "our audit flagged round-number anchor density...") and ensure any example phrases ("the developer", "the team", "X is the canonical Y") are recast to show an explicit agent performing the evaluation or statement.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 35d1a1ab-8756-4963-9d6c-fc9cd481a546
📒 Files selected for processing (1)
content/blog/claude-code-xp-team-workflow/index.md
… 10 posts
After driving PR #322 score 52→est. 22-28 across 3 cold-pass sweeps, an empirical audit on the 10 most recent posts surfaced patterns the new pipeline (PR #319) doesn't catch. Add them as TEACHING (not gates).
Audit findings (10 posts, ~22k words):
- Long paragraphs >3 sentences in 39-58% of TDD/XP cluster (cap unenforced)
- Anaphora pairs (same first-word sentences in same paragraph) in 10/10 posts
- "The..." paragraph-opener density at 18-33% in 6/10 posts (target ≤20%)
- "We..." paragraph-opener stamping in TDD/XP cluster
- Mannerism-noun saturation: 9-18 per post (cap is 3 combined)
- The post's own architectural labels become the tell once stamped repeatedly (`contract`, `gate`, `gatekeeper`, `layer`, `round`, `review`, `pass` joined the original mannerism set)
- Architecture enumeration in body restating the H2 structure
- Formulaic founder-anecdote hook (Priya/Marcus/Sarah opener) in last 5 founder-pain posts - became its own template
Voice guide additions (new rows in Banned structural patterns table):
- Anaphora pairs - read aloud, no two sentences per paragraph share first word
- "The..." opener density - cap 20% of paragraphs
- "We..." opener density - cap 15% of paragraphs
- Long paragraphs (cap break) - 3 sentences AND 70 words working cap
- Architecture enumeration - let H2s carry structure, don't list pillars in body
- Formulaic founder-anecdote hook - the recurring shape is now the tell
Mannerism-noun row extended:
- Cap raised from 3 to 5 (acknowledges some posts genuinely need labels)
- New mannerisms added to the watch list: cadence, rhythm, contract, gate, gatekeeper, layer, round, review, pass
Writer prompt additions (STEP 4b cadence quotas):
- Paragraph cap: 3 sentences AND 70 words (was just 3 sentences)
- Anaphora rule
- Opener variety (both "The..." and "We..." capped)
- Mannerism cap raised to 5, watch list extended
- Architecture-enumeration ban
- Hook variety guidance
Writer prompt additions (STEP 4c BAD/GOOD pairs):
- Anaphora pair rewrite
- Architecture enumeration rewrite
- Long-paragraph split example
- Mannerism-noun saturation rewrite
- Formulaic founder-anecdote hook with alternative opener shape
- "We" opener stamping with rewrite
These are TEACHING additions (concrete BAD/GOOD pairs, surface-feature descriptions). No tool recommendations, no programmatic gates. The LLM learns from the patterns, not from a CI script.
bin/hugo-build: passes (docs-only).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
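The cadence quotas above (opener-density caps, the 3-sentence / 70-word paragraph cap, the anaphora rule) are framed as teaching material rather than CI gates, but they are mechanically checkable. A rough sketch of such a checker; the thresholds come from the commit message, while the paragraph- and sentence-splitting heuristics are assumptions:

```python
import re

def audit(post: str) -> dict:
    """Naive cadence-quota audit: opener densities, long paragraphs, anaphora."""
    paragraphs = [p.strip() for p in post.split("\n\n") if p.strip()]
    the_openers = sum(p.startswith("The ") for p in paragraphs)
    we_openers = sum(p.startswith("We ") for p in paragraphs)
    long_paras, anaphora = [], []
    for p in paragraphs:
        # Crude sentence split on terminal punctuation followed by whitespace.
        sentences = [s for s in re.split(r"(?<=[.!?])\s+", p) if s]
        # Working cap from the commit message: 3 sentences AND 70 words.
        if len(sentences) > 3 or len(p.split()) > 70:
            long_paras.append(p[:40])
        # Anaphora rule: no two sentences in a paragraph share a first word.
        first_words = [s.split()[0] for s in sentences if s.split()]
        if len(first_words) != len(set(first_words)):
            anaphora.append(p[:40])
    n = len(paragraphs)
    return {
        "the_opener_density": the_openers / n,   # target ≤ 0.20 per the audit
        "we_opener_density": we_openers / n,     # target ≤ 0.15 per the audit
        "long_paragraphs": long_paras,
        "anaphora_paragraphs": anaphora,
    }
```

A real pipeline would need smarter sentence segmentation (abbreviations, code spans), but this shape is enough to turn the audit findings into a repeatable report.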
Force-pushed dcff9fd to 51657a1
Force-pushed a9a3f55 to 7abeccf
Actionable comments posted: 1
♻️ Duplicate comments (2)
content/blog/claude-code-xp-team-workflow/index.md (2)
Lines 70-73: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Remove explicit "Three …" structural framing here.
Line 70 and Line 72 still use explicit rule-of-three signposting (“Three nested loops”, “Three cadences…”), which conflicts with the blog-structure rule.
Suggested edit
-## Three nested loops
+## Nested loops in the workflow
-Three cadences run in parallel, each with a different time horizon.
+These cadences run in parallel, and each one operates on a different time horizon.
As per coding guidelines, "In blog posts, reject rule-of-three structural patterns, signposting, bold inline-header lists...".
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@content/blog/claude-code-xp-team-workflow/index.md` around lines 70 - 73, Remove the explicit "Three ..." rule-of-three signposting: rename or replace the heading "## Three nested loops" with a neutral descriptive heading (e.g., "Nested cadence loops" or "Cadence loops") and rephrase the sentence "Three cadences run in parallel, each with a different time horizon." to describe the content without enumerating ("Multiple cadences run in parallel, each with a different time horizon." or similar). Ensure any other inline mentions in this section avoid the numeric signposting pattern and instead use descriptive language.
Lines 68-80: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Rewrite abstract-subject sentences to pass the "who" test.
This block still has abstract subjects (for example Line 68 and Line 80). Rephrase with explicit actors (“we”, “the team”, “the agent”) to meet the required style.
Suggested edit
-Next sprint inherits the lesson. Roles that don't feed reflection drift; roles that do compound.
+We carry these lessons into the next sprint. Teams that skip reflection drift, and teams that run it consistently compound progress.
-The loops nest: outer scope informs inner work, and Ralph catches small failures inside the inner loop without escalating to a human.
+We nest these loops so sprint scope guides ticket work, and we use Ralph to catch small failures inside the inner loop before we escalate to a human.
As per coding guidelines, "In blog posts, apply the 'who' test - every sentence needs a person doing something".
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@content/blog/claude-code-xp-team-workflow/index.md` around lines 68 - 80, The paragraph describing the three nested loops uses abstract subjects; rewrite sentences under the "Three nested loops" section (the "Outer loop", "Inner loop", and "Ralph loop" descriptions) to name explicit actors—e.g., "we", "the team", "the driver and navigator", "the agent"—so every sentence passes the "who" test; for example change passive/abstract lines like "Cadence: one to two weeks" and "Cadence: seconds to minutes per attempt" by anchoring them to actors ("The team runs this cadence: one to two weeks", "The agent retries until the bar is met: seconds to minutes"), and update any other sentences in that block to explicitly state who performs discovery, updates opportunity trees, pairs on tickets, runs the critic panel, or iterates with the Ralph loop.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@content/blog/claude-code-xp-team-workflow/index.md`:
- Line 90: The sentence "Dysfunctions in 90% of organizations come from one role
swallowing another's job..." makes an uncited numeric claim; either add a
reliable source citation for the "90%" statistic or change the wording to a
softer observational statement (e.g., "Dysfunctions in many organizations..." or
"In many teams, dysfunction arises when one role swallows another's job") so it
no longer asserts an unsupported percentage—update the sentence in the markdown
(the line containing that exact phrase) accordingly and, if adding a citation,
include a reference link or footnote.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 42c5d6fe-000a-4e31-a802-f45b724e7869
⛔ Files ignored due to path filters (2)
content/blog/claude-code-xp-team-workflow/cover.png is excluded by !**/*.png
content/blog/claude-code-xp-team-workflow/pipeline.svg is excluded by !**/*.svg
📒 Files selected for processing (1)
content/blog/claude-code-xp-team-workflow/index.md
Actionable comments posted: 1
♻️ Duplicate comments (1)
content/blog/claude-code-xp-team-workflow/index.md (1)
Lines 66-75: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Agentless subjects in the cadence section still fail the "who" test
Several sentences use abstract subjects (“The sprint cycle runs…”, “The development cycle runs…”, “Each cadence nests…”). Rewrite this block with an explicit actor (“we/team/agents”) so each sentence names who is doing the work.
As per coding guidelines, “In blog posts, apply the 'who' test - every sentence needs a person doing something.”
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@content/blog/claude-code-xp-team-workflow/index.md` around lines 66 - 75, The three abstract sentences under "## The cadences we run" (the ones beginning "The sprint cycle runs…", "The development cycle runs…", and "Each cadence nests inside the larger one") and the Ralph loop sentence need to be rewritten to name the actor explicitly (e.g., "we", "the team", or "agents"); replace phrases like "The sprint cycle runs…" with "Our sprint cycle runs…" or "The team runs the sprint cycle…", change "The development cycle runs…" to "We run the development cycle…" or similar, and make "Each cadence nests…" say "Each cadence we run nests…" so every sentence passes the 'who' test while preserving the original meaning and cadence details.
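The "who" test invoked throughout these reviews can also be approximated mechanically. A deliberately naive sketch that flags sentences whose opening words are not a known actor; the actor list is a made-up example and the first-word heuristic is an assumption, not the reviewers' actual tooling:

```python
import re

# Hypothetical actor list; a real check would parse the grammatical subject.
ACTORS = ("We", "I", "Our", "The team", "The agent", "Priya")

def failing_who_test(text: str) -> list[str]:
    """Flag sentences that do not open with a listed actor (crude heuristic)."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    return [s for s in sentences
            if not any(s == a or s.startswith(a + " ") for a in ACTORS)]
```

Run against the flagged sentence from this review, "Six layers feed into each other." trips the heuristic while the agentful rewrite "We wire contracts into rhythms." passes.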
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@content/blog/claude-code-xp-team-workflow/index.md`:
- Line 102: The cost and time figures currently stated ("$4-6", "$20-30", "30-45
minutes") are presented without measurement context; update the sentence to
include precise benchmarking details: state the date of measurement, the exact
model/version used (e.g., Sonnet vX.Y, Opus vA.B), the slice size assumptions
(one AC item defined as 50–100 lines of diff), run conditions (critics run in
parallel, number of critics/threads), hardware/cloud instance used, whether
Ralph-loop retries were included, and a note on variance/95% CI so readers
understand this is a specific measured example rather than a universal
benchmark. Ensure the revised line replaces the original numeric claims and
includes these qualifiers inline with the cost/time figures.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: ef2e25ce-1d0f-4510-841f-db6710f6c7ff
📒 Files selected for processing (1)
content/blog/claude-code-xp-team-workflow/index.md
Setup eats one sprint of overhead. Rule files need writing, agent definitions need configuring, and the team needs convincing. From sprint two the time disappears.
Cost on a small slice (one AC item, 50-100 lines of diff) runs $4-6 on Sonnet or $20-30 on Opus, plus 30-45 minutes of wall-clock time when critics run in parallel. Larger bets and Ralph-loop retries push the bill higher, but it's still cheaper than the rework on a vibe-coded PR that ships the wrong feature.
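The per-slice cost bands in that quoted row multiply out in a straightforward way. A back-of-envelope sketch using the post's own figures; the ten-slice sprint size is a hypothetical, not a number from this PR:

```python
# Per-slice cost bands quoted in the table row above (USD per AC item).
SONNET = (4.0, 6.0)
OPUS = (20.0, 30.0)

def sprint_cost(slices: int, band: tuple[float, float]) -> tuple[float, float]:
    """Scale a per-slice (low, high) cost band to a whole sprint of slices."""
    low, high = band
    return slices * low, slices * high
```

Under these assumptions, ten small slices would run $40-60 on Sonnet versus $200-300 on Opus, before Ralph-loop retries, which is the gap the review asks the post to scope with measurement context.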
Cost figures need concrete measurement context to avoid sounding like universal benchmarks
$4-6 / $20-30 and 30-45 minutes are useful, but they read as broadly applicable. Add scope details (when measured, model/version, slice size assumptions, and run conditions) so this stays practitioner-grade and not a generalized claim.
As per coding guidelines, “In blog posts, avoid … fake authority generalization” and “apply the 'practitioner' test - replace generalized scenarios with specific incidents.”
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@content/blog/claude-code-xp-team-workflow/index.md` at line 102, The cost and
time figures currently stated ("$4-6", "$20-30", "30-45 minutes") are presented
without measurement context; update the sentence to include precise benchmarking
details: state the date of measurement, the exact model/version used (e.g.,
Sonnet vX.Y, Opus vA.B), the slice size assumptions (one AC item defined as
50–100 lines of diff), run conditions (critics run in parallel, number of
critics/threads), hardware/cloud instance used, whether Ralph-loop retries were
included, and a note on variance/95% CI so readers understand this is a specific
measured example rather than a universal benchmark. Ensure the revised line
replaces the original numeric claims and includes these qualifiers inline with
the cost/time figures.
Initial post creation (cover.png, pipeline.svg, index.md) under the new writer-prompt pipeline (PR #319). First framing was an XP-workflow narrative anchored in a Priya founder-incident hook with a concrete ticket walkthrough. Iterated through 8 review rounds before the framing pivot:
- Initial draft: 2,147 body words, 9-section structure with numbered H2s, Priya/Veracode-45% hook, three-contracts + three-rhythms + nine-reviews scaffold.
- 4 CodeRabbit findings: dropped numeric H2 prefixes, fixed Priya fragment ("Many of her cohort." merged into prior sentence), rewrote abstract subjects with agentful "we" constructions.
- AI-tell sweep across 18 patterns: removed "Not X but Y", three-fragment closers, definitional opener clusters, slogany closers. Score 52 -> est. 22-28.
- Trim to 460 words (per user feedback "remove fluffy"): dropped Priya hook entirely, removed architecture-enumeration paragraphs.
- Pivot to code-development walkthrough: real Rails project examples (.claude/agents/, .claude/skills/, docs/). +130 words for "What one ticket looks like" walkthrough.
- Path rename: .ai-rulez/rules/ -> .claude/rules/ for tool-agnostic framing.
- Round 1 (3 reviewers - founder skeptic, AI detector at 72/100, cut-test editor): 14 fixes including renamed mannerism-noun H2s, cut filler restatements, replaced 3-incident parallel triplet with 2 anonymized client-anchored examples.
- Round 2 (5 reviewers - AI detector, senior Rails dev, DevOps practitioner, AI-skeptical engineer, tired skimmer): 12 fixes including replaced fabricated .claude/rules/ with real Claude Code conventions (CLAUDE.md + rule files), removed wrong dependent: :destroy "framework behavior" example, fixed Rails 8.1 -> Rails 8, added cost transparency ($4-6 Sonnet / $20-30 Opus / 45min wall-clock).
State at end of phase: 932 body words, 8-section XP-workflow post. Voice audit: 0 em dashes, "The..." opener at 10%, mannerism-noun count 7.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Major restructure: dropped the XP-workflow narrative framing in favor of
a capability-map structure answering "what roles did we replicate, how
do they collaborate, what cadences do they run, why does team-mode beat
solo-mode."
Two passes in this phase:
- Restructure to capability map: 12-role table (Business Strategist,
Product Manager, Business Analyst, Customer Representative, UX
Researcher, UI/UX Designer, Tech Lead, Driver/Navigator, Critic Panel,
QA/Visual Verifier, DevOps Gates, Reflection/Kaizen). Five
collaborations and three nested loops (sprint outer, development
inner, Ralph loop). Initially cited Teresa Torres' continuous
discovery + opportunity solution trees + impact mapping.
- Methodology alignment with elital docs: audit revealed the post's
methodology claims did not match what elital actually runs. Fixes:
- Replaced Teresa Torres references with Shape Up
(basecamp.com/shapeup) + Mikado method - elital runs pitches in
docs/pitches/, three-lens betting, Mikado decomposition.
- Dropped opportunity-solution-tree keyword from frontmatter.
- Renamed "How the roles collaborate" -> "How the team works
together" (drop signposting count).
- Renamed "Three nested loops" -> "The cadences we run"
(CodeRabbit-flagged signposting).
- Replaced "Dysfunctions in 90% of organizations" fake-authority
statistic with "The dysfunction we keep seeing in product teams"
(observational, no fabricated number).
- Added elital agent names (lead-shaper for PM, critic-discovery for
UX Researcher) to the "Where it lives" column where real.
State at end of phase: 1,488 body words, 12-role capability map.
Voice audit: 0 em dashes, "The..." opener density 16%.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
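The elital agent names added to the "Where it lives" column (lead-shaper, critic-discovery) refer to Claude Code agent definition files. A minimal sketch of what one such role file might look like; the filename, description, and instructions here are illustrative, not copied from elital's actual repo:

```markdown
---
name: lead-shaper
description: Product Manager seat. Shapes raw ideas into pitches in docs/pitches/ and runs the three-lens betting table before any build work starts.
---

You are the Product Manager for this repo. Before any build work starts:
- Turn raw ideas into a pitch using the canonical Shape Up sections
  (Problem, Appetite, Solution, Rabbit Holes, No-Gos).
- Flag scope that exceeds the appetite instead of silently expanding it.
- Hand the shaped pitch to the Tech Lead for a feasibility pass.
```

Claude Code loads files like this from .claude/agents/ and uses the frontmatter description to decide when to delegate to the role.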
…quash + attribution)
Three polish rounds collapsed into one phase, all on the capability-map
structure:
- 5-persona reader review: readers Marcus (founder), Elena (senior
Rails), Tom (Shape-Up practitioner), Priya (skimmer), Sam
(conversion). 7 consensus fixes:
1. Dropped Mikado method citation (Mikado is revert-on-failure
refactoring, not slicing decomposition).
2. Honest Shape-Up framing: 3-lens betting and 1-2w cadence labelled
as "Shape-Up-inspired" with explicit cycle-length note.
3. Replaced "bet portfolio" (not a Shape Up term) with "decides
which bets to ship".
4. Cost qualifier scoped: $4-6 Sonnet now scoped to "small slice
(one AC item, 50-100 lines of diff)".
5. Simplified Ralph paragraph: dropped Wiggum name detail and
founder reference, tightened to one sentence + Anthropic plugin
link.
6. Renamed "Agile and flexibility are non-negotiable for AI" ->
"Why the rules need to change every sprint" (less thinkfluencer).
7. Replaced single repo-URL audit with two-tier CTA: low-friction
role-file ask as primary entry, repo audit as deeper option.
- Role squash 12 -> 8 per user feedback. Collapsed capabilities that
real product teams treat as one role:
- PM + Business Analyst + Customer Representative -> Product Manager
- UX Researcher + UI/UX Designer -> UI/UX Designer
- Tech Lead + Driver/Navigator pair -> Tech Lead
- Round 4 fixes (5 fresh-context reviewer agents - skeptical EM, ICP
solo founder, AI slop hunter, senior staff engineer, cold-eyes
editor). User-greenlit fixes:
- Shape Up: canonical Basecamp terms (Problem, Appetite, Solution,
Rabbit Holes, No-Gos) instead of paraphrased list.
- Tidy First: credit Kent Beck for the structural-vs-behavioral
commit discipline (staff-engineer reviewer flagged missing
attribution).
- Stripped 2 aphoristic closers ("Disagreement gets resolved by the
artifact..." / "Roles that don't feed reflection drift...") -
banned slogany-parallel form per voice guide.
- Varied 3 "The..." paragraph openers; density 30% -> 16.7% (voice
guide caps at 20%).
- Collapsed dual CTA into one with two named deliverables
(.claude/agents/<role>.md + 1-page note on next two seats).
Hook fix postponed: anonymized-incident hook archetype is overused
across recent posts; saved to memory as hook-bank-diversity feedback.
Final state: 1,533 body words, 8-role capability map. Voice audit:
0 em dashes, "The..." opener density 16.7%, mannerism-noun count 7.
bin/hugo-build: passes.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
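The voice-audit numbers reported at the end of each phase (em-dash count, "The..." opener density against the 20% cap) can be reproduced with a few lines of Ruby. This is a rough sketch: the regex sentence split is a crude stand-in for whatever tooling the team actually runs, and `voice_audit` is a hypothetical helper name.

```ruby
# Sketch of the voice audit cited in the log: counts em dashes and measures
# what fraction of sentences open with "The ". The voice guide caps the
# opener density at 20% and requires 0 em dashes.
def voice_audit(text)
  em_dashes = text.count("\u2014")
  # Naive sentence split on terminal punctuation followed by whitespace.
  sentences = text.split(/(?<=[.!?])\s+/).reject(&:empty?)
  the_openers = sentences.count { |s| s.start_with?("The ") }
  pct = sentences.empty? ? 0.0 : (the_openers * 100.0 / sentences.size).round(1)
  { em_dashes: em_dashes, the_opener_pct: pct }
end
```

Running it over the post body before each commit makes "density 30% -> 16.7%" a checkable claim rather than an eyeball estimate.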
CodeRabbit asked for benchmark-style qualifiers on the cost line ($4-6 / $20-30 / 30-45 min). Most of the asked-for additions don't fit blog voice or aren't applicable: hardware/cloud instance is N/A for API calls, a 95% CI requires sampling data we don't have, model versions age the post, the date is in frontmatter, slice size is already specified, and the Ralph-loop retry exclusion is already implied. Two genuinely missing qualifiers added:
- "with prompt caching on" - the math assumes prompt caching is active (4 parallel critics re-loading project context blow past these numbers without caching). The staff-engineer reviewer flagged this earlier; CodeRabbit missed it.
- "the 4-agent critic panel (PM, Designer, Rails, Simplicity)" - names the parallel-run count so the cost claim is falsifiable. Matches the critic-panel row in the capability table.
bin/hugo-build: passes.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
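The prompt-caching qualifier matters because the cost arithmetic changes shape with the panel size. A sketch of that arithmetic as a function; every number here (token counts, per-million-token prices, the cache discount) is a hypothetical placeholder, not Anthropic's actual pricing:

```ruby
# Hypothetical cost model for a parallel critic panel. With prompt caching
# on, only the first agent pays full price for the shared project context;
# the remaining agents re-read it at a discounted cached rate.
def panel_cost(agents:, context_tokens:, output_tokens:,
               input_price:, output_price:, cache_discount: 0.9)
  first_read   = context_tokens * input_price / 1_000_000.0
  cached_reads = (agents - 1) * context_tokens * input_price *
                 (1 - cache_discount) / 1_000_000.0
  outputs      = agents * output_tokens * output_price / 1_000_000.0
  (first_read + cached_reads + outputs).round(2)
end
```

Setting `cache_discount: 0` shows why the log warns the numbers blow past the estimate without caching: the context re-read term scales linearly with the agent count at full price.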
Summary
Capability-map post for founders considering Claude Code agents. Argues that solo agents reproduce the dysfunctions of solo developers - missing PM, designer, QA, customer voice, researcher, business strategist, reflection. Names the eight roles, five collaborations, and three nested loops we replicated to run continuous discovery + continuous delivery.
Audience: non-technical founder burned by a devshop, considering AI-built MVPs (ICP-E).
Body: 1443 words. Voice audit clean (0 em dashes, "The..." opener at 5% of 79 sentences, mannerism nouns at cap of 5).
Post shape
(ralph-wiggum plugin)
The eight roles
Business Strategist (north star) - Product Manager (opportunities, data signal, user voice) - UI/UX Designer (continuous discovery, Stitch) - Tech Lead (feasibility + architecture, driver/navigator pairing) - Critic Panel - QA/Visual Verifier (chrome-devtools MCP) - DevOps Gates (CI) - Reflection/Kaizen (skills.sh).
The five collaborations
The three loops
Why the pivot
Earlier draft was a 6-layer XP workflow guide ("rules → rhythms → critics → gates → reflection"). Reader feedback: "missing PM role, designer, customer voice, business strategist, opportunity mapping, loops" - the workflow guide assumed roles already existed. New shape names the team itself, then names how it works together. The setup is the team-replication, not the workflow.
Internal links (4 verified)
- /blog/vibe-coding-crisis-ai-code-debt/ (founder pain anchor in lede)
- /blog/test-driven-development-tdd-in-ruby-step-by-guide-tutorial-bestpractices/ (XP delivery)
- /blog/refactor-step-tdd-three-line-discipline-ruby/ (micro-commit discipline)
- /contact-us/ (audit CTA)
External citations
Test plan
- bin/hugo-build passes (660 pages)
- docs/blog-post-index.md
Visuals
- cover.png (unchanged from prior commits)
- pipeline.svg (unchanged from prior commits) - the team-replication framing reuses the existing 5-step capability flow
🤖 Generated with Claude Code