Convert CodeRabbit GitHub PR reviews into clean, LLM-friendly text for AI-assisted code improvement workflows.
CodeRabbit provides excellent automated code review feedback, but it's designed for human consumption in GitHub's web interface. This tool extracts that feedback and formats it specifically for AI coding agents (Claude, ChatGPT, etc.) to automatically apply suggestions.
Perfect for:
- 🤖 AI-assisted development workflows
- Automated code improvement pipelines
- Converting review feedback to actionable prompts
- Streamlining the code review → implementation cycle
```bash
# Basic usage - latest review only (recommended)
python3 extract-coderabbit-feedback.py obra/lace/278

# All reviews (for complex analysis)
python3 extract-coderabbit-feedback.py obra/lace/255 --all-reviews

# Get help
python3 extract-coderabbit-feedback.py --help
```
Requirements:

- GitHub CLI (`gh`) installed and authenticated
- Python 3.6+
- `beautifulsoup4` package
**Option 1: Direct usage (recommended)**

```bash
# Clone and use directly
git clone https://github.com/obra/coderabbit-review-helper.git
cd coderabbit-review-helper
pip install --user beautifulsoup4
./extract-coderabbit-feedback.py owner/repo/123
```

**Option 2: pip install (when published)**

```bash
pip install coderabbit-review-extractor
coderabbit-extract owner/repo/123
```

**Option 3: pipx (isolated environment)**

```bash
pipx install coderabbit-review-extractor
coderabbit-extract owner/repo/123
```
```bash
# Install gh CLI
brew install gh  # macOS
# or follow: https://cli.github.com/

# Authenticate (works with private repos)
gh auth login
```
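The script relies on that authenticated `gh` session to read PR review data. Roughly speaking, the fetch looks like the sketch below; this is illustrative only (the placeholder `owner/repo` and PR number, and the exact endpoint, are not necessarily what the script uses internally):

```python
import json
import subprocess

# Illustrative only: fetch a PR's inline review comments via the authenticated
# gh CLI. "owner/repo" and 123 are placeholders for the target PR.
result = subprocess.run(
    ["gh", "api", "repos/owner/repo/pulls/123/comments"],
    capture_output=True, text=True, check=True,
)
comments = json.loads(result.stdout)

# Keep only CodeRabbit's comments for further processing.
coderabbit = [c for c in comments if "coderabbit" in c["user"]["login"].lower()]
print(f"{len(coderabbit)} CodeRabbit comments out of {len(comments)} total")
```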
Key features:

- **Latest review by default** - Avoids overwhelming output from PRs with 20+ reviews
- **File-organized output** - Groups all feedback by file for easy navigation
- **Priority sorting** - AI-actionable items (🤖) appear first within each file
- **AI prompts prioritized** - Detailed implementation instructions when available
- **Clean diffs** - Fallback for simple mechanical changes
- **No redundancy** - Never shows both prompt AND diff for the same suggestion
- **HTML-free output** - Stripped of GitHub web interface artifacts
Example output:

````markdown
# CodeRabbit Review Feedback
========================================
**Total files with feedback:** 11
**Total comments:** 46
## packages/core/src/config/user-settings.ts
**6 suggestion(s)**
### 1. Lines 13: 🤖 Replace all `any` with a safe JSON type (violates "no-explicit-any").
**Proposed prompt:**
In packages/core/src/config/user-settings.ts around lines 13, 26, 52 and 68,
replace the use of any
with a concrete JSON-safe type: add a JsonValue
(recursive union: string | number | boolean | null | JsonValue[] | { [key:
string]: JsonValue }) and a UserSettings = Record<string, JsonValue> (or a more
specific shape if known), then change private static cachedSettings, method
parameters and return types to use UserSettings/JsonValue instead of any...
### 2. Lines 42: Consolidate ABOUTME comments
```diff
-// ABOUTME: Simple user settings manager for arbitrary JSON preferences
-// ABOUTME: Stores settings in ~/.lace/user-settings.json with no validation
+// ABOUTME: Simple user settings manager for arbitrary JSON preferences stored in ~/.lace/user-settings.json (no validation)
```
````
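The file grouping and prompt-first ordering in the sample above boil down to a pass roughly like the following. This is a sketch only; the `has_prompt` field and the exact sort key are illustrative, not the script's real internals:

```python
from collections import defaultdict

def organize(comments):
    """Group feedback by file, then sort AI-actionable items (those that carry
    a detailed prompt) ahead of plain suggestions within each file."""
    by_file = defaultdict(list)
    for c in comments:
        by_file[c["path"]].append(c)
    for items in by_file.values():
        # False sorts before True, so items with a prompt come first;
        # line number keeps the rest in source order.
        items.sort(key=lambda c: (not c.get("has_prompt", False), c.get("line", 0)))
    return dict(by_file)
```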
## Options
### Review Selection
- **Default**: Latest review only (avoids overwhelming multi-review PRs)
- **`--all-reviews`**: Extract all CodeRabbit reviews
- **`--since-commit SHA`**: Reviews after specific commit (coming soon)
### Debug Mode
- **`--debug`**: Show processing annotations to understand decisions
### Input Formats
- **Full URL**: `https://github.com/owner/repo/pull/123`
- **Short format**: `owner/repo/123`
## Advanced Usage
### Pathological Multi-Review PRs
Some PRs accumulate dozens of reviews as code evolves:
```bash
# Problem: 20 reviews = 300 comments across 36 files (overwhelming)
python3 extract-coderabbit-feedback.py obra/lace/255 --all-reviews

# Solution: Latest review only = 58 comments across 22 files (manageable)
python3 extract-coderabbit-feedback.py obra/lace/255
```
### Custom Preamble

Create `~/.coderabbit-extractor` with custom text to prepend to output. Perfect for encouraging AI agents to think critically rather than blindly follow suggestions:

**Example: Critical Evaluation Preamble**
```bash
# Use the provided example
cp example-preamble.txt ~/.coderabbit-extractor

# Or create your own
cat > ~/.coderabbit-extractor << 'EOF'
A reviewer did some analysis of this PR. They're external, so reading the codebase cold. This is their analysis of the changes and I'd like you to evaluate the analysis and the reviewer carefully.
1) should we hire this reviewer
2) which of the issues they've flagged should be fixed?
3) are the fixes they propose the correct ones?
Anything we *should* fix, put on your todo list.
Anything we should skip, tell me about now.
EOF
```
This "subterfuge" approach encourages AI agents to:
- Question the validity of suggestions
- Evaluate reviewer competence
- Think independently about solutions
- Avoid blind trust in automated tools
```bash
# With critical evaluation preamble
python3 extract-coderabbit-feedback.py owner/repo/123 | claude-api

# Direct application (less critical thinking)
python3 extract-coderabbit-feedback.py owner/repo/123 | claude-api "Apply these suggestions"
```
The tool extracts and organizes three types of CodeRabbit feedback:

**AI agent prompts**
- Refactor suggestions with detailed implementation prompts
- Cross-file changes with scope guidance
- Security fixes with context and alternatives
- Complex architectural changes with step-by-step instructions

**Simple diffs**
- Style improvements with simple diffs
- Best practice suggestions with explanations
- Minor optimizations with code examples

**Outside-diff comments**
- Issues outside the changed lines with context
- Broader pattern suggestions affecting multiple files
- Architectural recommendations for consideration
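One way to picture the extracted record behind each suggestion is the small dataclass below. The field names are illustrative, chosen to match the categories described in this README, and are not the script's actual data model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackItem:
    """One CodeRabbit suggestion, as this README describes the extracted data."""
    path: str                        # file the comment applies to
    lines: str                       # e.g. "13" or "13-26"
    title: str                       # short summary of the suggestion
    ai_prompt: Optional[str] = None  # detailed implementation prompt, if any
    diff: Optional[str] = None       # committable suggestion, if any
    outside_diff: bool = False       # comment on lines not changed in the PR
```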
- Traditional diffs: "Change A to B" (prescriptive)
- AI prompts: "Problem X causes Y, consider approaches Z" (explanatory)
- Eliminates redundancy between descriptions, diffs, and prompts
- 20% more concise output while preserving critical information
- Priority-sorted so AI focuses on most important items first
- AI prompts let agents find the best solution approach
- Simple diffs provide precise guidance for mechanical changes
- Context preservation for cross-file and multi-step changes
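The no-redundancy rule above amounts to a simple preference order when rendering each suggestion. A hedged sketch, assuming a record like the `FeedbackItem` above (the real script's decision tree may differ in detail):

```python
def render(item):
    """Show at most one body per suggestion: the detailed AI prompt when there
    is one, otherwise the committable diff, otherwise just the description."""
    if item.ai_prompt:
        return item.ai_prompt   # detailed prompt wins; the diff is skipped
    if item.diff:
        return item.diff        # simple mechanical change, shown as a diff
    return item.title           # bare description as a last resort
```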
Use `--debug` to see processing decisions:

```bash
python3 extract-coderabbit-feedback.py obra/lace/278 --debug
```
Shows annotations like:
```html
<!-- DEBUG: has_prompt=True, has_diff=False, source=inline -->
<!-- DEBUG: Showing prompt only, skipping description and diff -->
```
This tool evolved through systematic agent feedback and iterative improvement:
- Initial extraction with HTML corruption issues
- Agent review identified formatting problems
- Step-by-step commits fixing each issue class
- Focus group validation of prompt-vs-diff prioritization
MIT License - Feel free to use, modify, and distribute.
Built to bridge the gap between excellent automated code review (CodeRabbit) and excellent automated code improvement (AI coding agents).