- Creates highly customizable AI reviews as PR comments.
- Installation: just two files copied into your repo and an OpenRouter API key in your secrets.
- Costs: $0.01 - $0.05 per review (depends heavily on the model)
- **Example output:** LearningCircuit/local-deep-research#959 (comment)
This guide explains how to set up the automated AI PR review system using OpenRouter to analyze pull requests with your choice of AI model.
The AI Code Reviewer provides automated, comprehensive code reviews covering:
- Security 🔒 - Hardcoded secrets, SQL injection, XSS, authentication issues, input validation
- Performance ⚡ - Inefficient algorithms, N+1 queries, memory issues, blocking operations
- Code Quality 🎨 - Readability, maintainability, error handling, naming conventions
- Best Practices 📋 - Coding standards, proper patterns, type safety, dead code
The review is posted as a single comprehensive comment on your pull request.
- Go to OpenRouter.ai
- Sign up or log in
- Navigate to API Keys section
- Create a new API key
- Copy the key (it starts with `sk-or-v1-...`)
- Go to your GitHub repository
- Navigate to Settings → Secrets and variables → Actions
- Click New repository secret
- Name it: `OPENROUTER_API_KEY`
- Paste your OpenRouter API key as the value
- Click Add secret
The workflow is pre-configured with sensible defaults, but you can customize it by editing `.github/workflows/ai-code-reviewer.yml` (see the sketch after this list):
- `AI_MODEL`: Change the AI model (see OpenRouter models)
- `AI_TEMPERATURE`: Adjust randomness (default: `0.1` for consistent reviews)
- `AI_MAX_TOKENS`: Maximum response length (default: `2000`)
- `MAX_DIFF_SIZE`: Maximum diff size in bytes (default: `800000` / 800KB)
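For orientation, a minimal sketch of how these settings might appear in the workflow's `env` block (the values shown are the documented defaults; the model ID is only an example of an OpenRouter model identifier):

```yaml
# Sketch of the env block in .github/workflows/ai-code-reviewer.yml
env:
  AI_MODEL: "anthropic/claude-3.5-sonnet"  # example OpenRouter model ID
  AI_TEMPERATURE: "0.1"                    # low temperature for consistent reviews
  AI_MAX_TOKENS: "2000"                    # maximum response length
  MAX_DIFF_SIZE: "800000"                  # diff size limit in bytes (~800KB)
```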
To trigger an AI review on a PR:
- Go to the PR page
- Click Labels
- Add the label: `ai_code_review`
The review will automatically start and post results as a comment when complete.
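Under the hood, the workflow presumably listens for the `labeled` event on pull requests and checks which label was added; a typical trigger for this pattern looks roughly like the following (the actual trigger in `ai-code-reviewer.yml` may differ):

```yaml
# Sketch of a label-based trigger; not necessarily the exact configuration shipped.
on:
  pull_request:
    types: [labeled]

jobs:
  review:
    # Only run when the added label is the review label
    if: github.event.label.name == 'ai_code_review'
    runs-on: ubuntu-latest
```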
To re-run the AI review after making changes:
- Remove the `ai_code_review` label
- Add the `ai_code_review` label again
This will generate a fresh review of the current PR state.
The AI posts a comprehensive comment analyzing your code across all focus areas. The review is meant to assist human reviewers, not replace them.
Costs vary by model, but most code-focused models on OpenRouter are very affordable:
- Typical small PR (< 1000 lines): $0.001 - $0.01
- Large PR (1000-5000 lines): $0.01 - $0.05
Check OpenRouter pricing for specific model costs.
Edit `ai-reviewer.sh` to modify the review prompt. The current focus areas are:
- Security (secrets, injection attacks, authentication)
- Performance (algorithms, queries, memory)
- Code Quality (readability, maintainability, error handling)
- Best Practices (standards, patterns, type safety)
You can adjust these to match your team's priorities.
- Ensure the `ai_code_review` label has just been added (the review triggers on adding the label, not on it merely being present)
- Check that the `OPENROUTER_API_KEY` secret is correctly configured
- Verify GitHub Actions permissions are properly set
- Check OpenRouter API key validity
- Verify OpenRouter account has sufficient credits
- Review GitHub Actions logs for specific error messages
If you get a "Diff is too large" error:
- Split your PR into smaller, focused changes
- Or increase `MAX_DIFF_SIZE` in the workflow file, as sketched below
- The default limit is 800KB (~200K tokens)
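Raising the limit is a one-line change to the workflow's environment, for example (the value shown is hypothetical):

```yaml
env:
  MAX_DIFF_SIZE: "1600000"  # hypothetical value: roughly double the default 800KB
```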
- API keys are stored securely in GitHub Secrets and passed via environment variables
- Reviews only run when the `ai_code_review` label is manually added
- All API calls are made over secure HTTPS connections
- Code diffs are sent to OpenRouter/AI provider - review their data policies
- The workflow has minimal permissions (read contents, write PR comments)
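A permissions block matching that description (read contents, write PR comments) typically looks like this in a GitHub Actions workflow; your copy of `ai-code-reviewer.yml` should already contain the equivalent:

```yaml
permissions:
  contents: read        # read the repository and the PR diff
  pull-requests: write  # post the review comment on the PR
```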
For issues with:
- OpenRouter API: Check OpenRouter documentation
- GitHub Actions: Check GitHub Actions documentation
- Workflow issues: Review the GitHub Actions logs for specific error details