Automated AI-powered code review system for GitHub pull requests. Get comprehensive feedback in minutes.
Cursor Git Workflow is a GitHub Actions-based tool that automatically reviews code changes using AI (GPT-4) and provides detailed feedback directly on pull requests. It helps development teams maintain high code quality standards through automated analysis.
- 🤖 AI-Powered Code Reviews - Leverages GPT-4 for intelligent code analysis
- 📊 Quality Metrics - Provides objective scoring (0-10) for code changes
- 💡 Actionable Feedback - Specific suggestions with line-level annotations
- 🔧 Auto-Formatting - Optionally applies style fixes automatically
- 📥 IDE Integration - Download feedback directly to Cursor/VS Code
- ⚙️ Configurable - Customize review criteria and standards via YAML
See QUICKSTART_5MIN.md for detailed setup instructions.
```bash
# 1. Clone the repository
git clone https://github.com/jxwalker/cursor-git-workflow
cd cursor-git-workflow

# 2. Run setup
./scripts/setup-environment.sh

# 3. Copy the workflow and scripts into your project
mkdir -p /your/project/.github /your/project/scripts
cp -r .github/workflows /your/project/.github/
cp -r scripts/ci-cd /your/project/scripts/

# 4. Add OPENAI_API_KEY to your repository's GitHub Secrets
# 5. Open a pull request to trigger a review
```
- Developer creates a pull request
- GitHub Actions triggers the review workflow
- AI analyzes code changes against configured standards
- Detailed feedback appears as PR comments
- Optional: Feedback downloads to developer's IDE
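The trigger for this flow can be sketched as a minimal GitHub Actions workflow. The job name, script path, and checkout step below are illustrative assumptions, not the workflow file that ships in `.github/workflows/`:

```yaml
# Hypothetical sketch of the review trigger; the real workflow ships with the repo
name: AI Code Review
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run AI review            # assumed step name for illustration
        run: ./scripts/ci-cd/review.sh # assumed script path for illustration
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```

Triggering on both `opened` and `synchronize` means every new push to the PR branch gets re-reviewed.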
## 🤖 AI Code Review
### 🟢 Overall Score: 9/10
### ✅ Ready to Merge: Yes
### 🚀 Production Readiness: 95%
**Summary:** Code demonstrates good structure and error handling...
### 🚨 Issues Found
- ⚠️ **High - Security** (Line 45): Potential SQL injection vulnerability
💡 **Suggestion:** Use parameterized queries instead
### ✅ Good Practices Found
- Comprehensive error handling
- Well-documented functions
- Consistent code style
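The "Ready to Merge" verdict in the sample above can be thought of as a simple comparison between the overall score and the configured `passing_score`. This is a hypothetical sketch of that decision, not the tool's actual logic:

```python
def merge_verdict(score: float, passing_score: float = 8) -> str:
    """Map a 0-10 review score to a merge verdict (illustrative only)."""
    return "Yes" if score >= passing_score else "No"

print(merge_verdict(9))    # a 9/10 review clears the default threshold of 8
print(merge_verdict(6.5))
```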
- GitHub repository
- OpenAI API key (available from the OpenAI Platform)
- Python 3.8+ (for local development tools)
Add the following to your GitHub repository secrets:

- `OPENAI_API_KEY`: Your OpenAI API key
Create a `.cursor-workflow.yml` file in your repository root:
```yaml
# AI Model Settings
ai:
  model: gpt-4      # or gpt-3.5-turbo for lower cost
  temperature: 0.1
  max_tokens: 2000

# Review Standards
review:
  strictness: moderate  # lenient, moderate, or strict
  passing_score: 8

# File Filtering
files:
  exclude:
    - "*.md"
    - "tests/*"
    - ".github/*"
```
See `.cursor-workflow.example.yml` for all options.
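The `exclude` globs above can be applied to changed file paths with standard glob matching. A minimal sketch, assuming `fnmatch`-style semantics for the patterns (the tool's real matcher may differ):

```python
from fnmatch import fnmatch

# Patterns from the example config above
EXCLUDE = ["*.md", "tests/*", ".github/*"]

def should_review(path: str, exclude=EXCLUDE) -> bool:
    """Return True if the path is not matched by any exclude pattern."""
    return not any(fnmatch(path, pattern) for pattern in exclude)

changed = ["src/app.py", "README.md", "tests/test_app.py"]
print([p for p in changed if should_review(p)])  # only src/app.py survives
```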
Monitor and download AI feedback locally:
```bash
# One-time check
./scripts/auto-download-feedback.sh

# Continuous monitoring
./scripts/auto-download-feedback.sh --watch
```
Run a daemon to monitor all PRs:
```bash
# Start monitoring
./scripts/feedback-daemon.sh start

# Check status
./scripts/feedback-daemon.sh status

# Stop monitoring
./scripts/feedback-daemon.sh stop
```
Typical costs per pull request:
- GPT-4: ~$0.03
- GPT-3.5-turbo: ~$0.002
Monitor usage at OpenAI Platform.
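The per-PR figures above follow from token counts and per-token pricing. A rough estimator sketch, using illustrative per-1K-token prices that match the estimates above (check the OpenAI pricing page for current numbers):

```python
# Illustrative USD prices per 1K tokens; actual prices vary by model version
PRICE_PER_1K = {"gpt-4": 0.03, "gpt-3.5-turbo": 0.002}

def estimate_cost(model: str, tokens: int) -> float:
    """Rough cost estimate for a review consuming `tokens` total tokens."""
    return round(PRICE_PER_1K[model] * tokens / 1000, 4)

print(estimate_cost("gpt-4", 1000))          # 0.03
print(estimate_cost("gpt-3.5-turbo", 1000))  # 0.002
```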
**Workflow not triggering?**

- Verify the workflow file exists in `.github/workflows/`
- Check the Actions tab for error messages

**No AI comments appearing?**

- Confirm `OPENAI_API_KEY` is set in repository secrets
- Review the workflow logs for specific errors

**Rate limit errors?**

- The workflow includes automatic retry logic
- Consider using GPT-3.5-turbo for higher rate limits

For detailed troubleshooting, see `docs/TROUBLESHOOTING.md`.
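Retry logic for rate limits is typically implemented as exponential backoff. A generic sketch of that pattern, not the workflow's actual implementation:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

# Simulate an API call that is rate-limited twice, then succeeds
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("rate limited")
    return "ok"

print(with_retries(flaky, attempts=3, base_delay=0.01))  # "ok" after two failures
```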
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
```bash
# Create a virtual environment
python -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Run tests
pytest
```
MIT License - see LICENSE file for details.
Built to help teams maintain high code quality standards through intelligent automation.