Automated GitHub issue recommender for open source contributors. Scans a repo's issue tracker, scores issues by your focus areas and skill level, and uses AI to generate actionable recommendations.
## Features

- Fetches open issues from any GitHub repo via API
- Scores each issue based on your configured focus areas, keywords, and labels
- Ranks by relevance, freshness, engagement, and difficulty match
- Generates a markdown report with AI-powered recommendations (optional)
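Under the hood, fetching the open-issue list maps onto the GitHub REST API's `GET /repos/{owner}/{repo}/issues` endpoint. A minimal sketch (the function name, pagination loop, and PR filtering here are illustrative, not the project's actual code):

```python
import json
import urllib.request

def fetch_open_issues(repo, token=None):
    """Fetch all open issues (excluding PRs) for an owner/name repo
    via the GitHub REST API, following pagination."""
    issues, page = [], 1
    while True:
        url = (f"https://api.github.com/repos/{repo}/issues"
               f"?state=open&per_page=100&page={page}")
        req = urllib.request.Request(
            url, headers={"Accept": "application/vnd.github+json"})
        if token:  # unauthenticated requests are heavily rate-limited
            req.add_header("Authorization", f"Bearer {token}")
        with urllib.request.urlopen(req, timeout=30) as resp:
            batch = json.load(resp)
        # The issues endpoint also returns pull requests; skip those.
        issues += [i for i in batch if "pull_request" not in i]
        if len(batch) < 100:  # short page means we've reached the end
            return issues
        page += 1
```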
## Quick Start

```bash
# Clone
git clone https://github.com/yweiii/opensource-scout.git
cd opensource-scout

# Setup
pip install -r requirements.txt
cp config.example.yaml config.yaml
# Edit config.yaml with your focus areas

# Run
python scout.py          # With AI recommendations
python scout.py --no-ai  # Heuristic scoring only
```

## Configuration

Edit config.yaml to set:
- `repo` → which GitHub repo to scan
- `focus_areas` → topics you care about, with labels, keywords, and weights
- `tier` → your current skill level (1 = beginner → 4 = maintainer)
- `entry_labels` → labels that indicate easy entry points
- `top_n` → how many issues to recommend
See config.example.yaml for a full example configured for Ray.
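A minimal config.yaml along those lines might look like this (the label names, keywords, and weights below are illustrative, not the shipped defaults):

```yaml
repo: ray-project/ray
tier: 1
top_n: 10

entry_labels:
  - good first issue
  - help wanted

focus_areas:
  - name: Ray Serve
    labels: [serve]
    keywords: [serve, deployment]
    weight: 10
  - name: LLM Serving
    labels: [llm]
    keywords: [vllm, inference]
    weight: 8
```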
## GitHub Action

The included GitHub Action runs every Monday and commits a fresh scout report to `output/`.
To enable AI-powered recommendations, add your Anthropic API key as a GitHub secret:

- Go to repo Settings → Secrets and variables → Actions
- Add `ANTHROPIC_API_KEY` with your key

You can also trigger a scan manually from the Actions tab → Weekly Scout → Run workflow.
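A workflow wiring up the Monday schedule, the manual trigger, and the secret could look roughly like this (the file name, cron time, and commit step are assumptions, not the exact shipped action):

```yaml
# .github/workflows/scout.yml (illustrative sketch)
name: Weekly Scout
on:
  schedule:
    - cron: "0 8 * * 1"   # every Monday
  workflow_dispatch:       # enables the manual "Run workflow" button
jobs:
  scout:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install -r requirements.txt
      - run: python scout.py
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
      - run: |
          git config user.name github-actions
          git add output/
          git commit -m "Weekly scout report" && git push
```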
## Scoring

Each issue is scored based on:
| Signal | Impact |
|---|---|
| Focus area label match | +weight (configurable) |
| Focus area keyword match | +weight Γ 0.5 |
| Entry-friendly label | +5 |
| High engagement (5+ 👍 reactions) | +3 |
| Recently updated (< 7 days) | +3 |
| Already assigned | -5 |
| Stale (90+ days) | -3 |
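The table above translates into a simple additive heuristic. A sketch of how it could be implemented (the field shapes of `issue` and `focus_areas` are assumptions for illustration, not the project's internal data model):

```python
from datetime import datetime, timezone

def score_issue(issue, focus_areas, entry_labels, now=None):
    """Score one issue using the additive signals from the table above."""
    now = now or datetime.now(timezone.utc)
    score = 0.0
    labels = {l.lower() for l in issue["labels"]}
    text = issue["title"].lower()

    for area in focus_areas:
        if labels & {l.lower() for l in area["labels"]}:
            score += area["weight"]            # focus-area label match
        if any(k.lower() in text for k in area["keywords"]):
            score += area["weight"] * 0.5      # focus-area keyword match

    if labels & {l.lower() for l in entry_labels}:
        score += 5                             # entry-friendly label
    if issue.get("reactions", 0) >= 5:
        score += 3                             # high engagement
    age_days = (now - issue["updated_at"]).days
    if age_days < 7:
        score += 3                             # recently updated
    if issue.get("assignee"):
        score -= 5                             # already assigned
    if age_days >= 90:
        score -= 3                             # stale
    return score
```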
## Example Report

**Open Source Scout Report**

- Repo: ray-project/ray
- Tier: 1
- Focus: Ray Serve, LLM Serving, Performance

| # | Score | Difficulty | Title | Areas | 👍 | Updated |
|---|-------|------------|-------|-------|----|---------|
| #52746 | 18.0 | Medium | Ray Serve overhead for vLLM | Ray Serve, Performance | 12 | 2026-02-25 |
| #61125 | 15.0 | Easy | LLM batching config | LLM Serving | 3 | 2026-02-18 |
## License

MIT