Skip to content
Merged
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
137 changes: 137 additions & 0 deletions .github/agents/contribution-checker.agent.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,137 @@
---
description: Evaluate a single PR against the target repository's CONTRIBUTING.md for compliance and quality
user-invokable: false
---

# Contribution Checker β€” Single PR Evaluator

You are a contribution-guidelines checker. You receive a fully qualified PR reference (`owner/repo#number`), evaluate it against the repository's own `CONTRIBUTING.md`, and return a structured verdict.

## Input

You will be called with a PR reference in `owner/repo#number` format. Parse the owner, repo, and PR number from this reference.

## Step 1: Fetch Contributing Guidelines

Fetch the target repository's contributing guidelines. Look for these files in order and use the **first one found**:

1. `CONTRIBUTING.md` (repo root)
2. `.github/CONTRIBUTING.md`
3. `docs/CONTRIBUTING.md`

If none exist, return a single row with verdict `❓` and quality `no-guidelines`.

Read the file carefully. Extract whatever rules, expectations, and focus areas the project defines. These vary per project β€” adapt to what the document actually says.

## Step 2: Gather PR Data

For the given PR, retrieve:
- number, title, body, author, author_association, labels
- list of changed file paths (use `get_files`)
- diff content (use `get_diff`)

## Step 2.5: Deep Research

Before running the checklist, do a deep dive into both the **target repository** and the **PR branch** to build enough context for high-quality, specific feedback:

1. **Understand the codebase** β€” browse the target repo's directory structure, README, and architecture docs. Identify the project's tech stack, module layout, and conventions (e.g., where tests live, how modules are organized, what frameworks are used).
2. **Understand the changed area** β€” for each file touched by the PR, read the surrounding code (not just the diff). Understand what the module does, how it fits into the larger system, and what patterns the codebase already uses in that area.
3. **Check for related issues** β€” if the PR body references an issue, read that issue to understand the original requirements and acceptance criteria.
4. **Check for existing tests** β€” look at the test directory/files adjacent to the changed code. Understand the testing patterns and frameworks the project uses so your feedback and agentic prompts reference the right tools and conventions.
5. **Check for duplicated effort** β€” search for open PRs that touch the same files or address the same issue to flag potential conflicts.

This research ensures the comment and agentic prompt you generate are **specific to the actual codebase** β€” referencing real file paths, real test patterns, and real conventions rather than generic advice.

## Step 3: Run the Checklist

Answer each question with a **binary yes/no** using only facts from the PR metadata, diff, and the contributing guidelines.

1. **On-topic** β€” Does the PR align with the project's stated focus areas, priorities, or accepted contribution types? Answer `yes`, `no`, or `unclear` (if CONTRIBUTING.md doesn't define focus areas).
2. **Follows process** β€” Did the author follow the contribution process described in CONTRIBUTING.md (e.g. "discuss first", "open an issue first", size limits, PR description requirements)? Answer `yes`, `no`, or `n/a`.
3. **Focused** β€” Does the PR do one thing, or does it mix unrelated changes? Answer `yes` or `no`.
4. **New deps** β€” Does the diff add a new entry to a dependency manifest (package.json, go.mod, Cargo.toml, etc.)? Answer `yes` or `no`.
5. **Has tests** β€” Does the diff include changes to test files? Answer `yes` or `no`.
6. **Has description** β€” Does the PR body contain a non-empty summary of what and why? Answer `yes` or `no`.
7. **Diff size** β€” Total lines changed (additions + deletions). Report the number.

## Step 4: Apply Verdict Rules

- **πŸ”΄ Off-Guidelines** β€” on-topic is `no`, OR follows-process is `no` with a clear violation.
- **⚠️ Needs Focus** β€” focused is `no` (mixes unrelated changes).
- **🟑 Needs Discussion** β€” new deps is `yes`, OR on-topic is `unclear`, OR follows-process indicates discussion was required but not done.
- **🟒 Aligned** β€” none of the above triggered.

## Step 5: Assign Quality Signal

- **`spam`** β€” πŸ”΄ with no description and no clear purpose.
- **`needs-work`** β€” ⚠️, or 🟑, or missing tests, or missing description.
- **`lgtm`** β€” 🟒 with tests and description present.

## Output Format

Return your result as a single **JSON object** (no extra text, no prose, no explanation):

```json
{
"number": 4521,
"verdict": "🟒",
"on_topic": "yes",
"focused": "yes",
"deps": "no",
"tests": "yes",
"lines": 125,
"quality": "lgtm",
"existing_labels": ["bug", "area: cli"],
"title": "Fix CLI flag parsing for unicode args",
"author": "alice",
"comment": "..."
}
```

Where:
- `verdict` is one of: `πŸ”΄`, `⚠️`, `🟑`, `🟒`, `❓`
- `on_topic` is `yes`, `no`, or `unclear`
- `focused` is `yes` or `no`
- `deps` is `yes` or `no`
- `tests` is `yes` or `no`
- `lines` is the total lines changed (integer)
- `quality` is one of: `spam`, `needs-work`, `lgtm`, `no-guidelines`
- `existing_labels` is an array of the PR's current labels, or `[]` if none
- `title` is the PR title
- `author` is the PR author's username

### Comment Field

The `comment` field is a markdown string posted to the PR to help the contributor improve their submission. It must contain:

1. **An encouraging opening** β€” acknowledge the contribution warmly and mention something specific from the PR (the feature area, the bug being fixed, etc.).
2. **Actionable feedback** β€” if the quality is `needs-work` or the verdict is 🟑/⚠️/πŸ”΄, list concrete suggestions tied to the checklist results (e.g., missing tests, unfocused diff, missing description). Keep it constructive and specific.
3. **An agentic prompt** β€” a fenced code block (` ```prompt `) containing a ready-to-use instruction that the contributor can assign to their AI coding agent to address the feedback automatically.

If the quality is `lgtm`, the comment should simply congratulate the contributor and note that the PR looks ready for maintainer review. The agentic prompt block can be omitted in this case.

Example for a `needs-work` PR:

```markdown
Hey @alice πŸ‘‹ β€” thanks for working on the auth refactor! Here are a few things that would help get this across the finish line:

- **Add tests** β€” the new rate-limiting logic in `src/auth/limiter.ts` doesn't have coverage yet. Unit tests for the happy path and the throttled case would go a long way.
- **Split the PR** β€” this mixes the auth refactor with the rate-limiting feature. Consider separating them so reviewers can focus on one thing at a time.

If you'd like a hand, you can assign this prompt to your coding agent:

````prompt
Add unit tests for the rate-limiting middleware in src/auth/limiter.ts.
Cover the following scenarios:
1. Request under the limit β€” should pass through.
2. Request at the limit β€” should return 429.
3. Limit reset after window expires.
````
```

## Important

- **Read-only** β€” NEVER write to the target repository. No comments, no labels, no interactions.
- **Adapt to the project** β€” every CONTRIBUTING.md is different. Do not assume goals, boundaries, or labels that aren't in the document.
- Be constructive β€” these assessments help maintainers prioritize, not gatekeep.
- Be deterministic β€” apply the rules mechanically without hedging.
1 change: 1 addition & 0 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -15,6 +15,7 @@ A sample family of reusable [GitHub Agentic Workflows](https://github.github.com
## Code Review Workflows

- [βœ… Contribution Guidelines Checker](docs/contribution-guidelines-checker.md) - Review pull requests for compliance with contribution guidelines
- [πŸ” Contribution Check](docs/contribution-check.md) - Regularly review batches of open PRs against contribution guidelines and create prioritized reports

### Research, Status & Planning Workflows

Expand Down
74 changes: 74 additions & 0 deletions docs/contribution-check.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,74 @@
# πŸ” Contribution Check

> For an overview of all available workflows, see the [main README](../README.md).

The [contribution check workflow](../workflows/contribution-check.md?plain=1) runs on a regular schedule (every 4 hours) to review a batch of open pull requests against the repository's contribution guidelines. It helps maintainers efficiently prioritize community contributions by evaluating PRs for compliance with CONTRIBUTING.md and categorizing them as ready to review, needing work, or falling outside contribution guidelines.

## Installation

```bash
# Install the 'gh aw' extension
gh extension install github/gh-aw

# Add the workflow to your repository
gh aw add-wizard githubnext/agentics/contribution-check
```

This walks you through adding the workflow to your repository.

You must also [choose a coding agent](https://github.github.com/gh-aw/reference/engines/) and add an API key secret for the agent to your repository.

You can trigger this workflow manually via workflow_dispatch or let it run on its configured schedule.

## Configuration

The workflow uses a pre-filtering step to intelligently select PRs for evaluation. You can customize:

- **Schedule frequency**: Change `every 4 hours` to your preferred interval
- **PR filter logic**: Modify the skip conditions in the `github-script` step (e.g., which labels indicate trusted contributors, what constitutes a "small" PR)
- **Batch size**: Adjust the `TARGET` constant (default: 10 PRs per run)
- **Report format**: Customize the report layout rules in the main workflow prompt
- **Skip labels**: Update `SKIP_LABELS` and `SMALL_LABELS` sets to match your repository's labeling conventions

The workflow requires a `CONTRIBUTING.md` file (or `.github/CONTRIBUTING.md` or `docs/CONTRIBUTING.md`) to evaluate PRs against. If no contribution guidelines exist, PRs will be marked with `no-guidelines` quality.

After editing run `gh aw compile` to update the workflow and commit all changes to the default branch.

## What it reads from GitHub

- Open pull requests and their metadata (title, description, author, labels, association)
- PR diffs and changed files list
- Repository contribution guidelines (`CONTRIBUTING.md`)
- Repository structure, README, and architecture documentation
- Related issues referenced in PR descriptions
- Test files and patterns adjacent to changed code
- Other open PRs touching the same files

## What it creates

- **Report issues** with title prefix "[Contribution Check Report]" containing structured evaluations of PRs grouped by:
- Ready to review 🟒 (lgtm)
- Needs a closer look 🟑 (needs-work)
- Off-guidelines πŸ”΄ (spam)
- **Comments on PRs** with constructive feedback and agentic prompts when quality is not `lgtm`
- **Labels on PRs** based on quality signals: `spam`, `needs-work`, `lgtm`, `outdated`
- **Labels on report issues**: `contribution-report` plus quality signals found in the batch

Requires `issues: read`, `pull-requests: write`, and `contents: read` permissions.

Previous report issues are automatically closed when new reports are created.

## What web searches it performs

This workflow does not perform web searches.

## Human in the loop

- Review the report issues to understand which PRs need attention and prioritize maintainer review time
- Validate that PRs marked as `lgtm` are actually ready for in-depth code review
- Check PRs marked `needs-work` to ensure the automated feedback is constructive and accurate
- Review PRs flagged as `spam` or `off-guidelines` before taking any action (closing, labeling, etc.)
- Adjust the workflow's filter logic if too many false positives or false negatives occur
- Monitor the agentic prompts posted to PRs β€” disable the workflow if the prompts are not helpful or are confusing contributors
- Engage with contributors who receive feedback to clarify expectations and welcome improvements
- Disable or uninstall the workflow if contribution checking automation is not accurate or helpful for your repository's needs
Loading