
feat(default): add pr-review-multi-source skill (adapted from optimus)#331

Open
gandalf-at-lerian wants to merge 2 commits into main from feat/pr-review-multi-source

Conversation

@gandalf-at-lerian
Contributor

Summary

Adds ring:pr-review-multi-source to the default plugin.

Fills a gap in Ring: it fetches PR metadata AND existing review comments from ALL sources (CodeRabbit, human reviewers, CI bots), then dispatches Ring's 7 reviewer agents to evaluate both the code AND the existing comments, with source attribution on every finding.

Key features

  • 9-phase workflow: fetch PR context → present summary → review type → dispatch agents → consolidate with attribution → interactive resolution → batch apply → final summary → respond to PR comments
  • References Ring's existing agents (ring:code-reviewer, ring:security-reviewer, ring:business-logic-reviewer, ring:test-reviewer, ring:nil-safety-reviewer, ring:consequences-reviewer, ring:dead-code-reviewer)
  • Source attribution on every finding

Origin

Adapted from alexgarzao/optimus (optimus-pr-review), converted to Ring format.

cc @alexgarzao

@coderabbitai
Contributor

coderabbitai bot commented Apr 9, 2026

Walkthrough

Adds a new skill ring:pr-review-multi-source that performs end-to-end, PR-scoped, multi-source review orchestration: detect and validate the PR context, collect metadata and all review artifacts, dispatch reviewer agents, consolidate findings and per-comment evaluations, run an interactive decision loop, apply approved fixes behind verification gates, and post and resolve GitHub comment replies.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **PR Review Multi-Source Skill**<br>`default/skills/pr-review-multi-source/SKILL.md` | New skill ring:pr-review-multi-source. Adds PR discovery/skip logic, gh CLI checks, comprehensive PR metadata and paginated comment/review collection (including duplicated-comments parsing), checkout and file enumeration, dispatch of 5–7 reviewer agents, per-comment AGREE/CONTEST/ALREADY FIXED evaluations, deduplication and deterministic finding IDs, interactive severity-ordered decision loop, batched application of approved fixes with lint/unit/(conditional integration/E2E) verification, and posting/resolving replies via GitHub GraphQL. |
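The "deterministic finding IDs" mentioned in the summary can be sketched as hashing the stable parts of a finding, so the same finding gets the same ID across runs and deduplication becomes a set lookup. This is an illustrative sketch, not the skill's actual implementation; the field names are assumptions.

```python
# Hypothetical sketch: derive a stable finding ID from the fields that
# identify a finding (agent, file, line, rule). Same inputs -> same ID,
# which makes cross-run deduplication trivial.
import hashlib

def finding_id(agent: str, file: str, line: int, rule: str) -> str:
    key = f"{agent}|{file}|{line}|{rule}"
    return hashlib.sha256(key.encode()).hexdigest()[:12]

a = finding_id("ring:security-reviewer", "auth.go", 42, "missing-input-validation")
b = finding_id("ring:security-reviewer", "auth.go", 42, "missing-input-validation")
assert a == b      # deterministic: identical finding, identical ID
assert len(a) == 12
```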

Sequence Diagram

```mermaid
sequenceDiagram
    actor User
    participant Orchestrator as PR Review<br/>Orchestrator
    participant GitHub as GitHub API
    participant Agents as Reviewer<br/>Agents (5-7)
    participant Verify as Verification<br/>System

    User->>Orchestrator: Trigger with PR URL or branch
    Orchestrator->>GitHub: Check gh CLI, fetch PR metadata & diffs
    GitHub-->>Orchestrator: PR details, comments, reviews (paginated)
    Orchestrator->>Orchestrator: Checkout branch, enumerate changed files
    Orchestrator->>User: Present summary & choose mode (Initial/Final)
    User->>Orchestrator: Select mode
    Orchestrator->>Agents: Dispatch agents with files, PR context, existing comments
    Agents-->>Orchestrator: Findings + per-comment evaluations (AGREE/CONTEST/ALREADY FIXED)
    Orchestrator->>User: Present consolidated findings (severity order)
    User->>Orchestrator: Approve/reject findings interactively
    Orchestrator->>Verify: Apply approved fixes (batch) and run tests/lints
    Verify-->>Orchestrator: Verification results
    Orchestrator->>GitHub: Post reply templates and resolve threads via GraphQL
    GitHub-->>Orchestrator: Confirmations
    Orchestrator-->>User: Final PR readiness verdict and reply summary
```

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@default/skills/pr-review-multi-source/SKILL.md`:
- Around line 107-120: The gh REST API calls used to collect review comments in
the SKILL.md (the commands "gh api repos/{owner}/{repo}/pulls/{number}/reviews"
and "gh api repos/{owner}/{repo}/pulls/{number}/comments") do not include
pagination and will miss items beyond the first page; update both commands to
include the --paginate flag so they become paginated requests (use --paginate
with the existing gh api calls for reviews and comments) to ensure complete
comment collection across all pages.
- Around line 519-540: The current GraphQL query and matching logic uses
comments.nodes[0].databaseId which is deprecated and fragile; update the query
to request reviewThreads { comments(first: 100) { nodes { id body } } } (keep
reviewThreads and comments fields) and change the matching logic to compare the
REST comment's node_id (global GraphQL node ID collected in Step 0.3) against
any comment's id in comments.nodes[*] rather than only the first comment or
databaseId so threads are matched reliably.
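The pagination point in the first comment can be made concrete with a small sketch: without `--paginate`, a single REST call returns at most one page (30 items by default), silently truncating larger PRs. This is an illustrative model with a hypothetical `fetch_page` stand-in for one `gh api` call, not the actual gh internals.

```python
# Sketch of default REST paging vs. paginated collection.
# `fetch_page` stands in (hypothetically) for one `gh api` request.
PAGE_SIZE = 30

def fetch_page(items, page):
    """Return one page of results, mimicking the API's default page size."""
    start = (page - 1) * PAGE_SIZE
    return items[start:start + PAGE_SIZE]

def fetch_all(items):
    """Accumulate every page, as `gh api --paginate` does for you."""
    collected, page = [], 1
    while True:
        batch = fetch_page(items, page)
        if not batch:
            break
        collected.extend(batch)
        page += 1
    return collected

comments = [{"id": i} for i in range(75)]   # a PR with 75 review comments
assert len(fetch_page(comments, 1)) == 30   # one unpaginated call truncates
assert len(fetch_all(comments)) == 75       # paginated collection is complete
```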
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: e0910bec-adf4-4cf2-bf94-9ab741043547

📥 Commits

Reviewing files that changed from the base of the PR and between 9011f82 and 7e681c5.

📒 Files selected for processing (1)
  • default/skills/pr-review-multi-source/SKILL.md

…f deprecated databaseId

- Add --paginate flag to gh api REST calls for reviews and comments to
  avoid truncation at 30 items per page on larger PRs
- Replace deprecated databaseId with global node_id for thread matching
- Fetch all comments per thread (first: 100) instead of only the first
  to handle multi-comment threads reliably

Addresses CodeRabbit review comments on PR #331.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@default/skills/pr-review-multi-source/SKILL.md`:
- Around line 122-128: Phase 0's extraction schema in SKILL.md must explicitly
include and mark comment_id and node_id as mandatory fields so Phase 9 can
resolve threads; update the Phase 0 extraction section (the block currently
describing extracted fields) to add "comment_id" and "node_id" to the list of
required outputs and ensure any examples or templates reference both fields, and
propagate that requirement to any functions/processes that consume Phase 0
output (e.g., the extractor or mapper that emits these fields) so REST calls at
Line 540 will have node_id available.
- Around line 255-264: The mandated output headers "## New Findings" and "##
Comment Evaluation" conflict with existing reviewer agent output schema (e.g.,
headers like "## VERDICT", "## Summary", "## Issues Found"); update the SKILL.md
so the agent emits both schemas or a compatible mapping: retain "## New
Findings" and "## Comment Evaluation" but also produce equivalent sections "##
VERDICT", "## Summary", and "## Issues Found" (or include a clear mapping block)
to satisfy downstream parsers; modify the template/validator that generates
responses for this skill (search for the strings "## New Findings" and "##
Comment Evaluation" in the file) to either append the reviewer-style sections or
provide a configurable output_schema option so orchestration can accept either
format.
- Around line 521-537: The GraphQL query in the gh api call that fetches
reviewThreads/comments only requests first:100 and will miss items when
pageInfo.hasNextPage is true; update the query and calling logic to implement
cursor-based pagination for both repository.pullRequest.reviewThreads and each
reviewThread.comments by adding pageInfo { hasNextPage endCursor } and looping:
request with after: $reviewThreadsCursor until hasNextPage is false,
accumulating nodes, and for each thread repeat pagination for comments using a
commentsCursor (endCursor) until all comments are collected; ensure variable
names (e.g., $reviewThreadsCursor, $commentsCursor) and the connection fields
(reviewThreads, comments, pageInfo, endCursor, hasNextPage) are used so no
threads or comments are missed.
- Around line 226-284: Replace the existing instructions that say "Dispatch ALL
applicable agents simultaneously via the Task tool" and the line that requires
"ALL agents MUST be dispatched in a SINGLE message with parallel Task calls" so
they instead require using the /ring:codereview command as the single parallel
dispatch entrypoint; update any wording that mentions Task tool orchestration or
sequential Task calls to explicitly state "/ring:codereview" and that all review
agents (ring:code-reviewer, ring:business-logic-reviewer,
ring:security-reviewer, ring:test-reviewer, ring:nil-safety-reviewer,
ring:consequences-reviewer, ring:dead-code-reviewer) must be invoked in one
parallel /ring:codereview message, and remove/replace conflicting examples or
sentences that imply using Task tool calls.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: a6e8db96-4692-4419-8a42-afde37fe0e27

📥 Commits

Reviewing files that changed from the base of the PR and between 7e681c5 and f125614.

📒 Files selected for processing (1)
  • default/skills/pr-review-multi-source/SKILL.md

Comment on lines +122 to +128
For each comment, extract and categorize:
- **Source:** Identify the author — CodeRabbit (bot), human reviewer (by username), CI/CD bot, or other automated tool
- **Type:** General comment, inline code comment, review summary, or approval/request-changes
- **File and line** (for inline comments) — map to the changed files
- **Content** — the actual feedback
- **Status:** Resolved/unresolved (if the platform tracks it)


⚠️ Potential issue | 🟡 Minor

Make node_id extraction explicit in Phase 0 to support Phase 9 thread resolution.

Line 540 requires REST comment node_id, but Phase 0 extraction fields (Line 122-128) don’t explicitly require storing it. Add comment_id + node_id as mandatory extracted fields to avoid later resolution failures.

Suggested change

```diff
 For each comment, extract and categorize:
+- **Comment identifiers:** REST `id` and `node_id` (required for replies and GraphQL thread resolution)
 - **Source:** Identify the author — CodeRabbit (bot), human reviewer (by username), CI/CD bot, or other automated tool
 - **Type:** General comment, inline code comment, review summary, or approval/request-changes
```
Also applies to: 540-540

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@default/skills/pr-review-multi-source/SKILL.md` around lines 122 - 128, Phase
0's extraction schema in SKILL.md must explicitly include and mark comment_id
and node_id as mandatory fields so Phase 9 can resolve threads; update the Phase
0 extraction section (the block currently describing extracted fields) to add
"comment_id" and "node_id" to the list of required outputs and ensure any
examples or templates reference both fields, and propagate that requirement to
any functions/processes that consume Phase 0 output (e.g., the extractor or
mapper that emits these fields) so REST calls at Line 540 will have node_id
available.
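The mandatory-identifier requirement above can be sketched as a Phase 0 extractor that refuses to emit a record without both identifiers, so the failure surfaces early instead of during Phase 9 thread resolution. The function and field names are illustrative assumptions; the input mirrors the REST comment payload shape.

```python
# Hypothetical Phase 0 extractor: `comment_id` (REST id) and `node_id`
# (GraphQL global ID) are mandatory, so a missing identifier fails fast
# rather than breaking thread resolution later.
def extract_comment(raw: dict) -> dict:
    record = {
        "comment_id": raw.get("id"),
        "node_id": raw.get("node_id"),
        "source": raw.get("user", {}).get("login"),
        "body": raw.get("body"),
        "path": raw.get("path"),
        "line": raw.get("line"),
    }
    missing = [k for k in ("comment_id", "node_id") if record[k] is None]
    if missing:
        raise ValueError(f"comment missing mandatory fields: {missing}")
    return record

raw = {"id": 42, "node_id": "PRRC_example", "user": {"login": "coderabbitai"},
       "body": "Add --paginate", "path": "SKILL.md", "line": 107}
rec = extract_comment(raw)
assert rec["comment_id"] == 42 and rec["node_id"] == "PRRC_example"
```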

Comment on lines +226 to +284
Dispatch ALL applicable agents simultaneously via the Task tool. Each agent receives:
- The full content of every changed file
- The PR context (description, linked issues)
- ALL existing PR comments (so agents can validate/contest them)
- Coding standards and reference docs

**Agent prompt MUST include:**

```
PR Context:
- PR #<number>: <title>
- Purpose: <PR description summary>
- Linked issues: <list>
- Base branch: <base>

Existing PR Comments (evaluate these — validate or contest each one):
[paste all comments grouped by source, including comments from "Duplicated Comments" sections]

Review scope: Only the files changed in this PR.
Review type: Initial / Final.

Your job:
1. Review the CODE for issues in your domain
2. EVALUATE each existing PR comment in your domain:
- AGREE: The comment is valid and should be addressed
- CONTEST: The comment is incorrect or unnecessary (explain why)
- ALREADY FIXED: The comment was addressed in a subsequent commit
3. Report NEW findings not covered by existing comments

Required output format:
## New Findings
For each: severity, file, line, description, recommendation

## Comment Evaluation
For each existing comment in your domain:
- Comment source and summary
- Your verdict: AGREE / CONTEST / ALREADY FIXED
- Justification
```

### Initial Review (5 agents)

| # | Agent | Focus |
|---|-------|-------|
| 1 | **ring:code-reviewer** | Architecture, design patterns, SOLID, DRY, maintainability |
| 2 | **ring:business-logic-reviewer** | Domain correctness, business rules, edge cases |
| 3 | **ring:security-reviewer** | Vulnerabilities, authentication, input validation, OWASP |
| 4 | **ring:test-reviewer** | Test coverage gaps, error scenarios, flaky patterns |
| 5 | **ring:nil-safety-reviewer** | Nil/null pointer safety, missing guards, panic paths |

### Final Review (7 agents — includes the 5 above plus)

| # | Agent | Focus |
|---|-------|-------|
| 6 | **ring:consequences-reviewer** | Ripple effects, caller chain impact, consumer contract integrity |
| 7 | **ring:dead-code-reviewer** | Orphaned code, dead dependencies, unreachable paths |

All agents MUST be dispatched in a SINGLE message with parallel Task calls.


⚠️ Potential issue | 🟠 Major

Use /ring:codereview for parallel dispatch instead of direct Task-tool orchestration.

Line 226 and Line 283 currently standardize parallel dispatch via Task calls, but the project learning requires parallel reviewer execution via /ring:codereview. Please switch this phase to instruct /ring:codereview as the required dispatch entrypoint.

Suggested change

```diff
-Dispatch ALL applicable agents simultaneously via the Task tool. Each agent receives:
+Dispatch ALL applicable agents simultaneously via `/ring:codereview`. Each agent receives:
 ...
-All agents MUST be dispatched in a SINGLE message with parallel Task calls.
+All agents MUST be dispatched in a SINGLE `/ring:codereview` invocation with parallel reviewer execution.
```

Based on learnings: "Code review dispatch MUST use parallel execution of all reviewers via /ring:codereview command, MUST NOT run reviewers sequentially".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@default/skills/pr-review-multi-source/SKILL.md` around lines 226 - 284,
Replace the existing instructions that say "Dispatch ALL applicable agents
simultaneously via the Task tool" and the line that requires "ALL agents MUST be
dispatched in a SINGLE message with parallel Task calls" so they instead require
using the /ring:codereview command as the single parallel dispatch entrypoint;
update any wording that mentions Task tool orchestration or sequential Task
calls to explicitly state "/ring:codereview" and that all review agents
(ring:code-reviewer, ring:business-logic-reviewer, ring:security-reviewer,
ring:test-reviewer, ring:nil-safety-reviewer, ring:consequences-reviewer,
ring:dead-code-reviewer) must be invoked in one parallel /ring:codereview
message, and remove/replace conflicting examples or sentences that imply using
Task tool calls.

Comment on lines +255 to +264
Required output format:
## New Findings
For each: severity, file, line, description, recommendation

## Comment Evaluation
For each existing comment in your domain:
- Comment source and summary
- Your verdict: AGREE / CONTEST / ALREADY FIXED
- Justification

⚠️ Potential issue | 🟠 Major

Required agent output format conflicts with reviewer output_schema.

Line 255-264 mandates ## New Findings and ## Comment Evaluation, but existing reviewer agents require sections like ## VERDICT, ## Summary, ## Issues Found, etc. This mismatch can break response validation/parsing in orchestration.

Suggested change

```diff
-Required output format:
-  ## New Findings
-  For each: severity, file, line, description, recommendation
-
-  ## Comment Evaluation
-  For each existing comment in your domain:
-  - Comment source and summary
-  - Your verdict: AGREE / CONTEST / ALREADY FIXED
-  - Justification
+Required output format:
+  - MUST comply with the agent's declared `output_schema` sections (e.g., VERDICT, Summary, Issues Found, etc.).
+  - Include comment evaluation details within the required schema sections:
+    - Comment source and summary
+    - Verdict: AGREE / CONTEST / ALREADY FIXED
+    - Justification
+  - Include net-new findings with severity, file, line, description, recommendation.
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@default/skills/pr-review-multi-source/SKILL.md` around lines 255 - 264, The
mandated output headers "## New Findings" and "## Comment Evaluation" conflict
with existing reviewer agent output schema (e.g., headers like "## VERDICT", "##
Summary", "## Issues Found"); update the SKILL.md so the agent emits both
schemas or a compatible mapping: retain "## New Findings" and "## Comment
Evaluation" but also produce equivalent sections "## VERDICT", "## Summary", and
"## Issues Found" (or include a clear mapping block) to satisfy downstream
parsers; modify the template/validator that generates responses for this skill
(search for the strings "## New Findings" and "## Comment Evaluation" in the
file) to either append the reviewer-style sections or provide a configurable
output_schema option so orchestration can accept either format.

Comment on lines +521 to +537
```shell
gh api graphql -f query='
query {
  repository(owner: "<owner>", name: "<repo>") {
    pullRequest(number: <number>) {
      reviewThreads(first: 100) {
        nodes {
          id
          isResolved
          comments(first: 100) {
            nodes { id body }
          }
        }
      }
    }
  }
}
'
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

GitHub GraphQL API docs: does querying pullRequest.reviewThreads(first: 100) require pagination via pageInfo/endCursor to retrieve all review threads and comments beyond the first page?

💡 Result:

Yes, querying pullRequest.reviewThreads(first: 100) requires pagination via pageInfo.endCursor to retrieve all review threads beyond the first 100, as GitHub's GraphQL API uses cursor-based pagination for all connections with a 'first' argument limited to 100 items. If more than 100 threads exist, pageInfo.hasNextPage will be true, and subsequent queries must use after: $endCursor with the next first: 100 until hasNextPage is false.


Add cursor pagination for GraphQL reviewThreads/comments lookup to avoid missed threads.

The query fetches only first: 100 threads and comments without iterating through pageInfo. GitHub's GraphQL API requires cursor-based pagination for all connections; when more than 100 items exist, pageInfo.hasNextPage is true and subsequent queries must use after: $endCursor to retrieve remaining pages. Large PRs can exceed this limit and miss relevant review threads, causing unresolved replies despite valid node_id.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@default/skills/pr-review-multi-source/SKILL.md` around lines 521 - 537, The
GraphQL query in the gh api call that fetches reviewThreads/comments only
requests first:100 and will miss items when pageInfo.hasNextPage is true; update
the query and calling logic to implement cursor-based pagination for both
repository.pullRequest.reviewThreads and each reviewThread.comments by adding
pageInfo { hasNextPage endCursor } and looping: request with after:
$reviewThreadsCursor until hasNextPage is false, accumulating nodes, and for
each thread repeat pagination for comments using a commentsCursor (endCursor)
until all comments are collected; ensure variable names (e.g.,
$reviewThreadsCursor, $commentsCursor) and the connection fields (reviewThreads,
comments, pageInfo, endCursor, hasNextPage) are used so no threads or comments
are missed.
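The cursor loop the comment describes can be sketched with a stubbed GraphQL executor standing in for `gh api graphql`: request a page, follow `pageInfo.endCursor` while `hasNextPage` is true, and run the same loop per thread for its comments connection. The stub and its tiny page size are assumptions for illustration only.

```python
# Sketch of cursor-based pagination over a GraphQL connection.
# `run_query` is a hypothetical stand-in for one `gh api graphql` call.
PAGE = 2  # tiny page size so the stub exercises multiple pages

def run_query(connection, after=None):
    """Stub for one GraphQL call: returns one page of nodes plus pageInfo."""
    start = 0 if after is None else int(after)
    nodes = connection[start:start + PAGE]
    end = start + len(nodes)
    return {"nodes": nodes,
            "pageInfo": {"hasNextPage": end < len(connection),
                         "endCursor": str(end)}}

def paginate(connection):
    """Loop with after=endCursor until hasNextPage is false, collecting all nodes."""
    nodes, cursor = [], None
    while True:
        page = run_query(connection, after=cursor)
        nodes.extend(page["nodes"])
        if not page["pageInfo"]["hasNextPage"]:
            return nodes
        cursor = page["pageInfo"]["endCursor"]

# Outer loop over reviewThreads, inner loop per thread over its comments:
threads = [{"id": t, "comments": list(range(5))} for t in range(5)]
collected = paginate(threads)
assert len(collected) == 5                 # no threads missed past page 1
for t in collected:
    assert paginate(t["comments"]) == [0, 1, 2, 3, 4]  # no comments missed
```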
