
add mention-in-issue-by-id#582

Merged
strawgate merged 7 commits into main from feat/pr-human-review-labeler
Mar 6, 2026

Conversation


@strawgate strawgate commented Mar 6, 2026

Summary

  • Adds a new reusable workflow, Mention in Issue by ID, to run the agent against a specific issue number.
  • Adds companion docs, a workflow_dispatch example, and a repository-local trigger workflow for that issue-targeted assistant.
  • Adds a new PR Labeler reusable workflow, plus trigger/docs/example, for PR classification labels.
  • Pins actions/github-script@v7 in .github/aw/actions-lock.json, used by the PR Labeler to pre-sanitize label operations.

Mention in Issue by ID workflow

  • New source workflow: .github/workflows/gh-aw-mention-in-issue-by-id.md
  • New compiled lock workflow: .github/workflows/gh-aw-mention-in-issue-by-id.lock.yml
  • New repository-local trigger workflow: .github/workflows/trigger-mention-in-issue-by-id.yml
  • New usage docs: gh-agent-workflows/mention-in-issue-by-id/README.md
  • New install/trigger example: gh-agent-workflows/mention-in-issue-by-id/example.yml

Behavior

  • Accepts target-issue-number and prompt (plus optional model, additional-instructions, setup-commands, messages-footer, and draft-prs).
  • Constrains issue comments to the targeted issue via safe outputs.
  • Allows creating pull requests and issues when the task requires code or follow-up work.
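A repository-local trigger for this workflow might look like the sketch below. Only the input names (target-issue-number, prompt) and the lock-file path come from this PR; the job name, input descriptions, and secrets handling are illustrative assumptions.

```yaml
# Hypothetical trigger sketch; input names and the reusable-workflow path
# come from this PR, everything else is illustrative.
name: Mention in Issue by ID
on:
  workflow_dispatch:
    inputs:
      target-issue-number:
        description: Issue number the agent should act on
        required: true
        type: string
      prompt:
        description: Task for the agent
        required: true
        type: string

jobs:
  run-agent:
    # Call the compiled reusable workflow added in this PR.
    uses: ./.github/workflows/gh-aw-mention-in-issue-by-id.lock.yml
    with:
      target-issue-number: ${{ inputs.target-issue-number }}
      prompt: ${{ inputs.prompt }}
    secrets: inherit
```

The optional inputs (model, additional-instructions, setup-commands, messages-footer, draft-prs) would be passed the same way under `with:`.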

PR Labeler

  • New source workflow: .github/workflows/gh-aw-pr-labeler.md
  • New compiled lock workflow: .github/workflows/gh-aw-pr-labeler.lock.yml
  • New trigger workflow: .github/workflows/trigger-pr-labeler.yml
  • New usage docs: gh-agent-workflows/pr-labeler/README.md
  • New install/trigger example: gh-agent-workflows/pr-labeler/example.yml
  • Requires classification-labels as a configurable input in the reusable workflow.
  • Repository trigger configures classification-labels as small_boom,medium_boom,big_boom and provides a risk rubric via additional-instructions.
  • Sanitizes label add/remove operations to the configured allowlist before applying safe outputs.
  • Supports removing conflicting classification labels from the configured set before applying selected labels from that same set.
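The repository trigger described above might be wired up along these lines. The label names and the classification-labels input come from this PR; the event types and the rubric wording are illustrative assumptions.

```yaml
# Hypothetical sketch of the repository trigger; the label set comes from
# this PR, the additional-instructions text is illustrative.
name: PR Labeler
on:
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  label:
    # Call the compiled reusable workflow added in this PR.
    uses: ./.github/workflows/gh-aw-pr-labeler.lock.yml
    with:
      classification-labels: small_boom,medium_boom,big_boom
      additional-instructions: |
        Size the blast radius of the change: small_boom for low-risk,
        medium_boom for moderate, big_boom for large/high-risk PRs.
    secrets: inherit
```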

The body of this PR is automatically managed by the Trigger Update PR Body workflow.



@coderabbitai

coderabbitai bot commented Mar 6, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Walkthrough

Adds two Copilot-driven GitHub Actions workflows and supporting artifacts: a "Mention in Issue by ID" reusable workflow (workflow_call agent, trigger example, README, and example workflow for manual dispatch) that can comment on or create PRs for a targeted issue; and a "PR Labeler" reusable workflow (agent, trigger, README, and examples) that evaluates PRs and manages a single label from a configured set. Also adds a workflow file to trigger the mention-by-ID flow, updates actions lockfile to include actions/github-script v7, and forwards COPILOT_GITHUB_TOKEN and EXTRA_COMMIT_GITHUB_TOKEN where applicable.

Possibly related PRs

  • Related repository changes previously added workflow secret handling and action lock entries for Copilot-driven workflows.
  • Prior work introduced AI-driven "mention in issue" agent workflows and secret-forwarding patterns that align with the additions in this set.

Comment @coderabbitai help to get the list of available commands and usage tips.


@github-actions github-actions bot added the big_boom Large/high-risk PR blast radius; strong human review required label Mar 6, 2026
@coderabbitai coderabbitai bot left a comment
🧹 Nitpick comments (1)
.github/workflows/gh-aw-pr-labeler.lock.yml (1)

466-469: Avoid eval for setup-commands; run via bash -c with strict flags.

Line 469 uses eval, which adds a second expansion pass and can cause hard-to-debug command behavior.

Suggested hardening:

```diff
       - env:
           SETUP_COMMANDS: ${{ inputs.setup-commands }}
         if: ${{ inputs.setup-commands != '' }}
         name: Repo-specific setup
-        run: eval "$SETUP_COMMANDS"
+        shell: bash
+        run: |
+          set -euo pipefail
+          bash -euo pipefail -c "$SETUP_COMMANDS"
```
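As a minimal illustration of the difference the strict flags make (the SETUP_COMMANDS payload here is made up): `eval` in a default, non-errexit shell runs past a failing command, while `bash -euo pipefail -c` stops at the first failure and returns nonzero.

```shell
#!/usr/bin/env bash
# Made-up setup-commands payload for demonstration only.
SETUP_COMMANDS='false; echo "after the failure"'

# Plain eval in a default shell keeps going past the failing command:
eval "$SETUP_COMMANDS"                    # prints: after the failure

# bash -euo pipefail -c aborts at the first failure and exits nonzero:
if bash -euo pipefail -c "$SETUP_COMMANDS"; then
  echo "setup ok"
else
  echo "setup aborted"                    # prints: setup aborted
fi
```

With untrusted or user-supplied commands, failing fast like this avoids the hard-to-debug partial-setup states the reviewer warns about.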

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 9df429c5-a059-4668-8360-b62d7d13ff6a

📥 Commits

Reviewing files that changed from the base of the PR and between f607fb7 and ba885ce.

📒 Files selected for processing (2)
  • .github/workflows/gh-aw-pr-labeler.lock.yml
  • .github/workflows/gh-aw-pr-labeler.md

@strawgate strawgate merged commit 9ae7e8d into main Mar 6, 2026
22 checks passed
@strawgate strawgate deleted the feat/pr-human-review-labeler branch March 6, 2026 21:16

Labels

big_boom Large/high-risk PR blast radius; strong human review required
