fix(workflows): use direct Claude Code action for AI review (#94)

MarkusNeusinger merged 1 commit into `main`.
Conversation
Codecov Report ✅ All modified and coverable lines are covered by tests.
Pull request overview
This PR refactors the AI review workflow to use the Claude Code action directly instead of posting `@claude` comments via github-script. The change addresses a GitHub limitation where comments made with `GITHUB_TOKEN` don't trigger other workflows such as `util-claude.yml`. However, the refactoring introduces two critical bugs that will prevent the workflow from functioning correctly.
Key Changes:
- Replaced `actions/github-script@v8` with a direct `anthropics/claude-code-action@v1` call
- Converted JavaScript template string interpolation to GitHub Actions expression syntax
- Simplified the multi-line prompt by removing string concatenation
```yaml
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
claude_args: "--model opus"
prompt: |
  ## Task: AI Quality Review for **${{ steps.pr.outputs.library }}** (Attempt ${{ steps.attempts.outputs.count }}/3)
```
The attempt count displayed in the prompt is off by one. The "Check attempt count" step (lines 136-152) returns the current attempt count (0, 1, 2, or 3), but the prompt should show the number of the attempt being performed.
For example, when `steps.attempts.outputs.count` is `"0"`, this is actually the first attempt, so the prompt should show "Attempt 1/3", not "Attempt 0/3".
The previous implementation (using github-script) handled this correctly by calculating `parseInt('${{ steps.attempts.outputs.count }}') + 1`.
Suggested fix:

```yaml
## Task: AI Quality Review for **${{ steps.pr.outputs.library }}** (Attempt ${{ steps.attempts.outputs.count == '0' && '1' || steps.attempts.outputs.count == '1' && '2' || steps.attempts.outputs.count == '2' && '3' || steps.attempts.outputs.count }}/3)
```

Or better yet, add a new step before this one to calculate the next attempt number:

```yaml
- name: Calculate next attempt
  id: next_attempt
  run: echo "number=$((${{ steps.attempts.outputs.count }} + 1))" >> $GITHUB_OUTPUT
```

Then use `${{ steps.next_attempt.outputs.number }}` in the prompt.
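Since the suggested `run:` line is plain POSIX shell arithmetic, it can be sanity-checked outside GitHub Actions by pointing `GITHUB_OUTPUT` at a temp file; the `count` variable below is a hardcoded stand-in for the step output, not part of the workflow:

```shell
# Simulate the suggested "Calculate next attempt" step locally.
GITHUB_OUTPUT="$(mktemp)"
count=0  # stand-in for ${{ steps.attempts.outputs.count }}
echo "number=$((count + 1))" >> "$GITHUB_OUTPUT"
cat "$GITHUB_OUTPUT"  # prints: number=1
```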
````diff
-  \`\`\`markdown
-  ## AI Review - Attempt ${attempt}/3
+  ```markdown
+  ## AI Review - Attempt ${{ steps.attempts.outputs.count }}/3
````
Same issue here: the attempt count is off by one. This line references `steps.attempts.outputs.count`, which contains the current attempt count (0, 1, or 2), but it should display the attempt being performed (1, 2, or 3).
This inconsistency will cause confusion when the AI posts its review with an incorrect attempt number.
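A throwaway shell loop makes the intended mapping explicit (the label-derived count is 0-based, the displayed attempt is 1-based); the variable name is illustrative:

```shell
# Map each possible label count to the attempt number the review should display.
for count in 0 1 2; do
  echo "count=${count} -> Attempt $((count + 1))/3"
done
```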
```diff
 - name: Run Claude AI Quality Review
   if: steps.check.outputs.should_run == 'true' && steps.pr.outputs.skip != 'true' && steps.attempts.outputs.count != '3'
-  uses: actions/github-script@v8
   timeout-minutes: 30
+  uses: anthropics/claude-code-action@v1
```
Missing step to increment the attempt label. The workflow reads the attempt count from PR labels (lines 136-152), but there is no step that adds the `ai-attempt-X` label to record that this attempt has been performed.
In the `gen-update-plot.yml` workflow, this is done at line 222:

```shell
gh pr edit $PR_NUMBER --remove-label "ai-rejected" --add-label "ai-attempt-${ATTEMPT}"
```

Without this step, the attempt counter will always remain at 0, causing the following issues:
- The AI review will always show "Attempt 0/3" (or "Attempt 1/3" if the off-by-one bug is fixed)
- The workflow will never reach the "Mark as failed after 3 attempts" step
- Multiple AI reviews could run in parallel on the same PR
Suggested fix: Add a new step before the AI review to add the attempt label:

```yaml
- name: Update attempt label
  if: steps.check.outputs.should_run == 'true' && steps.pr.outputs.skip != 'true' && steps.attempts.outputs.count != '3'
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: |
    NEXT_ATTEMPT=$((${{ steps.attempts.outputs.count }} + 1))
    gh pr edit ${{ steps.metadata.outputs.pr_number }} --add-label "ai-attempt-${NEXT_ATTEMPT}"
```

## Summary
- Replace `actions/github-script` with a direct `anthropics/claude-code-action@v1` call
- The previous approach posted `@claude` comments, which didn't trigger `util-claude.yml` due to GitHub's token limitation (comments made with `GITHUB_TOKEN` don't trigger other workflows)

## Changes
- `bot-ai-review.yml`: Replace the github-script step with claude-code-action

## Test
- Trigger an AI review on one of the existing PRs to verify