
Debug: Test GitHub Action LLM analysis #2

Merged
patdeg merged 3 commits into main from test/debug-github-action
Jan 24, 2026

Conversation


@patdeg patdeg commented Jan 24, 2026

Summary

Test PR to debug the empty LLM analysis in the commit-analysis workflow.

Changes

Added verbose logging to diagnose why the API response is empty:

  • Log PR metadata (title, author, file count)
  • Capture and display HTTP status code
  • Show response structure keys
  • Display full response when content is empty
  • Better error messages for troubleshooting

Test plan

  • Check workflow logs for API response details
  • Verify the PR comment shows useful debug info

🤖 Generated with Claude Code

- Log PR info (title, author, file count, diff lines)
- Capture HTTP status code from curl
- Log response keys for debugging
- Show full response when content is empty
- Better error handling with informative messages

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
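The logging this commit describes could be sketched as a workflow step like the following. This is a hedged illustration, not the workflow's actual code: `PR_TITLE`, `PR_AUTHOR`, and `FILE_COUNT` are placeholder names (the real step would read them from the GitHub event payload), and the sample response only mimics the shape logged later in this PR.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the debug logging added in this commit.
set -euo pipefail

PR_TITLE="${PR_TITLE:-Debug: Test GitHub Action LLM analysis}"
PR_AUTHOR="${PR_AUTHOR:-patdeg}"
FILE_COUNT="${FILE_COUNT:-1}"

# ::group:: collapses these lines in the GitHub Actions log viewer.
echo "::group::PR debug info"
echo "Title:  $PR_TITLE"
echo "Author: $PR_AUTHOR"
echo "Files:  $FILE_COUNT"
echo "::endgroup::"

# Simulated API response, standing in for the body curl would return.
printf '%s' '{"choices":[{"message":{"content":""}}],"usage":{}}' > /tmp/response.json

# Log the top-level keys so an empty "content" is easy to diagnose.
echo "Response keys: $(jq -r 'keys | join(", ")' /tmp/response.json)"
```

Logging the response keys before extracting `.content` is what later revealed that this model writes its text elsewhere.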
@github-actions

AI Code Review

API Error: HTTP 200 - {"choices":[{"finish_reason":"length","index":0,"logprobs":null,"message":{"content":"","reasoning":"We need to produce a concise review: summary, risk assessment, security, suggestions. Let's examine


Powered by Demeterics with openai/gpt-oss-20b

- Extract from 'reasoning' field if 'content' is empty
- Increase max_tokens from 1000 to 2000 to avoid truncation
- Log finish_reason when content is empty

The openai/gpt-oss-20b model outputs to the 'reasoning' field
instead of 'content', and 1000 tokens was causing truncation.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
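A minimal sketch of this fallback, assuming the response has been saved to /tmp/response.json as in the workflow; the jq expression is an assumption, and the sample payload only mirrors the response shape shown in the bot comment above.

```shell
#!/usr/bin/env bash
# Sketch of the 'reasoning' fallback. The sample response mimics what
# gpt-oss-20b returned in this PR: empty "content", text in "reasoning".
set -euo pipefail

cat > /tmp/response.json <<'EOF'
{"choices":[{"finish_reason":"length","index":0,"message":{"content":"","reasoning":"Example review text"}}]}
EOF

# Prefer .content; fall back to .reasoning when content is empty or null.
ANALYSIS=$(jq -r '.choices[0].message | if (.content // "") == "" then .reasoning else .content end' /tmp/response.json)

# Log finish_reason: "length" means the reply was cut off by max_tokens,
# which is why the commit also raises max_tokens from 1000 to 2000.
FINISH=$(jq -r '.choices[0].finish_reason' /tmp/response.json)
echo "finish_reason: $FINISH"
echo "$ANALYSIS"
```

Here the script prints `finish_reason: length` followed by the recovered text, which is exactly the diagnostic the commit message calls for.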
@github-actions

AI Code Review

API Error: HTTP 200 - {"choices":[{"finish_reason":"stop","index":0,"logprobs":null,"message":{"content":"## 1. Summary \nThe PR adds extensive debugging output to the commit-analysis.yml workflow:\n\n- Prints PR title,


Powered by Demeterics with openai/gpt-oss-20b

Use curl -o to write the response to a file and -w for the status code,
avoiding the need to parse them out of a combined output.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
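The curl pattern this commit describes can be sketched as below. The URL is a stand-in (a local file:// target so the snippet runs without the real endpoint); for non-HTTP transfers curl reports the status code as 000.

```shell
#!/usr/bin/env bash
# Sketch of the -o/-w split: -o writes the body to a file, -w prints only
# the status code to stdout, so neither has to be parsed out of the other.
# API_URL is a placeholder; the real workflow hits the LLM endpoint.
set -euo pipefail

API_URL="${API_URL:-file:///dev/null}"

HTTP_CODE=$(curl -s -o /tmp/response.json -w '%{http_code}' "$API_URL")
echo "HTTP status: $HTTP_CODE"

# The intact body is now in /tmp/response.json, ready for jq, e.g.:
# jq -r '.choices[0].message.content' /tmp/response.json
```

Compared with capturing body and status in one stream, this keeps the JSON byte-for-byte intact, which matters when the body itself is what needs debugging.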
@github-actions

AI Code Review

1. Summary

  • Added a debug group that prints PR title, author, file count, and diff line count.
  • Increased the LLM max_tokens from 1000 to 2000.
  • Re‑worked the API call to capture the HTTP status code and write the raw response to /tmp/response.json.
  • Added detailed HTTP status and response‑key logging.
  • Implemented robust error handling:
    • Check HTTP status first.
    • Detect API‑level errors (.error).
    • Fall back to the `reasoning` field when `content` is empty.

Powered by Demeterics with openai/gpt-oss-20b

@patdeg patdeg merged commit 206c968 into main Jan 24, 2026
1 check passed
@patdeg patdeg deleted the test/debug-github-action branch January 24, 2026 03:58