fix(app): show evidence block for failed automation runs #2645
Merged
Conversation
The automation-run detail card in Compliance → Task → Integration Checks rendered a "View Evidence" expandable JSON tree for every *passing* result but never for a *failing* one — even though the backend saves the same `evidence` payload for both and the API returns it identically. After the Dependabot severity-gating change (#2643), failing runs surface useful context in their evidence (open_by_severity breakdown, checked_at, etc.) that users need to understand *why* the check failed. Hiding it behind a UI inconsistency defeats that. Mirror the passing block's `details > EvidenceJsonView` pattern onto the findings map so both states render identically.
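A minimal sketch of the corrected gating logic, independent of the component markup. Names like `AutomationRun` and `shouldRenderEvidence` are illustrative; only `EvidenceJsonView`, the `evidence` payload, and its `open_by_severity`/`checked_at` fields come from this PR.

```typescript
// Sketch of the evidence-gating fix: show the "View Evidence" block for any
// run that carries an evidence payload, not only for passing ones.
interface AutomationRun {
  status: "pass" | "fail";
  evidence?: Record<string, unknown>;
}

// Before: the condition was effectively `run.status === "pass" && run.evidence != null`.
// After: passing and failing runs are treated identically.
function shouldRenderEvidence(run: AutomationRun): boolean {
  return run.evidence != null;
}

const failingRun: AutomationRun = {
  status: "fail",
  evidence: {
    open_by_severity: { critical: 1, high: 2 }, // values illustrative
    checked_at: "2026-01-15T09:30:00Z",
  },
};

console.log(shouldRenderEvidence(failingRun)); // true — failing runs now show evidence too
```

In the component itself, the same condition would wrap a `details`/`summary` element containing `EvidenceJsonView`, mirroring the passing branch.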
🎉 This PR is included in version 3.30.0 🎉 The release is available on GitHub releases. Your semantic-release bot 📦🚀
Summary
Follow-up to #2643. The automation-run detail card in Compliance → Task → Integration Checks rendered a View Evidence expandable JSON tree for every passing result but never for a failing one — even though the backend saves the same `evidence` payload in both cases and the API returns it identically.
After the Dependabot severity gating in #2643, failing runs surface exactly the context users need to understand them: the `open_by_severity` breakdown, `checked_at`, and the `alerts` totals. Hiding that JSON for failures defeats the point of showing the evidence at all.
This PR mirrors the passing block's `details > EvidenceJsonView` pattern onto the findings map so both states render identically. 12-line diff, one file.

Trace of the pre-existing inconsistency:
Test plan
Why not also write a unit test?
`TaskIntegrationChecks.tsx` is a 1432-line component with no existing test file of its own. Adding one purely for this 12-line conditional would be out-of-scope scaffolding; the sibling component tests (`SingleTask.test.tsx`, etc.) don't reach into this code path. Happy to add one if you'd prefer — flag it in review.
🤖 Generated with Claude Code
Summary by cubic
Show the "View Evidence" JSON block for failing automation runs in Compliance → Task → Integration Checks, not only for passing results. This fixes the UI inconsistency and lets users see the same evidence (e.g., open_by_severity, checked_at) to understand failures.
Written for commit 975b4c9. Summary will update on new commits.