Improve portfolio yield telemetry ingestion and evidence classification #31483
Conversation
Co-authored-by: mnkiefer <8320933+mnkiefer@users.noreply.github.com>
@copilot Recompile workflow
Co-authored-by: mnkiefer <8320933+mnkiefer@users.noreply.github.com>
Done in
Hey! One thing is holding this back from being review-ready:
If you'd like a hand, you can assign this prompt to your coding agent:
@copilot Update the PR body for PR #31483 in github/gh-aw to include a proper description.
Pull request overview
Updates the portfolio yield workflow to ingest telemetry from actual GitHub Actions run data and refines evidence scoring/coverage to distinguish declared observability from observed and validated telemetry signals.
Changes:
- Extend precompute telemetry normalization to separate `metrics`, `observed`, `validated`, and `source` fields, and update workflow and portfolio evidence scoring and coverage metrics accordingly.
- Update postcompute to derive portfolio evidence quality from observed/validated coverage and surface the split coverage metrics in payloads/reports.
- Add a workflow pre-agent step to snapshot recent workflow run telemetry into JSON and pass it into precompute; regenerate the compiled workflow artifact.
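As a rough sketch of the normalization described above (function and field names here are illustrative, not the actual `aw_yield_precompute.py` API):

```python
# Illustrative sketch of splitting a raw telemetry entry into the four
# structured fields (metrics, observed, validated, source). The function
# name and defaults are hypothetical, not the repo's real code.
def normalize_telemetry(entry: dict) -> dict:
    """Normalize one raw telemetry entry into structured fields."""
    return {
        "metrics": entry.get("metrics", []),       # declared observability
        "observed": bool(entry.get("observed")),   # signal seen in run data
        "validated": bool(entry.get("validated")), # signal cross-checked against runs
        "source": entry.get("source", "declared"), # e.g. "declared" vs "runs"
    }

normalized = normalize_telemetry(
    {"metrics": ["latency_p95"], "observed": True, "source": "runs"}
)
# normalized["validated"] is False because no validation signal was present
```

Separating the fields this way is what lets later scoring treat "we declared a metric" differently from "we actually saw it in run data."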
Summary per file:
| File | Description |
|---|---|
| `scripts/aw_yield_precompute.py` | Normalizes telemetry entries into structured fields and updates evidence/coverage scoring logic. |
| `scripts/aw_yield_postcompute.py` | Computes derived evidence quality and reports declared/observed/validated telemetry coverage. |
| `tests/test_aw_yield_precompute.py` | Adds tests for telemetry normalization, evidence quality behavior, and split coverage metrics. |
| `tests/test_aw_yield_postcompute.py` | Updates fixtures/assertions to validate split coverage reporting and confidence rules. |
| `.github/workflows/aw-portfolio-yield.md` | Adds a pre-agent telemetry snapshot step and wires its output into precompute via an env var. |
| `.github/workflows/aw-portfolio-yield.lock.yml` | Recompiled workflow including the new telemetry snapshot step and env wiring. |
| `.github/workflows/issue-monster.lock.yml` | Minor comment-only change in the compiled workflow artifact. |
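A minimal sketch of how postcompute might map the split coverage numbers to a derived evidence quality tier (the thresholds and tier names are assumptions for illustration, not the real scoring rules):

```python
def evidence_quality(observed_coverage: float, validated_coverage: float) -> str:
    """Map observed/validated telemetry coverage to a quality tier.
    Thresholds (0.5) and tier names are illustrative placeholders."""
    if validated_coverage >= 0.5:
        return "validated"   # majority of signals cross-checked against runs
    if observed_coverage >= 0.5:
        return "observed"    # majority of signals seen in run data
    return "declared"        # only declared observability, little corroboration

# Validated coverage dominates when both are high
assert evidence_quality(0.8, 0.6) == "validated"
```

The point of the tiered mapping is that a workflow with only declared metrics never scores as highly as one whose telemetry was actually observed or validated.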
Copilot's findings
- Files reviewed: 7/7 changed files
- Comments generated: 2
Co-authored-by: mnkiefer <8320933+mnkiefer@users.noreply.github.com>
@copilot Do not use Python or bash; use actions/github-script with JavaScript.
For this PR change, the telemetry collection/update logic is implemented in
✨ Enhancement
What does this improve?
This PR improves how portfolio-yield telemetry is ingested and interpreted so workflow evidence quality is based on clearer, more reliable signals.
Affected components:
- `scripts/aw_yield_precompute.py` (normalizes telemetry into `metrics`, `observed`, `validated`, `source`)
- `scripts/aw_yield_postcompute.py`
- `.github/workflows/aw-portfolio-yield.md` (+ compiled lock file)
- Telemetry snapshot step implemented with `actions/github-script` (JavaScript)

Why is this valuable?
The changes reduce telemetry mis-association risk, improve confidence in evidence scoring, and make portfolio-level governance recommendations more trustworthy by distinguishing declared observability from observed and validated signals.
Implementation approach:
1. Inspect the targeted review comment and affected code path
2. Run existing lint/build/tests to establish a baseline before changes
3. Apply the minimal code change for the specific comment
4. Run targeted tests for the changed area
5. Reply to the review comment with the addressing commit hash
6. Run final validation (code review + CodeQL)

Testing
- Existing unit tests run for baseline validation.
- Targeted regression coverage includes telemetry path resolution behavior (including duplicate filename scenarios in different directories).
- Workflow compilation/validation run for `aw-portfolio-yield` to confirm lock-file consistency and a valid generated workflow.
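The duplicate-filename scenario mentioned under Testing could be exercised with something like the following (the resolver helper and test are hypothetical sketches, not the repo's actual code):

```python
from pathlib import Path

def resolve_telemetry_path(root: Path, filename: str) -> Path:
    """Return the unique file matching filename under root; raise if the
    name is ambiguous across directories. Hypothetical helper, not the
    repo's real implementation."""
    matches = sorted(root.rglob(filename))
    if len(matches) != 1:
        raise ValueError(f"expected exactly one {filename}, found {len(matches)}")
    return matches[0]

def test_duplicate_filenames_in_different_dirs(tmp_path):
    # Two files with the same name in different directories must be
    # rejected rather than silently mis-associated.
    (tmp_path / "a").mkdir()
    (tmp_path / "b").mkdir()
    (tmp_path / "a" / "telemetry.json").write_text("{}")
    (tmp_path / "b" / "telemetry.json").write_text("{}")
    try:
        resolve_telemetry_path(tmp_path, "telemetry.json")
        assert False, "ambiguous path should raise"
    except ValueError:
        pass
```

Failing loudly on ambiguity is one way to address the telemetry mis-association risk the PR description calls out.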