# feat: drop-in observability kit with audit comparison and behavioral signals (#22711)
Merged: +4,357 −105
19 commits:

- `d01144d` Improve agentic audit baselines and execution observability (mnkiefer)
- `a8b86d6` Merge branch 'main' into obs-tools (mnkiefer)
- `80942d2` Merge branch 'main' into obs-tools (pelikhan)
- `b22b7cd` Merge branch 'main' into obs-tools (pelikhan)
- `5abdb23` fix: address review comments for observability audit improvements (Copilot)
- `d32e410` enhance audit comparison and reporting with task domain & behavior fi… (mnkiefer)
- `89f4ad9` simplify string checks and error handling (mnkiefer)
- `1b1b538` Merge branch 'main' into obs-tools (mnkiefer)
- `41ce17b` Merge branch 'main' into obs-tools (pelikhan)
- `b27da9c` Merge branch 'main' into obs-tools (mnkiefer)
- `97cf927` rm observability policy cmd and related (mnkiefer)
- `6e7fd2e` Merge branch 'main' into obs-tools (mnkiefer)
- `3a1c0b3` clean up (mnkiefer)
- `7baf503` rm docs (mnkiefer)
- `c1ac9aa` avoid unnecessary refactoring (mnkiefer)
- `80b6893` update agentic observability kit (mnkiefer)
- `65499a9` add episode and DAG model details (mnkiefer)
- `2532822` add deterministic episode model and related fields to logs (mnkiefer)
- `ee077e2` fix lint error (mnkiefer)
`.github/workflows/agentic-observability-kit.lock.yml`: 1,243 additions, 0 deletions (large diff, not rendered by default)
A new 238-line markdown workflow file:
```yaml
---
description: Drop-in observability kit for repositories using agentic workflows
on:
  schedule: weekly on monday around 08:00
  workflow_dispatch:
permissions:
  contents: read
  actions: read
  issues: read
  pull-requests: read
  discussions: read
engine: copilot
strict: true
tracker-id: agentic-observability-kit
tools:
  agentic-workflows:
  github:
    toolsets: [default, discussions]
safe-outputs:
  mentions: false
  allowed-github-references: []
  concurrency-group: "agentic-observability-kit-safe-outputs"
  create-discussion:
    expires: 7d
    category: "audits"
    title-prefix: "[observability] "
    max: 1
    close-older-discussions: true
  create-issue:
    title-prefix: "[observability escalation] "
    labels: [agentics, warning, observability]
    close-older-issues: true
    max: 1
  noop:
    report-as-issue: false
timeout-minutes: 30
imports:
  - shared/reporting.md
---
```
# Agentic Observability Kit

You are an agentic workflow observability analyst. Produce one executive report that teams can read quickly, and create at most one escalation issue, only when repeated patterns show that repository owners need to take action.

## Mission

Review recent agentic workflow runs and surface the signals that matter operationally:

1. Repeated drift away from a successful baseline
2. Weak control patterns such as a new write posture, new MCP failures, or more blocked requests
3. Resource-heavy runs that are expensive for the domain they serve
4. Stable but low-value agentic runs that may be better served as deterministic automation
5. Delegated workflows that lost continuity or no longer behave like a consistent cohort

Always create a discussion with the full report. Create an escalation issue only when repeated, actionable problems need durable owner follow-up.

## Data Collection Rules

- Use the `agentic-workflows` MCP tool, not shell commands.
- Start with the `logs` tool over the last 14 days.
- Leave `workflow_name` empty so you analyze the full repository.
- Use a `count` large enough to cover the repository, typically `300`.
- Use the `audit` tool only for up to 3 runs that need deeper inspection.
- If there are very few runs, still produce a report and explain the limitation.

## Deterministic Episode Model

The logs JSON now includes deterministic lineage fields:

- `episodes[]` for aggregated execution episodes
- `edges[]` for lineage edges between runs

Treat those structures as the primary source of truth for graph shape, confidence, and episode rollups.

Prefer `episodes[]` and `edges[]` over reconstructing DAGs from raw runs in prompt space. Only fall back to per-run interpretation when episode data is absent or clearly incomplete.
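The episode-first rule can be sketched in plain JavaScript. The `episodes[]` field names follow the logs JSON described in this file; the sample payload and the 0.7 confidence cutoff are illustrative assumptions, not part of the tool's contract:

```javascript
// Prefer precomputed episodes over per-run DAG reconstruction, keeping only
// episodes whose confidence clears a threshold (0.7 chosen for illustration).
function highConfidenceEpisodes(logs, threshold = 0.7) {
  const episodes = Array.isArray(logs.episodes) ? logs.episodes : [];
  return episodes.filter(
    ep => typeof ep.confidence === "number" && ep.confidence >= threshold
  );
}

// Hypothetical logs payload shaped like the fields listed in this document.
const sampleLogs = {
  episodes: [
    { episode_id: "ep-1", confidence: 0.9, run_ids: [101, 102], total_runs: 2 },
    { episode_id: "ep-2", confidence: 0.4, run_ids: [103], total_runs: 1 },
  ],
  edges: [{ edge_type: "workflow_run", confidence: 0.9, reasons: ["shared trace id"] }],
};

console.log(highConfidenceEpisodes(sampleLogs).map(ep => ep.episode_id)); // [ 'ep-1' ]
```

Falling back to per-run interpretation then only happens when this filter returns nothing, which matches the "absent or clearly incomplete" rule above.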
## Signals To Use

The logs JSON already contains the main agentic signals. Prefer these fields over ad hoc heuristics:

- `episodes[].episode_id`
- `episodes[].kind`
- `episodes[].confidence`
- `episodes[].reasons[]`
- `episodes[].root_run_id`
- `episodes[].run_ids[]`
- `episodes[].workflow_names[]`
- `episodes[].total_runs`
- `episodes[].total_tokens`
- `episodes[].total_estimated_cost`
- `episodes[].total_duration`
- `episodes[].risky_node_count`
- `episodes[].write_capable_node_count`
- `episodes[].mcp_failure_count`
- `episodes[].blocked_request_count`
- `episodes[].risk_distribution`
- `edges[].edge_type`
- `edges[].confidence`
- `edges[].reasons[]`
- `task_domain.name` and `task_domain.label`
- `behavior_fingerprint.execution_style`
- `behavior_fingerprint.tool_breadth`
- `behavior_fingerprint.actuation_style`
- `behavior_fingerprint.resource_profile`
- `behavior_fingerprint.dispatch_mode`
- `agentic_assessments[].kind`
- `agentic_assessments[].severity`
- `context.repo`
- `context.run_id`
- `context.workflow_id`
- `context.workflow_call_id`
- `context.event_type`
- `comparison.baseline.selection`
- `comparison.baseline.matched_on[]`
- `comparison.classification.label`
- `comparison.classification.reason_codes[]`
- `comparison.recommendation.action`

Treat these values as the canonical signals for reporting.
## Interpretation Rules

- Use episode-level analysis first. Do not treat connected runs as unrelated when `episodes[]` already groups them.
- Use per-run detail only to explain which nodes contributed to an episode-level problem.
- If an episode has low confidence, say so explicitly and avoid overconfident causal claims.
- If delegated workers look risky in isolation but the enclosing episode looks intentional and well-controlled, say so.
- If the deterministic episode model appears incomplete or is missing expected lineage, report that as an observability finding.

## Reporting Model

The discussion must stay concise and operator-friendly.

### Visible Summary

Keep these sections visible:

1. `### Executive Summary`
2. `### Key Metrics`
3. `### Highest Risk Episodes`
4. `### Recommended Actions`

Include small numeric summaries such as:

- workflows analyzed
- runs analyzed
- episodes analyzed
- high-confidence episodes analyzed
- runs with `comparison.classification.label == "risky"`
- runs with medium or high `agentic_assessments`
- workflows with repeated `overkill_for_agentic`
- workflows whose comparisons mostly fell back to `latest_success`
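Two of the counters above can be derived mechanically from per-run log entries. The `comparison` and `agentic_assessments` paths mirror the signal list in this file; the overall run shape and the sample data are assumptions for illustration:

```javascript
// Compute a subset of the Key Metrics counters from an array of per-run
// log entries (shape assumed from the signals listed above).
function summarizeRuns(runs) {
  let riskyRuns = 0;      // runs classified "risky" against their baseline
  let elevatedRuns = 0;   // runs with at least one medium/high assessment
  for (const run of runs) {
    if (run?.comparison?.classification?.label === "risky") riskyRuns++;
    const assessments = Array.isArray(run?.agentic_assessments) ? run.agentic_assessments : [];
    if (assessments.some(a => a?.severity === "medium" || a?.severity === "high")) elevatedRuns++;
  }
  return { runsAnalyzed: runs.length, riskyRuns, elevatedRuns };
}

// Hypothetical sample: one risky run with a high-severity assessment, one stable run.
const sampleRuns = [
  {
    comparison: { classification: { label: "risky" } },
    agentic_assessments: [{ kind: "poor_agentic_control", severity: "high" }],
  },
  { comparison: { classification: { label: "stable" } }, agentic_assessments: [] },
];

console.log(summarizeRuns(sampleRuns)); // { runsAnalyzed: 2, riskyRuns: 1, elevatedRuns: 1 }
```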
### Details

Put detailed per-workflow breakdowns inside `<details>` blocks.

### What Good Reporting Looks Like

For each highlighted episode or workflow, explain:

- what domain it appears to belong to
- what its behavioral fingerprint looks like
- whether the deterministic graph shows an orchestrated DAG or a delegated episode
- whether the actor, cost, and risk belong to the workflow itself or to a larger chain
- what the episode confidence level is and why
- whether it is stable against a cohort match or only compared to the latest success
- whether the risky behavior is new, repeated, or likely intentional
- what a team should change next

## Escalation Thresholds

Use the discussion as the complete source of truth for all qualifying workflows and episodes. Only create an escalation issue when one or more episodes or workflows cross these thresholds in the last 14 days:

1. Two or more runs for the same workflow have `comparison.classification.label == "risky"`.
2. Two or more runs for the same workflow contain `new_mcp_failure` or `blocked_requests_increase` in `comparison.classification.reason_codes`.
3. Two or more runs for the same workflow contain a medium or high severity `resource_heavy_for_domain` assessment.
4. Two or more runs for the same workflow contain a medium or high severity `poor_agentic_control` assessment.

Do not open one issue per workflow. Create at most one escalation issue for the whole run.

If no workflow crosses these thresholds, do not create an escalation issue.

If one or more workflows do cross them, create a single escalation issue that groups the highest-value follow-up work for repository owners. The issue should summarize the workflows that need attention now, why they crossed the thresholds, and what change is recommended first.

Prefer escalating at the episode level when multiple risky runs are part of one coherent DAG. Only fall back to workflow-level escalation when no broader episode can be established with acceptable confidence.
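The first threshold (two or more risky runs for the same workflow) is simple enough to express directly. The `comparison` path follows the signals in this file; the `workflow_name` key on each run is an assumed field for illustration:

```javascript
// Return workflow names with two or more runs classified "risky" in the
// analysis window, i.e. workflows crossing escalation threshold 1.
function workflowsCrossingRiskyThreshold(runs) {
  const riskyCounts = new Map();
  for (const run of runs) {
    if (run?.comparison?.classification?.label !== "risky") continue;
    const name = run.workflow_name || "unknown"; // assumed per-run field
    riskyCounts.set(name, (riskyCounts.get(name) || 0) + 1);
  }
  return [...riskyCounts.entries()]
    .filter(([, count]) => count >= 2)
    .map(([name]) => name)
    .sort();
}

// Hypothetical window: two risky runs of one workflow, one risky run of another.
const windowRuns = [
  { workflow_name: "nightly-triage", comparison: { classification: { label: "risky" } } },
  { workflow_name: "nightly-triage", comparison: { classification: { label: "risky" } } },
  { workflow_name: "docs-bot", comparison: { classification: { label: "risky" } } },
];

console.log(workflowsCrossingRiskyThreshold(windowRuns)); // [ 'nightly-triage' ]
```

A single risky run, as with `docs-bot` here, stays in the discussion report and does not trigger an escalation issue.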
## Optimization Candidates

Do not create issues for these by default. Report them in the discussion unless they are severe and repeated:

- repeated `overkill_for_agentic`
- workflows that are consistently `lean`, `directed`, and `narrow`
- workflows that are always compared using `latest_success` instead of `cohort_match`

These are portfolio cleanup opportunities, not immediate incidents.

## Use Of Audit

Use `audit` only when the logs summary is not enough to explain a top problem. Good audit candidates are:

- the newest risky run for a workflow with repeated warnings
- a run with a new MCP failure
- a run that changed from a read-only to a write-capable posture

When you use `audit`, fold the extra evidence back into the report instead of dumping raw output.

## Output Requirements

### Discussion

Always create one discussion that includes:

- the date range analyzed
- any important orchestrator, worker, or workflow_run chains that materially change interpretation
- the most important inferred episodes and their confidence levels
- all workflows that crossed the escalation thresholds
- the workflows with the clearest repeated risk
- the most common assessment kinds
- a short list of deterministic candidates
- a short list of workflows that need owner attention now

The discussion should cover all qualifying workflows even when no escalation issue is created.

### Issues

Only create an escalation issue when at least one workflow crossed the escalation thresholds. When you do:

- create one issue for the whole run, not one issue per workflow
- use a concrete title that signals repository-level owner attention is needed
- group the escalated workflows in priority order
- explain the evidence with run counts and the specific assessment or comparison reason codes
- include the most relevant recommendation for each escalated workflow
- link up to 3 representative runs across the highest-priority workflows
- keep the issue concise enough to function as a backlog item, with the full detail living in the discussion

### No-op

If the repository has no recent runs or no report can be produced, call `noop` with a short explanation. Otherwise do not use `noop`.
A second new file, a 133-line github-script helper:
```javascript
// @ts-check
/// <reference types="@actions/github-script" />

const fs = require("fs");

const AW_INFO_PATH = "/tmp/gh-aw/aw_info.json";
const AGENT_OUTPUT_PATH = "/tmp/gh-aw/agent_output.json";
const gatewayEventPaths = ["/tmp/gh-aw/mcp-logs/gateway.jsonl", "/tmp/gh-aw/mcp-logs/rpc-messages.jsonl"];

function readJSONIfExists(path) {
  if (!fs.existsSync(path)) {
    return null;
  }

  try {
    return JSON.parse(fs.readFileSync(path, "utf8"));
  } catch {
    return null;
  }
}

function countBlockedRequests() {
  let total = 0;

  for (const path of gatewayEventPaths) {
    if (!fs.existsSync(path)) {
      continue;
    }

    const lines = fs.readFileSync(path, "utf8").split("\n");
    for (const raw of lines) {
      const line = raw.trim();
      if (!line) continue;
      try {
        const entry = JSON.parse(line);
        if (entry && entry.type === "DIFC_FILTERED") total++;
      } catch {
        // skip malformed lines
      }
    }
  }

  return total;
}

function uniqueCreatedItemTypes(items) {
  const types = new Set();

  for (const item of items) {
    if (item && typeof item.type === "string" && item.type.trim() !== "") {
      types.add(item.type);
    }
  }

  return [...types].sort();
}

function collectObservabilityData() {
  const awInfo = readJSONIfExists(AW_INFO_PATH) || {};
  const agentOutput = readJSONIfExists(AGENT_OUTPUT_PATH) || { items: [], errors: [] };
  const items = Array.isArray(agentOutput.items) ? agentOutput.items : [];
  const errors = Array.isArray(agentOutput.errors) ? agentOutput.errors : [];
  const traceId = awInfo.context && typeof awInfo.context.workflow_call_id === "string" ? awInfo.context.workflow_call_id : "";

  return {
    workflowName: awInfo.workflow_name || "",
    engineId: awInfo.engine_id || "",
    traceId,
    staged: awInfo.staged === true,
    firewallEnabled: awInfo.firewall_enabled === true,
    createdItemCount: items.length,
    createdItemTypes: uniqueCreatedItemTypes(items),
    outputErrorCount: errors.length,
    blockedRequests: countBlockedRequests(),
  };
}

function buildObservabilitySummary(data) {
  const posture = data.createdItemCount > 0 ? "write-capable" : "read-only";
  const lines = [];

  lines.push("<details>");
  lines.push("<summary><b>Observability</b></summary>");
  lines.push("");

  if (data.workflowName) {
    lines.push(`- **workflow**: ${data.workflowName}`);
  }
  if (data.engineId) {
    lines.push(`- **engine**: ${data.engineId}`);
  }
  if (data.traceId) {
    lines.push(`- **trace id**: ${data.traceId}`);
  }

  lines.push(`- **posture**: ${posture}`);
  lines.push(`- **created items**: ${data.createdItemCount}`);
  lines.push(`- **blocked requests**: ${data.blockedRequests}`);
  lines.push(`- **agent output errors**: ${data.outputErrorCount}`);
  lines.push(`- **firewall enabled**: ${data.firewallEnabled}`);
  lines.push(`- **staged**: ${data.staged}`);

  if (data.createdItemTypes.length > 0) {
    lines.push("- **item types**:");
    for (const itemType of data.createdItemTypes) {
      lines.push(`  - ${itemType}`);
    }
  }

  lines.push("");
  lines.push("</details>");

  return lines.join("\n") + "\n";
}

async function main(core) {
  const mode = process.env.GH_AW_OBSERVABILITY_JOB_SUMMARY || "";
  if (mode !== "on") {
    core.info(`Skipping observability summary: mode=${mode || "unset"}`);
    return;
  }

  const data = collectObservabilityData();
  const markdown = buildObservabilitySummary(data);
  await core.summary.addRaw(markdown).write();
  core.info("Generated observability summary in step summary");
}

module.exports = {
  buildObservabilitySummary,
  collectObservabilityData,
  main,
};
```
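A sketch of how the summary builder might be exercised in a unit test. Since the helper's file path is not shown in this diff, a reduced copy of the posture and markdown logic is inlined here instead of `require`-ing the real module; the data shape matches what `collectObservabilityData()` returns:

```javascript
// Reduced, self-contained copy of buildObservabilitySummary's core logic,
// for illustration only: posture derivation plus the <details> wrapper.
function buildSummarySketch(data) {
  const posture = data.createdItemCount > 0 ? "write-capable" : "read-only";
  const lines = ["<details>", "<summary><b>Observability</b></summary>", ""];
  if (data.workflowName) lines.push(`- **workflow**: ${data.workflowName}`);
  lines.push(`- **posture**: ${posture}`);
  lines.push(`- **blocked requests**: ${data.blockedRequests}`);
  lines.push("", "</details>");
  return lines.join("\n") + "\n";
}

// A run that created two safe-output items is reported as write-capable.
const markdown = buildSummarySketch({ workflowName: "ci-triage", createdItemCount: 2, blockedRequests: 0 });
console.log(markdown.includes("- **posture**: write-capable")); // true
```

The write-capable/read-only posture here is inferred purely from whether the agent produced safe-output items, which is what makes the "changed from read-only to write-capable" audit signal in the workflow prompt cheap to compute.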