Conversation
…ality-thresholds Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/c43585a0-0805-44b4-aa7a-e6bba40b51a7 Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
… templates Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/c43585a0-0805-44b4-aa7a-e6bba40b51a7 Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
🏷️ Automatic Labeling Summary

This PR has been automatically labeled based on the files changed and PR metadata. Applied labels: `documentation`, `size-xl`.
🔍 Lighthouse Performance Audit
📥 Download full Lighthouse report. Budget Compliance: Performance budgets enforced via …
Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/bcce6abb-c703-4e9d-837c-2fc62c2c5e78 Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
Pull request overview
Note: Copilot was unable to run its full agentic suite in this review.
Adds and syncs analysis methodology references and supplementary templates (operational + analytical) with updated gate/pipeline prompts and depth-floor thresholds.
Changes:
- Added 7 operational supplementary templates (index, quality audit, MCP audit, workflow audit, diffs, session intelligence, baseline) and wired them into prompts.
- Added 4 analytical supplementary templates (PESTLE, political STRIDE, wildcards/black-swans, quantitative SWOT) plus a governing methodology and threshold floors.
- Updated reference/meta methodologies (artifact catalog, per-artifact methodologies, indicator mappings) and thresholds JSON.
Reviewed changes
Copilot reviewed 22 out of 22 changed files in this pull request and generated 6 comments.
Summary per file:
| File | Description |
|---|---|
| analysis/templates/workflow-audit.md | New operational template for self-auditing module execution and principle compliance |
| analysis/templates/wildcards-blackswans.md | New analytical supplementary template for tail-risk events and resilience |
| analysis/templates/session-baseline.md | New operational template for calendar + adopted texts + vote rosters |
| analysis/templates/reference-analysis-quality.md | New operational template for benchmark scoring and Pass-2 actions |
| analysis/templates/quantitative-swot.md | New analytical supplementary template for scored SWOT + sensitivity/TOWS |
| analysis/templates/political-stride-assessment.md | New analytical supplementary STRIDE variant for political/institutional surfaces |
| analysis/templates/pestle-analysis.md | New analytical supplementary PESTLE template with IMF/WB integration |
| analysis/templates/mcp-reliability-audit.md | New operational template for MCP endpoint health/freshness and incidents |
| analysis/templates/cross-session-intelligence.md | New operational template for session-over-session momentum narrative |
| analysis/templates/cross-run-diff.md | New operational template for Bayesian per-run deltas |
| analysis/templates/analysis-index.md | New operational template for run inventory + reading order |
| analysis/templates/README.md | Documented the new operational + analytical supplementary template groups |
| analysis/methodologies/worldbank-indicator-mapping.md | Added WB non-economic indicator mapping and Wave-2 split rules |
| analysis/methodologies/reference-quality-thresholds.json | Added analytical supplementary floors and updated per-type thresholds |
| analysis/methodologies/per-artifact-methodologies.md | Added per-artifact sections for supplementary + analytical supplementary |
| analysis/methodologies/imf-indicator-mapping.md | Added IMF economic indicator mapping and citation/vintage rules |
| analysis/methodologies/artifact-catalog.md | Added catalog rows for supplementary + analytical supplementary artifacts |
| analysis/methodologies/analytical-supplementary-methodology.md | New methodology governing analytical supplementary templates |
| analysis/methodologies/README.md | Updated methodology index to include new meta + analytical supplementary docs |
| .github/prompts/README.md | Synced prompt readme to point to new catalog/methodologies/templates |
| .github/prompts/05-analysis-gate.md | Added supplementary checks section and a suggested bash probe |
| .github/prompts/04-analysis-pipeline.md | Added pointers/tables for operational + analytical supplementary artifacts |
```bash
if [[ "${ARTICLE_TYPE:-}" =~ ^(comprehensive|weekly-review|monthly-review)$ ]]; then
  SUPP=(analysis-index.md reference-analysis-quality.md mcp-reliability-audit.md workflow-audit.md)
  [[ "${ARTICLE_TYPE}" =~ ^(weekly-review|monthly-review)$ ]] && SUPP+=(cross-session-intelligence.md session-baseline.md)
  for f in "${SUPP[@]}"; do
    [ -s "$ANALYSIS_DIR/$f" ] || { echo "❌ supplementary missing for $ARTICLE_TYPE: $f"; FAIL=1; }
  done
fi
```
The suggested probe gates on ARTICLE_TYPE containing comprehensive, but comprehensive is described as a run tier (and does not appear as an article type in reference-quality-thresholds.json). This makes the check either dead or incorrectly triggered. Recommend switching the condition to whatever variable actually encodes run tier (or removing comprehensive from the regex), and ensuring all aggregation workflows mentioned elsewhere (e.g., quarterly/motions where applicable) are included in the blocking condition.
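A minimal sketch of the reviewer's recommendation, assuming the run tier is carried in `ANALYSIS_TIER` (the variable the probe elsewhere in this PR uses) and that `ANALYSIS_DIR` and `FAIL` follow the original snippet's conventions:

```shell
# Sketch only: gate on ANALYSIS_TIER for the comprehensive case instead of
# matching "comprehensive" inside ARTICLE_TYPE (which encodes article types,
# not run tiers). Variable names mirror the snippet above; a quarterly
# aggregation type, if one exists, would also need to be added to the regex.
ANALYSIS_DIR="${ANALYSIS_DIR:-.}"
FAIL=0
IS_AGGREGATION=0
[[ "${ARTICLE_TYPE:-}" =~ ^(weekly-review|monthly-review)$ ]] && IS_AGGREGATION=1
if (( IS_AGGREGATION )) || [[ "${ANALYSIS_TIER:-standard}" == "comprehensive" ]]; then
  SUPP=(analysis-index.md reference-analysis-quality.md mcp-reliability-audit.md workflow-audit.md)
  (( IS_AGGREGATION )) && SUPP+=(cross-session-intelligence.md session-baseline.md)
  for f in "${SUPP[@]}"; do
    [ -s "$ANALYSIS_DIR/$f" ] || { echo "❌ supplementary missing for ${ARTICLE_TYPE:-unset}: $f"; FAIL=1; }
  done
fi
```

With neither an aggregation article type nor a `comprehensive` tier set, the block is a no-op and `FAIL` stays `0`, which is the "dead check" behaviour the comment wants made explicit.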
**Core coverage** — `[#]` / 21 at or above floor (target: 21/21). Floors from [`reference-quality-thresholds.json`](../methodologies/reference-quality-thresholds.json).
This template asserts a 21-artifact core coverage target, but the prompts/contract elsewhere describe a 23-artifact required set. This mismatch will make RAQ scoring misleading (and could cause agents to believe they passed/failed incorrectly). Recommend aligning the artifact list and denominator with the authoritative contract (either update this template to 23 and include the missing artifacts, or update the contract docs if 21 is intended).
```diff
-**Core coverage** — `[#]` / 21 at or above floor (target: 21/21). Floors from [`reference-quality-thresholds.json`](../methodologies/reference-quality-thresholds.json).
+| `[REQUIRED: missing required artifact #22 from authoritative contract]` | `[#]` | `[#]` | `[±#]` | `[…]` | |
+| `[REQUIRED: missing required artifact #23 from authoritative contract]` | `[#]` | `[#]` | `[±#]` | `[…]` | |
+**Core coverage** — `[#]` / 23 at or above floor (target: 23/23). Replace the two placeholder rows above with the authoritative required artifact names from the contract / [`reference-quality-thresholds.json`](../methodologies/reference-quality-thresholds.json).
```
- **23 required artifacts** (every workflow, produced in `analysis/daily/$ARTICLE_DATE/$SUBFOLDER/`): Family A Core Synthesis — `README.md`, `executive-brief.md`, `synthesis-summary.md`, `significance-scoring.md`, `classification-results.md`, `swot-analysis.md`, `risk-assessment.md`, `threat-analysis.md`, `stakeholder-perspectives.md`; Family B Structural Metadata — `data-download-manifest.md`, `cross-reference-map.md`; Family C Strategic Extensions (F3EAD Exploit→Analyze) — `scenario-analysis.md`, `comparative-international.md`, `devils-advocate.md`, `intelligence-assessment.md`, `methodology-reflection.md` ⭐; Family D Electoral & Domain Lenses — `election-2026-analysis.md`, `voter-segmentation.md`, `coalition-mathematics.md`, `historical-parallels.md`, `media-framing-analysis.md`, `implementation-feasibility.md`, `forward-indicators.md`. Plus Family E per-document `documents/{dok_id}-analysis.md`. Full definitions in [`04-analysis-pipeline.md`](04-analysis-pipeline.md).
- **7 operational supplementary artifacts** (not counted in the 23; recommended for `deep`, mandatory for `comprehensive` / Tier-C): `analysis-index.md`, `reference-analysis-quality.md`, `mcp-reliability-audit.md`, `workflow-audit.md`, `cross-run-diff.md`, `cross-session-intelligence.md`, `session-baseline.md`.
- **4 analytical supplementary artifacts** (never blocking, optional deep-dives): `pestle-analysis.md`, `political-stride-assessment.md`, `wildcards-blackswans.md`, `quantitative-swot.md`. Full rules in [`analytical-supplementary-methodology.md`](../../analysis/methodologies/analytical-supplementary-methodology.md).
The required-artifact list here (includes README.md and executive-brief.md) appears inconsistent with the newly added artifact-catalog.md / per-artifact-methodologies.md, which enumerate a different Family A set (e.g., per-file-analysis is treated as a core artifact there). Because multiple documents now claim to be the "single source of truth", they need to agree to avoid agents producing the wrong file set. Recommend picking one authoritative list (preferably the catalog) and updating the others to match exactly.
```diff
@@ -0,0 +1,243 @@
+{
+  "version": "1.0.0",
```
The file declares version: 1.0.0 but meta.version: 1.1 and a v1.1 changelog entry. Recommend making the top-level version consistent with meta.version (and updating any references that treat one as authoritative), to avoid tooling or humans misreading which schema/content is in effect.
```diff
-  "version": "1.0.0",
+  "version": "1.1",
```
```json
"meta": {
  "version": "1.1",
  "lastUpdated": "2026-04-23",
  "changelog": [
    "v1.1 (2026-04-23) — Added analyticalSupplementary section (pestle-analysis · political-stride-assessment · wildcards-blackswans · quantitative-swot); non-blocking."
  ]
```
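A quick consistency probe for this class of drift, sketched against a stand-in file (the real path is `analysis/methodologies/reference-quality-thresholds.json`; `python3` availability is assumed):

```shell
# Write a stand-in file reproducing the mismatch under discussion.
cat > /tmp/thresholds-sample.json <<'EOF'
{ "version": "1.0.0", "meta": { "version": "1.1" } }
EOF

# Compare top-level "version" against "meta.version"; disagreement is the bug.
TOP=$(python3 -c 'import json,sys; print(json.load(open(sys.argv[1]))["version"])' /tmp/thresholds-sample.json)
META=$(python3 -c 'import json,sys; print(json.load(open(sys.argv[1]))["meta"]["version"])' /tmp/thresholds-sample.json)
[ "$TOP" = "$META" ] || echo "❌ version mismatch: top-level=$TOP meta=$META"
# prints: ❌ version mismatch: top-level=1.0.0 meta=1.1
```

A probe like this could run in the gate so the two fields can never drift apart silently again.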
`## 🎯 PIR feed-back`
Correct the spelling of 'feed-back' to 'feedback' (single word) for consistency with standard usage.
```diff
-## 🎯 PIR feed-back
+## 🎯 PIR feedback
```
@copilot apply changes based on the comments in this thread
…ds/gate/spelling) Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/5a543b12-af76-4fd7-9990-f5b72e1c82b2 Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
All 6 reviewer comments addressed in b66a9b2: (1) gate probe now uses …
@copilot apply changes based on the comments in this thread
… IMF URL, table rendering, version/denominator sync) Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/7d3a395e-b4f6-49cb-8bdf-58449253814d Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
All 7 reviewer comments addressed in 68fd819: (1) …
- **Wildcard** — plausible low-probability event (WEP `unlikely` to `very unlikely`, ≈ 5–20 %) with material impact on the political system.
- **Black-swan (in Taleb sense)** — an event outside the current model's probability distribution, recognisable only in hindsight. We document **candidate black-swans** — events our current priors would rate "almost impossible" (< 5 %) but where a plausible causal chain exists.
The wildcard definition ties “WEP unlikely…very unlikely” to ≈5–20%, but the project’s WEP scale defines Unlikely as ~20–37% and Very unlikely as ~10–20% (with ~5% being Remote). To avoid conflicting guidance, please align the terms and ranges (and capitalisation) with political-style-guide.md’s WEP table.
| ID | Item | Evidence (dok_id / URL) | I | C | L | T | dRel | iNov | wMom | sReach | Score | WEP† |
|----|------|------------------------|---|---|---|---|------|------|------|--------|-------|------|
| S1 | | | +4 | 0.80 | 0.70 | 0.9 | 0.9 | 0.5 | 0.6 | 0.8 | | likely |
| S2 | | | | | | | | | | | | |
| S3 | | | | | | | | | | | | |

**Strength total (Σ)** — `+X.XX`

## ⚠️ Weaknesses (scored)

| ID | Item | Evidence | I (negative) | C | L | T | dRel | iNov | wMom | sReach | Score | WEP† |
|----|------|----------|--------------|---|---|---|------|------|------|--------|-------|------|
| W1 | | | -3 | 0.85 | 0.60 | 0.7 | 0.9 | 0.4 | 0.5 | 0.7 | | likely |
| W2 | | | | | | | | | | | | |
In the example rows, the numeric confidence C values don’t match the listed WEP bands (e.g., 0.80/0.85 map to “Very likely” per the WEP scale, not “likely”), and the WEP terms are lowercased. Please adjust the example WEP labels (or the example C values) and use canonical WEP casing so the template doesn’t teach an inconsistent mapping.
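One way to keep example rows honest is to derive the WEP label from C mechanically. A sketch, with cut-points assumed from the bands quoted in this review (Remote ≈ 1–7 %, Very unlikely ≈ 10–20 %, Unlikely ≈ 20–37 %, Very likely ≈ 75 %+); the authoritative table in `political-style-guide.md` should replace these:

```shell
# Map a numeric confidence C (0..1) to a canonical WEP term. The exact
# cut-points below are assumptions inferred from the review comments, not
# the repository's authoritative WEP table.
wep_for_c() {
  local c_pct
  c_pct=$(awk -v c="$1" 'BEGIN { printf "%d", c * 100 }')
  if   (( c_pct >= 75 )); then echo "Very likely"
  elif (( c_pct >= 55 )); then echo "Likely"
  elif (( c_pct >= 38 )); then echo "Roughly even chance"
  elif (( c_pct >= 20 )); then echo "Unlikely"
  elif (( c_pct >= 8 ));  then echo "Very unlikely"
  else                         echo "Remote"
  fi
}

wep_for_c 0.80   # the S1 example row's C: lands in "Very likely", not "likely"
```

Generating the WEP column from C this way makes the mismatch the reviewer flags impossible by construction.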
### Wildcards & Black-Swans

- Horizon + domain filter declaration.
- ICD-203-aligned definitions (wildcard = WEP `unlikely` … `very unlikely` ≈ 5–20 %; black-swan = outside current priors < 5 % with plausible causal chain).
This methodology defines wildcard probability as “WEP unlikely…very unlikely ≈ 5–20%”, which conflicts with the repository WEP scale in political-style-guide.md (Unlikely ~20–37%; ~5% is Remote). Please reconcile the probability ranges/terms so Analytical Supplementary guidance stays consistent with the canonical WEP definitions.
```diff
-- ICD-203-aligned definitions (wildcard = WEP `unlikely` … `very unlikely` ≈ 5–20 %; black-swan = outside current priors < 5 % with plausible causal chain).
+- ICD-203-aligned definitions (wildcard = low-probability / high-impact event in the WEP `unlikely` to `remote` range, consistent with the canonical repository scale, ≈ 5–37 %; black-swan = event outside current priors / extreme-tail `remote` space, typically < 5 %, with plausible causal chain once surfaced).
```
**Wave-2 scope (April 2026)** — World Bank is the **authoritative source** for social / demographics / health / education / environment / defence / agriculture / innovation / governance indicators only. All **economic / monetary / fiscal / trade** context (GDP, inflation, unemployment, FDI, fiscal balance, debt, monetary, exchange rates) is sourced from IMF — see [`imf-indicator-mapping.md`](imf-indicator-mapping.md) and [`analysis/imf/`](../imf/). Legacy WB economic codes below remain valid for pre-Wave-2 articles, but **new articles must use the IMF counterpart**.

**Enforcement** — the gate check in [`.github/prompts/05-analysis-gate.md`](../../.github/prompts/05-analysis-gate.md) accepts **either** WB **or** IMF evidence for the economic-context dimension; for non-economic dimensions only WB is accepted.
This “Enforcement” statement doesn’t match the current gate implementation. 05-analysis-gate.md Check 4 only validates presence of a dok_id or a small set of primary-source URL hosts in swot-analysis.md/significance-scoring.md; it doesn’t enforce an “economic-context dimension” nor restrict non-economic dimensions to World Bank only. Suggest rewording this section to reflect what the gate actually checks (or update the gate if dimension-level enforcement is intended).
```diff
-**Enforcement** — the gate check in [`.github/prompts/05-analysis-gate.md`](../../.github/prompts/05-analysis-gate.md) accepts **either** WB **or** IMF evidence for the economic-context dimension; for non-economic dimensions only WB is accepted.
+**Enforcement** — the gate check in [`.github/prompts/05-analysis-gate.md`](../../.github/prompts/05-analysis-gate.md) currently verifies that `swot-analysis.md` and `significance-scoring.md` include primary-source evidence (for example a `dok_id` or an allowed primary-source URL host). It does **not** currently enforce a separate economic-context dimension or validate WB-only vs IMF-only sourcing by dimension; treat the Wave-2 source split above as methodology guidance unless and until the gate is expanded.
```
- IMF aggregate codes `EU`, `EA`, `G7`, `G20` are accepted by the IMF API whereas WB's `EUU`, `EMU` are rejected by the IMF endpoint.
- IMF provides a single `"IMF, World Economic Outlook, April 2026"` provenance line with no vintage patching.

**Enforcement** — the economic-context gate check in [`.github/prompts/05-analysis-gate.md`](../../.github/prompts/05-analysis-gate.md) accepts WB **or** IMF evidence; IMF citations alone are sufficient to pass.
The gate’s evidence-host regex currently includes data.imf.org (not api.imf.org / www.imf.org). If artifacts cite IMF sources using the API hostnames noted above, Check 4 may not count them as acceptable evidence. Consider standardising citations to include data.imf.org (or adjust the gate to accept api.imf.org/www.imf.org) so IMF-backed SWOT/significance rows don’t fail the gate.
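The mismatch is easy to demonstrate with a grep-style host check of the kind the comment describes. A sketch; only `data.imf.org` in the allow-list reflects the comment, the other host is an illustrative assumption:

```shell
# Evidence-host probe in the style of Check 4: accept a citation only when
# its URL host matches the allow-list. data.imf.org is the host the gate
# reportedly accepts; api.imf.org / www.imf.org are not in the list, so an
# API-style IMF citation is rejected even though it is a legitimate source.
ALLOWED_HOSTS='data\.riksdagen\.se|data\.imf\.org'
CITATION='Source: https://api.imf.org/external/datamapper/NGDP_RPCH/SWE'
if echo "$CITATION" | grep -Eq "https?://(${ALLOWED_HOSTS})"; then
  RESULT="accepted"
else
  RESULT="rejected"
fi
echo "IMF API citation: $RESULT"
# prints: IMF API citation: rejected
```

Either standardising citations on `data.imf.org` URLs or widening the regex resolves the gap; the probe makes the chosen policy testable.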
| S6 | `cross-session-intelligence.md` | `weekly-review`, `monthly-review`, quarterly aggregation | [`per-artifact-methodologies.md#cross-session-intelligence`](../../analysis/methodologies/per-artifact-methodologies.md#cross-session-intelligence) |
| S7 | `session-baseline.md` | `weekly-review`, `monthly-review`, any aggregation workflow | [`per-artifact-methodologies.md#session-baseline`](../../analysis/methodologies/per-artifact-methodologies.md#session-baseline) |

Inline bash probe — append to the main block after `FAIL=0` bookkeeping completes. The gate blocks on **aggregation article types** (`weekly-review`, `monthly-review`), on any run whose **tier** is `comprehensive` (the Tier-C run mode), and on `cross-run-diff.md` when the workflow has **≥ 2 production runs** of the same article type. `ARTICLE_TYPE` encodes the workflow family; `ANALYSIS_TIER` (when set) encodes the depth tier (`standard` | `deep` | `comprehensive`); `ANALYSIS_RUN_COUNT` (when set) is the numeric count of runs for the same article-generation cycle (if unset or non-numeric, treated as `1`).

```bash
# Check 9 — supplementary artifacts (blocking for aggregation types, any Tier-C run, and S5 when run-count >= 2)
IS_AGGREGATION=0
IS_TIER_C=0
IS_MULTI_RUN=0
RUN_COUNT=1
[[ "${ARTICLE_TYPE:-}" =~ ^(weekly-review|monthly-review)$ ]] && IS_AGGREGATION=1
[[ "${ANALYSIS_TIER:-standard}" == "comprehensive" ]] && IS_TIER_C=1
# The quoted excerpt was truncated here; the next two lines are reconstructed
# from the prose above (not verbatim): unset or non-numeric ANALYSIS_RUN_COUNT
# is treated as 1, and >= 2 marks a multi-run cycle.
[[ "${ANALYSIS_RUN_COUNT:-}" =~ ^[0-9]+$ ]] && RUN_COUNT="${ANALYSIS_RUN_COUNT}"
(( RUN_COUNT >= 2 )) && IS_MULTI_RUN=1
```
The table says S6/S7 are blocking for “quarterly aggregation” / “any aggregation workflow”, but the probe only treats weekly-review|monthly-review as aggregation (IS_AGGREGATION). Either expand the aggregation detection to include whatever article types represent quarterly/other aggregations, or narrow the table text so it matches what’s actually enforced.
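If the fix is to broaden detection rather than narrow the table text, the probe's aggregation regex is the single line to touch. A sketch; `quarterly-review` is a hypothetical article-type name standing in for whatever the quarterly workflow actually sets:

```shell
# Broaden IS_AGGREGATION to cover additional aggregation article types.
# "quarterly-review" is HYPOTHETICAL; substitute the real quarterly type.
AGGREGATION_TYPES='weekly-review|monthly-review|quarterly-review'
ARTICLE_TYPE='quarterly-review'   # example value for demonstration
IS_AGGREGATION=0
[[ "${ARTICLE_TYPE:-}" =~ ^(${AGGREGATION_TYPES})$ ]] && IS_AGGREGATION=1
echo "IS_AGGREGATION=$IS_AGGREGATION"
# prints: IS_AGGREGATION=1
```

Keeping the type list in one variable also lets the S6/S7 table and the probe cite the same source of truth.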
@copilot apply changes based on the comments in this thread
…tion blocking table with implementation Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/fc0309a5-a065-450a-ae56-31c8ff7b816a Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
All 6 reviewer comments addressed in 808b1b7: (1) wildcards-blackswans WEP range reconciled to …
### 🔭 Analytical Supplementary Methodology (v1.1 — Added 2026-04-23)

| Document | Purpose | Templates governed |
|----------|---------|--------------------|
| [`analytical-supplementary-methodology.md`](analytical-supplementary-methodology.md) | Rules for **optional deep-dive analytical templates** that augment the 23 core artifacts with specialised lenses. Non-blocking in `05-analysis-gate.md`. Defines composition rules (DIW weight vector, evidence citations, Forward-Indicator feed, TTP mapping) and per-template analytic moves. | `pestle-analysis.md` · `political-stride-assessment.md` · `wildcards-blackswans.md` · `quantitative-swot.md` |
This section labels the analytical supplementary methodology as v1.1, but analysis/methodologies/analytical-supplementary-methodology.md declares Version: v1.0. Please align the version reference (either bump the methodology doc version or change this README label) so cross-doc references stay consistent.
### 🛰️ Operational Supplementary — Not Counted in the 23 (Added v2.4, 2026-04-23)

> **Purpose** — Enrichment artifacts that strengthen the AI-FIRST quality loop, cross-run memory, and MCP health auditability. **Recommended** for every `deep` run; **mandatory** for every `comprehensive` (Tier-C aggregation) run. Absence of these artifacts does **not** fail the gate for non-Tier-C runs, but each is consumed by [`.github/prompts/05-analysis-gate.md §Supplementary checks`](../../.github/prompts/05-analysis-gate.md#supplementary-checks) when present.
The intro text says operational supplementary artifacts are non-blocking for non-Tier-C runs, but the gate’s supplementary logic also makes cross-run-diff.md required when there are ≥2 runs of the same article type. Please update this paragraph to reflect the cross-run-diff multi-run requirement (or adjust the gate policy) to avoid misleading operators.
```diff
-> **Purpose** — Enrichment artifacts that strengthen the AI-FIRST quality loop, cross-run memory, and MCP health auditability. **Recommended** for every `deep` run; **mandatory** for every `comprehensive` (Tier-C aggregation) run. Absence of these artifacts does **not** fail the gate for non-Tier-C runs, but each is consumed by [`.github/prompts/05-analysis-gate.md §Supplementary checks`](../../.github/prompts/05-analysis-gate.md#supplementary-checks) when present.
+> **Purpose** — Enrichment artifacts that strengthen the AI-FIRST quality loop, cross-run memory, and MCP health auditability. **Recommended** for every `deep` run; **mandatory** for every `comprehensive` (Tier-C aggregation) run. For non-Tier-C runs, these artifacts are generally non-blocking; however, `cross-run-diff.md` is required when there are ≥2 runs of the same article type. Each artifact is consumed by [`.github/prompts/05-analysis-gate.md §Supplementary checks`](../../.github/prompts/05-analysis-gate.md#supplementary-checks) when present, and `cross-run-diff.md` is gate-checked in that multi-run scenario.
```
- **Recommended** for every `deep` run.
- **Mandatory** for every `comprehensive` (Tier-C aggregation) run.
- Enforced by [`05-analysis-gate.md` §Supplementary checks](../../.github/prompts/05-analysis-gate.md#supplementary-checks) when present; absence does **not** fail the gate for non-Tier-C runs.
This section states supplementary artifacts don’t fail the gate for non–Tier-C runs, but the gate’s supplementary check can still require cross-run-diff.md when ANALYSIS_RUN_COUNT >= 2 (even for standard/deep tiers). Please reconcile the catalog text with the actual gate behavior (e.g., call out the S5 multi-run exception explicitly).
```diff
-- Enforced by [`05-analysis-gate.md` §Supplementary checks](../../.github/prompts/05-analysis-gate.md#supplementary-checks) when present; absence does **not** fail the gate for non-Tier-C runs.
+- Enforced by [`05-analysis-gate.md` §Supplementary checks](../../.github/prompts/05-analysis-gate.md#supplementary-checks) when present; for non-Tier-C runs, absence generally does **not** fail the gate.
+- **Exception:** `S5` / `cross-run-diff.md` becomes gate-required whenever `ANALYSIS_RUN_COUNT >= 2`, including `standard` and `deep` runs, because the gate treats multi-run comparison as a required supplementary check.
```
| S6 | `cross-session-intelligence.md` | `weekly-review`, `monthly-review` (the aggregation article types the probe detects) | [`per-artifact-methodologies.md#cross-session-intelligence`](../../analysis/methodologies/per-artifact-methodologies.md#cross-session-intelligence) |
| S7 | `session-baseline.md` | `weekly-review`, `monthly-review` (the aggregation article types the probe detects) | [`per-artifact-methodologies.md#session-baseline`](../../analysis/methodologies/per-artifact-methodologies.md#session-baseline) |

Inline bash probe — append to the main block after `FAIL=0` bookkeeping completes. The gate blocks on **aggregation article types** (`weekly-review`, `monthly-review`), on any run whose **tier** is `comprehensive` (the Tier-C run mode), and on `cross-run-diff.md` when the workflow has **≥ 2 production runs** of the same article type. `ARTICLE_TYPE` encodes the workflow family; `ANALYSIS_TIER` (when set) encodes the depth tier (`standard` | `deep` | `comprehensive`); `ANALYSIS_RUN_COUNT` (when set) is the numeric count of runs for the same article-generation cycle (if unset or non-numeric, treated as `1`).
The supplementary-checks intro says this is non-blocking for standard/deep runs, but the table + bash probe make cross-run-diff.md blocking whenever ANALYSIS_RUN_COUNT >= 2, regardless of tier. Please reword the intro so it matches the actual blocking conditions (tier/aggregation vs multi-run).
```diff
-Inline bash probe — append to the main block after `FAIL=0` bookkeeping completes. The gate blocks on **aggregation article types** (`weekly-review`, `monthly-review`), on any run whose **tier** is `comprehensive` (the Tier-C run mode), and on `cross-run-diff.md` when the workflow has **≥ 2 production runs** of the same article type. `ARTICLE_TYPE` encodes the workflow family; `ANALYSIS_TIER` (when set) encodes the depth tier (`standard` | `deep` | `comprehensive`); `ANALYSIS_RUN_COUNT` (when set) is the numeric count of runs for the same article-generation cycle (if unset or non-numeric, treated as `1`).
+Inline bash probe — append to the main block after `FAIL=0` bookkeeping completes. Supplementary artifacts have **three independent blocking triggers**, not a single tier-only rule: **aggregation article types** (`weekly-review`, `monthly-review`) require the aggregation artifacts; any run whose **tier** is `comprehensive` (the Tier-C run mode) requires the Tier-C supplementary set; and `cross-run-diff.md` is blocking whenever the workflow has **≥ 2 production runs** of the same article type, including `standard` and `deep` runs. `ARTICLE_TYPE` encodes the workflow family; `ANALYSIS_TIER` (when set) encodes the depth tier (`standard` | `deep` | `comprehensive`); `ANALYSIS_RUN_COUNT` (when set) is the numeric count of runs for the same article-generation cycle (if unset or non-numeric, treated as `1`).
```
### wildcards--black-swans

**Inputs** — `scenario-analysis.md` anchor, historical-parallels register, long-horizon elicitation, external-shock indicators (IMF risk outlook, NATO briefs, MSB reports).
**Analytic moves** — (1) horizon + domain filter; (2) ICD-203-aligned definitions (wildcard ≈ 5–20 %, black-swan < 5 % with plausible chain); (3) wildcard register ≥ 8 events across domains with trigger indicator + lead time + impact vectors + existing counter-measures; (4) ≥ 3 black-swan candidates each with "why under-weighted" + ≤ 4-step plausible causal chain; (5) ≥ 2 Mermaid colour-coded cascading consequence trees; (6) early-warning indicator table feeding `forward-indicators.md`; (7) resilience assessment across 5 dimensions (institutional / fiscal / coalition / info-integrity / alliance).
The wildcard probability range here (≈ 5–20%) conflicts with the canonical WEP bands in political-style-guide.md where Unlikely = 20–37% and Remote = 1–7% (so Unlikely→Remote spans ~1–37%). Please update this range to match the canonical scale (and the other docs/templates that use the 5–37% framing) to avoid inconsistent probability tagging across artifacts.
```diff
-**Analytic moves** — (1) horizon + domain filter; (2) ICD-203-aligned definitions (wildcard ≈ 5–20 %, black-swan < 5 % with plausible chain); (3) wildcard register ≥ 8 events across domains with trigger indicator + lead time + impact vectors + existing counter-measures; (4) ≥ 3 black-swan candidates each with "why under-weighted" + ≤ 4-step plausible causal chain; (5) ≥ 2 Mermaid colour-coded cascading consequence trees; (6) early-warning indicator table feeding `forward-indicators.md`; (7) resilience assessment across 5 dimensions (institutional / fiscal / coalition / info-integrity / alliance).
+**Analytic moves** — (1) horizon + domain filter; (2) ICD-203-aligned definitions (wildcard ≈ 5–37 %, black-swan < 5 % with plausible chain); (3) wildcard register ≥ 8 events across domains with trigger indicator + lead time + impact vectors + existing counter-measures; (4) ≥ 3 black-swan candidates each with "why under-weighted" + ≤ 4-step plausible causal chain; (5) ≥ 2 Mermaid colour-coded cascading consequence trees; (6) early-warning indicator table feeding `forward-indicators.md`; (7) resilience assessment across 5 dimensions (institutional / fiscal / coalition / info-integrity / alliance).
```
@copilot apply changes based on the comments in this thread
…exception and WEP range Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/1d63d1a0-450d-429d-bef9-fde4de9ddbc9 Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
All 5 reviewer comments addressed in 57c958a: (1) …
✅ Sync `analysis/methodologies` and `analysis/templates` with EU Parliament Monitor — COMPLETE

**Prior commits** — `analytical-supplementary-methodology.md`

**Round 5 — 5 reviewer comments addressed**

- `analytical-supplementary-methodology.md` — `Version: v1.0` → `v1.1` so the doc header matches the `methodologies/README.md` "v1.1" label
- `templates/README.md` — operational-supplementary intro now states the `cross-run-diff.md` (`ANALYSIS_RUN_COUNT >= 2`) exception in addition to the generic non-blocking caveat for non-Tier-C runs
- `artifact-catalog.md` — supplementary-artifacts enforcement bullet explicitly documents the `S5`/`cross-run-diff.md` multi-run exception (applies in `standard` and `deep` tiers too)
- `05-analysis-gate.md` — Check 9 intro paragraph reworded to state three independent blocking triggers (aggregation types, Tier-C, and `cross-run-diff.md` when run-count ≥ 2 in any tier) — matches what the bash probe actually does
- `per-artifact-methodologies.md` §wildcards--black-swans — "wildcard ≈ 5–20 %" → `Unlikely`…`Remote` ≈ 5–37 % per canonical WEP scale, with link back to `political-style-guide.md`

**Guarantees**

- Blocking rules stay aligned across `05-analysis-gate.md`, `artifact-catalog.md`, and `templates/README.md`
- The `cross-run-diff.md` multi-run exception (`ANALYSIS_RUN_COUNT >= 2`) is now called out in every place operators read the non-blocking caveat
- WEP ranges are consistent across `political-style-guide.md`, `wildcards-blackswans.md`, `quantitative-swot.md`, `analytical-supplementary-methodology.md`, and `per-artifact-methodologies.md`
- `v1.1` is consistent between `methodologies/README.md` and the methodology document itself