Conversation
🏷️ Automatic Labeling Summary

This PR has been automatically labeled based on the files changed and PR metadata. Applied labels: size-xs
🔍 Lighthouse Performance Audit
📥 Download full Lighthouse report Budget Compliance: Performance budgets enforced via |
…ked examples, cross-methodology linkage, and actionable implementation details

- political-risk-methodology.md (v2.1): Add Temporal Analysis Protocol (re-scoring triggers, staleness rules, evolution template) and Bayesian Updating Worked Example (7-day ECHR challenge scenario)
- political-swot-framework.md (v2.2): Add SWOT Evolution Tracking (SWOT Delta template, inter-quadrant migration rules with Mermaid diagram, cross-document aggregation protocol)
- ai-driven-analysis-guide.md (v4.2): Add Empty Data Handling Protocol (lookback strategy, carry-forward protocol, minimum output) and Per-File Output Example
- political-classification-guide.md (v2.2): Add Committee→Domain Canonical Mapping (15 committees matching COMMITTEE_TO_DOMAIN code constant)
- political-style-guide.md (v2.1): Add 8 Bad→Good Rewrite Examples covering all prohibited pattern categories with dok_ids
- political-threat-framework.md (v3.1): Add Cross-Methodology Linkage Protocol (Severity→Likelihood mapping, SWOT vs threat scope, synthesis integration with Mermaid)

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/2c76dd9f-0404-454c-ae79-eb5e75d6ed03
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
Pull request overview
Updates the six analysis methodology guides under analysis/methodologies/ to be more operational for AI agents by adding concrete protocols, worked examples, and explicit cross-methodology integration rules (documentation-only change).
Changes:
- Add actionable “how-to” sections (temporal/staleness rules, empty-data handling, SWOT evolution tracking) with templates and Mermaid diagrams.
- Add worked examples (Bayesian updating, bad→good rewrites, per-file analysis output) to reduce generic/empty outputs.
- Establish explicit cross-framework linkages (threat↔risk↔SWOT) and document the canonical committee→domain mapping aligned with the codebase.
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 2 comments.
Summary per file:
| File | Description |
|---|---|
| analysis/methodologies/political-risk-methodology.md | Adds Temporal Analysis Protocol + 7-day Bayesian worked example and templates. |
| analysis/methodologies/political-swot-framework.md | Adds SWOT evolution/delta tracking, migration rules, and aggregation protocol. |
| analysis/methodologies/ai-driven-analysis-guide.md | Adds empty-data handling protocol (lookback/direct retrieval/carry-forward) + per-file output example. |
| analysis/methodologies/political-classification-guide.md | Adds canonical committee→domain mapping table + pipeline diagram + code reference. |
| analysis/methodologies/political-style-guide.md | Adds 8 bad→good rewrite examples tied to prohibited patterns with MCP-style references. |
| analysis/methodologies/political-threat-framework.md | Adds cross-methodology linkage protocol (severity→likelihood mapping, synthesis workflow, dashboard diagram). |
| Date | Event | L | I | Score | Δ | Confidence |
|:---|:---|:---:|:---:|:---:|:---:|:---|
| 2026-03-24 | Initial assessment | 3 | 4 | 12 | — | MEDIUM |
| 2026-03-26 | Lagrådet favorable opinion | 2 | 4 | 8 | −4 | HIGH |
| 2026-03-28 | S files KU complaint | 3 | 4 | 12 | +4 | HIGH |
| 2026-03-30 | No new evidence | 3 | 4 | 12 | 0 | HIGH (aging) |
In the 7-day evolution summary, the 2026-03-30 row labels confidence as "HIGH (aging)", but Day 7 is only 2 days after the last evidence (2026-03-28) and the Staleness Rules section defines 0–3 days as Current. Please make the summary consistent with the staleness rules / Day 7 narrative (either change the label to Current, or adjust the day count/example).
**Current:**
| 2026-03-30 | No new evidence | 3 | 4 | 12 | 0 | HIGH (aging) |

**Suggested:**
| 2026-03-30 | No new evidence | 3 | 4 | 12 | 0 | HIGH (Current) |
The `pre-article-analysis.ts` pipeline automatically looks back up to **5 business days** (configurable via `MAX_LOOKBACK_BUSINESS_DAYS`). If documents are found via lookback, the `dataFreshness` field records the actual date:

```markdown
**Data Freshness:** Lookback applied — documents from 2026-04-01 (target: 2026-04-03)
```
The "Data Freshness" example under the Lookback Strategy doesn't match the repo's actual serialized format, and downstream parsing expects a specific string. scripts/pre-article-analysis/markdown-serializer.ts emits "Data Freshness: Documents sourced from YYYY-MM-DD via lookback fallback (article date: YYYY-MM-DD).", and scripts/analysis-reader.ts extracts freshness using that pattern. Please align the example text here to the real output format (or explicitly note the canonical string to preserve parsing compatibility).
**Current:**

The `pre-article-analysis.ts` pipeline automatically looks back up to **5 business days** (configurable via `MAX_LOOKBACK_BUSINESS_DAYS`). If documents are found via lookback, the `dataFreshness` field records the actual date:

```markdown
**Data Freshness:** Lookback applied — documents from 2026-04-01 (target: 2026-04-03)
```

**Suggested:**

The `pre-article-analysis.ts` pipeline automatically looks back up to **5 business days** (configurable via `MAX_LOOKBACK_BUSINESS_DAYS`). If documents are found via lookback, the `dataFreshness` field records the actual date using the canonical serialized format expected by downstream parsing:

```markdown
**Data Freshness**: Documents sourced from **2026-04-01** via lookback fallback (article date: 2026-04-03).
```
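To make the parsing dependency concrete, here is a minimal sketch of how a reader might extract the field from the canonical string above. The actual pattern in `scripts/analysis-reader.ts` may differ; treat the regex and function names as assumptions for illustration only.

```typescript
// Hedged sketch: extract the source date from the canonical Data Freshness
// string described in this review comment. The real analysis-reader.ts
// pattern may differ; this only illustrates why the exact wording matters.
const FRESHNESS_PATTERN =
  /\*\*Data Freshness\*\*: Documents sourced from \*\*(\d{4}-\d{2}-\d{2})\*\* via lookback fallback \(article date: (\d{4}-\d{2}-\d{2})\)\./;

function extractDataFreshness(
  markdown: string
): { sourcedFrom: string; articleDate: string } | null {
  const match = markdown.match(FRESHNESS_PATTERN);
  // Non-canonical wording silently fails to parse, losing the field.
  if (!match) return null;
  return { sourcedFrom: match[1], articleDate: match[2] };
}
```

Running this on the pre-fix wording ("Lookback applied …") returns `null`, which is exactly the silent parsing failure the comment warns about.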
@copilot apply changes based on the comments in this thread
…ata Freshness format with canonical serializer output

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/3a27acef-d009-45d8-ab21-9e9ccdde9a5b
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
Both review comments applied in ff8dc82:
## 🏛️ Committee→Domain Canonical Mapping (v2.2)

This table is the **authoritative single source of truth** for mapping Riksdag committee codes to policy domains. Both the TypeScript classification code (`scripts/data-transformers/constants/committee-names.ts: COMMITTEE_TO_DOMAIN`) and AI analysis agents reference this table. Any discrepancy should be resolved by updating the code to match this document.

### Primary Mapping: 15 Riksdag Committees → Policy Domains
The new Committee→Domain mapping uses 15 DomainKey values (e.g., eu-foreign, social-insurance) but earlier parts of this same methodology still require a “13-domain taxonomy (ECO, DEF, …)”. As written, it’s unclear which taxonomy analysis outputs should follow and how these two sets relate, which can lead to inconsistent classifications. Please either (a) update the earlier 13-domain requirement to match the 15-domain DomainKey system used in code, or (b) explicitly document how the 13-code taxonomy maps to these domain keys and when each is used.
> - 📅 **2026-04-07:** Plenary votering on bet. 2025/26:JuU15 — L's plenary vote will signal coalition cohesion on justice policy
> - 📅 **2026-04-15:** FiU spring amending budget deadline — SD budget demands must be resolved before this date
> - 📅 **2026-04-22:** SCB partisympatiundersökning Q1 2026 release — L's polling position relative to 4% threshold
> - **Trigger:** If L votes against JuU15 in plenary AND SCB shows L below 4%, Coalition Risk escalates from L=3,I=5,Score=15 to L=4,I=5,Score=20

**What changed:** Replaced vague future reference with 3 dated events, specific MCP-observable triggers, and a quantified escalation pathway.
These worked examples include hard-coded near-future dates (e.g., 2026-04-07/15/22). Since this is a long-lived methodology/style guide, those dates will quickly become stale and may mislead readers or automated checks that flag “future dates.” Consider converting these to placeholders (e.g., [YYYY-MM-DD]) or using clearly historical example dates.
**Current:**
> - 📅 **2026-04-07:** Plenary votering on bet. 2025/26:JuU15 — L's plenary vote will signal coalition cohesion on justice policy
> - 📅 **2026-04-15:** FiU spring amending budget deadline — SD budget demands must be resolved before this date
> - 📅 **2026-04-22:** SCB partisympatiundersökning Q1 2026 release — L's polling position relative to 4% threshold
> - **Trigger:** If L votes against JuU15 in plenary AND SCB shows L below 4%, Coalition Risk escalates from L=3,I=5,Score=15 to L=4,I=5,Score=20

**What changed:** Replaced vague future reference with 3 dated events, specific MCP-observable triggers, and a quantified escalation pathway.

**Suggested:**
> - 📅 **[YYYY-MM-DD]:** Plenary votering on bet. 2025/26:JuU15 — L's plenary vote will signal coalition cohesion on justice policy
> - 📅 **[YYYY-MM-DD]:** FiU spring amending budget deadline — SD budget demands must be resolved before this date
> - 📅 **[YYYY-MM-DD]:** SCB partisympatiundersökning Q1 release — L's polling position relative to 4% threshold
> - **Trigger:** If L votes against JuU15 in plenary AND SCB shows L below 4%, Coalition Risk escalates from L=3,I=5,Score=15 to L=4,I=5,Score=20

**What changed:** Replaced a vague future reference with placeholder-dated events, specific MCP-observable triggers, and a quantified escalation pathway.
### SWOT "Threats" vs. Dedicated Threat Analysis

The SWOT Threats quadrant and the dedicated threat framework serve **different purposes** at different analytical depths:

| Dimension | SWOT Threats Quadrant | Dedicated Threat Framework |
|:---|:---|:---|
| **Scope** | Broad — any external factor that could harm the actor | Specific — decomposed attack paths, progression stages |
| **Depth** | Summary (1–3 lines per entry) | Deep analysis (Attack Trees, Kill Chain, Diamond Model, Actor Profiling) |
| **Framework** | Single quadrant within 4-quadrant SWOT | 4+ complementary frameworks combined |
| **Output** | Threat list with confidence labels | Threat taxonomy, severity calibration, forward indicators |
| **Time horizon** | Current snapshot | Current + prognostic (forward indicators) |
| **When to use** | Every analysis (SWOT is mandatory) | High-significance events (SENSITIVE/RESTRICTED classification) |

#### Integration Rules

1. **Every dedicated threat finding** MUST produce a corresponding SWOT Threat entry — the SWOT entry is the **summary** of the deeper analysis
2. **Not every SWOT Threat** requires a dedicated threat analysis — only those scoring ≥3 severity warrant full framework treatment
3. **Cross-reference format:** SWOT Threat entries from dedicated analysis include `(see threat-analysis.md §[section])` annotation
In this section, “Dedicated Threat Framework” is described as used for “High-significance events (SENSITIVE/RESTRICTED classification)”, but the Integration Rules then say only SWOT threats with severity ≥3 warrant full framework treatment. Those criteria can diverge (e.g., a SENSITIVE event with severity 2, or a PUBLIC event with severity 4). Please clarify which rule is primary (classification gate vs. severity threshold) and how to handle conflicts, so analysts apply the framework consistently.
**No new evidence for 2 days.** Score carries forward with staleness marker:

- Risk: L=3, I=4, Score=**12** `⚠️ Last evidence: 2026-03-28 (2 days ago)`
The staleness marker usage is internally inconsistent here: the rule says the ⚠️ STALE marker is for scores older than 7 days, but the worked example applies a ⚠️ marker after only 2 days without new evidence. Please align the example with the staleness rules.
**Current:**
**No new evidence for 2 days.** Score carries forward with staleness marker:
- Risk: L=3, I=4, Score=**12** `⚠️ Last evidence: 2026-03-28 (2 days ago)`

**Suggested:**
**No new evidence for 2 days.** Score carries forward unchanged:
- Risk: L=3, I=4, Score=**12** `Last evidence: 2026-03-28 (2 days ago)`
|
@copilot apply changes based on the comments in this thread
… in classification guide, rephrase threat trigger to SWOT Impact/Confidence fields

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/d95d85c4-4ec7-4787-876c-ddcaaf9f6416
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
All 3 review comments addressed in 15d46cc:
R["⚠️ Risk Methodology<br/>Likelihood × Impact"]
S["💼 SWOT Framework<br/>Threats Quadrant"]

T -->|"Severity informs<br/>Likelihood (L) &amp; Impact (I)"| R
In this Mermaid diagram label, `&amp;` is inside a fenced code block, so it will render literally as "&amp;" (not "&"). Use a plain `&` (or rephrase without needing an ampersand) so the diagram text renders correctly.
**Current:**
T -->|"Severity informs<br/>Likelihood (L) &amp; Impact (I)"| R

**Suggested:**
T -->|"Severity informs<br/>Likelihood (L) & Impact (I)"| R
#### Severity → Likelihood Mapping Table

| Threat Severity (this framework) | Risk Likelihood (risk-methodology) | Rationale |
|:---:|:---:|:---|
| 1 (Negligible) | L=1 (Rare) | Theoretical threat, no active indicators |
| 2 (Minor) | L=2 (Unlikely) | Threat identified but no progression observed |
| 3 (Moderate) | L=3 (Possible) | Active threat indicators, early Kill Chain stages |
| 4 (Major) | L=4 (Likely) | Threat at mid-to-late Kill Chain stage, multiple indicators |
| 5 (Severe) | L=5 (Almost Certain) | Threat at final Kill Chain stage, imminent materialization |

**Rule:** When a threat assessment produces a severity score, it MUST be reflected in the corresponding risk category's Likelihood value. If a threat scores Severity=4, the corresponding risk cannot have Likelihood=1 without explicit justification.
This new section says severity informs both Likelihood (L) and Impact (I), but the added mapping table + MUST-rule only cover Likelihood. Earlier in this same document there’s already a “Threat-to-Risk Integration” table mapping severity to both L and I; duplicating a different/partial mapping is inconsistent and likely to confuse readers. Consider referencing the existing table instead of duplicating it here, or expand this table/rules to cover both L and I and keep terminology aligned.
**Current:**

#### Severity → Likelihood Mapping Table

| Threat Severity (this framework) | Risk Likelihood (risk-methodology) | Rationale |
|:---:|:---:|:---|
| 1 (Negligible) | L=1 (Rare) | Theoretical threat, no active indicators |
| 2 (Minor) | L=2 (Unlikely) | Threat identified but no progression observed |
| 3 (Moderate) | L=3 (Possible) | Active threat indicators, early Kill Chain stages |
| 4 (Major) | L=4 (Likely) | Threat at mid-to-late Kill Chain stage, multiple indicators |
| 5 (Severe) | L=5 (Almost Certain) | Threat at final Kill Chain stage, imminent materialization |

**Rule:** When a threat assessment produces a severity score, it MUST be reflected in the corresponding risk category's Likelihood value. If a threat scores Severity=4, the corresponding risk cannot have Likelihood=1 without explicit justification.

**Suggested:**

#### Severity → Risk Alignment

Use the existing **Threat-to-Risk Integration** table earlier in this document as the authoritative mapping between this framework's severity scores and the risk methodology values.

**Rule:** When a threat assessment produces a severity score, it MUST be reflected in the corresponding risk category's **Likelihood (L)** and **Impact (I)** values in accordance with the existing Threat-to-Risk Integration mapping. If a threat scores `Severity=4`, the corresponding risk assessment cannot assign materially inconsistent values (for example, `L=1` and/or `I=1`) without explicit justification.
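The MUST-rule above could be sketched as a lint-style consistency check. This is a minimal sketch under stated assumptions: the "materially inconsistent" threshold (a gap of 2 or more points) and every name below are illustrative, not the repository's actual code.

```typescript
// Hedged sketch: a threat's severity must be reflected in the linked risk's
// Likelihood (L) and Impact (I). The threshold for "materially inconsistent"
// (difference of 2 or more) is an assumed value for illustration.
type Score1to5 = 1 | 2 | 3 | 4 | 5;

function materiallyInconsistent(severity: Score1to5, value: Score1to5): boolean {
  return Math.abs(severity - value) >= 2; // assumed threshold
}

// Returns true when the risk values are aligned with the threat severity,
// or when a divergence carries an explicit justification.
function riskAlignedWithThreat(
  severity: Score1to5,
  likelihood: Score1to5,
  impact: Score1to5,
  justification?: string
): boolean {
  const divergent =
    materiallyInconsistent(severity, likelihood) ||
    materiallyInconsistent(severity, impact);
  return !divergent || Boolean(justification);
}
```

Under these assumptions, `Severity=4` with `L=1` fails the check unless a justification string is recorded, which matches the suggested rule's intent.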
> - 📅 **[YYYY-MM-DD]:** Plenary votering on bet. 2025/26:JuU15 — L's plenary vote will signal coalition cohesion on justice policy
> - 📅 **[YYYY-MM-DD]:** FiU spring amending budget deadline — SD budget demands must be resolved before this date
> - 📅 **[YYYY-MM-DD]:** SCB partisympatiundersökning Q1 release — L's polling position relative to 4% threshold
> - **Trigger:** If L votes against JuU15 in plenary AND SCB shows L below 4%, Coalition Risk escalates from L=3,I=5,Score=15 to L=4,I=5,Score=20
In Example 6, “L” is used both as the party abbreviation (Liberalerna) and as the Likelihood variable in the same bullet (“L votes…” then “L=3”). This is ambiguous for readers. Consider spelling out the party name at least once (e.g., “Liberalerna (L)”) and/or writing “Likelihood=3” to disambiguate.
**Current:**
> - **Trigger:** If L votes against JuU15 in plenary AND SCB shows L below 4%, Coalition Risk escalates from L=3,I=5,Score=15 to L=4,I=5,Score=20

**Suggested:**
> - **Trigger:** If Liberalerna (L) votes against JuU15 in plenary AND SCB shows Liberalerna below 4%, Coalition Risk escalates from Likelihood=3, Impact=5, Score=15 to Likelihood=4, Impact=5, Score=20
|
@copilot apply changes based on the comments in this thread
…disambiguate L party/Likelihood

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/af080565-78e5-4ba0-a0df-367d5cffa499
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
All 3 review comments addressed in 88097ac:
|
@copilot apply changes based on the comments in this thread
These 3 changes were already applied in 88097ac:
**Generated:** 2026-04-03 07:30 UTC
**Documents Analyzed:** 0 (new) | 12 (carry-forward from 2026-04-01)
**Parliamentary Calendar:** Riksdagen in session; no plenary votes scheduled for 2026-04-03
**Data Freshness:** No new documents since 2026-04-01 (2 business days)
The example uses **Data Freshness:** No new documents since …, but downstream parsing expects the canonical serialized format **Data Freshness**: Documents sourced from **YYYY-MM-DD** … (colon outside bold + fixed prefix). If an agent follows this example, analysis-reader.ts won’t extract dataFreshness. Update the example (and any guidance) to use the canonical string when you want the field parsed, or avoid labeling non-canonical text as “Data Freshness”.
**Current:**
**Data Freshness:** No new documents since 2026-04-01 (2 business days)

**Suggested:**
**Data Freshness**: Documents sourced from **2026-04-01**; no new documents for 2 business days
| File | Carried-forward content | Source |
|:---|:---|:---|
| `synthesis-summary.md` | Most recent risk dashboard with staleness markers | Previous day's `risk-assessment.md` |
| `risk-assessment.md` | All active risk scores with `⚠️ Last evidence: [date]` markers | Previous day's risk scores |
| `swot-analysis.md` | Active SWOT entries with confidence decay applied | Previous day's SWOT |
Carry-forward guidance mandates ⚠️ Last evidence: [date] markers for all risk scores. This conflicts with the risk methodology’s staleness rules (warning markers should be reserved for aging/stale/expired states, not necessarily every entry). Consider requiring a neutral Last evidence: YYYY-MM-DD field plus a separate staleness/status label derived from the aging table.
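The suggested split (a neutral `Last evidence:` field plus a separately derived status) could be sketched as follows. The 0–3 day "Current" and >7 day "STALE" bands come from the staleness rules quoted in this thread; the intermediate "Aging" band and all names are assumptions for illustration.

```typescript
// Hedged sketch: derive a staleness status from days since last evidence.
// Bands: 0–3 days = Current and >7 days = STALE per the staleness rules
// quoted in this review thread; the 4–7 day "Aging" band is an assumption.
function stalenessStatus(daysSinceEvidence: number): string {
  if (daysSinceEvidence <= 3) return "Current";
  if (daysSinceEvidence <= 7) return "Aging";
  return "STALE";
}

// Carry-forward line: neutral evidence date, status derived separately
// rather than a blanket warning marker on every entry.
function carryForwardLine(lastEvidence: string, daysSince: number): string {
  return `Last evidence: ${lastEvidence} (${daysSince} days ago); status: ${stalenessStatus(daysSince)}`;
}
```

With this shape, the ⚠️ treatment can be reserved for the "STALE" status instead of being attached to every carried-forward score.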
# Political Intelligence Style Guide

<!-- version: 1.0.0 | updated: 2026-03-26 | author: Hack23 AB -->

## Purpose

This style guide establishes standards for political intelligence reporting across all article types published by Riksdagsmonitor. Inspired by the [ISMS Style Guide](https://github.com/Hack23/ISMS-PUBLIC/blob/main/STYLE_GUIDE.md), it adapts documentation and communication standards to the domain of parliamentary intelligence reporting.
This section introduces a second top-level title (# Political Intelligence Style Guide) even though the document already has an HTML <h1> at the top. Having multiple H1s can confuse navigation/TOC and makes the structure ambiguous. Consider removing this markdown H1 or downgrading it to a ## heading that fits the existing hierarchy.
D7["📅 Day 7<br/>R2: L2×I4=8 ⚠️"]

D1 -->|"Lagrådet favorable<br/>opinion published"| D3
D3 -->|"No new evidence<br/>(4 days)"| D7

style D1 fill:#dc3545,color:#fff
style D3 fill:#ffc107,color:#000
style D7 fill:#ffc107,color:#000,stroke:#dc3545,stroke-width:3px
The temporal-evolution Mermaid template flags Day 7 with ⚠️, but your staleness rules define warnings/STALE marking only for scores older than 7 days (and Day 7 in the worked example is still “Current”). Align the template with the staleness table (e.g., remove the warning icon or label it explicitly as “Aging” when applicable).
**Current:**
D7["📅 Day 7<br/>R2: L2×I4=8 ⚠️"]

D1 -->|"Lagrådet favorable<br/>opinion published"| D3
D3 -->|"No new evidence<br/>(4 days)"| D7

style D1 fill:#dc3545,color:#fff
style D3 fill:#ffc107,color:#000
style D7 fill:#ffc107,color:#000,stroke:#dc3545,stroke-width:3px

**Suggested:**
D7["📅 Day 7<br/>R2: L2×I4=8"]

D1 -->|"Lagrådet favorable<br/>opinion published"| D3
D3 -->|"No new evidence<br/>(4 days)"| D7

style D1 fill:#dc3545,color:#fff
style D3 fill:#ffc107,color:#000
style D7 fill:#ffc107,color:#000
From `threat-analysis.md`, extract threats at Kill Chain Stage ≥3 (Delivery or beyond):

| Threat | Severity | Kill Chain Stage | Affected Risk Category |
|:---|:---:|:---:|:---|
| `[Threat name]` | `[1–5]` | `[Stage name]` | `[Coalition/Policy/Budget/Electoral/External]` |

#### Step 2: Map Threats to Risk Scores

Transfer threat severity to risk likelihood per the mapping table above, then compute L×I:

| Risk Category | Threat-Informed L | Impact (I) | Score | Prior Score | Δ |
|:---|:---:|:---:|:---:|:---:|:---:|
| `[Category]` | `[1–5]` | `[1–5]` | `[L×I]` | `[previous]` | `[change]` |
The synthesis protocol hard-codes affected risk categories as [Coalition/Policy/Budget/Electoral/External], but the risk methodology defines a different set of risk categories (e.g., Coalition stability, Policy implementation, Legislative integrity, Economic governance, Social cohesion, Democratic process). This mismatch will make cross-framework mapping ambiguous. Consider referencing the risk methodology’s canonical category names (or linking to the exact section/table) instead of introducing a new taxonomy here.
**Current:**

From `threat-analysis.md`, extract threats at Kill Chain Stage ≥3 (Delivery or beyond):

| Threat | Severity | Kill Chain Stage | Affected Risk Category |
|:---|:---:|:---:|:---|
| `[Threat name]` | `[1–5]` | `[Stage name]` | `[Coalition/Policy/Budget/Electoral/External]` |

#### Step 2: Map Threats to Risk Scores

Transfer threat severity to risk likelihood per the mapping table above, then compute L×I:

| Risk Category | Threat-Informed L | Impact (I) | Score | Prior Score | Δ |
|:---|:---:|:---:|:---:|:---:|:---:|
| `[Category]` | `[1–5]` | `[1–5]` | `[L×I]` | `[previous]` | `[change]` |

**Suggested:**

From `threat-analysis.md`, extract threats at Kill Chain Stage ≥3 (Delivery or beyond), and assign each threat to the **canonical risk category names defined in this methodology**:

| Threat | Severity | Kill Chain Stage | Affected Risk Category |
|:---|:---:|:---:|:---|
| `[Threat name]` | `[1–5]` | `[Stage name]` | `[Coalition stability / Policy implementation / Legislative integrity / Economic governance / Social cohesion / Democratic process]` |

#### Step 2: Map Threats to Risk Scores

Transfer threat severity to risk likelihood per the mapping table above, using the same canonical risk categories from the risk methodology, then compute L×I:

| Risk Category | Threat-Informed L | Impact (I) | Score | Prior Score | Δ |
|:---|:---:|:---:|:---:|:---:|:---:|
| `[Canonical category name]` | `[1–5]` | `[1–5]` | `[L×I]` | `[previous]` | `[change]` |
|
@copilot apply changes based on the comments in this thread
…licate H1, Mermaid ⚠️, and risk categories

1. ai-driven-analysis-guide: Data Freshness example now uses canonical serialized format
2. ai-driven-analysis-guide: carry-forward uses neutral `Last evidence:` + staleness status
3. political-style-guide: removed duplicate markdown H1, kept HTML `<h1>`
4. political-risk-methodology: removed ⚠️ from Mermaid Day 7 (still Current)
5. political-threat-framework: replaced hard-coded risk categories with canonical names

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/df46514a-fba4-4a90-93c7-591021931f39
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
All 5 review comments addressed in fa91de2:
- `DomainKey` taxonomy with all keys listed in classification guide (lines 246, 470)
- `[YYYY-MM-DD]` placeholders in style guide Example 6
- `Last evidence:` format for Current state in risk methodology Day 7
- `justice` DomainKey in per-file output example
- `Document Control` block and `version: 1.0.0` HTML comment
- (`constitutional`, `defence`) in classification guide
- `&amp;` → `&` in Mermaid diagram so ampersand renders correctly in fenced code block
- `**Data Freshness**: Documents sourced from **YYYY-MM-DD**` format parseable by `analysis-reader.ts`
- `⚠️ Last evidence:` carry-forward marker replaced with neutral `Last evidence: [date]` field + separate staleness status derived from aging table
- No duplicate markdown `#` H1 in style guide — HTML `<h1>` is the single top-level heading, body sections use `##`
- Replaced `[Coalition/Policy/Budget/Electoral/External]` with canonical risk category names from risk methodology (Coalition stability / Policy implementation / Legislative integrity / Economic governance / Social cohesion / Democratic process)