
Enhance analysis methodologies with temporal analysis, Bayesian worked examples, and cross-methodology linkage #1563

Merged
pethers merged 11 commits into main from copilot/enhance-analysis-methodologies
Apr 6, 2026

Conversation

Contributor

Copilot AI commented Apr 5, 2026

  • political-risk-methodology.md (v2.1) — Temporal Analysis Protocol, Bayesian Updating Worked Example
  • political-swot-framework.md (v2.2) — SWOT Evolution Tracking, Cross-Day Delta
  • ai-driven-analysis-guide.md (v4.2) — Empty Data Handling Protocol, Per-File Output Example
  • political-classification-guide.md (v2.2) — Committee→Domain Canonical Mapping
  • political-style-guide.md (v2.1) — 8 Bad→Good Rewrite Examples
  • political-threat-framework.md (v3.1) — Cross-Methodology Linkage Protocol
  • Version numbers incremented in all 6 files
  • All Mermaid diagrams render correctly
  • Review feedback (round 1): fixed Day 7 staleness label; aligned Data Freshness format
  • Review feedback (round 2): updated "13-domain taxonomy" → 15-domain DomainKey taxonomy with all keys listed in classification guide (lines 246, 470)
  • Review feedback (round 2): replaced hard-coded near-future dates with [YYYY-MM-DD] placeholders in style guide Example 6
  • Review feedback (round 2): clarified threat framework trigger as logical OR of classification gate (SENSITIVE/RESTRICTED) and severity gate (≥3), with explicit conflict resolution rules
  • Review feedback (round 2): replaced ⚠️ staleness marker with neutral Last evidence: format for Current state in risk methodology Day 7
  • Review feedback (round 3): corrected weekday labels in Bayesian worked example (Tue/Thu/Sat/Mon/Tue match 2026-03-24 through 2026-03-31)
  • Review feedback (round 3): replaced old 3-letter "JUS" domain code with canonical justice DomainKey in per-file output example
  • Review feedback (round 3): reconciled Mermaid diagram — severity now informs both Likelihood (L) & Impact (I), consistent with earlier Threat-to-Risk section
  • Review feedback (round 3): aligned Severity→Likelihood mapping labels to canonical calibration table (Negligible/Minor/Moderate/Major/Severe)
  • Review feedback (round 4): consolidated style guide to single authoritative metadata section — removed legacy Document Control block and version: 1.0.0 HTML comment
  • Review feedback (round 4): replaced legacy 3-letter codes "CON and DEF" with DomainKey terms (constitutional, defence) in classification guide
  • Review feedback (round 4): rephrased threat framework SWOT trigger gate to use existing SWOT fields (Impact = High, Confidence = Medium/High) instead of non-existent 1–5 severity scale
  • Review feedback (round 5): replaced `&amp;` with `&` in Mermaid diagram so the ampersand renders correctly in fenced code block
  • Review feedback (round 5): replaced duplicated Severity→Likelihood mapping table with reference to existing authoritative Threat-to-Risk Integration table, covering both L and I
  • Review feedback (round 5): disambiguated "L" in style guide Example 6 — spelled out "Liberalerna (L)" for party, "Likelihood"/"Impact" for risk variables
  • Review feedback (round 6): fixed Data Freshness empty-day example to use canonical **Data Freshness**: Documents sourced from **YYYY-MM-DD** format parseable by analysis-reader.ts
  • Review feedback (round 6): replaced ⚠️ Last evidence: carry-forward marker with neutral Last evidence: [date] field + separate staleness status derived from aging table
  • Review feedback (round 6): removed duplicate markdown # H1 in style guide — HTML <h1> is the single top-level heading, body sections use ##
  • Review feedback (round 6): removed ⚠️ icon and red stroke from Mermaid template Day 7 node — Day 7 is still "Current" per staleness rules (last evidence 4 days ago = Aging threshold, but the worked example shows 2 days)
  • Review feedback (round 6): replaced hard-coded [Coalition/Policy/Budget/Electoral/External] with canonical risk category names from risk methodology (Coalition stability / Policy implementation / Legislative integrity / Economic governance / Social cohesion / Democratic process)

@github-actions github-actions bot added the size-xs Extra small change (< 10 lines) label Apr 5, 2026
@github-actions
Contributor

github-actions bot commented Apr 5, 2026

🏷️ Automatic Labeling Summary

This PR has been automatically labeled based on the files changed and PR metadata.

Applied Labels: size-xs

Label Categories

  • 🗳️ Content: news, dashboard, visualization, intelligence
  • 💻 Technology: html-css, javascript, workflow, security
  • 📊 Data: cia-data, riksdag-data, data-pipeline, schema
  • 🌍 I18n: i18n, translation, rtl
  • 🔒 ISMS: isms, iso-27001, nist-csf, cis-controls
  • 🏗️ Infrastructure: ci-cd, deployment, performance, monitoring
  • 🔄 Quality: testing, accessibility, documentation, refactor
  • 🤖 AI: agent, skill, agentic-workflow

For more information, see .github/labeler.yml.

@github-actions
Contributor

github-actions bot commented Apr 6, 2026

🔍 Lighthouse Performance Audit

| Category | Score | Status |
|:---|:---:|:---:|
| Performance | 85/100 | 🟡 |
| Accessibility | 95/100 | 🟢 |
| Best Practices | 90/100 | 🟢 |
| SEO | 95/100 | 🟢 |

📥 Download full Lighthouse report

Budget Compliance: Performance budgets enforced via budget.json

…ked examples, cross-methodology linkage, and actionable implementation details

- political-risk-methodology.md (v2.1): Add Temporal Analysis Protocol (re-scoring triggers, staleness rules, evolution template) and Bayesian Updating Worked Example (7-day ECHR challenge scenario)
- political-swot-framework.md (v2.2): Add SWOT Evolution Tracking (SWOT Delta template, inter-quadrant migration rules with Mermaid diagram, cross-document aggregation protocol)
- ai-driven-analysis-guide.md (v4.2): Add Empty Data Handling Protocol (lookback strategy, carry-forward protocol, minimum output) and Per-File Output Example
- political-classification-guide.md (v2.2): Add Committee→Domain Canonical Mapping (15 committees matching COMMITTEE_TO_DOMAIN code constant)
- political-style-guide.md (v2.1): Add 8 Bad→Good Rewrite Examples covering all prohibited pattern categories with dok_ids
- political-threat-framework.md (v3.1): Add Cross-Methodology Linkage Protocol (Severity→Likelihood mapping, SWOT vs threat scope, synthesis integration with Mermaid)

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/2c76dd9f-0404-454c-ae79-eb5e75d6ed03

Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
@github-actions github-actions bot added documentation Documentation updates size-l Large change (250-1000 lines) labels Apr 6, 2026

Copilot AI changed the title from "[WIP] Enhance analysis methodologies with practical sections and examples" to "Enhance analysis methodologies with temporal analysis, Bayesian worked examples, and cross-methodology linkage" on Apr 6, 2026
Copilot AI requested a review from pethers April 6, 2026 00:12
@pethers pethers marked this pull request as ready for review April 6, 2026 00:13
Copilot AI review requested due to automatic review settings April 6, 2026 00:13
Contributor

Copilot AI left a comment


Pull request overview

Updates the six analysis methodology guides under analysis/methodologies/ to be more operational for AI agents by adding concrete protocols, worked examples, and explicit cross-methodology integration rules (documentation-only change).

Changes:

  • Add actionable “how-to” sections (temporal/staleness rules, empty-data handling, SWOT evolution tracking) with templates and Mermaid diagrams.
  • Add worked examples (Bayesian updating, bad→good rewrites, per-file analysis output) to reduce generic/empty outputs.
  • Establish explicit cross-framework linkages (threat↔risk↔SWOT) and document the canonical committee→domain mapping aligned with the codebase.

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 2 comments.

| File | Description |
|:---|:---|
| analysis/methodologies/political-risk-methodology.md | Adds Temporal Analysis Protocol + 7-day Bayesian worked example and templates. |
| analysis/methodologies/political-swot-framework.md | Adds SWOT evolution/delta tracking, migration rules, and aggregation protocol. |
| analysis/methodologies/ai-driven-analysis-guide.md | Adds empty-data handling protocol (lookback/direct retrieval/carry-forward) + per-file output example. |
| analysis/methodologies/political-classification-guide.md | Adds canonical committee→domain mapping table + pipeline diagram + code reference. |
| analysis/methodologies/political-style-guide.md | Adds 8 bad→good rewrite examples tied to prohibited patterns with MCP-style references. |
| analysis/methodologies/political-threat-framework.md | Adds cross-methodology linkage protocol (severity→likelihood mapping, synthesis workflow, dashboard diagram). |

| Date | Event | L | I | Score | Δ | Confidence |
|:---|:---|:---:|:---:|:---:|:---:|:---|
| 2026-03-24 | Initial assessment | 3 | 4 | 12 | — | MEDIUM |
| 2026-03-26 | Lagrådet favorable opinion | 2 | 4 | 8 | −4 | HIGH |
| 2026-03-28 | S files KU complaint | 3 | 4 | 12 | +4 | HIGH |
| 2026-03-30 | No new evidence | 3 | 4 | 12 | 0 | HIGH (aging) |

Copilot AI Apr 6, 2026


In the 7-day evolution summary, the 2026-03-30 row labels confidence as "HIGH (aging)", but Day 7 is only 2 days after the last evidence (2026-03-28) and the Staleness Rules section defines 0–3 days as Current. Please make the summary consistent with the staleness rules / Day 7 narrative (either change the label to Current, or adjust the day count/example).

Suggested change

Before:
| 2026-03-30 | No new evidence | 3 | 4 | 12 | 0 | HIGH (aging) |

After:
| 2026-03-30 | No new evidence | 3 | 4 | 12 | 0 | HIGH (Current) |
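The staleness bands cited in this thread (0–3 days = Current, Aging from day 4, Stale past day 7) can be sketched as a small helper. The band boundaries beyond the quoted 0–3 day "Current" window are assumptions read off the review comments, not the canonical methodology table:

```typescript
// Sketch: derive a staleness status from days since last evidence.
// Assumed bands: 0–3 days Current, 4–7 days Aging, >7 days Stale.
type Staleness = "Current" | "Aging" | "Stale";

function stalenessStatus(lastEvidence: Date, today: Date): Staleness {
  const msPerDay = 24 * 60 * 60 * 1000;
  const days = Math.floor((today.getTime() - lastEvidence.getTime()) / msPerDay);
  if (days <= 3) return "Current";
  if (days <= 7) return "Aging";
  return "Stale";
}

// Day 7 of the worked example: last evidence 2026-03-28, current date 2026-03-30
console.log(stalenessStatus(new Date("2026-03-28"), new Date("2026-03-30"))); // → "Current"
```

Under these assumed bands, the 2026-03-30 row is indeed "Current", matching the suggested label.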

Comment on lines +1677 to +1680
The `pre-article-analysis.ts` pipeline automatically looks back up to **5 business days** (configurable via `MAX_LOOKBACK_BUSINESS_DAYS`). If documents are found via lookback, the `dataFreshness` field records the actual date:

```markdown
**Data Freshness:** Lookback applied — documents from 2026-04-01 (target: 2026-04-03)
```

Copilot AI Apr 6, 2026


The "Data Freshness" example under the Lookback Strategy doesn't match the repo's actual serialized format, and downstream parsing expects a specific string. scripts/pre-article-analysis/markdown-serializer.ts emits "Data Freshness: Documents sourced from YYYY-MM-DD via lookback fallback (article date: YYYY-MM-DD).", and scripts/analysis-reader.ts extracts freshness using that pattern. Please align the example text here to the real output format (or explicitly note the canonical string to preserve parsing compatibility).
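The canonical string described here can be checked with a small extraction sketch. The regex below is an illustrative assumption of the pattern match, not the actual logic in scripts/analysis-reader.ts:

```typescript
// Sketch: extract the dataFreshness date from the canonical serialized
// format quoted in this review comment. Regex is illustrative only.
const CANONICAL =
  "**Data Freshness**: Documents sourced from **2026-04-01** via lookback fallback (article date: 2026-04-03).";

function extractFreshnessDate(markdown: string): string | null {
  const m = markdown.match(
    /\*\*Data Freshness\*\*: Documents sourced from \*\*(\d{4}-\d{2}-\d{2})\*\*/
  );
  return m ? m[1] : null;
}

console.log(extractFreshnessDate(CANONICAL)); // → "2026-04-01"
// Non-canonical label placement (colon inside the bold span) fails to parse:
console.log(extractFreshnessDate("**Data Freshness:** Lookback applied — documents from 2026-04-01")); // → null
```

This illustrates why the example text must follow the canonical prefix exactly: the colon-inside-bold variant yields no match.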

Suggested change

Before:

The `pre-article-analysis.ts` pipeline automatically looks back up to **5 business days** (configurable via `MAX_LOOKBACK_BUSINESS_DAYS`). If documents are found via lookback, the `dataFreshness` field records the actual date:

```markdown
**Data Freshness:** Lookback applied — documents from 2026-04-01 (target: 2026-04-03)
```

After:

The `pre-article-analysis.ts` pipeline automatically looks back up to **5 business days** (configurable via `MAX_LOOKBACK_BUSINESS_DAYS`). If documents are found via lookback, the `dataFreshness` field records the actual date using the canonical serialized format expected by downstream parsing:

```markdown
**Data Freshness**: Documents sourced from **2026-04-01** via lookback fallback (article date: 2026-04-03).
```

@pethers
Member

pethers commented Apr 6, 2026

@copilot apply changes based on the comments in this thread

…ata Freshness format with canonical serializer output

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/3a27acef-d009-45d8-ab21-9e9ccdde9a5b

Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
Contributor Author

Copilot AI commented Apr 6, 2026

@copilot apply changes based on the comments in this thread

Both review comments applied in ff8dc82:

  • Fixed Day 7 staleness label from "HIGH (aging)" to "HIGH (Current)" — 2 days since last evidence falls within the 0–3 day "Current" window per the staleness rules.
  • Aligned Data Freshness example with the canonical serializer format (**Data Freshness**: Documents sourced from **YYYY-MM-DD** via lookback fallback (article date: YYYY-MM-DD).).


Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 6 out of 6 changed files in this pull request and generated 4 comments.

Comment on lines +477 to +481
## 🏛️ Committee→Domain Canonical Mapping (v2.2)

This table is the **authoritative single source of truth** for mapping Riksdag committee codes to policy domains. Both the TypeScript classification code (`scripts/data-transformers/constants/committee-names.ts: COMMITTEE_TO_DOMAIN`) and AI analysis agents reference this table. Any discrepancy should be resolved by updating the code to match this document.

### Primary Mapping: 15 Riksdag Committees → Policy Domains

Copilot AI Apr 6, 2026


The new Committee→Domain mapping uses 15 DomainKey values (e.g., eu-foreign, social-insurance) but earlier parts of this same methodology still require a “13-domain taxonomy (ECO, DEF, …)”. As written, it’s unclear which taxonomy analysis outputs should follow and how these two sets relate, which can lead to inconsistent classifications. Please either (a) update the earlier 13-domain requirement to match the 15-domain DomainKey system used in code, or (b) explicitly document how the 13-code taxonomy maps to these domain keys and when each is used.
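The mapping under discussion can be pictured with a two-entry sketch. The entries below use codes and DomainKey values that appear in this thread (JuU→justice, KU→constitutional); the full 15-entry table lives in the classification guide and in `scripts/data-transformers/constants/committee-names.ts` (`COMMITTEE_TO_DOMAIN`), so treat this as an illustrative shape, not the authoritative mapping:

```typescript
// Illustrative shape of the committee→domain mapping (NOT the full table).
const COMMITTEE_TO_DOMAIN_EXAMPLE: Record<string, string> = {
  JuU: "justice",       // Justitieutskottet
  KU: "constitutional", // Konstitutionsutskottet
};

function domainForCommittee(code: string): string | undefined {
  return COMMITTEE_TO_DOMAIN_EXAMPLE[code];
}

console.log(domainForCommittee("JuU")); // → "justice"
```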

Comment on lines +967 to +972
> - 📅 **2026-04-07:** Plenary votering on bet. 2025/26:JuU15 — L's plenary vote will signal coalition cohesion on justice policy
> - 📅 **2026-04-15:** FiU spring amending budget deadline — SD budget demands must be resolved before this date
> - 📅 **2026-04-22:** SCB partisympatiundersökning Q1 2026 release — L's polling position relative to 4% threshold
> - **Trigger:** If L votes against JuU15 in plenary AND SCB shows L below 4%, Coalition Risk escalates from L=3,I=5,Score=15 to L=4,I=5,Score=20

**What changed:** Replaced vague future reference with 3 dated events, specific MCP-observable triggers, and a quantified escalation pathway.
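The escalation arithmetic in the quoted trigger follows the risk methodology's score formula (Score = Likelihood × Impact); a minimal helper confirms the two quoted scores:

```typescript
// Sketch: risk score as Likelihood × Impact on 1–5 scales,
// per the formula used throughout this PR's worked examples.
function riskScore(likelihood: number, impact: number): number {
  return likelihood * impact;
}

console.log(riskScore(3, 5)); // → 15 (baseline Coalition Risk in the example)
console.log(riskScore(4, 5)); // → 20 (escalated score if both trigger conditions fire)
```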

Copilot AI Apr 6, 2026


These worked examples include hard-coded near-future dates (e.g., 2026-04-07/15/22). Since this is a long-lived methodology/style guide, those dates will quickly become stale and may mislead readers or automated checks that flag “future dates.” Consider converting these to placeholders (e.g., [YYYY-MM-DD]) or using clearly historical example dates.

Suggested change

Before:
> - 📅 **2026-04-07:** Plenary votering on bet. 2025/26:JuU15 — L's plenary vote will signal coalition cohesion on justice policy
> - 📅 **2026-04-15:** FiU spring amending budget deadline — SD budget demands must be resolved before this date
> - 📅 **2026-04-22:** SCB partisympatiundersökning Q1 2026 release — L's polling position relative to 4% threshold
> - **Trigger:** If L votes against JuU15 in plenary AND SCB shows L below 4%, Coalition Risk escalates from L=3,I=5,Score=15 to L=4,I=5,Score=20

**What changed:** Replaced vague future reference with 3 dated events, specific MCP-observable triggers, and a quantified escalation pathway.

After:
> - 📅 **[YYYY-MM-DD]:** Plenary votering on bet. 2025/26:JuU15 — L's plenary vote will signal coalition cohesion on justice policy
> - 📅 **[YYYY-MM-DD]:** FiU spring amending budget deadline — SD budget demands must be resolved before this date
> - 📅 **[YYYY-MM-DD]:** SCB partisympatiundersökning Q1 release — L's polling position relative to 4% threshold
> - **Trigger:** If L votes against JuU15 in plenary AND SCB shows L below 4%, Coalition Risk escalates from L=3,I=5,Score=15 to L=4,I=5,Score=20

**What changed:** Replaced a vague future reference with placeholder-dated events, specific MCP-observable triggers, and a quantified escalation pathway.

Comment on lines +895 to +912
### SWOT "Threats" vs. Dedicated Threat Analysis

The SWOT Threats quadrant and the dedicated threat framework serve **different purposes** at different analytical depths:

| Dimension | SWOT Threats Quadrant | Dedicated Threat Framework |
|:---|:---|:---|
| **Scope** | Broad — any external factor that could harm the actor | Specific — decomposed attack paths, progression stages |
| **Depth** | Summary (1–3 lines per entry) | Deep analysis (Attack Trees, Kill Chain, Diamond Model, Actor Profiling) |
| **Framework** | Single quadrant within 4-quadrant SWOT | 4+ complementary frameworks combined |
| **Output** | Threat list with confidence labels | Threat taxonomy, severity calibration, forward indicators |
| **Time horizon** | Current snapshot | Current + prognostic (forward indicators) |
| **When to use** | Every analysis (SWOT is mandatory) | High-significance events (SENSITIVE/RESTRICTED classification) |

#### Integration Rules

1. **Every dedicated threat finding** MUST produce a corresponding SWOT Threat entry — the SWOT entry is the **summary** of the deeper analysis
2. **Not every SWOT Threat** requires a dedicated threat analysis — only those scoring ≥3 severity warrant full framework treatment
3. **Cross-reference format:** SWOT Threat entries from dedicated analysis include `(see threat-analysis.md §[section])` annotation

Copilot AI Apr 6, 2026


In this section, “Dedicated Threat Framework” is described as used for “High-significance events (SENSITIVE/RESTRICTED classification)”, but the Integration Rules then say only SWOT threats with severity ≥3 warrant full framework treatment. Those criteria can diverge (e.g., a SENSITIVE event with severity 2, or a PUBLIC event with severity 4). Please clarify which rule is primary (classification gate vs. severity threshold) and how to handle conflicts, so analysts apply the framework consistently.
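The clarification this PR later adopted (see the round-2 summary) treats the two criteria as a logical OR: either the classification gate or the severity gate triggers a dedicated threat analysis. A hypothetical predicate, with field names assumed for illustration:

```typescript
// Sketch of the OR-gated trigger discussed in this thread.
// Type and function names are illustrative assumptions, not project code.
type Classification = "PUBLIC" | "SENSITIVE" | "RESTRICTED";

function requiresDedicatedThreatAnalysis(
  classification: Classification,
  severity: number // 1–5 scale
): boolean {
  const classificationGate = classification === "SENSITIVE" || classification === "RESTRICTED";
  const severityGate = severity >= 3;
  return classificationGate || severityGate; // either gate alone is sufficient
}

console.log(requiresDedicatedThreatAnalysis("SENSITIVE", 2)); // → true  (classification gate)
console.log(requiresDedicatedThreatAnalysis("PUBLIC", 4));    // → true  (severity gate)
console.log(requiresDedicatedThreatAnalysis("PUBLIC", 2));    // → false (neither gate)
```

This resolves both edge cases raised above: a SENSITIVE event with severity 2 and a PUBLIC event with severity 4 each warrant the full framework.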

Comment on lines +567 to +569
**No new evidence for 2 days.** Score carries forward with staleness marker:

- Risk: L=3, I=4, Score=**12** `⚠️ Last evidence: 2026-03-28 (2 days ago)`

Copilot AI Apr 6, 2026


The staleness marker usage is internally inconsistent here: the rule says the ⚠️ STALE marker is for scores older than 7 days, but the worked example applies a ⚠️ marker at 2 days (“Current”), and the Mermaid template marks Day 7 with ⚠️ at 4 days (“Aging”). To avoid confusion (and potential downstream parsing heuristics), consider reserving ⚠️ specifically for Aging/Stale/Expired states, and using a neutral marker (or just “Last evidence: …”) for Current scores.

Suggested change

Before:
**No new evidence for 2 days.** Score carries forward with staleness marker:

- Risk: L=3, I=4, Score=**12** `⚠️ Last evidence: 2026-03-28 (2 days ago)`

After:
**No new evidence for 2 days.** Score carries forward unchanged:

- Risk: L=3, I=4, Score=**12** `Last evidence: 2026-03-28 (2 days ago)`

@pethers
Member

pethers commented Apr 6, 2026

@copilot apply changes based on the comments in this thread

… in classification guide, rephrase threat trigger to SWOT Impact/Confidence fields

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/d95d85c4-4ec7-4787-876c-ddcaaf9f6416

Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
Contributor Author

Copilot AI commented Apr 6, 2026

@copilot apply changes based on the comments in this thread

All 3 review comments addressed in 15d46cc:

  1. Style guide — Removed the legacy Document Control block (lines 264–268) and <!-- version: 1.0.0 --> HTML comment, leaving the header metadata (line 19) and footer Document Control table (line 998) as the single authoritative source, both showing v2.1.
  2. Classification guide — Replaced legacy 3-letter codes "CON and DEF" with DomainKey terms constitutional and defence in the domain ambiguity resolution row.
  3. Threat framework — Rephrased the SWOT trigger gate from "≥3 severity on the 1–5 scale" to "Impact = High and Confidence = Medium or High", matching the SWOT framework's actual entry fields. Updated conflict resolution and rule #2 text accordingly.


Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 6 out of 6 changed files in this pull request and generated 3 comments.

R["⚠️ Risk Methodology<br/>Likelihood × Impact"]
S["💼 SWOT Framework<br/>Threats Quadrant"]

T -->|"Severity informs<br/>Likelihood (L) &amp; Impact (I)"| R

Copilot AI Apr 6, 2026


In this Mermaid diagram label, `&amp;` appears inside a fenced code block, so it will render literally as "&amp;" (not "&"). Use a plain `&` (or rephrase without needing an ampersand) so the diagram text renders correctly.

Suggested change

Before:
T -->|"Severity informs<br/>Likelihood (L) &amp; Impact (I)"| R

After:
T -->|"Severity informs<br/>Likelihood (L) & Impact (I)"| R

Comment on lines +883 to +893
#### Severity → Likelihood Mapping Table

| Threat Severity (this framework) | Risk Likelihood (risk-methodology) | Rationale |
|:---:|:---:|:---|
| 1 (Negligible) | L=1 (Rare) | Theoretical threat, no active indicators |
| 2 (Minor) | L=2 (Unlikely) | Threat identified but no progression observed |
| 3 (Moderate) | L=3 (Possible) | Active threat indicators, early Kill Chain stages |
| 4 (Major) | L=4 (Likely) | Threat at mid-to-late Kill Chain stage, multiple indicators |
| 5 (Severe) | L=5 (Almost Certain) | Threat at final Kill Chain stage, imminent materialization |

**Rule:** When a threat assessment produces a severity score, it MUST be reflected in the corresponding risk category's Likelihood value. If a threat scores Severity=4, the corresponding risk cannot have Likelihood=1 without explicit justification.
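On matched 1–5 scales, the mapping quoted above is the identity, with the Likelihood labels from the table. A sketch, with the helper itself an illustrative assumption rather than project code:

```typescript
// Sketch of the Severity→Likelihood alignment: identity on 1–5 scales,
// labels taken from the mapping table quoted in this review thread.
const LIKELIHOOD_LABELS = ["Rare", "Unlikely", "Possible", "Likely", "Almost Certain"] as const;

type Severity = 1 | 2 | 3 | 4 | 5;

function likelihoodForSeverity(severity: Severity): { l: Severity; label: string } {
  return { l: severity, label: LIKELIHOOD_LABELS[severity - 1] };
}

console.log(likelihoodForSeverity(4)); // → { l: 4, label: "Likely" }
```

Under the MUST-rule, a threat scoring Severity=4 should therefore surface as Likelihood=4 (Likely) in the risk register absent explicit justification.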

Copilot AI Apr 6, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This new section says severity informs both Likelihood (L) and Impact (I), but the added mapping table + MUST-rule only cover Likelihood. Earlier in this same document there’s already a “Threat-to-Risk Integration” table mapping severity to both L and I; duplicating a different/partial mapping is inconsistent and likely to confuse readers. Consider referencing the existing table instead of duplicating it here, or expand this table/rules to cover both L and I and keep terminology aligned.

Suggested change

Before:
#### Severity → Likelihood Mapping Table

| Threat Severity (this framework) | Risk Likelihood (risk-methodology) | Rationale |
|:---:|:---:|:---|
| 1 (Negligible) | L=1 (Rare) | Theoretical threat, no active indicators |
| 2 (Minor) | L=2 (Unlikely) | Threat identified but no progression observed |
| 3 (Moderate) | L=3 (Possible) | Active threat indicators, early Kill Chain stages |
| 4 (Major) | L=4 (Likely) | Threat at mid-to-late Kill Chain stage, multiple indicators |
| 5 (Severe) | L=5 (Almost Certain) | Threat at final Kill Chain stage, imminent materialization |

**Rule:** When a threat assessment produces a severity score, it MUST be reflected in the corresponding risk category's Likelihood value. If a threat scores Severity=4, the corresponding risk cannot have Likelihood=1 without explicit justification.

After:
#### Severity → Risk Alignment

Use the existing **Threat-to-Risk Integration** table earlier in this document as the authoritative mapping between this framework's severity scores and the risk methodology values.

**Rule:** When a threat assessment produces a severity score, it MUST be reflected in the corresponding risk category's **Likelihood (L)** and **Impact (I)** values in accordance with the existing Threat-to-Risk Integration mapping. If a threat scores `Severity=4`, the corresponding risk assessment cannot assign materially inconsistent values (for example, `L=1` and/or `I=1`) without explicit justification.

> - 📅 **[YYYY-MM-DD]:** Plenary votering on bet. 2025/26:JuU15 — L's plenary vote will signal coalition cohesion on justice policy
> - 📅 **[YYYY-MM-DD]:** FiU spring amending budget deadline — SD budget demands must be resolved before this date
> - 📅 **[YYYY-MM-DD]:** SCB partisympatiundersökning Q1 release — L's polling position relative to 4% threshold
> - **Trigger:** If L votes against JuU15 in plenary AND SCB shows L below 4%, Coalition Risk escalates from L=3,I=5,Score=15 to L=4,I=5,Score=20

Copilot AI Apr 6, 2026


In Example 6, “L” is used both as the party abbreviation (Liberalerna) and as the Likelihood variable in the same bullet (“L votes…” then “L=3”). This is ambiguous for readers. Consider spelling out the party name at least once (e.g., “Liberalerna (L)”) and/or writing “Likelihood=3” to disambiguate.

Suggested change

Before:
> - **Trigger:** If L votes against JuU15 in plenary AND SCB shows L below 4%, Coalition Risk escalates from L=3,I=5,Score=15 to L=4,I=5,Score=20

After:
> - **Trigger:** If Liberalerna (L) votes against JuU15 in plenary AND SCB shows Liberalerna below 4%, Coalition Risk escalates from Likelihood=3, Impact=5, Score=15 to Likelihood=4, Impact=5, Score=20

@pethers
Member

pethers commented Apr 6, 2026

@copilot apply changes based on the comments in this thread

…disambiguate L party/Likelihood

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/af080565-78e5-4ba0-a0df-367d5cffa499

Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
Contributor Author

Copilot AI commented Apr 6, 2026

@copilot apply changes based on the comments in this thread

All 3 review comments addressed in 88097ac:

  1. Threat framework Mermaid — Replaced `&amp;` with a plain `&` so the ampersand renders correctly inside the fenced code block.
  2. Threat framework severity mapping — Replaced the duplicated partial Severity→Likelihood table with a reference to the existing authoritative Threat-to-Risk Integration table, now covering both L and I consistently.
  3. Style guide Example 6 — Disambiguated "L" by spelling out "Liberalerna (L)" for the party name and "Likelihood" / "Impact" for risk variables.


@pethers
Member

pethers commented Apr 6, 2026

@copilot apply changes based on the comments in this thread

Contributor Author

Copilot AI commented Apr 6, 2026

@copilot apply changes based on the comments in this thread

These 3 changes were already applied in 88097ac:

  1. Replaced `&amp;` with `&` in Mermaid diagram (line 873)
  2. Replaced duplicated severity mapping table with reference to existing Threat-to-Risk Integration table (lines 883–887)
  3. Disambiguated "L" → "Liberalerna (L)" and "Likelihood"/"Impact" in style guide Example 6 (line 963)

@pethers pethers requested a review from Copilot April 6, 2026 11:58

Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 6 out of 6 changed files in this pull request and generated 5 comments.

**Generated:** 2026-04-03 07:30 UTC
**Documents Analyzed:** 0 (new) | 12 (carry-forward from 2026-04-01)
**Parliamentary Calendar:** Riksdagen in session; no plenary votes scheduled for 2026-04-03
**Data Freshness:** No new documents since 2026-04-01 (2 business days)

Copilot AI Apr 6, 2026


The example uses **Data Freshness:** No new documents since …, but downstream parsing expects the canonical serialized format **Data Freshness**: Documents sourced from **YYYY-MM-DD** … (colon outside bold + fixed prefix). If an agent follows this example, analysis-reader.ts won’t extract dataFreshness. Update the example (and any guidance) to use the canonical string when you want the field parsed, or avoid labeling non-canonical text as “Data Freshness”.

Suggested change

Before:
**Data Freshness:** No new documents since 2026-04-01 (2 business days)

After:
**Data Freshness**: Documents sourced from **2026-04-01**; no new documents for 2 business days

Comment on lines +1705 to +1707
| `synthesis-summary.md` | Most recent risk dashboard with staleness markers | Previous day's `risk-assessment.md` |
| `risk-assessment.md` | All active risk scores with `⚠️ Last evidence: [date]` markers | Previous day's risk scores |
| `swot-analysis.md` | Active SWOT entries with confidence decay applied | Previous day's SWOT |

Copilot AI Apr 6, 2026


Carry-forward guidance mandates ⚠️ Last evidence: [date] markers for all risk scores. This conflicts with the risk methodology’s staleness rules (warning markers should be reserved for aging/stale/expired states, not necessarily every entry). Consider requiring a neutral Last evidence: YYYY-MM-DD field plus a separate staleness/status label derived from the aging table.
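The separation the comment asks for — a neutral `Last evidence:` date plus a derived status — can be sketched as below. The day thresholds (≤7 Current, 8–14 Aging, >14 Stale) are illustrative assumptions; the methodology's aging table is authoritative:

```typescript
// Illustrative staleness derivation: the stored field is a plain ISO date,
// and any warning label is computed from the aging table, never hard-coded
// into every entry. Thresholds below are assumed for the sketch.
type StalenessStatus = "Current" | "Aging" | "Stale";

function stalenessStatus(lastEvidence: string, today: string): StalenessStatus {
  const msPerDay = 86_400_000;
  const days = (Date.parse(today) - Date.parse(lastEvidence)) / msPerDay;
  if (days <= 7) return "Current";   // Day 7 is still Current, per the worked example
  if (days <= 14) return "Aging";
  return "Stale";
}
```

So a score with `Last evidence: 2026-03-24` is still Current on 2026-03-31 (Day 7) and only picks up an Aging label afterwards.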

Comment on lines 264 to 268
# Political Intelligence Style Guide

<!-- version: 1.0.0 | updated: 2026-03-26 | author: Hack23 AB -->

## Purpose

This style guide establishes standards for political intelligence reporting across all article types published by Riksdagsmonitor. Inspired by the [ISMS Style Guide](https://github.com/Hack23/ISMS-PUBLIC/blob/main/STYLE_GUIDE.md), it adapts documentation and communication standards to the domain of parliamentary intelligence reporting.

Copilot AI Apr 6, 2026


This section introduces a second top-level title (# Political Intelligence Style Guide) even though the document already has an HTML <h1> at the top. Having multiple H1s can confuse navigation/TOC and makes the structure ambiguous. Consider removing this markdown H1 or downgrading it to a ## heading that fits the existing hierarchy.

Comment on lines +503 to +510
D7["📅 Day 7<br/>R2: L2×I4=8 ⚠️"]

D1 -->|"Lagrådet favorable<br/>opinion published"| D3
D3 -->|"No new evidence<br/>(4 days)"| D7

style D1 fill:#dc3545,color:#fff
style D3 fill:#ffc107,color:#000
style D7 fill:#ffc107,color:#000,stroke:#dc3545,stroke-width:3px

Copilot AI Apr 6, 2026


The temporal-evolution Mermaid template flags Day 7 with ⚠️, but your staleness rules define warnings/STALE marking only for scores older than 7 days (and Day 7 in the worked example is still “Current”). Align the template with the staleness table (e.g., remove the warning icon or label it explicitly as “Aging” when applicable).

Suggested change

D7["📅 Day 7<br/>R2: L2×I4=8 ⚠️"]
D1 -->|"Lagrådet favorable<br/>opinion published"| D3
D3 -->|"No new evidence<br/>(4 days)"| D7
style D1 fill:#dc3545,color:#fff
style D3 fill:#ffc107,color:#000
style D7 fill:#ffc107,color:#000,stroke:#dc3545,stroke-width:3px

D7["📅 Day 7<br/>R2: L2×I4=8"]
D1 -->|"Lagrådet favorable<br/>opinion published"| D3
D3 -->|"No new evidence<br/>(4 days)"| D7
style D1 fill:#dc3545,color:#fff
style D3 fill:#ffc107,color:#000
style D7 fill:#ffc107,color:#000
Comment on lines +923 to +935
From `threat-analysis.md`, extract threats at Kill Chain Stage ≥3 (Delivery or beyond):

| Threat | Severity | Kill Chain Stage | Affected Risk Category |
|:---|:---:|:---:|:---|
| `[Threat name]` | `[1–5]` | `[Stage name]` | `[Coalition/Policy/Budget/Electoral/External]` |

#### Step 2: Map Threats to Risk Scores

Transfer threat severity to risk likelihood per the mapping table above, then compute L×I:

| Risk Category | Threat-Informed L | Impact (I) | Score | Prior Score | Δ |
|:---|:---:|:---:|:---:|:---:|:---:|
| `[Category]` | `[1–5]` | `[1–5]` | `[L×I]` | `[previous]` | `[change]` |

Copilot AI Apr 6, 2026


The synthesis protocol hard-codes affected risk categories as [Coalition/Policy/Budget/Electoral/External], but the risk methodology defines a different set of risk categories (e.g., Coalition stability, Policy implementation, Legislative integrity, Economic governance, Social cohesion, Democratic process). This mismatch will make cross-framework mapping ambiguous. Consider referencing the risk methodology’s canonical category names (or linking to the exact section/table) instead of introducing a new taxonomy here.

Suggested change

From `threat-analysis.md`, extract threats at Kill Chain Stage ≥3 (Delivery or beyond):

| Threat | Severity | Kill Chain Stage | Affected Risk Category |
|:---|:---:|:---:|:---|
| `[Threat name]` | `[1–5]` | `[Stage name]` | `[Coalition/Policy/Budget/Electoral/External]` |

#### Step 2: Map Threats to Risk Scores

Transfer threat severity to risk likelihood per the mapping table above, then compute L×I:

| Risk Category | Threat-Informed L | Impact (I) | Score | Prior Score | Δ |
|:---|:---:|:---:|:---:|:---:|:---:|
| `[Category]` | `[1–5]` | `[1–5]` | `[L×I]` | `[previous]` | `[change]` |

From `threat-analysis.md`, extract threats at Kill Chain Stage ≥3 (Delivery or beyond), and assign each threat to the **canonical risk category names defined in this methodology**:

| Threat | Severity | Kill Chain Stage | Affected Risk Category |
|:---|:---:|:---:|:---|
| `[Threat name]` | `[1–5]` | `[Stage name]` | `[Coalition stability / Policy implementation / Legislative integrity / Economic governance / Social cohesion / Democratic process]` |

#### Step 2: Map Threats to Risk Scores

Transfer threat severity to risk likelihood per the mapping table above, using the same canonical risk categories from the risk methodology, then compute L×I:

| Risk Category | Threat-Informed L | Impact (I) | Score | Prior Score | Δ |
|:---|:---:|:---:|:---:|:---:|:---:|
| `[Canonical category name]` | `[1–5]` | `[1–5]` | `[L×I]` | `[previous]` | `[change]` |
@pethers
Member

pethers commented Apr 6, 2026

@copilot apply changes based on the comments in this thread

…licate H1, Mermaid ⚠️, and risk categories

1. ai-driven-analysis-guide: Data Freshness example now uses canonical serialized format
2. ai-driven-analysis-guide: carry-forward uses neutral `Last evidence:` + staleness status
3. political-style-guide: removed duplicate markdown H1, kept HTML <h1>
4. political-risk-methodology: removed ⚠️ from Mermaid Day 7 (still Current)
5. political-threat-framework: replaced hard-coded risk categories with canonical names

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/df46514a-fba4-4a90-93c7-591021931f39

Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
Contributor Author

Copilot AI commented Apr 6, 2026

@copilot apply changes based on the comments in this thread

All 5 review comments addressed in fa91de2:

  1. AI-driven analysis guide — Fixed Data Freshness empty-day example to use canonical **Data Freshness**: Documents sourced from **YYYY-MM-DD** format parseable by analysis-reader.ts.
  2. AI-driven analysis guide — Replaced ⚠️ Last evidence: carry-forward marker with neutral Last evidence: [date] field + separate staleness status derived from the aging table.
  3. Style guide — Removed duplicate markdown # H1, keeping the HTML <h1> as the single top-level heading; body sections now start at ##.
  4. Risk methodology — Removed ⚠️ icon and red stroke from Mermaid template Day 7 node — Day 7 is still within the staleness rules' window.
  5. Threat framework — Replaced hard-coded [Coalition/Policy/Budget/Electoral/External] with canonical risk category names from the risk methodology (Coalition stability / Policy implementation / Legislative integrity / Economic governance / Social cohesion / Democratic process).

@github-actions
Contributor

github-actions bot commented Apr 6, 2026

🔍 Lighthouse Performance Audit

Category Score Status
Performance 85/100 🟡
Accessibility 95/100 🟢
Best Practices 90/100 🟢
SEO 95/100 🟢

📥 Download full Lighthouse report

Budget Compliance: Performance budgets enforced via budget.json

@pethers pethers merged commit bd20c50 into main Apr 6, 2026
11 checks passed
@pethers pethers deleted the copilot/enhance-analysis-methodologies branch April 6, 2026 12:26

Labels

documentation Documentation updates size-l Large change (250-1000 lines) size-xs Extra small change (< 10 lines)

Development

Successfully merging this pull request may close these issues.

Enhance analysis methodologies with temporal analysis, Bayesian updating, and cross-methodology linkage