Conversation
🏷️ **Automatic Labeling Summary** — This PR has been automatically labeled based on the files changed and PR metadata. Applied labels: `size-xs`
🔍 Lighthouse Performance Audit
📥 Download full Lighthouse report. Budget Compliance: Performance budgets enforced via
- Add catalog-downloaded-data.ts script + 11 tests
- Add per-file-political-intelligence.md template (SWOT.md quality)
- Add ai-driven-analysis-guide.md methodology
- Add per-file-intelligence-analysis.md prompt (v2)
- Update SHARED_PROMPT_PATTERNS.md with per-file block
- Update news-evening-analysis.md with AI-driven analysis
- Update all 10 other agentic workflow .md files

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/21aec9bc-8b00-486e-bf37-be3a77e1d64b
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/21aec9bc-8b00-486e-bf37-be3a77e1d64b
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
- Fix analysis/README.md: remove duplicate section, add per-file analysis guide, update workflow integration, add conflict resolution, update script references
- Improve political-swot-framework.md: add EXPIRED entry handling rules, landscape SWOT weighting (Gov 0.40, S 0.25, SD 0.20, minor 0.15), conflict resolution, minimum requirements, update pipeline to v2
- Improve political-risk-methodology.md: remove duplicate technical section, add AI analysis protocol, add Risk-to-SWOT integration rules, add dimension weighting, add calibration cross-references
- Improve political-classification-guide.md: remove duplicate technical section, add borderline classification guidance, add score divergence protocol
- Improve political-threat-framework.md: add severity calibration table (1-5), add escalation criteria, add Threat-to-Risk integration mapping, add AI analysis protocol with MCP tool references
- Improve templates: add temporal decay tracking to SWOT template, add severity reference to threat template, add escalation decision table, add risk freshness/escalation to risk template, add score reconciliation to significance template, add conflicting impact resolution to stakeholder template, add calibration example to classification template, expand per-file template risk section to 6 categories with scoring reference

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/a27cff28-2ee0-4999-be91-2724c85cb117
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
…ication, L×I note, time budget guidance

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/a27cff28-2ee0-4999-be91-2724c85cb117
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/0088c258-7514-4a3f-b3bf-76d636f48db2
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
All 3 comments addressed in
These were addressed in the prior round (
All 8 ninth-round review comments addressed in 1–7. Next Review dates — unified to. All 13 catalog tests pass.
```diff
  7. **`scripts/prompts/v2/political-analysis.md`** — Core political analysis framework (6 analytical lenses)
  8. **`scripts/prompts/v1/stakeholder-perspectives.md`** — Multi-perspective analysis instructions
- 9. **`scripts/prompts/v1/quality-criteria.md`** — Quality self-assessment rubric (minimum 7/10)
+ 9. **`scripts/prompts/v2/quality-criteria.md`** — Quality self-assessment rubric (minimum 7/10)
```
This workflow’s "Required Skills" section points to scripts/prompts/v2/quality-criteria.md, but later the "MANDATORY Quality Validation" section still instructs using scripts/prompts/v1/quality-criteria.md. Align these references (either update the later instruction to v2, or keep both sections on v1) to avoid agents applying different rubrics.
Suggested change:

```diff
- 9. **`scripts/prompts/v2/quality-criteria.md`** — Quality self-assessment rubric (minimum 7/10)
+ 9. **`scripts/prompts/v1/quality-criteria.md`** — Quality self-assessment rubric (minimum 7/10)
```
@copilot apply changes based on the comments in this thread
…alysis.md

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/b0d11322-d8e5-404c-936b-6ab5c88a9ca4
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
Round 9 changes were already applied in
Fixed in
```diff
  After article generation, verify EACH article meets these minimum standards before committing.
- Apply the quality rubric from **`scripts/prompts/v1/quality-criteria.md`** (minimum score: 7/10).
+ Apply the quality rubric from **`scripts/prompts/v1/quality-criteria.md`** (minimum score: 7/10). Use the following reference documents to support consistent, in-depth analysis:
```
This file’s quality validation section still cites scripts/prompts/v1/quality-criteria.md, but there is a v2 rubric intended to replace it (scripts/prompts/v2/quality-criteria.md). To avoid inconsistent guidance (especially since you’re adding v2 per-file analysis references right below), update the rubric reference to v2.
Suggested change:

```diff
- Apply the quality rubric from **`scripts/prompts/v1/quality-criteria.md`** (minimum score: 7/10). Use the following reference documents to support consistent, in-depth analysis:
+ Apply the quality rubric from **`scripts/prompts/v2/quality-criteria.md`** (minimum score: 7/10). Use the following reference documents to support consistent, in-depth analysis:
```
.github/workflows/news-week-ahead.md
Outdated
```diff
- 7. **`scripts/prompts/v1/political-analysis.md`** — Core political analysis framework (6 analytical lenses)
+ 7. **`scripts/prompts/v2/political-analysis.md`** — Core political analysis framework (6 analytical lenses)
  8. **`scripts/prompts/v1/stakeholder-perspectives.md`** — Multi-perspective analysis instructions
  9. **`scripts/prompts/v1/quality-criteria.md`** — Quality self-assessment rubric (minimum 7/10)
```
The required-reading list still includes scripts/prompts/v1/quality-criteria.md, but scripts/prompts/v2/quality-criteria.md exists and is marked as replacing v1. Since this workflow is already updated to use v2 political-analysis and per-file analysis instructions, referencing the v2 quality rubric would keep the guidance consistent.
Suggested change:

```diff
- 9. **`scripts/prompts/v1/quality-criteria.md`** — Quality self-assessment rubric (minimum 7/10)
+ 9. **`scripts/prompts/v2/quality-criteria.md`** — Quality self-assessment rubric (minimum 7/10)
```
```typescript
      allEntries.push({
        id: basename,
        type: docType,
        path: path.relative(process.cwd(), filePath).split(path.sep).join('/'),
        analysisPath: path.relative(process.cwd(), analysisPath).split(path.sep).join('/'),
        hasAnalysis,
        sizeBytes: stat.size,
        meta,
      });
    }
  }

  // De-duplicate vote entries: when the same id appears from both
  // documents/votes and votes/YYYY-MM-DD, prefer the date-stamped path.
  // Only votes are scoped for dedup (the only type with two scan dirs).
  const bestByKey = new Map<string, (typeof allEntries)[number]>();
  for (const e of allEntries) {
    const key = `${e.type}::${e.id}`;
    const existing = bestByKey.get(key);
    if (!existing) {
      bestByKey.set(key, e);
    } else if (e.type === 'votes') {
      // Prefer the entry whose path contains a date directory (votes/YYYY-MM-DD)
      const existingHasDate = /votes\/\d{4}-\d{2}-\d{2}\//.test(existing.path);
      const currentHasDate = /votes\/\d{4}-\d{2}-\d{2}\//.test(e.path);
      if (currentHasDate && !existingHasDate) {
        bestByKey.set(key, e);
      }
      // Otherwise keep existing (first-seen or already date-stamped)
    }
    // Non-vote duplicates: keep first-seen (shouldn't occur with current DATA_SUBDIRS)
  }
  const dedupedEntries = [...bestByKey.values()];

  // Compute totals from the full scan (before pendingOnly filter)
  const totalCompleted = dedupedEntries.filter((e) => e.hasAnalysis).length;
  const totalPending = dedupedEntries.length - totalCompleted;

  // Apply pendingOnly filter after computing totals
  const entries = pendingOnly
    ? dedupedEntries.filter((e) => !e.hasAnalysis)
    : dedupedEntries;

  // Ensure deterministic ordering across platforms/filesystems.
  // Use simple < / > string compare (locale-independent) for stable collation.
  entries.sort((a, b) => {
    if (a.type < b.type) return -1;
    if (a.type > b.type) return 1;
    if (a.id < b.id) return -1;
    if (a.id > b.id) return 1;
    return 0;
  });

  return {
    generatedAt: new Date().toISOString(),
    dataRoot: path.relative(process.cwd(), dataRoot).split(path.sep).join('/'),
    totalFiles: dedupedEntries.length,
```
path/analysisPath/dataRoot are made relative to process.cwd(). That makes the catalog output dependent on the caller’s working directory (e.g., running the script from a subfolder produces different paths), and it’s inconsistent with the repo’s other CLIs that anchor paths to the repo root (see scripts/pre-article-analysis.ts:72-75). Consider deriving a stable REPO_ROOT via fileURLToPath(import.meta.url) and using it for default dataRoot and for path.relative(...) outputs.
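The anchoring the comment suggests could be sketched as follows. This is a minimal illustration, not the repo's actual code: the helper name `toRepoRelative` and the single `..` (assuming the script sits one directory below the repo root) are assumptions.

```typescript
import path from 'node:path';
import { fileURLToPath } from 'node:url';

// Resolve the repo root from this module's own location instead of
// process.cwd(), so emitted paths are identical no matter which
// directory the CLI is invoked from. The '..' depth is an assumption
// about where this script lives relative to the repo root.
const REPO_ROOT = path.resolve(path.dirname(fileURLToPath(import.meta.url)), '..');

// Convert an absolute path into a stable, forward-slash path
// relative to the repo root (matching the catalog's path style).
function toRepoRelative(filePath: string): string {
  return path.relative(REPO_ROOT, filePath).split(path.sep).join('/');
}
```

With this in place, `path.relative(process.cwd(), …)` call sites would switch to `toRepoRelative(…)`, and the default `dataRoot` would be derived from `REPO_ROOT` as well.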
```markdown
| **Document ID** | `[REQUIRED: dok_id or file identifier]` |
| **Document Type** | `[REQUIRED: proposition / motion / committeeReport / vote / speech / question / interpellation / government / worldbank / scb]` |
| **Title** | `[REQUIRED: document title or descriptor]` |
```
The required Document Type value list uses singular forms like committeeReport, vote, and speech, but the repo’s canonical type keys (directories + tooling) are plural (e.g. committeeReports, votes, speeches; see analysis/data/README.md:159 and the catalog script types). Aligning these names avoids inconsistent output and makes it easier to automate processing of per-file analyses.
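A hypothetical normalization shim illustrating the alignment the comment asks for. The map keys mirror the template's singular labels and the values follow the plural directory-key convention described in the review; none of these identifiers are taken from the actual tooling, so treat the mapping as an assumption.

```typescript
// Map the template's singular Document Type labels to the repo's
// plural canonical type keys (names assumed from the review comment).
const CANONICAL_TYPE: Record<string, string> = {
  proposition: 'propositions',
  motion: 'motions',
  committeeReport: 'committeeReports',
  vote: 'votes',
  speech: 'speeches',
  question: 'questions',
  interpellation: 'interpellations',
};

// Pass already-canonical values (e.g. 'government', 'worldbank', 'scb')
// through unchanged.
function canonicalType(raw: string): string {
  return CANONICAL_TYPE[raw] ?? raw;
}
```

A shim like this would let downstream automation consume per-file analyses regardless of which form the template author used.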
```diff
@@ -505,6 +534,9 @@ If the generated article lacks these analytical sections, manually add contextua

 After article generation, verify EACH article meets these minimum standards before committing.
 Apply the quality rubric from **`scripts/prompts/v1/quality-criteria.md`** (minimum score: 7/10).
```
This workflow still points to scripts/prompts/v1/quality-criteria.md for the mandatory quality rubric, but scripts/prompts/v2/quality-criteria.md explicitly replaces v1 (see v2 header comment). Since the rest of this workflow is already referencing v2 analysis prompts, update this reference to v2 to avoid conflicting instructions.
Suggested change:

```diff
- Apply the quality rubric from **`scripts/prompts/v1/quality-criteria.md`** (minimum score: 7/10).
+ Apply the quality rubric from **`scripts/prompts/v2/quality-criteria.md`** (minimum score: 7/10).
```
.github/workflows/news-motions.md
Outdated
```diff
@@ -503,6 +532,9 @@ If the generated article lacks these analytical sections, manually add contextua

 After article generation, verify EACH article meets these minimum standards before committing.
 Apply the quality rubric from **`scripts/prompts/v1/quality-criteria.md`** (minimum score: 7/10).
```
This workflow still references scripts/prompts/v1/quality-criteria.md for mandatory quality validation, even though scripts/prompts/v2/quality-criteria.md exists and is marked as replacing v1. To keep the workflow’s guidance consistent with the other v2 prompt references, switch the rubric link to v2.
Suggested change:

```diff
- Apply the quality rubric from **`scripts/prompts/v1/quality-criteria.md`** (minimum score: 7/10).
+ Apply the quality rubric from **`scripts/prompts/v2/quality-criteria.md`** (minimum score: 7/10).
```
```diff
@@ -481,6 +510,9 @@ If the generated article lacks these analytical sections, manually add contextua

 After article generation, verify EACH article meets these minimum standards before committing.
 Apply the quality rubric from **`scripts/prompts/v1/quality-criteria.md`** (minimum score: 7/10).
```
The mandatory quality validation step still points to scripts/prompts/v1/quality-criteria.md, but the repo has a v2 rubric that replaces v1 (scripts/prompts/v2/quality-criteria.md). Since this workflow is already updated to use v2 political-analysis/per-file prompts, updating the rubric reference to v2 will prevent mixed-version guidance.
Suggested change:

```diff
- Apply the quality rubric from **`scripts/prompts/v1/quality-criteria.md`** (minimum score: 7/10).
+ Apply the quality rubric from **`scripts/prompts/v2/quality-criteria.md`** (minimum score: 7/10).
```
@copilot apply changes based on the comments in this thread
…ROOT in catalog script, plural Document Type values

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/b2bcfe36-7359-4ac6-893d-0b16f356463d
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
All 7 eleventh-round review comments addressed in 1–5. v1 → v2 quality-criteria updated in both the Required Skills and MANDATORY Quality Validation sections of news-interpellations.md, news-week-ahead.md, news-propositions.md, news-motions.md, and news-committee-reports.md. All 13 catalog tests pass, 0 CodeQL alerts.
```typescript
    expect(types).toContain('worldbank');
    expect(types).toContain('scb');
  });
```
The tests cover votes/date subdirs, but not nested type subdirectories where basenames repeat (notably World Bank and mcp-responses). Add a regression test that creates e.g. worldbank/ind1/SE.json + worldbank/ind2/SE.json (and/or mcp-responses/server/toolA/1.json + .../toolB/1.json) and asserts the catalog contains both entries.
Suggested change:

```typescript
it('handles nested type subdirectories with duplicate basenames', () => {
  writeJSON(path.join(tmpRoot, 'worldbank/ind1/SE.json'), { value: 1 });
  writeJSON(path.join(tmpRoot, 'worldbank/ind2/SE.json'), { value: 2 });
  const catalog = buildCatalog(tmpRoot);
  expect(catalog.totalFiles).toBe(2);
});
```
```diff
- 7. **`scripts/prompts/v1/political-analysis.md`** — Core political analysis framework (6 analytical lenses)
+ 7. **`scripts/prompts/v2/political-analysis.md`** — Core political analysis framework (6 analytical lenses)
  8. **`scripts/prompts/v1/stakeholder-perspectives.md`** — Multi-perspective analysis instructions
  9. **`scripts/prompts/v1/quality-criteria.md`** — Quality self-assessment rubric (minimum 7/10)
```
This workflow’s Required Skills still references scripts/prompts/v1/quality-criteria.md, but other updated workflows now point at scripts/prompts/v2/quality-criteria.md. Since the workflow already references v2 prompts elsewhere, update the quality criteria link to v2 to avoid agents applying an outdated rubric.
Suggested change:

```diff
- 9. **`scripts/prompts/v1/quality-criteria.md`** — Quality self-assessment rubric (minimum 7/10)
+ 9. **`scripts/prompts/v2/quality-criteria.md`** — Quality self-assessment rubric (minimum 7/10)
```
```diff
  7. **`scripts/prompts/v2/political-analysis.md`** — Core political analysis framework (6 analytical lenses)
  8. **`scripts/prompts/v1/stakeholder-perspectives.md`** — Multi-perspective analysis instructions
  9. **`scripts/prompts/v1/quality-criteria.md`** — Quality self-assessment rubric (minimum 7/10)
  10. **`scripts/prompts/v2/per-file-intelligence-analysis.md`** — Per-file AI analysis protocol
  11. **`analysis/methodologies/ai-driven-analysis-guide.md`** — Methodology for deep per-file analysis
  12. **`analysis/templates/per-file-political-intelligence.md`** — Per-file analysis output template
```
The Required Skills section still points to scripts/prompts/v1/quality-criteria.md even though v2 is used elsewhere in the repo (e.g. news-evening-analysis.md, news-propositions.md). Update this reference to scripts/prompts/v2/quality-criteria.md for consistency with the rest of the updated workflow docs.
```diff
- 7. **`scripts/prompts/v1/political-analysis.md`** — Core political analysis framework (6 analytical lenses)
+ 7. **`scripts/prompts/v2/political-analysis.md`** — Core political analysis framework (6 analytical lenses)
  8. **`scripts/prompts/v1/stakeholder-perspectives.md`** — Multi-perspective analysis instructions
  9. **`scripts/prompts/v1/quality-criteria.md`** — Quality self-assessment rubric (minimum 7/10)
```
The Required Skills list still references scripts/prompts/v1/quality-criteria.md, while the updated workflow docs for daily news runs have switched to scripts/prompts/v2/quality-criteria.md. To keep analysts using a single rubric, update this reference to v2 here as well.
Suggested change:

```diff
- 9. **`scripts/prompts/v1/quality-criteria.md`** — Quality self-assessment rubric (minimum 7/10)
+ 9. **`scripts/prompts/v2/quality-criteria.md`** — Quality self-assessment rubric (minimum 7/10)
```
```diff
- 7. **`scripts/prompts/v1/political-analysis.md`** — Core political analysis framework (6 analytical lenses)
+ 7. **`scripts/prompts/v2/political-analysis.md`** — Core political analysis framework (6 analytical lenses)
  8. **`scripts/prompts/v1/stakeholder-perspectives.md`** — Multi-perspective analysis instructions
  9. **`scripts/prompts/v1/quality-criteria.md`** — Quality self-assessment rubric (minimum 7/10)
```
The Required Skills list still references scripts/prompts/v1/quality-criteria.md, while other workflows have been updated to use the v2 rubric. Updating this to scripts/prompts/v2/quality-criteria.md would prevent agents from mixing scoring standards across workflows.
Suggested change:

```diff
- 9. **`scripts/prompts/v1/quality-criteria.md`** — Quality self-assessment rubric (minimum 7/10)
+ 9. **`scripts/prompts/v2/quality-criteria.md`** — Quality self-assessment rubric (minimum 7/10)
```
`````markdown
## Per-File AI Analysis Block (copy into every analysis workflow)

> **Replaces script-based batch analysis.** The AI agent reads methodology documents and produces SWOT.md-quality per-file analysis for every downloaded MCP data file.

````markdown
### Per-File AI Political Intelligence Analysis

**Purpose:** Replace shallow script-based daily analysis with deep, AI-driven per-file analysis.
**Quality Standard:** Every analysis file must match [SWOT.md](../../SWOT.md) and [THREAT_MODEL.md](../../THREAT_MODEL.md) formatting quality.
`````
Since this file is explicitly the copy/paste source for future workflow edits, it’s important that it doesn’t accidentally reintroduce v1 prompt references. The newly added per-file analysis block is v2-aligned, but the earlier “Shared Skill Block” section (above this diff hunk) still points to v1 political-analysis.md / quality-criteria.md. Consider updating that shared block to v2 as well so new workflows inherit the correct references by default.
| Document Type | Key Fields to Extract |
|---------------|-----------------------|
| **Propositions** | `dok_id`, `titel`, `rm`, `organ`, `datum`, `undertitel`, `summary` |
| **Motions** | `dok_id`, `titel`, `parti`, `rm`, `undertitel` |
| **Committee Reports** | `dok_id`, `titel`, `organ`, `rm`, `reservationer` |
| **Votes** | `votering_id`, `datum`, `ja`, `nej`, `avstar`, `franvarande`, `punkt` |
| **Speeches** | `anforande_id`, `talare`, `parti`, `debattnamn`, `anforandetext` |
| **Questions** | `dok_id`, `titel`, `parti`, `mottagare`, `svar` |
| **Interpellations** | `dok_id`, `titel`, `parti`, `mottagare`, `status` |
| **Government Docs** | `title`, `type`, `department`, `date`, `url` |
| **World Bank** | `indicator`, `country`, `date`, `value` |
| **SCB** | `table_id`, `variables`, `values` |
The “Key Fields to Extract” table omits file types that are part of the persisted dataset (events, mps, and mcp-responses) and also doesn’t reflect the nested World Bank structure (worldbank/{indicator}/{country}.json), which affects how an agent should interpret id/context. Consider expanding this table (and the surrounding protocol) so it covers all cataloged types and clarifies what identifiers to use for nested paths.
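One way the nested-path identifier question could be handled is with a composite id derived from every path segment under the type directory, so duplicate basenames such as `worldbank/ind1/SE.json` and `worldbank/ind2/SE.json` stay distinct. This is a sketch only; the `entryId` helper and the `::` separator are illustrative, not taken from the repo.

```typescript
// Build a collision-free id from a file's path relative to its type
// directory, e.g. 'ind1/SE.json' -> 'ind1::SE' and 'SE.json' -> 'SE'.
// Separator choice ('::') is an assumption for illustration.
function entryId(relPathWithinType: string): string {
  const noExt = relPathWithinType.replace(/\.json$/i, '');
  return noExt.split('/').join('::');
}
```

A scheme like this would also let the Key Fields table document, per type, which path segments carry meaning (indicator and country for World Bank, server and tool for mcp-responses).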
@copilot apply changes based on the comments in this thread
…PT_PATTERNS, nested id fix, expanded Key Fields table, regression test

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/022eba6a-601b-45b8-a7d8-2e21d376e155
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/022eba6a-601b-45b8-a7d8-2e21d376e155
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
All 7 twelfth-round review comments addressed in 1–4. v1 → v2 quality-criteria updated in news-article-generator.md, news-realtime-monitor.md, news-month-ahead.md, and news-monthly-review.md. All 14 catalog tests pass, 0 CodeQL alerts.