From 92d8346a86969093a33ba71d2cc15b744b84e9f6 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Tue, 5 May 2026 15:58:04 +0000 Subject: [PATCH 1/5] Changelog: Nextflow v25.10.5 --- changelog/nextflow/v25.10.5.md | 14 ++++++++++++++ 1 file changed, 14 insertions(+) create mode 100644 changelog/nextflow/v25.10.5.md diff --git a/changelog/nextflow/v25.10.5.md b/changelog/nextflow/v25.10.5.md new file mode 100644 index 000000000..2d5d44c15 --- /dev/null +++ b/changelog/nextflow/v25.10.5.md @@ -0,0 +1,14 @@ +--- +title: Nextflow 25.10.5 +date: 2026-05-05 +tags: [nextflow] +--- + +- Fix formatting bug with map key expression (#6893) [8290bc0ae] +- Fix incorrect evaluation of `secret` process directive (#6934) [3cef01585] +- Fix params block in included module (#6940) [4656c9b35] +- Fix params inclusion across modules in v2 parser (#6766) [a97071485] +- Fix resolution of params in resolved config text (#7072) [5aaf7926f] +- Bump nf-tower@1.17.6 + +https://github.com/nextflow-io/nextflow/releases/tag/v25.10.5 From 475d37b4f2a24194b50d58928a8c340608179b59 Mon Sep 17 00:00:00 2001 From: Christopher Hakkaart Date: Wed, 6 May 2026 13:43:06 +1200 Subject: [PATCH 2/5] Style Changelog --- changelog/nextflow/v25.10.5.md | 32 ++++++++++++++++++++++++-------- 1 file changed, 24 insertions(+), 8 deletions(-) diff --git a/changelog/nextflow/v25.10.5.md b/changelog/nextflow/v25.10.5.md index 2d5d44c15..9ba5be70a 100644 --- a/changelog/nextflow/v25.10.5.md +++ b/changelog/nextflow/v25.10.5.md @@ -4,11 +4,27 @@ date: 2026-05-05 tags: [nextflow] --- -- Fix formatting bug with map key expression (#6893) [8290bc0ae] -- Fix incorrect evaluation of `secret` process directive (#6934) [3cef01585] -- Fix params block in included module (#6940) [4656c9b35] -- Fix params inclusion across modules in v2 parser (#6766) [a97071485] -- Fix resolution of params in resolved config text (#7072) [5aaf7926f] -- Bump nf-tower@1.17.6 - -https://github.com/nextflow-io/nextflow/releases/tag/v25.10.5 
+:::info Nextflow 25.10 is a stable release +See the [Migrating to 25.10](https://docs.seqera.io/nextflow/migrations/25-10) guide for a comprehensive list of changes since the last stable release. +::: + +## Feature updates and improvements + +### Dependencies + +- Bumped nf-tower@1.17.6 by @pditommaso + +## Bug fixes + +### Language features + +- Fixed formatting bug with map key expression by @bentsherman in [#6893](https://github.com/nextflow-io/nextflow/pull/6893) +- Fixed incorrect evaluation of `secret` process directive by @bentsherman in [#6934](https://github.com/nextflow-io/nextflow/pull/6934) +- Fixed params block in included module by @bentsherman in [#6940](https://github.com/nextflow-io/nextflow/pull/6940) +- Fixed params inclusion across modules in v2 parser by @bentsherman in [#6766](https://github.com/nextflow-io/nextflow/pull/6766) + +### Configuration + +- Fixed resolution of params in resolved config text by @jorgee in [#7072](https://github.com/nextflow-io/nextflow/pull/7072) + +**Full changelog**: https://github.com/nextflow-io/nextflow/releases/tag/v25.10.5 \ No newline at end of file From e405d0575a6bdd6ce4a44234b0ee6ea552aa582f Mon Sep 17 00:00:00 2001 From: Christopher Hakkaart Date: Wed, 6 May 2026 13:43:18 +1200 Subject: [PATCH 3/5] Style Changelog --- changelog/nextflow/v25.10.5.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/changelog/nextflow/v25.10.5.md b/changelog/nextflow/v25.10.5.md index 9ba5be70a..4fd1ee5bc 100644 --- a/changelog/nextflow/v25.10.5.md +++ b/changelog/nextflow/v25.10.5.md @@ -27,4 +27,4 @@ See the [Migrating to 25.10](https://docs.seqera.io/nextflow/migrations/25-10) f -**Full changelog**: https://github.com/nextflow-io/nextflow/releases/tag/v25.10.5 \ No newline at end of file +**Full changelog**: https://github.com/nextflow-io/nextflow/releases/tag/v25.10.5 From 
95d786cb789463f7d99af7b9fc196efd4fec96ac Mon Sep 17 00:00:00 2001 From: Christopher Hakkaart Date: Wed, 6 May 2026 13:48:11 +1200 Subject: [PATCH 4/5] Fix whitespace --- platform-cloud/docs/compute-envs/google-cloud-batch.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/platform-cloud/docs/compute-envs/google-cloud-batch.md b/platform-cloud/docs/compute-envs/google-cloud-batch.md index a36cf5780..872495f1a 100644 --- a/platform-cloud/docs/compute-envs/google-cloud-batch.md +++ b/platform-cloud/docs/compute-envs/google-cloud-batch.md @@ -100,11 +100,11 @@ You can manage your key from the **Service Accounts** page. **Workload Identity Federation** -Workload Identity Federation (WIF) is the recommended authentication method for production and regulated environments because it eliminates the need for long-lived service account keys. WIF uses short-lived OIDC tokens for authentication, which are generated by Seqera Platform. +Workload Identity Federation (WIF) is the recommended authentication method for production and regulated environments because it eliminates the need for long-lived service account keys. WIF uses short-lived OIDC tokens for authentication, which are generated by Seqera Platform. -This requires the following steps in the GCP Console: +This requires the following steps in the GCP Console: -1. Create a [Workload Identity Pool and Provider](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers) in your Google Cloud project. +1. Create a [Workload Identity Pool and Provider](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers) in your Google Cloud project. 2. Set Seqera as an OIDC provider within the pool. Set the Issuer URL to `https://cloud.seqera.io/api`. 3. Set the **Allowed audiences**. 
If left empty, GCP derives a default audience from the provider resource path in the format `//iam.googleapis.com/projects/{PROJECT}/locations/global/workloadIdentityPools/{POOL}/providers/{PROVIDER}`. If you specify a custom value, it must match exactly what you enter in the Token audience field when creating the Google WIF credential in Seqera. From 2d67c9d69896539eb5336c40dcf1174c482a45cc Mon Sep 17 00:00:00 2001 From: Seqera Docs Bot Date: Wed, 6 May 2026 22:13:38 +0000 Subject: [PATCH 5/5] [automated] Fix code formatting --- .claude/skills/docs-state-assessment/SKILL.md | 112 +++++++++--------- .claude/skills/feature-docs/SKILL.md | 58 ++++----- .claude/skills/release-impact/SKILL.md | 72 +++++------ 3 files changed, 121 insertions(+), 121 deletions(-) diff --git a/.claude/skills/docs-state-assessment/SKILL.md b/.claude/skills/docs-state-assessment/SKILL.md index 0d744236b..2958d37a8 100644 --- a/.claude/skills/docs-state-assessment/SKILL.md +++ b/.claude/skills/docs-state-assessment/SKILL.md @@ -12,82 +12,82 @@ description: > priorities. This skill focuses on the state of the work — what exists, what's in flight, what's missing — not on personas, information architecture, or where content should live in the doc site. --- - + # Docs State Assessment - + Produce a structured gap analysis for a Seqera product area by pulling from four sources and cross-referencing them. - + ## Why this skill exists - + Documentation teams working across multiple product areas need to quickly answer: "What's the current state of docs for this project?" Doing this manually means opening GitHub, scanning PRs, checking Jira, reading the published docs, and browsing the repo — then holding all of that in your head while you figure out the deltas. This skill turns that into a repeatable process with a consistent output format. - + ## The four data sources - + Every assessment pulls from these four layers.
Each one answers a different question: - + | Source | Question it answers | How to access it | |--------|-------------------|------------------| | **Published docs** (docs.seqera.io) | What do customers see today? | `web_fetch` the product area's doc index page, then fetch each subpage | | **GitHub PRs** (open) | What's shipping soon that may need docs? | Browse the repo's open PRs via Claude in Chrome, or use `gh` CLI if available | | **Jira backlog** | What's planned that will create doc needs? | `searchJiraIssuesUsingJql` — filter out security issues and Done items | | **Internal repo docs** | What source material exists that isn't published? | Browse the repo's `docs/` directory (or equivalent) via Chrome or filesystem | - + The power is in the cross-referencing. Any single source gives you a partial picture. The assessment finds the gaps between them. - + ## Before you start - + Gather these inputs from the user (or from conversation context): - + 1. **GitHub repo** — the repository to assess (e.g., `seqeralabs/portal`) 2. **Jira project key** — the Jira project tracking work for this product (e.g., `SH`) 3. **Published docs URL** — the root URL of the product's section on docs.seqera.io (e.g., `https://docs.seqera.io/platform-cloud/seqera-ai/`) 4. **Jira Cloud ID** — typically the Atlassian site URL (e.g., `seqera.atlassian.net`) If the user doesn't provide all four, ask for what's missing. Often the repo and Jira project are enough — you can infer the docs URL from the product area name, and the Cloud ID from the Jira URL. - + ## Practical notes on tooling - + **Private repos**: If the GitHub repo is private, `web_fetch` will return 404. Use Claude in Chrome to browse the repo instead — navigate to the PRs page and read the page text. This is the most reliable approach for private Seqera repos. - + **Large Jira result sets**: The SH project (Seqera AI) has thousands of issues. Always filter out security issues and Done/Canceled items in the JQL query. 
If results still exceed token limits, the output gets saved to a file — use `jq` or Python to extract just the fields you need (key, summary, status, issuetype, priority, assignee). - + **Parallelize where possible**: Steps 1 (published docs) and 3 (Jira) don't depend on each other. Fetch them in parallel. Step 2 (PRs) requires browser navigation, so it runs sequentially. Step 4 (internal docs) can run in parallel with Step 2 if you have filesystem access to the repo. - + ## The assessment process - + Work through these steps in order. Each step builds on the previous one. - + ### Step 1: Inventory the published docs - + Fetch the product area's index page on docs.seqera.io. Extract: - The full page list from the sidebar navigation - For each page: title, URL, and a brief note on what it covers Then fetch 3-5 of the most substantive pages (not just the index) to understand the depth of coverage. You're looking for: how detailed are they? Do they cover setup, configuration, advanced usage, troubleshooting? Or are they surface-level overviews? - + Record this as a table with columns: Page, Coverage depth (overview / moderate / detailed), Key topics covered. - + ### Step 2: Scan the GitHub PRs - + Navigate to the repo's open PRs page. Read through all pages (repos with active development often have 50-100+ open PRs). For each PR, note: - PR number and title - Scope prefix (feat, fix, docs, chore, etc.) - Status (Draft, Review required, Approved, Changes requested) - Priority labels if any (ship-this-week, next-sprint, backlog) Filter out noise: dependency bumps (renovate/dependabot), security patches, and pure CI/tooling changes rarely need docs. Focus on `feat()`, `fix()` that changes user-facing behavior, and `docs()` PRs. - + Group the remaining PRs by theme — features that relate to each other often imply the same doc page or section. Common groupings: authentication, new commands/features, UI changes, API changes, performance/reliability. 
- + ### Step 3: Query the Jira backlog - + Search for non-security, non-Done issues: - + ``` project = <PROJECT_KEY> AND issuetype NOT IN ("Security Issue") AND status NOT IN (Done, Canceled) ORDER BY created DESC ``` - + Request fields: summary, status, issuetype, priority, assignee, labels, components. - + If the result set is too large for context, it will be saved to a file. Extract the essentials with a script: - + ```python import json, sys data = json.load(sys.stdin) @@ -102,78 +102,78 @@ for item in data: assignee = (f.get('assignee') or {}).get('displayName', 'Unassigned') print(f'{key} | {itype} | {status} | {assignee} | {summary}') ``` - + From the results, extract: - **Epics** — these represent large initiatives, each potentially implying an entire doc set - **Stories/Tasks with doc implications** — new features, migrations, integrations - **In-progress items** — these are shipping soonest and need docs coordination - **Bugs that affect doc accuracy** — behavioral changes that make current docs wrong Skip: internal tooling tasks, eval/benchmark work, and items that are purely backend optimization with no user-facing change. - + ### Step 4: Scan internal repo docs - + Browse the repo's documentation directory (usually `docs/`, sometimes scattered across component READMEs). Inventory what exists: - Design specs / PRDs (contain source material for customer-facing docs) - Architecture docs (internal reference) - Setup/onboarding guides (internal dev audience) - Process docs (internal workflows) The key question: does the repo contain information that should be in the published docs but isn't? Design specs are the most common source of this gap — they describe features in detail for the engineering team but that information never makes it to customer-facing documentation. - + ### Step 5: Cross-reference and produce the gap analysis - + This is where the four sources come together. Produce four delta categories: - + **1.
Shipped but not documented** — Features or capabilities that exist in the product (merged to main, available to users) but have no corresponding published documentation. These are found by comparing internal repo docs and recent merged PRs against the published doc inventory. Severity: Critical or High. - + **2. In flight — will need docs soon** — Open PRs that are approved, in review, or labeled as shipping soon. Cross-reference with any Jira items they link to. For each, note what published doc page needs updating or whether a new page is needed. Severity: varies by ship timeline. - + **3. Planned — will create future doc needs** — Jira epics and stories in the backlog that will require documentation when they ship. These help the docs team plan capacity. Note estimated doc scope (new page, new section, update to existing page, entirely new doc set). - + **4. Published docs that may be stale** — Existing published pages where open PRs or Jira items suggest the behavior is changing. For example, if there's an auth hardening PR shipping this week, the published authentication page is at risk of becoming inaccurate. 
- + ## Output format - + Produce a markdown document with this structure: - + ```markdown # [Product Area]: Documentation State Assessment *Generated: [date]* - + ## Published docs inventory [Table of what exists on docs.seqera.io with coverage depth] - + ## Open PRs with doc implications [Grouped by theme, filtered to only doc-relevant PRs] - + ## Jira backlog with doc implications [Epics, stories, and bugs that affect docs — separated by type] - + ## Gap analysis - + ### Shipped but not documented [Table: Feature | Source | Gap severity] - + ### In flight — needs docs soon [Table: Feature | PR(s) | Jira | Doc action needed] - + ### Planned — future doc needs [Table: Initiative | Jira | Doc impact estimate] - + ### Published docs at risk of staleness [Table: Published page | Risk level | Source of drift] - + ## Recommended priorities [Ordered list: what to do this week, next sprint, next quarter] ``` - + ## Important guidance - + **Be specific about what's missing, not vague.** "Web UI docs are missing" is less useful than "The web UI has a sandbox file explorer (PR #872), an explain-failure action (PR #873), and a code selector (PR #835), but zero published documentation for any web UI feature." - + **Severity matters.** A missing config reference page for an enterprise product is more urgent than a missing page about an exploration-labeled prototype. Use the PR labels and Jira statuses to gauge urgency. - + **Don't boil the ocean.** If the repo has 200 open PRs, you don't need to analyze all of them. Focus on `feat()` and `fix()` scopes, skip renovate/dependabot, skip exploration-labeled items unless they're unusually significant. Call out how many you filtered and why. - + **Cross-reference by keyword, not just by ticket number.** A Jira epic about "background agents" and a PR titled "add trigger-session endpoint for background AI sessions" are related even if the PR doesn't reference the Jira key. Connect them. 
- + **The output is a working document, not a final report.** It should be useful in a planning session — scannable tables, clear severity ratings, and actionable next steps. Keep prose minimal; let the tables do the talking. diff --git a/.claude/skills/feature-docs/SKILL.md b/.claude/skills/feature-docs/SKILL.md index 0f142c890..6235a1869 100644 --- a/.claude/skills/feature-docs/SKILL.md +++ b/.claude/skills/feature-docs/SKILL.md @@ -9,52 +9,52 @@ description: > for this feature" or "can you draft docs from these sources". Also use when the user provides a mix of Jira links, Google Doc links, GitHub PR links, or uploaded PDFs and wants documentation produced from them. --- - + # Feature Docs - + Turn feature sources into Seqera Platform documentation updates. This skill takes a mix of inputs — Jira epics, PRDs, design docs, GitHub PRs, and the current published docs — and produces structured documentation outputs: gap analyses, draft content, full drop-in replacement files, and change summaries. - + ## Before you start - + Read the `seqera-brand-guidelines` skill (SKILL.md) for Seqera product terminology and tone. The brand skill covers naming conventions (e.g., "Seqera Platform" not "Tower", "Nextflow" capitalization) and visual identity. This skill covers doc structure and content workflow. - + Also read `references/docusaurus-conventions.md` in this skill's directory for Seqera-specific formatting patterns (admonitions, link references, frontmatter, heading hierarchy). - + ## How the skill works - + The workflow has five phases. Not every run needs all five — the user might just want a gap analysis, or they might want the full pipeline. Ask if unclear, but default to running all five phases and producing all outputs. 
- + ### Phase 1: Gather and read all sources - + The user will provide some combination of: - + - **Jira epic or ticket URLs** — Use the Jira MCP tools (`searchJiraIssuesUsingJql`, `getJiraIssue`) to read the epic, its child issues, acceptance criteria, and status. Extract feature names, scope, and what's been completed. - **PRD / Design Doc** — These may be uploaded as PDFs, shared as Google Doc links (use the browser to read them), or pasted into the conversation. Extract requirements, acceptance criteria, API field names, behavioral specifications, and out-of-scope items. - **GitHub PRs** — Use the browser to read PR descriptions and diffs. Focus on what changed, what fields were added, and what the PR description says about user-facing behavior. Check both open and merged PRs under the epic. - **Published docs (URLs)** — Fetch the current live documentation. This is your baseline. If the URL points to a raw markdown file on GitHub, that's the actual source of truth for the doc structure. Read everything before writing anything. The value of this skill is in the cross-referencing — catching things that one source mentions but another doesn't. - + **Practical tips for source gathering:** - Jira child issue queries can return huge results. If the response is too large, save it to a file and extract summaries programmatically. - Google Docs may need to be accessed via the browser's authenticated session. If `get_page_text` doesn't return the document body, try using `fetch()` with the `/export?format=txt` endpoint from within the page context. - GitHub PR pages can be very large. Use `document.querySelector('.markdown-body')?.innerText` via the JavaScript tool to extract just the PR description. - Some content may be blocked by browser content filters (especially text containing credentials or keys). Extract in smaller sections or use alternative methods. ### Phase 2: Gap analysis - + Compare what the current docs say against what the sources say should be true. 
For each gap, document: - + 1. **What the current docs say** (quote the specific text) 2. **What reality is now** (cite the source — which Jira ticket, which PR, which section of the PRD) 3. **What needs to change** (specific, actionable — not vague) 4. **Where in the doc** (section heading and approximate location) Organize gaps by severity: Critical (docs are actively wrong/misleading), Important (significant feature not documented), Minor (wording tweaks, label changes). - + Also flag **open questions** — things where the sources disagree or are ambiguous, where you'd need engineering or product confirmation. - + ### Phase 3: Draft content - + For each gap, draft the replacement content. Write it in the voice and format of the existing doc — match heading levels, step numbering style, admonition usage, and link reference patterns. See `references/docusaurus-conventions.md` for the specific patterns. - + Key principles: - **Preserve what works.** Don't rewrite sections that don't need changing. Surgically insert, replace, or remove content. - **Match the existing voice.** Seqera docs are instructional and direct. Steps use imperative mood ("Select...", "Enter..."). Explanatory text is concise but not terse. @@ -62,9 +62,9 @@ Key principles: - **API field names go in backticks.** E.g., `managedIdentityClientId`, `jobMaxWallClockTime`. - **Link references go at the bottom of the file.** Use descriptive names, not short abbreviations. ### Phase 4: Produce the output - + Depending on what the user asked for (or defaulting to all of the below): - + **Mode A: Gap analysis only** - A markdown file listing all gaps, organized by severity, with source citations and recommended changes. **Mode B: Gap analysis + draft content** @@ -74,31 +74,31 @@ Depending on what the user asked for (or defaulting to all of the below): **Mode D: Change summary** - A structured changelog document organized section-by-section: what was added, removed, or changed, and why. 
Useful for doc review and for understanding the scope of changes at a glance. The default output when the user says "draft docs for this feature" or similar is **Mode C + Mode D** — a full replacement file plus a change summary. - + All output files go in the outputs directory so the user can access them. - + ### Phase 5: Cross-reference verification - + After producing the output, verify it against every source. For each requirement or acceptance criterion in the PRD and design doc, check whether the updated docs reflect it. Produce a verification checklist: - + - ✅ Requirement is fully reflected in the updated docs - ⚠️ Partially reflected or acceptable omission (explain why) - ❌ Missing — needs to be added Flag anything that needs engineering/product confirmation (e.g., "Design doc mentions field X but it's unclear if it's user-facing"). - + This phase catches things you missed in the initial pass. It's the most important quality gate. - + ## Naming conventions for output files - + Use descriptive names based on the doc being updated: - + - `{page-name}-gap-analysis.md` — Gap analysis (Mode A/B) - `{page-name}-updated.md` — Full replacement (Mode C) - `{page-name}-change-summary.md` — Change summary (Mode D) For example: `azure-batch-gap-analysis.md`, `azure-batch-updated.md`, `azure-batch-change-summary.md`. - + ## When things go wrong - + - **Can't access a source**: Tell the user which source you couldn't read and why. Ask them to upload it as a PDF or paste the relevant content. Don't silently skip sources. - **Sources contradict each other**: Flag the contradiction explicitly in the gap analysis. Don't guess — list both versions and mark it as needing confirmation. - **Not sure if a feature is user-facing**: Include it in the gap analysis as an open question. Better to flag something unnecessary than to miss something important. 
diff --git a/.claude/skills/release-impact/SKILL.md b/.claude/skills/release-impact/SKILL.md index f08cd7ad2..883a936f6 100644 --- a/.claude/skills/release-impact/SKILL.md +++ b/.claude/skills/release-impact/SKILL.md @@ -13,40 +13,40 @@ description: > over docs-state-assessment (broad audit) and feature-docs (drafts new content) for single-release triage. --- - + # Release Impact Assessment - + Given a single release — a PR, a Jira ticket, or a changelog entry — produce a focused impact report telling a tech writer what needs to be touched in the Seqera docs. The skill checks three specific surfaces. It does not draft replacement content; it finds and classifies impact so the writer can act. - + ## Why this skill exists - + Most releases don't trigger a full doc audit, but many of them quietly break something: an env var gets renamed, a UI panel gets restyled and a screenshot goes stale, a page gets renamed and the five pages that linked to it now 404. Those three categories — env vars, visual assets, and internal links — are the ones we keep getting burned by, and they're mechanical enough that Claude can triage them reliably if it knows where to look. - + This skill is narrower than `docs-state-assessment` (which audits a whole product area over time) and narrower than `feature-docs` (which drafts full doc rewrites). Reach for it when you have *one* change and want to know *what it breaks*. - + ## Where things live (baked in) - + The skill assumes the Seqera docs repo: - + - **Docs repo (primary):** `https://github.com/seqeralabs/docs` (public, master branch) - **Live site (fallback):** `https://docs.seqera.io` - **Env vars reference directory:** `https://github.com/seqeralabs/docs/tree/master/platform-enterprise_docs/enterprise/configuration` If the user is clearly working against a different repo (e.g., a forward-looking branch or a different product doc set), ask before switching. Otherwise stick to the paths above. 
- + ## Inputs the skill accepts - + Any one of these, or a mix: - + - **A GitHub PR URL** (or several PRs against the product repo — not the docs repo). The PR is the richest source; the diff, the title, and the description usually tell you all three categories of impact at once. @@ -56,39 +56,39 @@ Any one of these, or a mix: need to infer PRs from it to get concrete detail. If a PR is obviously referenced, fetch it. If the user gives you only a vague description ("we just rebranded"), ask for at least one of the above. The value of the skill is precision, and precision needs a concrete artifact. - + ## The three-surface check - + Run all three in parallel where tooling allows. Each has its own reference doc with the specific commands, patterns, and edge cases. - + ### 1. Env var impact — see `references/env-vars-check.md` - + Diff the change against every file under `platform-enterprise_docs/enterprise/configuration`. Flag: added env vars missing from the reference, renamed/removed env vars still listed there, default-value changes, scope or applicability changes (e.g., "now only honored in standalone deployments"). - + ### 2. Visual asset impact — see `references/visual-assets-check.md` - + Decide whether the change touches the UI, the logo, or the brand palette. If yes, enumerate every image, video, or brand reference in the docs repo and classify which ones are plausibly stale. Group by page so the writer can triage in one pass. - + ### 3. Internal link impact — see `references/internal-links-check.md` - + If the change renames, moves, or removes a page or a heading anchor, scan the docs repo for every markdown link pointing at it. Return the source files and line numbers so the writer can fix or redirect them. - + ## How to work the inputs - + Read everything before writing anything. Even if the user only wants one of the three checks, read the full change first — the three surfaces overlap more than they look. 
A rebrand PR often also renames config keys; a UI refresh PR sometimes changes URL paths. - + Practical notes: - + - **PR diffs can be large.** For env var check, filter the diff to files matching `*.{yaml,yml,java,kt,ts,tsx,js,jsx,go,py,env,properties,conf}` and anything mentioning `System.getenv`, `process.env`, `os.environ`, or the string `TOWER_` / `SEQERA_` (common @@ -102,39 +102,39 @@ Practical notes: - **Parallelize:** the three checks are independent. Spawn them in parallel as subagents or tool calls where possible. ## Output format - + Produce a single markdown file in the outputs directory named `release-impact-{pr-or-jira-id}.md`. If the user gave you only changelog text with no ID, use a short slug derived from the first few words. - + Use this exact structure so the report is scannable and diffable across releases: - + ```markdown # Release impact: {title} *Generated: {date}* *Source: {PR URL / Jira key / "changelog entry"}* - + ## Summary {2–4 sentences: what the change is, and a one-line verdict per surface — "Env vars: 2 additions, 1 rename. Visual assets: no impact. Internal links: 1 page moved, 4 inbound links to fix."} - + ## Env var impact — {Critical | Important | Minor | None} {Table: Env var | Change type (added/renamed/removed/altered) | Reference file | Action needed} - + ## Visual asset impact — {Critical | Important | Minor | None} {Table: Asset path | Page it appears on | Reason it's likely stale | Action needed} - + ## Internal link impact — {Critical | Important | Minor | None} {Table: Source file:line | Broken target | Suggested fix} - + ## Open questions {Anything you couldn't determine without eng/product confirmation — e.g., "PR removes the `TOWER_OLD_VAR` env var, but it's referenced in the migration guide with no deprecation timeline. 
Is there a sunset date?"} ``` - + Severity guide: - + - **Critical** — docs will be actively wrong or broken the moment the release ships (a removed env var still listed as supported; a 404 link from a landing page). - **Important** — docs will be out of date or visually inconsistent but still usable (a renamed @@ -144,15 +144,15 @@ Severity guide: - **None** — the surface is unaffected. Say so explicitly; a clean bill of health is a real finding. ## What to skip - + - **Drafting replacement docs.** That's `feature-docs`' job. This skill only flags impact. - **External references** (blog posts, GitHub READMEs outside the docs repo). Out of scope. - **The whole product area's state.** If the user wants a broad audit, suggest `docs-state-assessment` instead. ## When sources are thin - + If the user gives you only a changelog line like "rebranded Studios UI", do the best you can: - + 1. Assume full visual-asset impact and enumerate every image/video/brand reference under the Studios section of the docs. 2. Mark env vars and links as "cannot assess without a PR — please provide one if you want