
Refactor staging utilities and simplify release uploader CLI#194

Merged
leynos merged 7 commits into codex/implement-network-and-command-functions from
codex/action-unresolved-comments-on-pr-185-byfvch
Oct 10, 2025

Conversation


@leynos leynos commented Oct 10, 2025

Summary

  • split the staging workflow into focused helper modules that share GitHub's multiline output writer and per-artefact staging helper
  • validate staging configuration checksum algorithms and alternative glob definitions to catch misconfiguration early, document the dataclasses, and reject duplicate staged destinations or outputs
  • extract release uploader dependency shims, rely on Cyclopts when available, and funnel CLI parsing through a shared runtime option helper before delegating to the argparse fallback
  • wrap and clarify the release design documentation paragraph around the staging helper outputs
  • normalise the Windows path export helper by using pathlib's Path.open for GitHub output consistency

Testing

  • make fmt
  • make lint
  • make test

https://chatgpt.com/codex/tasks/task_e_68e85138f48883229e8082f03b90e95e

Summary by Sourcery

Refactor the staging workflow into a reusable, TOML-driven GitHub Action and accompanying Python library, simplify the release uploader CLI by centralizing argument handling with Cyclopts (and an argparse fallback), update release workflows to use the new composite action, enhance configuration validation and Windows path normalization, and expand documentation and tests to cover the new staging and upload flows.

New Features:

  • Add a generic staging composite GitHub Action backed by a TOML configuration file and a standalone Python package for artefact staging.
  • Extract and refactor the release uploader CLI to use dependency shims, shared runtime option normalization, and support Cyclopts or an argparse fallback.

Enhancements:

  • Validate staging configuration (checksum algorithms, glob patterns) and enforce unique staged destinations and output keys.
  • Normalize Windows path handling using pathlib and simplify workflow steps by replacing inline bash with a dedicated normalization script.

CI:

  • Update release workflows to call the new staging composite action (.github/actions/stage) and add target_key matrix entries for Windows and macOS jobs.

Documentation:

  • Revise the release design documentation to describe the new staging helper outputs and include an entity relationship diagram for the staging configuration schema.

Tests:

  • Add comprehensive tests for the staging helper package (config loader, staging logic, glob/template utilities), CLI entry point, composite action outputs, and Windows path normalization.


sourcery-ai bot commented Oct 10, 2025

Reviewer's Guide

This PR restructures the release staging workflow into a modular Python package under .github/actions/stage, extracts and unifies dependency handling for the release uploader CLI, and replaces ad-hoc staging scripts with a composite GitHub Action plus a generic CLI wrapper. Configuration validation, error checking, and output normalization (especially for Windows paths) are now handled in dedicated helpers, accompanied by comprehensive tests and updated documentation.

Entity relationship diagram for the new staging configuration schema

```mermaid
erDiagram
  COMMON {
    string bin_name
    string dist_dir
    string checksum_algorithm
    string staging_dir_template
    ArtefactConfig[] artefacts
  }
  TARGETS {
    string platform
    string arch
    string target
    string bin_ext
    string staging_dir_template
    ArtefactConfig[] artefacts
  }
  ArtefactConfig {
    string source
    boolean required
    string output
    string destination
    string[] alternatives
  }
  COMMON ||--o{ TARGETS : "has targets"
  COMMON ||--o{ ArtefactConfig : "has artefacts"
  TARGETS ||--o{ ArtefactConfig : "has artefacts"
```

Class diagram for the new staging helper module structure

```mermaid
classDiagram
  class ArtefactConfig {
    +string source
    +bool required
    +string output
    +string destination
    +list~string~ alternatives
  }
  class StagingConfig {
    +Path workspace
    +string bin_name
    +string dist_dir
    +string checksum_algorithm
    +list~ArtefactConfig~ artefacts
    +string platform
    +string arch
    +string target
    +string bin_ext
    +string staging_dir_template
    +string target_key
    +staging_dir() Path
    +staging_dir_name() str
    +as_template_context() dict
  }
  class StageResult {
    +Path staging_dir
    +list~Path~ staged_artefacts
    +dict~str, Path~ outputs
    +dict~str, str~ checksums
  }
  class StageError {
    <<exception>>
  }
  ArtefactConfig <.. StagingConfig : contains
  StageResult <.. StagingConfig : uses
  RuntimeError <|-- StageError
```

File-Level Changes

Change Details Files
Modularize staging pipeline into reusable helper modules
  • Introduce config loader with dataclass models and TOML parsing
  • Extract core staging logic to resolve, copy, checksum, and export outputs
  • Add helpers for templating, glob matching, filesystem safety, env vars, and GitHub output
  • Enforce reserved key checks and collision detection
.github/actions/stage/scripts/stage_common/config.py
.github/actions/stage/scripts/stage_common/staging.py
.github/actions/stage/scripts/stage_common/template_utils.py
.github/actions/stage/scripts/stage_common/glob_utils.py
.github/actions/stage/scripts/stage_common/fs_utils.py
.github/actions/stage/scripts/stage_common/environment.py
.github/actions/stage/scripts/stage_common/github_output.py
.github/actions/stage/scripts/stage_common/checksum_utils.py
.github/actions/stage/scripts/stage_common/errors.py
.github/actions/stage/scripts/stage_common/__init__.py
Simplify and unify release uploader CLI
  • Extract cyclopts and plumbum shims into a helper module
  • Implement prepare_runtime_options to normalize inputs from env/CLI
  • Route CLI entry through run_cli fallback supporting argparse
  • Refactor upload_release_assets.py to use new shims and option helper
scripts/upload_release_assets.py
scripts/_release_upload_deps.py
Convert staging steps to a composite GitHub Action
  • Create composite action definition with uv installation and stage.py invocation
  • Replace per-OS staging wrappers with a single stage.py CLI script
  • Update release workflow to use composite action and pass config-file/target inputs
  • Simplify Windows normalization step to use a dedicated Python script
.github/actions/stage/action.yml
.github/actions/stage/scripts/stage.py
.github/actions/stage/scripts/normalise_windows_paths.py
.github/workflows/release.yml
Enhance documentation and schema explanation
  • Update design docs to describe the TOML schema and composite action flow
  • Embed ER diagram of config entities and clarify staging outputs
  • Remove outdated references to legacy staging scripts
docs/netsuke-design.md
Expand test coverage across staging, config, CLI, and action
  • Add behavioural tests for stage_artefacts and glob resolution
  • Add unit tests for config loader, error cases, and reserved key checks
  • Add CLI entry tests with stubbed cyclopts and GitHub output checks
  • Add lightweight tests validating action.yml outputs and uv installation
tests_python/test_stage_common_staging.py
tests_python/test_stage_common_config.py
tests_python/test_stage_cli.py
tests_python/test_stage_action.py
tests_python/test_stage_common_module.py
tests_python/stage_test_helpers.py



coderabbitai bot commented Oct 10, 2025

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.


codescene-delta-analysis[bot]

This comment was marked as outdated.


@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes and found some issues that need to be addressed.

  • Consider deriving the composite action’s declared outputs in action.yml directly from RESERVED_OUTPUT_KEYS to avoid manual duplication and drift.
  • Extract the environment-variable fallback and validation logic in _prepare_runtime_options into a shared helper to reduce duplication with require_env_path and make overrides more explicit.
  • With multiple small modules handling path matching, templating, and filesystem safety, think about consolidating or simplifying these utilities to streamline the codebase and improve discoverability.
## Individual Comments

### Comment 1
<location> `scripts/upload_release_assets.py:128` </location>
<code_context>
     seen[asset_name] = path


+def _prepare_runtime_options(
+    *,
+    release_tag: str | None,
</code_context>

<issue_to_address>
**issue (complexity):** Consider removing the custom argument normalization logic and using Cyclopts' built-in environment variable support to simplify the CLI setup.

You can avoid most of the extra indirection by leaning on Cyclopts’ built-in env support and killing off your custom `_prepare_runtime_options`. Here’s one way to simplify:

1.  Drop `_prepare_runtime_options` completely.
2.  In your app module, register each parameter with the appropriate `env` name.
3.  Call `main(...)` directly from your `@app.default`.

```python
import sys
from pathlib import Path
from typing import Annotated

import cyclopts
from cyclopts import App, Parameter

app = App(config=cyclopts.config.Env("INPUT_"))

@app.default
def cli(
    *,
    release_tag: Annotated[str, Parameter(required=True, env_var="INPUT_RELEASE_TAG")],
    bin_name:    Annotated[str, Parameter(required=True, env_var="INPUT_BIN_NAME")],
    dist_dir:    Annotated[Path, Parameter(env_var="INPUT_DIST_DIR")] = Path("dist"),
    dry_run:     Annotated[bool, Parameter(env_var="INPUT_DRY_RUN")] = False,
) -> int:
    return main(
        release_tag=release_tag,
        bin_name=bin_name,
        dist_dir=dist_dir,
        dry_run=dry_run,
    )

if __name__ == "__main__":
    raise SystemExit(app(sys.argv[1:]))
```

Benefits:

-  Cyclopts will now coerce types and fall back to `INPUT_*` env vars automatically.
-  You remove both `_prepare_runtime_options` and `run_cli`, so all argument-normalisation is in one place.
-  You keep the exact same functionality (flags + env fallback) with far less boilerplate.
</issue_to_address>

### Comment 2
<location> `scripts/_release_upload_deps.py:21` </location>
<code_context>
+]
+
+
+class _FallbackParameter:
+    """Placeholder preserving ``typing.Annotated`` compatibility."""
+
</code_context>

<issue_to_address>
**issue (complexity):** Consider inlining simple stub functions in place of the _Fallback* classes to reduce boilerplate.

You can collapse most of the boilerplate by inlining simple stubs instead of the `_Fallback*` classes. For example, in `load_cyclopts` you don't need two whole classes—just define a tiny function and use `Any` for the missing‐parameter stub:

```python
from typing import Any

def load_cyclopts() -> CycloptsSupport:
    try:
        import cyclopts
        from cyclopts import App, Parameter
    except ModuleNotFoundError as exc:
        def app_stub(*args: object, **kwargs: object) -> None:
            raise RuntimeError(
                "Cyclopts is required for CLI usage; install it to run the script"
            ) from exc

        return CycloptsSupport(
            available=False,
            app=app_stub,
            parameter=Any,   # drop _FallbackParameter
        )

    app = App(config=cyclopts.config.Env("INPUT_", command=False))
    return CycloptsSupport(True, app, Parameter)
```

Then you can remove both `_FallbackApp` and `_FallbackParameter` entirely.

Likewise, simplify `load_plumbum`:

```python
def load_plumbum() -> PlumbumSupport:
    try:
        from plumbum import local
        from plumbum.commands import CommandNotFound, ProcessExecutionError
        from plumbum.commands.base import BoundCommand
    except ModuleNotFoundError as exc:
        def local_stub(*args, **kwargs):
            raise ModuleNotFoundError(
                "plumbum is required to execute releases; install it or run in dry‐run mode"
            ) from exc

        return PlumbumSupport(
            local=local_stub,
            command_not_found=ModuleNotFoundError,
            process_execution_error=RuntimeError,
            bound_command=Any,
        )

    return PlumbumSupport(local, CommandNotFound, ProcessExecutionError, BoundCommand)
```

This keeps behavior identical but cuts out dozens of lines of boilerplate.
</issue_to_address>

### Comment 3
<location> `.github/actions/stage/scripts/stage_common/config.py:196` </location>
<code_context>
+        return template_context | {"staging_dir_name": rendered_name}
+
+
+def load_config(config_file: Path, target_key: str) -> StagingConfig:
+    """Load staging configuration from ``config_file`` for ``target_key``.
+
</code_context>

<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for the new TOML config loader and model logic.

The new `load_config` function and related dataclasses implement nontrivial logic for parsing and validating TOML configuration files. You must provide both behavioural and unit tests to verify correct parsing, error handling, and edge cases.

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*`

**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.

</details>
</issue_to_address>

### Comment 4
<location> `.github/actions/stage/scripts/stage_common/staging.py:66` </location>
<code_context>
+    digest: str
+
+
+def stage_artefacts(config: StagingConfig, github_output_file: Path) -> StageResult:
+    """Copy artefacts into ``config``'s staging directory.
+
</code_context>

<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for the artefact staging pipeline.

The new `stage_artefacts` function and its helpers perform file operations, output mapping, and error handling. You must provide both behavioural and unit tests to cover successful staging, error conditions, and output correctness.

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*`

**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.

</details>
</issue_to_address>
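The duplicate-destination check called out in the PR summary is the interesting behaviour to pin down here. This sketch is a hypothetical miniature of the staging loop, not the PR's `stage_artefacts`; `stage_files` and its signature are assumptions for illustration.

```python
import shutil
from pathlib import Path


def stage_files(sources: dict[str, Path], staging_dir: Path) -> dict[str, Path]:
    """Copy each named artefact into staging_dir, rejecting duplicate
    destinations, and return a map of output key -> staged path."""
    staging_dir.mkdir(parents=True, exist_ok=True)
    staged: dict[str, Path] = {}
    seen: set[Path] = set()
    for key, source in sources.items():
        dest = staging_dir / source.name
        if dest in seen:
            raise ValueError(f"duplicate staged destination: {dest}")
        shutil.copy2(source, dest)  # preserves mtime, useful for later checksumming
        seen.add(dest)
        staged[key] = dest
    return staged
```

A behavioural test covers the happy path; a unit test covers the collision error.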

### Comment 5
<location> `.github/actions/stage/scripts/stage.py:33` </location>
<code_context>
+app = cyclopts.App(help="Stage release artefacts using a TOML configuration file.")
+
+
+@app.default
+def main(config_file: Path, target: str) -> None:
+    """Stage artefacts for ``target`` using ``config_file``.
</code_context>

<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for the CLI entry point.

The new CLI entry point in `stage.py` should be covered by behavioural and unit tests to ensure correct argument parsing, error handling, and integration with the staging logic.

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*`

**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.

</details>
</issue_to_address>

### Comment 6
<location> `.github/actions/stage/scripts/stage_common/glob_utils.py:10` </location>
<code_context>
+__all__ = ["glob_root_and_pattern", "match_candidate_path"]
+
+
+def match_candidate_path(workspace: Path, rendered: str) -> Path | None:
+    """Return the newest path matching ``rendered`` relative to ``workspace``."""
+
</code_context>

<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for the glob matching logic.

The new glob matching logic in `match_candidate_path` and `glob_root_and_pattern` should be covered by both behavioural and unit tests to ensure correct path resolution and edge case handling.

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*`

**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.

</details>
</issue_to_address>

### Comment 7
<location> `.github/actions/stage/scripts/stage_common/template_utils.py:40` </location>
<code_context>
+        raise StageError(message) from exc
+
+
+def resolve_artefact_source(
+    workspace: Path, artefact: "ArtefactConfig", context: dict[str, typ.Any]
+) -> tuple[Path | None, list[RenderAttempt]]:
</code_context>

<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for artefact source resolution and template rendering.

The artefact source resolution and template rendering logic should be covered by both behavioural and unit tests to verify correct template expansion and error handling.

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*`

**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.

</details>
</issue_to_address>
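Template expansion with a clear error on unknown placeholders is easy to test in isolation. This is a hedged sketch using `string.Template`; the PR's actual renderer and its error type may differ.

```python
from string import Template


def render_template(template: str, context: dict[str, str]) -> str:
    """Substitute ${name} placeholders, raising a clear error on any
    placeholder missing from the context."""
    try:
        return Template(template).substitute(context)
    except KeyError as exc:
        raise ValueError(
            f"unknown placeholder {exc.args[0]!r} in {template!r}"
        ) from exc
```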

### Comment 8
<location> `.github/actions/stage/scripts/stage_common/checksum_utils.py:11` </location>
<code_context>
+__all__ = ["write_checksum"]
+
+
+def write_checksum(path: Path, algorithm: str) -> str:
+    """Write the checksum sidecar for ``path`` using ``algorithm``."""
+
</code_context>

<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for checksum writing logic.

The checksum writing logic should be covered by both behavioural and unit tests to ensure correct digest calculation and file output.

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*`

**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.

</details>
</issue_to_address>
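A checksum-sidecar test can assert both the digest and the sidecar file contents. The sketch below assumes the conventional `<digest>  <name>` sidecar format used by `sha256sum`; the PR's exact format is not shown in this review.

```python
import hashlib
from pathlib import Path


def write_checksum(path: Path, algorithm: str = "sha256") -> str:
    """Write a <file>.<algorithm> sidecar in `<digest>  <name>` format
    and return the hex digest."""
    digest = hashlib.new(algorithm, path.read_bytes()).hexdigest()
    sidecar = path.with_name(f"{path.name}.{algorithm}")
    sidecar.write_text(f"{digest}  {path.name}\n", encoding="utf-8")
    return digest
```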

### Comment 9
<location> `.github/actions/stage/scripts/stage_common/fs_utils.py:12` </location>
<code_context>
+__all__ = ["safe_destination_path"]
+
+
+def safe_destination_path(staging_dir: Path, destination: str) -> Path:
+    """Return ``destination`` resolved beneath ``staging_dir``."""
+
</code_context>

<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for safe destination path logic.

The logic for resolving safe destination paths should be covered by both behavioural and unit tests to verify correct path resolution and security checks.

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*`

**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.

</details>
</issue_to_address>
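The security property worth testing is that a destination like `../escape` cannot land outside the staging directory. A minimal sketch of that containment check, not necessarily the PR's implementation:

```python
from pathlib import Path


def safe_destination_path(staging_dir: Path, destination: str) -> Path:
    """Resolve destination beneath staging_dir, rejecting any path that
    escapes it via '..' segments or absolute components."""
    root = staging_dir.resolve()
    candidate = (root / destination).resolve()
    if not candidate.is_relative_to(root):  # Python 3.9+
        raise ValueError(f"destination {destination!r} escapes {root}")
    return candidate
```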

### Comment 10
<location> `.github/actions/stage/scripts/stage_common/github_output.py:12` </location>
<code_context>
+__all__ = ["write_github_output"]
+
+
+def write_github_output(
+    file: Path, values: Mapping[str, str | Sequence[str]]
+) -> None:
</code_context>

<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for GitHub output writing logic.

The logic for writing GitHub Actions outputs should be covered by both behavioural and unit tests to ensure correct formatting and file output.

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*`

**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.

</details>
</issue_to_address>

### Comment 11
<location> `.github/actions/stage/scripts/stage_common/environment.py:13` </location>
<code_context>
+__all__ = ["require_env_path"]
+
+
+def require_env_path(name: str) -> Path:
+    """Return ``Path`` value for ``name`` or raise :class:`StageError`.
+
</code_context>

<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for environment variable path resolution.

The environment variable path resolution logic should be covered by both behavioural and unit tests to verify correct error handling and path conversion.

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*`

**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.

</details>
</issue_to_address>
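The two cases to cover are an env var that is set (returned as a `Path`) and one that is unset or empty (a loud failure). A plausible sketch; the PR's `require_env_path` raises its own `StageError`, whereas this stand-in uses `RuntimeError`:

```python
import os
from pathlib import Path


def require_env_path(name: str) -> Path:
    """Return the named environment variable as a Path, failing loudly
    when it is unset or empty."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"required environment variable {name} is not set")
    return Path(value)
```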

### Comment 12
<location> `.github/actions/stage/scripts/stage_common/errors.py:4` </location>
<code_context>
+"""Error types shared across the staging helper package."""
+
+
+class StageError(RuntimeError):
+    """Raised when the staging pipeline cannot continue."""
--- /dev/null
</code_context>

<issue_to_address>
**issue (review_instructions):** Add unit tests for the custom StageError exception.

The custom exception should be covered by unit tests to verify correct raising and handling.

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*`

**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.

</details>
</issue_to_address>

### Comment 13
<location> `.github/actions/stage/scripts/stage_common/__init__.py:3` </location>
<code_context>
+"""Public interface for the staging helper package."""
+
+from .config import ArtefactConfig, StagingConfig, load_config
+from .environment import require_env_path
+from .errors import StageError
</code_context>

<issue_to_address>
**issue (review_instructions):** Add unit tests for the public interface of the staging helper package.

The public interface should be covered by unit tests to verify correct imports and exposure of functionality.

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*`

**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.

</details>
</issue_to_address>

### Comment 14
<location> `.github/workflows/scripts/normalise_windows_paths.py:27` </location>
<code_context>
+from pathlib import Path, PureWindowsPath
+
+
+def main() -> int:
+    """Read ``ARTEFACT_MAP`` and write normalised paths to ``GITHUB_OUTPUT``."""
+    try:
</code_context>

<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for the Windows path normalisation script.

The new script for normalising Windows paths should be covered by both behavioural and unit tests to verify correct environment variable handling and output formatting.

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*`

**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.

</details>
</issue_to_address>
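The core transformation — backslash-separated Windows paths to forward-slash form — is testable without touching `ARTEFACT_MAP` or `GITHUB_OUTPUT`. `pathlib.PureWindowsPath.as_posix()` does the heavy lifting; this sketch isolates just that step and is an assumption about the script's approach, not a copy of it:

```python
from pathlib import PureWindowsPath


def normalise_windows_path(raw: str) -> str:
    """Convert a Windows-style path (backslashes, optional drive letter)
    into a forward-slash form suitable for GitHub Actions outputs."""
    return PureWindowsPath(raw).as_posix()
```

Already-normalised POSIX paths pass through unchanged, which keeps the helper safe to run on every platform.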

### Comment 15
<location> `scripts/_release_upload_deps.py:82` </location>
<code_context>
+    dry_run: bool
+
+
+def load_cyclopts() -> CycloptsSupport:
+    """Return Cyclopts components with graceful fallbacks."""
+
</code_context>

<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for dependency loader and CLI fallback logic.

The new dependency loader and CLI fallback logic should be covered by both behavioural and unit tests to verify correct fallback behaviour and error handling.

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*`

**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.

</details>
</issue_to_address>

### Comment 16
<location> `scripts/upload_release_assets.py:128` </location>
<code_context>
     seen[asset_name] = path


+def _prepare_runtime_options(
+    *,
+    release_tag: str | None,
</code_context>

<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for runtime options preparation logic.

The new logic for preparing runtime options should be covered by both behavioural and unit tests to verify correct argument parsing and error handling.

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*`

**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.

</details>
</issue_to_address>

### Comment 17
<location> `docs/netsuke-design.md:1690` </location>
<code_context>
+only then launches the Python entry point so workflows stay declarative. The
+helper writes SHA-256 sums for every staged file. It also exports a JSON map of
+the artefact outputs, allowing the workflow to hydrate downstream steps without
+hard-coded path logic. Figure 8.1 summarises the configuration entities,
+including optional keys reserved for templated directories and explicit
+artefact destinations that the helper can adopt without breaking compatibility.
</code_context>

<issue_to_address>
**suggestion (review_instructions):** This paragraph line exceeds 80 columns and should be wrapped for readability.

Please wrap this and other paragraph lines to a maximum of 80 columns, as per the documentation style guide.

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*.md`

**Instructions:**
Paragraphs and bullets must be wrapped to 80 columns

</details>
</issue_to_address>

### Comment 18
<location> `docs/netsuke-design.md:1696` </location>
<code_context>
+
+Figure 8.1: Entity relationship for the staging configuration schema.
+
+```mermaid
+%% Figure 8.1: Entity relationship for the staging configuration schema.
+erDiagram
</code_context>

<issue_to_address>
**suggestion (review_instructions):** The mermaid code block should be checked for lines exceeding 120 columns.

Please ensure that all lines within this code block do not exceed 120 columns. If any do, consider splitting or reformatting them.

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*.md`

**Instructions:**
Code blocks should be wrapped to 120 columns.

</details>
</issue_to_address>

### Comment 19
<location> `.github/actions/stage/scripts/stage_common/config.py:351` </location>
<code_context>
        return [] if not value else [value]

</code_context>

<issue_to_address>
**suggestion (code-quality):** Swap if/else branches of if expression to remove negation ([`swap-if-expression`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/swap-if-expression))

```suggestion
        return [value] if value else []
```

<br/><details><summary>Explanation</summary>Negated conditions are more difficult to read than positive ones, so it is best
to avoid them where we can. By swapping the `if` and `else` conditions around we
can invert the condition and make it positive.
</details>
</issue_to_address>

### Comment 20
<location> `scripts/upload_release_assets.py:153-160` </location>
<code_context>
def _prepare_runtime_options(
    *,
    release_tag: str | None,
    bin_name: str | None,
    dist_dir: str | Path | None,
    dry_run: bool | str | None,
    environ: typ.Mapping[str, str | None],
) -> RuntimeOptions:
    """Normalise CLI inputs from Cyclopts or ``argparse`` into runtime options."""

    resolved_release = release_tag or environ.get("INPUT_RELEASE_TAG")
    resolved_bin = bin_name or environ.get("INPUT_BIN_NAME")
    dist_source = dist_dir if dist_dir is not None else environ.get("INPUT_DIST_DIR")
    resolved_dist = Path(dist_source or "dist")

    dry_run_value: bool
    if isinstance(dry_run, bool):
        dry_run_value = dry_run
    elif dry_run is None:
        dry_run_value = False
    else:
        dry_run_value = _coerce_bool(dry_run)
    if not dry_run_value and (env_flag := environ.get("INPUT_DRY_RUN")):
        dry_run_value = _coerce_bool(env_flag)

    missing = [
        label
        for label, present in (
            ("--release-tag", resolved_release),
            ("--bin-name", resolved_bin),
        )
        if not present
    ]
    if missing:
        joined = ", ".join(missing)
        message = f"Missing required argument(s): {joined}"
        raise ValueError(message)

    return RuntimeOptions(
        release_tag=typ.cast(str, resolved_release),
        bin_name=typ.cast(str, resolved_bin),
        dist_dir=resolved_dist,
        dry_run=dry_run_value,
    )

</code_context>

<issue_to_address>
**issue (code-quality):** Use named expression to simplify assignment and conditional ([`use-named-expression`](https://docs.sourcery.ai/Reference/Default-Rules/refactorings/use-named-expression/))
</issue_to_address>


seen[asset_name] = path


def _prepare_runtime_options(

issue (complexity): Consider removing the custom argument normalization logic and using Cyclopts' built-in environment variable support to simplify the CLI setup.

You can avoid most of the extra indirection by leaning on Cyclopts’ built-in env support and killing off your custom _prepare_runtime_options. Here’s one way to simplify:

  1. Drop _prepare_runtime_options completely.
  2. In your app module, register each parameter with the appropriate env name.
  3. Call main(...) directly from your @app.default.
```python
from pathlib import Path
from typing import Annotated
import sys

import cyclopts
from cyclopts import App, Parameter

app = App(config=cyclopts.config.Env("INPUT_"))


@app.default
def cli(
    *,
    release_tag: Annotated[str, Parameter(required=True, env_var="INPUT_RELEASE_TAG")],
    bin_name: Annotated[str, Parameter(required=True, env_var="INPUT_BIN_NAME")],
    dist_dir: Annotated[Path, Parameter(env_var="INPUT_DIST_DIR")] = Path("dist"),
    dry_run: Annotated[bool, Parameter(env_var="INPUT_DRY_RUN")] = False,
) -> int:
    return main(
        release_tag=release_tag,
        bin_name=bin_name,
        dist_dir=dist_dir,
        dry_run=dry_run,
    )


if __name__ == "__main__":
    raise SystemExit(app(sys.argv[1:]))
```

Benefits:

  • Cyclopts will now coerce types and fall back to INPUT_* env vars automatically.
  • You remove both _prepare_runtime_options and run_cli, so all argument-normalisation is in one place.
  • You keep the exact same functionality (flags + env fallback) with far less boilerplate.

]


class _FallbackParameter:

issue (complexity): Consider inlining simple stub functions in place of the _Fallback* classes to reduce boilerplate.

Suggested change
class _FallbackParameter:
You can collapse most of the boilerplate by inlining simple stubs instead of the `_Fallback*` classes. For example, in `load_cyclopts` you do not need two whole classes; just define a tiny function and use `Any` for the missing parameter stub:
```python
from typing import Any


def load_cyclopts() -> CycloptsSupport:
    try:
        import cyclopts
        from cyclopts import App, Parameter
    except ModuleNotFoundError as exc:
        def app_stub(*args: object, **kwargs: object) -> None:
            raise RuntimeError(
                "Cyclopts is required for CLI usage; install it to run the script"
            ) from exc

        return CycloptsSupport(
            available=False,
            app=app_stub,
            parameter=Any,  # drop _FallbackParameter
        )

    app = App(config=cyclopts.config.Env("INPUT_", command=False))
    return CycloptsSupport(True, app, Parameter)
```

Then you can remove both _FallbackApp and _FallbackParameter entirely.

Likewise, simplify load_plumbum:

```python
def load_plumbum() -> PlumbumSupport:
    try:
        from plumbum import local
        from plumbum.commands import CommandNotFound, ProcessExecutionError
        from plumbum.commands.base import BoundCommand
    except ModuleNotFoundError as exc:
        def local_stub(*args, **kwargs):
            raise ModuleNotFoundError(
                "plumbum is required to execute releases; install it or run in dry-run mode"
            ) from exc

        return PlumbumSupport(
            local=local_stub,
            command_not_found=ModuleNotFoundError,
            process_execution_error=RuntimeError,
            bound_command=Any,
        )

    return PlumbumSupport(local, CommandNotFound, ProcessExecutionError, BoundCommand)
```

This keeps behavior identical but cuts out dozens of lines of boilerplate.

dry_run: bool


def load_cyclopts() -> CycloptsSupport:

issue (review_instructions): Add behavioural and unit tests for dependency loader and CLI fallback logic.

The new dependency loader and CLI fallback logic should be covered by both behavioural and unit tests to verify correct fallback behaviour and error handling.

Review instructions:

Path patterns: **/*

Instructions:
For any new feature or change to an existing feature, both behavioural and unit tests are required.
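A unit test for the fallback path can simulate the missing dependency by blocking its import. The sketch below is illustrative only: it uses a generic loader shape and a stand-in `Support` dataclass rather than the real `load_cyclopts`/`CycloptsSupport` signatures.

```python
import builtins
import sys
from dataclasses import dataclass


@dataclass
class Support:
    """Stand-in for the real CycloptsSupport dataclass."""

    available: bool


def load_support() -> Support:
    """Stand-in loader: report whether the optional dependency imports."""
    try:
        import cyclopts  # noqa: F401
    except ModuleNotFoundError:
        return Support(available=False)
    return Support(available=True)


def test_loader_falls_back_when_dependency_missing() -> None:
    """Blocking the import must flip the loader to its fallback."""
    real_import = builtins.__import__

    def blocking_import(name, *args, **kwargs):
        # Simulate an environment without the dependency installed.
        if name == "cyclopts":
            raise ModuleNotFoundError(name)
        return real_import(name, *args, **kwargs)

    sys.modules.pop("cyclopts", None)  # force the import to be re-attempted
    builtins.__import__ = blocking_import
    try:
        assert load_support().available is False
    finally:
        builtins.__import__ = real_import
```

With pytest, the `builtins.__import__` swap and restore would normally be handled by the `monkeypatch` fixture instead of the manual `try`/`finally`.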

if value is None:
    return []
if isinstance(value, str):
    return [] if not value else [value]

suggestion (code-quality): Swap if/else branches of if expression to remove negation (swap-if-expression)

Suggested change
return [] if not value else [value]
return [value] if value else []


Explanation: Negated conditions are more difficult to read than positive ones, so it is best
to avoid them where we can. By swapping the if and else conditions around we
can invert the condition and make it positive.

Comment on lines 153 to 160
missing = [
    label
    for label, present in (
        ("--release-tag", resolved_release),
        ("--bin-name", resolved_bin),
    )
    if not present
]

issue (code-quality): Use named expression to simplify assignment and conditional (use-named-expression)

@leynos
Owner Author

leynos commented Oct 10, 2025

@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

tests_python/test_stage_common_staging.py

Comment on lines +79 to +123

def test_stage_artefacts_uses_alternative_glob(
    stage_common: object, workspace: Path
) -> None:
    """Fallback paths should be used when the preferred template is absent."""

    target = "x86_64-unknown-linux-gnu"
    write_workspace_inputs(workspace, target)
    generated = (
        workspace / "target" / "generated-man" / target / "release" / "netsuke.1"
    )
    generated.unlink()

    build_dir = workspace / "target" / target / "release" / "build"
    first = build_dir / "1" / "out" / "netsuke.1"
    second = build_dir / "2" / "out" / "netsuke.1"
    first.parent.mkdir(parents=True, exist_ok=True)
    second.parent.mkdir(parents=True, exist_ok=True)
    first.write_text(".TH 1", encoding="utf-8")
    second.write_text(".TH 2", encoding="utf-8")
    os.utime(first, (first.stat().st_atime, first.stat().st_mtime - 100))

    config = stage_common.StagingConfig(
        workspace=workspace,
        bin_name="netsuke",
        dist_dir="dist",
        checksum_algorithm="sha256",
        artefacts=[
            stage_common.ArtefactConfig(
                source="target/generated-man/{target}/release/{bin_name}.1",
                required=True,
                output="man_path",
                alternatives=["target/{target}/release/build/*/out/{bin_name}.1"],
            ),
        ],
        platform="linux",
        arch="amd64",
        target=target,
    )

    github_output = workspace / "outputs.txt"
    result = stage_common.stage_artefacts(config, github_output)
    staged_path = result.outputs["man_path"]
    assert (
        staged_path.read_text(encoding="utf-8") == ".TH 2"
    ), "Fallback glob should pick the newest man page"

❌ New issue: Code Duplication
The module contains 5 functions with similar structure: test_stage_artefacts_glob_selects_newest_candidate, test_stage_artefacts_rejects_duplicate_destinations, test_stage_artefacts_rejects_duplicate_outputs, test_stage_artefacts_rejects_reserved_outputs, and 1 more function.
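One way to collapse the duplicated setup is a factory helper that supplies defaults and lets each test override only the field under test. This is a hedged sketch: the dataclasses below are simplified stand-ins for the real `stage_common.StagingConfig` and `stage_common.ArtefactConfig`, and `make_staging_config` is a hypothetical helper name.

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class ArtefactConfig:
    """Stand-in for stage_common.ArtefactConfig."""

    source: str
    required: bool = True
    output: str | None = None
    alternatives: list[str] = field(default_factory=list)


@dataclass
class StagingConfig:
    """Stand-in for stage_common.StagingConfig."""

    bin_name: str
    dist_dir: str
    checksum_algorithm: str
    artefacts: list[ArtefactConfig]
    platform: str
    arch: str
    target: str


def make_staging_config(**overrides: object) -> StagingConfig:
    """Build a config with defaults; tests override only what they probe."""
    defaults: dict = {
        "bin_name": "netsuke",
        "dist_dir": "dist",
        "checksum_algorithm": "sha256",
        "artefacts": [ArtefactConfig(source="LICENSE")],
        "platform": "linux",
        "arch": "amd64",
        "target": "x86_64-unknown-linux-gnu",
    }
    defaults.update(overrides)
    return StagingConfig(**defaults)
```

Each of the five near-identical tests then shrinks to one config variant plus its single assertion, which removes the structural duplication CodeScene flags.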

@leynos

leynos commented Oct 10, 2025

@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

tests_python/test_stage_common_config.py

Comment on lines +228 to +251

def test_load_config_requires_target_section(
    stage_common: object, workspace: Path
) -> None:
    """Missing target sections should raise ``StageError``."""

    config_file = workspace / "release-staging.toml"
    config_file.write_text(
        """\
[common]
bin_name = "netsuke"
checksum_algorithm = "sha256"
artefacts = [ { source = "LICENSE" } ]

[targets.other]
platform = "linux"
arch = "amd64"
target = "x86_64-unknown-linux-gnu"
""",
        encoding="utf-8",
    )

    with pytest.raises(stage_common.StageError) as exc:
        stage_common.load_config(config_file, "test")
    assert "Missing configuration key" in str(exc.value)

❌ New issue: Code Duplication
The module contains 5 functions with similar structure: test_load_config_rejects_unknown_checksum, test_load_config_requires_artefact_source, test_load_config_requires_common_bin_name, test_load_config_requires_target_platform, and 1 more function.
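The five near-identical validation tests can collapse into one table-driven test. The sketch below runs against a simplified stand-in validator, not the real `stage_common.load_config`; with pytest available, the loop becomes a `@pytest.mark.parametrize("missing_key", REQUIRED_KEYS)` over the same table.

```python
class StageError(Exception):
    """Stand-in for stage_common.StageError."""


REQUIRED_KEYS = ("bin_name", "checksum_algorithm", "platform", "arch", "target")


def load_config(data: dict) -> dict:
    """Simplified stand-in: reject configurations missing a required key."""
    for key in REQUIRED_KEYS:
        if key not in data:
            raise StageError(f"Missing configuration key: {key}")
    return data


def test_load_config_rejects_missing_keys() -> None:
    """Each required key, removed in turn, must trigger StageError."""
    for missing_key in REQUIRED_KEYS:
        data = {key: "value" for key in REQUIRED_KEYS if key != missing_key}
        try:
            load_config(data)
        except StageError as exc:
            assert "Missing configuration key" in str(exc)
        else:
            raise AssertionError(f"expected StageError for missing {missing_key}")
```

The table is the single source of truth, so adding a sixth required key means adding one tuple entry rather than a sixth copy-pasted test.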

@leynos

leynos commented Oct 10, 2025

@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

.github/actions/stage/scripts/stage_common/staging.py

Comment on lines +66 to +170

def stage_artefacts(config: StagingConfig, github_output_file: Path) -> StageResult:
    """Copy artefacts into ``config``'s staging directory.

    Parameters
    ----------
    config : StagingConfig
        Fully resolved configuration describing the artefacts to stage.
    github_output_file : Path
        Path to the ``GITHUB_OUTPUT`` file used to export workflow outputs.

    Returns
    -------
    StageResult
        Summary object describing the staging directory, staged artefacts,
        exported outputs, and checksum digests.

    Raises
    ------
    StageError
        Raised when required artefacts are missing or configuration templates
        render invalid destinations.

    Examples
    --------
    >>> from pathlib import Path  # doctest: +SKIP
    >>> from stage_common.config import load_config  # doctest: +SKIP
    >>> cfg = load_config(  # doctest: +SKIP
    ...     Path(".github/release-staging.toml"),
    ...     "linux-x86_64",
    ... )
    >>> result = stage_artefacts(cfg, Path("github_output.txt"))  # doctest: +SKIP
    >>> result.staged_artefacts[0].name  # doctest: +SKIP
    'netsuke'
    """
    staging_dir = config.staging_dir()
    context = config.as_template_context()

    if staging_dir.exists():
        shutil.rmtree(staging_dir)
    staging_dir.mkdir(parents=True, exist_ok=True)

    staged_paths: list[Path] = []
    outputs: dict[str, Path] = {}
    checksums: dict[str, str] = {}
    seen_destinations: set[Path] = set()
    seen_outputs: set[str] = set()

    for artefact in config.artefacts:
        outcome = _stage_single_artefact(
            config=config,
            artefact=artefact,
            staging_dir=staging_dir,
            context=context,
        )
        if outcome is None:
            continue
        if outcome.path in seen_destinations:
            message = (
                "Duplicate staged destination detected: "
                f"{outcome.path.as_posix()} from source {artefact.source}"
            )
            raise StageError(message)
        seen_destinations.add(outcome.path)
        staged_paths.append(outcome.path)
        checksums[outcome.path.name] = outcome.digest
        if outcome.output_key:
            if outcome.output_key in seen_outputs:
                message = (
                    "Duplicate artefact output key detected: "
                    f"{outcome.output_key}"
                )
                raise StageError(message)
            seen_outputs.add(outcome.output_key)
            outputs[outcome.output_key] = outcome.path

    if not staged_paths:
        message = "No artefacts were staged."
        raise StageError(message)

    staged_file_names = [path.name for path in sorted(staged_paths)]
    artefact_map_json = json.dumps(
        {key: path.as_posix() for key, path in sorted(outputs.items())}
    )
    checksum_map_json = json.dumps(dict(sorted(checksums.items())))

    if colliding_keys := RESERVED_OUTPUT_KEYS & outputs.keys():
        message = (
            "Artefact outputs collide with reserved keys: "
            f"{sorted(colliding_keys)}. Please rename these keys in your "
            "artefact configuration to avoid using reserved names: "
            f"{sorted(RESERVED_OUTPUT_KEYS)}."
        )
        raise StageError(message)

    exported_outputs: dict[str, str | list[str]] = {
        "artifact_dir": staging_dir.as_posix(),
        "dist_dir": staging_dir.parent.as_posix(),
        "staged_files": staged_file_names,
        "artefact_map": artefact_map_json,
        "checksum_map": checksum_map_json,
    } | {key: path.as_posix() for key, path in outputs.items()}

    write_github_output(github_output_file, exported_outputs)

    return StageResult(staging_dir, staged_paths, outputs, checksums)

❌ New issue: Bumpy Road Ahead
stage_artefacts has 2 blocks with nested conditional logic. Any nesting of 2 or deeper is considered. Threshold is 2 blocks per function

@coderabbitai

This comment was marked as resolved.



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


codescene-delta-analysis[bot]

This comment was marked as outdated.


@leynos

leynos commented Oct 10, 2025

@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

.github/actions/stage/scripts/stage_common/staging.py

Comment on lines +119 to +214

def stage_artefacts(config: StagingConfig, github_output_file: Path) -> StageResult:
    """Copy artefacts into ``config``'s staging directory.

    Parameters
    ----------
    config : StagingConfig
        Fully resolved configuration describing the artefacts to stage.
    github_output_file : Path
        Path to the ``GITHUB_OUTPUT`` file used to export workflow outputs.

    Returns
    -------
    StageResult
        Summary object describing the staging directory, staged artefacts,
        exported outputs, and checksum digests.

    Raises
    ------
    StageError
        Raised when required artefacts are missing or configuration templates
        render invalid destinations.

    Examples
    --------
    >>> from pathlib import Path  # doctest: +SKIP
    >>> from stage_common.config import load_config  # doctest: +SKIP
    >>> cfg = load_config(  # doctest: +SKIP
    ...     Path(".github/release-staging.toml"),
    ...     "linux-x86_64",
    ... )
    >>> result = stage_artefacts(cfg, Path("github_output.txt"))  # doctest: +SKIP
    >>> result.staged_artefacts[0].name  # doctest: +SKIP
    'netsuke'
    """
    staging_dir = config.staging_dir()
    context = config.as_template_context()

    if staging_dir.exists():
        shutil.rmtree(staging_dir)
    staging_dir.mkdir(parents=True, exist_ok=True)

    staged_paths: list[Path] = []
    outputs: dict[str, Path] = {}
    checksums: dict[str, str] = {}
    seen_destinations: set[Path] = set()
    seen_outputs: set[str] = set()

    for artefact in config.artefacts:
        outcome = _stage_single_artefact(
            config=config,
            artefact=artefact,
            staging_dir=staging_dir,
            context=context,
        )
        if outcome is None:
            continue
        _validate_and_record_destination(
            outcome.path, artefact.source, seen_destinations
        )
        staged_paths.append(outcome.path)
        checksums[outcome.path.name] = outcome.digest
        if outcome.output_key:
            _validate_and_record_output(
                outcome.output_key, outcome.path, seen_outputs, outputs
            )

    if not staged_paths:
        message = "No artefacts were staged."
        raise StageError(message)

    staged_file_names = [path.name for path in sorted(staged_paths)]
    artefact_map_json = json.dumps(
        {key: path.as_posix() for key, path in sorted(outputs.items())}
    )
    checksum_map_json = json.dumps(dict(sorted(checksums.items())))

    if colliding_keys := RESERVED_OUTPUT_KEYS & outputs.keys():
        message = (
            "Artefact outputs collide with reserved keys: "
            f"{sorted(colliding_keys)}. Please rename these keys in your "
            "artefact configuration to avoid using reserved names: "
            f"{sorted(RESERVED_OUTPUT_KEYS)}."
        )
        raise StageError(message)

    exported_outputs: dict[str, str | list[str]] = {
        "artifact_dir": staging_dir.as_posix(),
        "dist_dir": staging_dir.parent.as_posix(),
        "staged_files": staged_file_names,
        "artefact_map": artefact_map_json,
        "checksum_map": checksum_map_json,
    } | {key: path.as_posix() for key, path in outputs.items()}

    write_github_output(github_output_file, exported_outputs)

    return StageResult(staging_dir, staged_paths, outputs, checksums)

❌ New issue: Complex Method
stage_artefacts has a cyclomatic complexity of 12, threshold = 9

@coderabbitai

coderabbitai bot commented Oct 10, 2025


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Read the staging.py file to understand the full context
cat .github/actions/stage/scripts/stage_common/staging.py

Length of output: 8419


@leynos: I've analysed the cyclomatic complexity issue in stage_artefacts. The function mixes several concerns: directory initialization, artefact staging, output preparation, and validation. Extract three focused helper functions to reduce complexity from 12 to approximately 7.

Suggested fix

Add three helper functions before stage_artefacts:

def _initialize_staging_dir(staging_dir: Path) -> None:
    """Prepare the staging directory by removing any existing content.

    Parameters
    ----------
    staging_dir : Path
        The directory to prepare for staging artefacts.
    """
    if staging_dir.exists():
        shutil.rmtree(staging_dir)
    staging_dir.mkdir(parents=True, exist_ok=True)


def _prepare_output_data(
    staging_dir: Path,
    staged_paths: list[Path],
    outputs: dict[str, Path],
    checksums: dict[str, str],
) -> dict[str, str | list[str]]:
    """Build the output data structure for GitHub Actions workflow export.

    Parameters
    ----------
    staging_dir : Path
        The staging directory containing artefacts.
    staged_paths : list[Path]
        Paths to all staged artefacts.
    outputs : dict[str, Path]
        Mapping of output keys to artefact paths.
    checksums : dict[str, str]
        Mapping of artefact names to their checksum digests.

    Returns
    -------
    dict[str, str | list[str]]
        Dictionary containing all output values ready for GitHub export.
    """
    staged_file_names = [path.name for path in sorted(staged_paths)]
    artefact_map_json = json.dumps(
        {key: path.as_posix() for key, path in sorted(outputs.items())}
    )
    checksum_map_json = json.dumps(dict(sorted(checksums.items())))

    return {
        "artifact_dir": staging_dir.as_posix(),
        "dist_dir": staging_dir.parent.as_posix(),
        "staged_files": staged_file_names,
        "artefact_map": artefact_map_json,
        "checksum_map": checksum_map_json,
    } | {key: path.as_posix() for key, path in outputs.items()}


def _validate_no_reserved_key_collisions(outputs: dict[str, Path]) -> None:
    """Ensure artefact outputs don't use reserved key names.

    Parameters
    ----------
    outputs : dict[str, Path]
        Mapping of output keys to artefact paths.

    Raises
    ------
    StageError
        Raised when output keys collide with reserved names.
    """
    if colliding_keys := RESERVED_OUTPUT_KEYS & outputs.keys():
        message = (
            "Artefact outputs collide with reserved keys: "
            f"{sorted(colliding_keys)}. Please rename these keys in your "
            "artefact configuration to avoid using reserved names: "
            f"{sorted(RESERVED_OUTPUT_KEYS)}."
        )
        raise StageError(message)

Then replace the body of stage_artefacts (lines 153-214) with:

    staging_dir = config.staging_dir()
    context = config.as_template_context()

    _initialize_staging_dir(staging_dir)

    staged_paths: list[Path] = []
    outputs: dict[str, Path] = {}
    checksums: dict[str, str] = {}
    seen_destinations: set[Path] = set()
    seen_outputs: set[str] = set()

    for artefact in config.artefacts:
        outcome = _stage_single_artefact(
            config=config,
            artefact=artefact,
            staging_dir=staging_dir,
            context=context,
        )
        if outcome is None:
            continue
        _validate_and_record_destination(
            outcome.path, artefact.source, seen_destinations
        )
        staged_paths.append(outcome.path)
        checksums[outcome.path.name] = outcome.digest
        if outcome.output_key:
            _validate_and_record_output(
                outcome.output_key, outcome.path, seen_outputs, outputs
            )

    if not staged_paths:
        message = "No artefacts were staged."
        raise StageError(message)

    _validate_no_reserved_key_collisions(outputs)
    exported_outputs = _prepare_output_data(
        staging_dir, staged_paths, outputs, checksums
    )
    write_github_output(github_output_file, exported_outputs)

    return StageResult(staging_dir, staged_paths, outputs, checksums)

AI agent prompt

Refactor the `stage_artefacts` function in `.github/actions/stage/scripts/stage_common/staging.py` to reduce cyclomatic complexity from 12 to below 9 by extracting three helper functions.

Add these three private helper functions immediately before the `stage_artefacts` function definition (around line 119):

1. `_initialize_staging_dir(staging_dir: Path) -> None`
   - Move the directory initialization logic (lines 153-156)
   - Check if `staging_dir.exists()`, remove it with `shutil.rmtree` if it does
   - Create the directory with `staging_dir.mkdir(parents=True, exist_ok=True)`

2. `_prepare_output_data(staging_dir: Path, staged_paths: list[Path], outputs: dict[str, Path], checksums: dict[str, str]) -> dict[str, str | list[str]]`
   - Move the output preparation logic (lines 184-195)
   - Build `staged_file_names` list from sorted staged_paths
   - Create `artefact_map_json` and `checksum_map_json` strings
   - Return the merged dictionary of all outputs

3. `_validate_no_reserved_key_collisions(outputs: dict[str, Path]) -> None`
   - Move the reserved key validation (lines 197-204)
   - Check for collisions with `RESERVED_OUTPUT_KEYS`
   - Raise `StageError` with the existing error message if collisions found

Then refactor the `stage_artefacts` function body:
- Replace lines 153-156 with: `_initialize_staging_dir(staging_dir)`
- Replace lines 197-204 with: `_validate_no_reserved_key_collisions(outputs)`
- Replace lines 184-195 and 206-212 with:
  ```python
  _validate_no_reserved_key_collisions(outputs)
  exported_outputs = _prepare_output_data(
      staging_dir, staged_paths, outputs, checksums
  )
  write_github_output(github_output_file, exported_outputs)
  ```

This refactoring extracts three conditional branches into dedicated functions, reducing the cyclomatic complexity of `stage_artefacts` to approximately 7 whilst maintaining all existing functionality and error handling. Ensure all docstrings follow the NumPy style with proper type annotations.

This refactoring isolates initialization, validation, and output preparation concerns into focused, testable functions, reducing the complexity of the main orchestration logic.

@leynos leynos merged commit 7178137 into codex/implement-network-and-command-functions Oct 10, 2025
8 of 11 checks passed
@leynos leynos deleted the codex/action-unresolved-comments-on-pr-185-byfvch branch October 10, 2025 19:56

@codescene-delta-analysis codescene-delta-analysis bot left a comment


Gates Failed
Enforce critical code health rules (1 file with Bumpy Road Ahead)
Enforce advisory code health rules (6 files with Complex Method, Excess Number of Function Arguments, Code Duplication)

Gates Passed
4 Quality Gates Passed

See analysis details in CodeScene

Reason for failure

Enforce critical code health rules:
- glob_utils.py: 1 critical rule (Code Health 9.54)

Enforce advisory code health rules:
- test_stage_common_staging.py: 2 advisory rules (Code Health 9.10)
- glob_utils.py: 1 advisory rule (Code Health 9.54)
- staging.py: 1 advisory rule (Code Health 9.54)
- config.py: 1 advisory rule (Code Health 9.69)
- _release_upload_deps.py: 1 advisory rule (Code Health 9.69)
- test_stage_common_config.py: 1 advisory rule (Code Health 9.69)

Quality Gate Profile: Pay Down Tech Debt

Comment on lines +96 to +142
def run_cli(
    support: CycloptsSupport,
    *,
    coerce_bool: Callable[[object], bool],
    main: Callable[..., int],
    tokens: Sequence[str] | None = None,
) -> int:
    """Execute the uploader CLI using Cyclopts or ``argparse`` fallback."""
    arguments = list(tokens) if tokens is not None else None

    if support.available:
        return support.app(arguments)

    parser = ArgumentParser(description=main.__doc__ or "Upload release assets.")
    parser.add_argument("--release-tag")
    parser.add_argument("--bin-name")
    parser.add_argument("--dist-dir")
    parser.add_argument("--dry-run", action="store_true")
    args = parser.parse_args(arguments if arguments is not None else sys.argv[1:])

    release_tag = args.release_tag or os.environ.get("INPUT_RELEASE_TAG")
    bin_name = args.bin_name or os.environ.get("INPUT_BIN_NAME")
    dist_dir_value = args.dist_dir or os.environ.get("INPUT_DIST_DIR") or "dist"
    dry_run_flag = args.dry_run

    if not dry_run_flag and (env_flag := os.environ.get("INPUT_DRY_RUN")):
        dry_run_flag = coerce_bool(env_flag)

    missing = [
        label
        for label, present in (
            ("--release-tag", release_tag),
            ("--bin-name", bin_name),
        )
        if not present
    ]
    if missing:
        joined = ", ".join(missing)
        parser.exit(status=1, message=f"Missing required argument(s): {joined}\n")

    return main(
        release_tag=release_tag,
        bin_name=bin_name,
        dist_dir=Path(dist_dir_value),
        dry_run=dry_run_flag,
    )


❌ New issue: Complex Method
run_cli has a cyclomatic complexity of 11, threshold = 9

