Refactor staging utilities and simplify release uploader CLI #194
Conversation
Reviewer's Guide

This PR restructures the release staging workflow into a modular Python package under `.github/actions/stage`, extracts and unifies dependency handling for the release uploader CLI, and replaces ad-hoc staging scripts with a composite GitHub Action plus a generic CLI wrapper. Configuration validation, error checking, and output normalization (especially for Windows paths) are now handled in dedicated helpers, accompanied by comprehensive tests and updated documentation.

Entity relationship diagram for the new staging configuration schema:

```mermaid
erDiagram
    COMMON {
        string bin_name
        string dist_dir
        string checksum_algorithm
        string staging_dir_template
        ArtefactConfig[] artefacts
    }
    TARGETS {
        string platform
        string arch
        string target
        string bin_ext
        string staging_dir_template
        ArtefactConfig[] artefacts
    }
    ArtefactConfig {
        string source
        boolean required
        string output
        string destination
        string[] alternatives
    }
    COMMON ||--o{ TARGETS : "has targets"
    COMMON ||--o{ ArtefactConfig : "has artefacts"
    TARGETS ||--o{ ArtefactConfig : "has artefacts"
```
Class diagram for the new staging helper module structure:

```mermaid
classDiagram
    class ArtefactConfig {
        +string source
        +bool required
        +string output
        +string destination
        +list~string~ alternatives
    }
    class StagingConfig {
        +Path workspace
        +string bin_name
        +string dist_dir
        +string checksum_algorithm
        +list~ArtefactConfig~ artefacts
        +string platform
        +string arch
        +string target
        +string bin_ext
        +string staging_dir_template
        +string target_key
        +staging_dir() Path
        +staging_dir_name() str
        +as_template_context() dict
    }
    class StageResult {
        +Path staging_dir
        +list~Path~ staged_artefacts
        +dict~str, Path~ outputs
        +dict~str, str~ checksums
    }
    class StageError {
        <<exception>>
    }
    ArtefactConfig <.. StagingConfig : contains
    StageResult <.. StagingConfig : uses
    RuntimeError <|-- StageError
```
Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI.

Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.
Hey there - I've reviewed your changes and found some issues that need to be addressed.
- Consider deriving the composite action’s declared outputs in action.yml directly from RESERVED_OUTPUT_KEYS to avoid manual duplication and drift.
- Extract the environment-variable fallback and validation logic in _prepare_runtime_options into a shared helper to reduce duplication with require_env_path and make overrides more explicit.
- With multiple small modules handling path matching, templating, and filesystem safety, think about consolidating or simplifying these utilities to streamline the codebase and improve discoverability.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Consider deriving the composite action’s declared outputs in action.yml directly from RESERVED_OUTPUT_KEYS to avoid manual duplication and drift.
- Extract the environment-variable fallback and validation logic in _prepare_runtime_options into a shared helper to reduce duplication with require_env_path and make overrides more explicit.
- With multiple small modules handling path matching, templating, and filesystem safety, think about consolidating or simplifying these utilities to streamline the codebase and improve discoverability.
## Individual Comments
### Comment 1
<location> `scripts/upload_release_assets.py:128` </location>
<code_context>
seen[asset_name] = path
+def _prepare_runtime_options(
+ *,
+ release_tag: str | None,
</code_context>
<issue_to_address>
**issue (complexity):** Consider removing the custom argument normalization logic and using Cyclopts' built-in environment variable support to simplify the CLI setup.
You can avoid most of the extra indirection by leaning on Cyclopts’ built-in env support and killing off your custom `_prepare_runtime_options`. Here’s one way to simplify:
1. Drop `_prepare_runtime_options` completely.
2. In your app module, register each parameter with the appropriate `env` name.
3. Call `main(...)` directly from your `@app.default`.
```python
import sys
from pathlib import Path
from typing import Annotated

import cyclopts
from cyclopts import App, Parameter

app = App(config=cyclopts.config.Env("INPUT_"))


@app.default
def cli(
    *,
    release_tag: Annotated[str, Parameter(required=True, env_var="INPUT_RELEASE_TAG")],
    bin_name: Annotated[str, Parameter(required=True, env_var="INPUT_BIN_NAME")],
    dist_dir: Annotated[Path, Parameter(env_var="INPUT_DIST_DIR")] = Path("dist"),
    dry_run: Annotated[bool, Parameter(env_var="INPUT_DRY_RUN")] = False,
) -> int:
    return main(
        release_tag=release_tag,
        bin_name=bin_name,
        dist_dir=dist_dir,
        dry_run=dry_run,
    )


if __name__ == "__main__":
    raise SystemExit(app(sys.argv[1:]))
```
Benefits:
- Cyclopts will now coerce types and fall back to `INPUT_*` env vars automatically.
- You remove both `_prepare_runtime_options` and `run_cli`, so all argument-normalisation is in one place.
- You keep the exact same functionality (flags + env fallback) with far less boilerplate.
</issue_to_address>
### Comment 2
<location> `scripts/_release_upload_deps.py:21` </location>
<code_context>
+]
+
+
+class _FallbackParameter:
+ """Placeholder preserving ``typing.Annotated`` compatibility."""
+
</code_context>
<issue_to_address>
**issue (complexity):** Consider inlining simple stub functions in place of the _Fallback* classes to reduce boilerplate.
You can collapse most of the boilerplate by inlining simple stubs instead of the `_Fallback*` classes. For example, in `load_cyclopts` you don't need two whole classes; just define a tiny function and use `Any` for the missing-parameter stub:
```python
from typing import Any

def load_cyclopts() -> CycloptsSupport:
    try:
        import cyclopts
        from cyclopts import App, Parameter
    except ModuleNotFoundError as exc:
        missing = exc  # bind here: the except name is cleared when the block exits

        def app_stub(*args: object, **kwargs: object) -> None:
            raise RuntimeError(
                "Cyclopts is required for CLI usage; install it to run the script"
            ) from missing

        return CycloptsSupport(
            available=False,
            app=app_stub,
            parameter=Any,  # drop _FallbackParameter
        )
    app = App(config=cyclopts.config.Env("INPUT_", command=False))
    return CycloptsSupport(True, app, Parameter)
```
Then you can remove both `_FallbackApp` and `_FallbackParameter` entirely.
Likewise, simplify `load_plumbum`:
```python
def load_plumbum() -> PlumbumSupport:
    try:
        from plumbum import local
        from plumbum.commands import CommandNotFound, ProcessExecutionError
        from plumbum.commands.base import BoundCommand
    except ModuleNotFoundError as exc:
        missing = exc  # bind here: the except name is cleared when the block exits

        def local_stub(*args, **kwargs):
            raise ModuleNotFoundError(
                "plumbum is required to execute releases; install it or run in dry-run mode"
            ) from missing

        return PlumbumSupport(
            local=local_stub,
            command_not_found=ModuleNotFoundError,
            process_execution_error=RuntimeError,
            bound_command=Any,
        )
    return PlumbumSupport(local, CommandNotFound, ProcessExecutionError, BoundCommand)
```
This keeps behavior identical but cuts out dozens of lines of boilerplate.
</issue_to_address>
### Comment 3
<location> `.github/actions/stage/scripts/stage_common/config.py:196` </location>
<code_context>
+ return template_context | {"staging_dir_name": rendered_name}
+
+
+def load_config(config_file: Path, target_key: str) -> StagingConfig:
+ """Load staging configuration from ``config_file`` for ``target_key``.
+
</code_context>
<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for the new TOML config loader and model logic.
The new `load_config` function and related dataclasses implement nontrivial logic for parsing and validating TOML configuration files. You must provide both behavioural and unit tests to verify correct parsing, error handling, and edge cases.
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*`
**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.
</details>
</issue_to_address>
### Comment 4
<location> `.github/actions/stage/scripts/stage_common/staging.py:66` </location>
<code_context>
+ digest: str
+
+
+def stage_artefacts(config: StagingConfig, github_output_file: Path) -> StageResult:
+ """Copy artefacts into ``config``'s staging directory.
+
</code_context>
<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for the artefact staging pipeline.
The new `stage_artefacts` function and its helpers perform file operations, output mapping, and error handling. You must provide both behavioural and unit tests to cover successful staging, error conditions, and output correctness.
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*`
**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.
</details>
</issue_to_address>
### Comment 5
<location> `.github/actions/stage/scripts/stage.py:33` </location>
<code_context>
+app = cyclopts.App(help="Stage release artefacts using a TOML configuration file.")
+
+
+@app.default
+def main(config_file: Path, target: str) -> None:
+ """Stage artefacts for ``target`` using ``config_file``.
</code_context>
<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for the CLI entry point.
The new CLI entry point in `stage.py` should be covered by behavioural and unit tests to ensure correct argument parsing, error handling, and integration with the staging logic.
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*`
**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.
</details>
</issue_to_address>
### Comment 6
<location> `.github/actions/stage/scripts/stage_common/glob_utils.py:10` </location>
<code_context>
+__all__ = ["glob_root_and_pattern", "match_candidate_path"]
+
+
+def match_candidate_path(workspace: Path, rendered: str) -> Path | None:
+ """Return the newest path matching ``rendered`` relative to ``workspace``."""
+
</code_context>
<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for the glob matching logic.
The new glob matching logic in `match_candidate_path` and `glob_root_and_pattern` should be covered by both behavioural and unit tests to ensure correct path resolution and edge case handling.
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*`
**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.
</details>
</issue_to_address>
### Comment 7
<location> `.github/actions/stage/scripts/stage_common/template_utils.py:40` </location>
<code_context>
+ raise StageError(message) from exc
+
+
+def resolve_artefact_source(
+ workspace: Path, artefact: "ArtefactConfig", context: dict[str, typ.Any]
+) -> tuple[Path | None, list[RenderAttempt]]:
</code_context>
<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for artefact source resolution and template rendering.
The artefact source resolution and template rendering logic should be covered by both behavioural and unit tests to verify correct template expansion and error handling.
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*`
**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.
</details>
</issue_to_address>
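A unit test for the rendering side can stay tiny if the templates are plain `str.format` placeholders, which is an assumption based on the `{target}`/`{bin_name}` syntax visible in the config examples:

```python
context = {"bin_name": "netsuke", "target": "x86_64-unknown-linux-gnu"}

rendered = "target/{target}/release/{bin_name}".format(**context)
assert rendered == "target/x86_64-unknown-linux-gnu/release/netsuke"

# An unknown placeholder is the failure mode a StageError should wrap.
try:
    "dist/{unknown_key}".format(**context)
    raise AssertionError("expected a KeyError for the unknown placeholder")
except KeyError:
    pass
```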
### Comment 8
<location> `.github/actions/stage/scripts/stage_common/checksum_utils.py:11` </location>
<code_context>
+__all__ = ["write_checksum"]
+
+
+def write_checksum(path: Path, algorithm: str) -> str:
+ """Write the checksum sidecar for ``path`` using ``algorithm``."""
+
</code_context>
<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for checksum writing logic.
The checksum writing logic should be covered by both behavioural and unit tests to ensure correct digest calculation and file output.
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*`
**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.
</details>
</issue_to_address>
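A behavioural test could be built around a stand-in like the one below. The sidecar naming (`<file>.<algorithm>`) and the two-column `sha256sum`-style layout are assumptions for illustration, not the helper's confirmed output format:

```python
import hashlib
import tempfile
from pathlib import Path

def write_checksum(path: Path, algorithm: str) -> str:
    """Stand-in for the helper: hash the file and write a sidecar next to it."""
    digest = hashlib.new(algorithm, path.read_bytes()).hexdigest()
    sidecar = path.with_name(f"{path.name}.{algorithm}")
    # Two-column "digest  filename" layout, as sha256sum emits.
    sidecar.write_text(f"{digest}  {path.name}\n", encoding="utf-8")
    return digest

with tempfile.TemporaryDirectory() as tmp:
    artefact = Path(tmp) / "netsuke"
    artefact.write_bytes(b"payload")
    digest = write_checksum(artefact, "sha256")
    sidecar_text = (Path(tmp) / "netsuke.sha256").read_text(encoding="utf-8")

assert digest == hashlib.sha256(b"payload").hexdigest()
assert sidecar_text == f"{digest}  netsuke\n"
```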
### Comment 9
<location> `.github/actions/stage/scripts/stage_common/fs_utils.py:12` </location>
<code_context>
+__all__ = ["safe_destination_path"]
+
+
+def safe_destination_path(staging_dir: Path, destination: str) -> Path:
+ """Return ``destination`` resolved beneath ``staging_dir``."""
+
</code_context>
<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for safe destination path logic.
The logic for resolving safe destination paths should be covered by both behavioural and unit tests to verify correct path resolution and security checks.
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*`
**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.
</details>
</issue_to_address>
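The security property worth pinning down in a test is traversal rejection. One plausible implementation shape (resolve, then check containment) is sketched below; the actual helper may differ:

```python
import tempfile
from pathlib import Path

def safe_destination_path(staging_dir: Path, destination: str) -> Path:
    """Stand-in: resolve destination and refuse anything escaping staging_dir."""
    resolved = (staging_dir / destination).resolve()
    if not resolved.is_relative_to(staging_dir.resolve()):
        msg = f"destination escapes staging directory: {destination}"
        raise ValueError(msg)
    return resolved

with tempfile.TemporaryDirectory() as tmp:
    base = Path(tmp).resolve()
    ok = safe_destination_path(base, "docs/netsuke.1")
    assert ok == base / "docs" / "netsuke.1"
    try:
        safe_destination_path(base, "../outside.txt")
        raise AssertionError("traversal should have been rejected")
    except ValueError:
        pass
```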
### Comment 10
<location> `.github/actions/stage/scripts/stage_common/github_output.py:12` </location>
<code_context>
+__all__ = ["write_github_output"]
+
+
+def write_github_output(
+ file: Path, values: Mapping[str, str | Sequence[str]]
+) -> None:
</code_context>
<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for GitHub output writing logic.
The logic for writing GitHub Actions outputs should be covered by both behavioural and unit tests to ensure correct formatting and file output.
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*`
**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.
</details>
</issue_to_address>
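A test here mostly has to pin the `GITHUB_OUTPUT` wire format: `key=value` for scalars and the heredoc-style delimiter for multiline values (the delimiter syntax is GitHub Actions' documented format; the delimiter name and this implementation are illustrative assumptions):

```python
import tempfile
from pathlib import Path

def write_github_output(file: Path, values) -> None:
    """Stand-in: append values to a GITHUB_OUTPUT-style file."""
    lines: list[str] = []
    for key, value in values.items():
        if isinstance(value, str):
            lines.append(f"{key}={value}")
        else:
            # Multiline values use the heredoc delimiter form.
            lines.append(f"{key}<<GH_EOF")
            lines.extend(value)
            lines.append("GH_EOF")
    with file.open("a", encoding="utf-8") as fh:
        fh.write("\n".join(lines) + "\n")

with tempfile.TemporaryDirectory() as tmp:
    out = Path(tmp) / "github_output"
    out.touch()
    write_github_output(
        out,
        {"artifact_dir": "dist/stage", "staged_files": ["netsuke", "netsuke.1"]},
    )
    content = out.read_text(encoding="utf-8")

assert "artifact_dir=dist/stage" in content
assert "staged_files<<GH_EOF" in content
```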
### Comment 11
<location> `.github/actions/stage/scripts/stage_common/environment.py:13` </location>
<code_context>
+__all__ = ["require_env_path"]
+
+
+def require_env_path(name: str) -> Path:
+ """Return ``Path`` value for ``name`` or raise :class:`StageError`.
+
</code_context>
<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for environment variable path resolution.
The environment variable path resolution logic should be covered by both behavioural and unit tests to verify correct error handling and path conversion.
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*`
**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.
</details>
</issue_to_address>
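The two cases to cover are the happy path and the missing-variable error. A sketch with a stand-in mirroring the documented behaviour (variable name and error type taken from the snippet above):

```python
import os
from pathlib import Path

class StageError(RuntimeError):
    """Mirrors stage_common.errors.StageError for this sketch."""

def require_env_path(name: str) -> Path:
    """Stand-in: fetch an environment variable and convert it to a Path."""
    value = os.environ.get(name)
    if not value:
        msg = f"required environment variable {name} is not set"
        raise StageError(msg)
    return Path(value)

os.environ["DEMO_GITHUB_OUTPUT"] = "/tmp/github_output"
assert require_env_path("DEMO_GITHUB_OUTPUT") == Path("/tmp/github_output")

os.environ.pop("DEMO_GITHUB_OUTPUT")
try:
    require_env_path("DEMO_GITHUB_OUTPUT")
    raise AssertionError("missing variable should raise StageError")
except StageError:
    pass
```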
### Comment 12
<location> `.github/actions/stage/scripts/stage_common/errors.py:4` </location>
<code_context>
+"""Error types shared across the staging helper package."""
+
+
+class StageError(RuntimeError):
+ """Raised when the staging pipeline cannot continue."""
--- /dev/null
</code_context>
<issue_to_address>
**issue (review_instructions):** Add unit tests for the custom StageError exception.
The custom exception should be covered by unit tests to verify correct raising and handling.
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*`
**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.
</details>
</issue_to_address>
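The useful property to assert for the exception itself is that it stays catchable as `RuntimeError`, so existing handlers keep working. A minimal test shape:

```python
class StageError(RuntimeError):
    """Raised when the staging pipeline cannot continue."""

try:
    raise StageError("No artefacts were staged.")
except RuntimeError as err:
    assert isinstance(err, StageError)
    assert str(err) == "No artefacts were staged."
```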
### Comment 13
<location> `.github/actions/stage/scripts/stage_common/__init__.py:3` </location>
<code_context>
+"""Public interface for the staging helper package."""
+
+from .config import ArtefactConfig, StagingConfig, load_config
+from .environment import require_env_path
+from .errors import StageError
</code_context>
<issue_to_address>
**issue (review_instructions):** Add unit tests for the public interface of the staging helper package.
The public interface should be covered by unit tests to verify correct imports and exposure of functionality.
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*`
**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.
</details>
</issue_to_address>
### Comment 14
<location> `.github/workflows/scripts/normalise_windows_paths.py:27` </location>
<code_context>
+from pathlib import Path, PureWindowsPath
+
+
+def main() -> int:
+ """Read ``ARTEFACT_MAP`` and write normalised paths to ``GITHUB_OUTPUT``."""
+ try:
</code_context>
<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for the Windows path normalisation script.
The new script for normalising Windows paths should be covered by both behavioural and unit tests to verify correct environment variable handling and output formatting.
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*`
**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.
</details>
</issue_to_address>
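The core of such a test is the path rewrite itself. Assuming `ARTEFACT_MAP` carries a JSON object of output key to path (consistent with the artefact map the action exports), the normalisation can be pinned down like this:

```python
import json
from pathlib import PureWindowsPath

def normalise_artefact_map(raw: str) -> dict[str, str]:
    """Stand-in: rewrite each Windows path in the JSON map to POSIX form."""
    return {
        key: PureWindowsPath(value).as_posix()
        for key, value in json.loads(raw).items()
    }

artefact_map = json.dumps({"bin_path": "D:\\a\\netsuke\\dist\\netsuke.exe"})
assert normalise_artefact_map(artefact_map) == {
    "bin_path": "D:/a/netsuke/dist/netsuke.exe"
}
```

A behavioural test would additionally set `ARTEFACT_MAP`/`GITHUB_OUTPUT` in the environment and run the script end to end.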
### Comment 15
<location> `scripts/_release_upload_deps.py:82` </location>
<code_context>
+ dry_run: bool
+
+
+def load_cyclopts() -> CycloptsSupport:
+ """Return Cyclopts components with graceful fallbacks."""
+
</code_context>
<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for dependency loader and CLI fallback logic.
The new dependency loader and CLI fallback logic should be covered by both behavioural and unit tests to verify correct fallback behaviour and error handling.
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*`
**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.
</details>
</issue_to_address>
### Comment 16
<location> `scripts/upload_release_assets.py:128` </location>
<code_context>
seen[asset_name] = path
+def _prepare_runtime_options(
+ *,
+ release_tag: str | None,
</code_context>
<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for runtime options preparation logic.
The new logic for preparing runtime options should be covered by both behavioural and unit tests to verify correct argument parsing and error handling.
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*`
**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.
</details>
</issue_to_address>
### Comment 17
<location> `docs/netsuke-design.md:1690` </location>
<code_context>
+only then launches the Python entry point so workflows stay declarative. The
+helper writes SHA-256 sums for every staged file. It also exports a JSON map of
+the artefact outputs, allowing the workflow to hydrate downstream steps without
+hard-coded path logic. Figure 8.1 summarises the configuration entities,
+including optional keys reserved for templated directories and explicit
+artefact destinations that the helper can adopt without breaking compatibility.
</code_context>
<issue_to_address>
**suggestion (review_instructions):** This paragraph line exceeds 80 columns and should be wrapped for readability.
Please wrap this and other paragraph lines to a maximum of 80 columns, as per the documentation style guide.
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*.md`
**Instructions:**
Paragraphs and bullets must be wrapped to 80 columns
</details>
</issue_to_address>
### Comment 18
<location> `docs/netsuke-design.md:1696` </location>
<code_context>
+
+Figure 8.1: Entity relationship for the staging configuration schema.
+
+```mermaid
+%% Figure 8.1: Entity relationship for the staging configuration schema.
+erDiagram
</code_context>
<issue_to_address>
**suggestion (review_instructions):** The mermaid code block should be checked for lines exceeding 120 columns.
Please ensure that all lines within this code block do not exceed 120 columns. If any do, consider splitting or reformatting them.
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*.md`
**Instructions:**
Code blocks should be wrapped to 120 columns.
</details>
</issue_to_address>
### Comment 19
<location> `.github/actions/stage/scripts/stage_common/config.py:351` </location>
<code_context>
return [] if not value else [value]
</code_context>
<issue_to_address>
**suggestion (code-quality):** Swap if/else branches of if expression to remove negation ([`swap-if-expression`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/swap-if-expression))
```suggestion
return [value] if value else []
```
<br/><details><summary>Explanation</summary>Negated conditions are more difficult to read than positive ones, so it is best
to avoid them where we can. By swapping the `if` and `else` conditions around we
can invert the condition and make it positive.
</details>
</issue_to_address>
### Comment 20
<location> `scripts/upload_release_assets.py:153-160` </location>
<code_context>
def _prepare_runtime_options(
    *,
    release_tag: str | None,
    bin_name: str | None,
    dist_dir: str | Path | None,
    dry_run: bool | str | None,
    environ: typ.Mapping[str, str | None],
) -> RuntimeOptions:
    """Normalise CLI inputs from Cyclopts or ``argparse`` into runtime options."""
    resolved_release = release_tag or environ.get("INPUT_RELEASE_TAG")
    resolved_bin = bin_name or environ.get("INPUT_BIN_NAME")
    dist_source = dist_dir if dist_dir is not None else environ.get("INPUT_DIST_DIR")
    resolved_dist = Path(dist_source or "dist")
    dry_run_value: bool
    if isinstance(dry_run, bool):
        dry_run_value = dry_run
    elif dry_run is None:
        dry_run_value = False
    else:
        dry_run_value = _coerce_bool(dry_run)
    if not dry_run_value and (env_flag := environ.get("INPUT_DRY_RUN")):
        dry_run_value = _coerce_bool(env_flag)
    missing = [
        label
        for label, present in (
            ("--release-tag", resolved_release),
            ("--bin-name", resolved_bin),
        )
        if not present
    ]
    if missing:
        joined = ", ".join(missing)
        message = f"Missing required argument(s): {joined}"
        raise ValueError(message)
    return RuntimeOptions(
        release_tag=typ.cast(str, resolved_release),
        bin_name=typ.cast(str, resolved_bin),
        dist_dir=resolved_dist,
        dry_run=dry_run_value,
    )
</code_context>
<issue_to_address>
**issue (code-quality):** Use named expression to simplify assignment and conditional ([`use-named-expression`](https://docs.sourcery.ai/Reference/Default-Rules/refactorings/use-named-expression/))
</issue_to_address>
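Concretely, Sourcery's `use-named-expression` hint folds the `missing` assignment into the `if` with a walrus operator. A runnable sketch of the refactored shape (the placeholder values stand in for resolved CLI/env inputs):

```python
resolved_release = None   # e.g. neither --release-tag nor INPUT_RELEASE_TAG given
resolved_bin = "netsuke"  # supplied via --bin-name

captured = ""
try:
    # Named expression: bind and test `missing` in one step.
    if missing := [
        label
        for label, present in (
            ("--release-tag", resolved_release),
            ("--bin-name", resolved_bin),
        )
        if not present
    ]:
        raise ValueError(f"Missing required argument(s): {', '.join(missing)}")
except ValueError as err:
    captured = str(err)

assert captured == "Missing required argument(s): --release-tag"
```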
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

`tests_python/test_stage_common_staging.py`, comment on lines +79 to +123:

```python
def test_stage_artefacts_uses_alternative_glob(
    stage_common: object, workspace: Path
) -> None:
    """Fallback paths should be used when the preferred template is absent."""
    target = "x86_64-unknown-linux-gnu"
    write_workspace_inputs(workspace, target)
    generated = (
        workspace / "target" / "generated-man" / target / "release" / "netsuke.1"
    )
    generated.unlink()
    build_dir = workspace / "target" / target / "release" / "build"
    first = build_dir / "1" / "out" / "netsuke.1"
    second = build_dir / "2" / "out" / "netsuke.1"
    first.parent.mkdir(parents=True, exist_ok=True)
    second.parent.mkdir(parents=True, exist_ok=True)
    first.write_text(".TH 1", encoding="utf-8")
    second.write_text(".TH 2", encoding="utf-8")
    os.utime(first, (first.stat().st_atime, first.stat().st_mtime - 100))
    config = stage_common.StagingConfig(
        workspace=workspace,
        bin_name="netsuke",
        dist_dir="dist",
        checksum_algorithm="sha256",
        artefacts=[
            stage_common.ArtefactConfig(
                source="target/generated-man/{target}/release/{bin_name}.1",
                required=True,
                output="man_path",
                alternatives=["target/{target}/release/build/*/out/{bin_name}.1"],
            ),
        ],
        platform="linux",
        arch="amd64",
        target=target,
    )
    github_output = workspace / "outputs.txt"
    result = stage_common.stage_artefacts(config, github_output)
    staged_path = result.outputs["man_path"]
    assert (
        staged_path.read_text(encoding="utf-8") == ".TH 2"
    ), "Fallback glob should pick the newest man page"
```

❌ New issue: Code Duplication
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

`tests_python/test_stage_common_config.py`, comment on lines +228 to +251:

```python
def test_load_config_requires_target_section(
    stage_common: object, workspace: Path
) -> None:
    """Missing target sections should raise ``StageError``."""
    config_file = workspace / "release-staging.toml"
    config_file.write_text(
        """\
[common]
bin_name = "netsuke"
checksum_algorithm = "sha256"
artefacts = [ { source = "LICENSE" } ]

[targets.other]
platform = "linux"
arch = "amd64"
target = "x86_64-unknown-linux-gnu"
""",
        encoding="utf-8",
    )
    with pytest.raises(stage_common.StageError) as exc:
        stage_common.load_config(config_file, "test")
    assert "Missing configuration key" in str(exc.value)
```

❌ New issue: Code Duplication
|
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix: .github/actions/stage/scripts/stage_common/staging.py Comment on lines +66 to +170 def stage_artefacts(config: StagingConfig, github_output_file: Path) -> StageResult:
"""Copy artefacts into ``config``'s staging directory.
Parameters
----------
config : StagingConfig
Fully resolved configuration describing the artefacts to stage.
github_output_file : Path
Path to the ``GITHUB_OUTPUT`` file used to export workflow outputs.
Returns
-------
StageResult
Summary object describing the staging directory, staged artefacts,
exported outputs, and checksum digests.
Raises
------
StageError
Raised when required artefacts are missing or configuration templates
render invalid destinations.
Examples
--------
>>> from pathlib import Path # doctest: +SKIP
>>> from stage_common.config import load_config # doctest: +SKIP
>>> cfg = load_config( # doctest: +SKIP
... Path(".github/release-staging.toml"),
... "linux-x86_64",
... )
>>> result = stage_artefacts(cfg, Path("github_output.txt")) # doctest: +SKIP
>>> result.staged_artefacts[0].name # doctest: +SKIP
'netsuke'
"""
staging_dir = config.staging_dir()
context = config.as_template_context()
if staging_dir.exists():
shutil.rmtree(staging_dir)
staging_dir.mkdir(parents=True, exist_ok=True)
staged_paths: list[Path] = []
outputs: dict[str, Path] = {}
checksums: dict[str, str] = {}
seen_destinations: set[Path] = set()
seen_outputs: set[str] = set()
for artefact in config.artefacts:
outcome = _stage_single_artefact(
config=config,
artefact=artefact,
staging_dir=staging_dir,
context=context,
)
if outcome is None:
continue
if outcome.path in seen_destinations:
message = (
"Duplicate staged destination detected: "
f"{outcome.path.as_posix()} from source {artefact.source}"
)
raise StageError(message)
seen_destinations.add(outcome.path)
staged_paths.append(outcome.path)
checksums[outcome.path.name] = outcome.digest
if outcome.output_key:
if outcome.output_key in seen_outputs:
message = (
"Duplicate artefact output key detected: "
f"{outcome.output_key}"
)
raise StageError(message)
seen_outputs.add(outcome.output_key)
outputs[outcome.output_key] = outcome.path
if not staged_paths:
message = "No artefacts were staged."
raise StageError(message)
staged_file_names = [path.name for path in sorted(staged_paths)]
artefact_map_json = json.dumps(
{key: path.as_posix() for key, path in sorted(outputs.items())}
)
checksum_map_json = json.dumps(dict(sorted(checksums.items())))
if colliding_keys := RESERVED_OUTPUT_KEYS & outputs.keys():
message = (
"Artefact outputs collide with reserved keys: "
f"{sorted(colliding_keys)}. Please rename these keys in your "
"artefact configuration to avoid using reserved names: "
f"{sorted(RESERVED_OUTPUT_KEYS)}."
)
raise StageError(message)
exported_outputs: dict[str, str | list[str]] = {
"artifact_dir": staging_dir.as_posix(),
"dist_dir": staging_dir.parent.as_posix(),
"staged_files": staged_file_names,
"artefact_map": artefact_map_json,
"checksum_map": checksum_map_json,
} | {key: path.as_posix() for key, path in outputs.items()}
write_github_output(github_output_file, exported_outputs)
return StageResult(staging_dir, staged_paths, outputs, checksums)

❌ New issue: Bumpy Road Ahead
💡 Codex Review
Here are some automated review suggestions for this pull request.
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix: .github/actions/stage/scripts/stage_common/staging.py

Comment on lines +119 to +214:

def stage_artefacts(config: StagingConfig, github_output_file: Path) -> StageResult:
"""Copy artefacts into ``config``'s staging directory.
Parameters
----------
config : StagingConfig
Fully resolved configuration describing the artefacts to stage.
github_output_file : Path
Path to the ``GITHUB_OUTPUT`` file used to export workflow outputs.
Returns
-------
StageResult
Summary object describing the staging directory, staged artefacts,
exported outputs, and checksum digests.
Raises
------
StageError
Raised when required artefacts are missing or configuration templates
render invalid destinations.
Examples
--------
>>> from pathlib import Path # doctest: +SKIP
>>> from stage_common.config import load_config # doctest: +SKIP
>>> cfg = load_config( # doctest: +SKIP
... Path(".github/release-staging.toml"),
... "linux-x86_64",
... )
>>> result = stage_artefacts(cfg, Path("github_output.txt")) # doctest: +SKIP
>>> result.staged_artefacts[0].name # doctest: +SKIP
'netsuke'
"""
staging_dir = config.staging_dir()
context = config.as_template_context()
if staging_dir.exists():
shutil.rmtree(staging_dir)
staging_dir.mkdir(parents=True, exist_ok=True)
staged_paths: list[Path] = []
outputs: dict[str, Path] = {}
checksums: dict[str, str] = {}
seen_destinations: set[Path] = set()
seen_outputs: set[str] = set()
for artefact in config.artefacts:
outcome = _stage_single_artefact(
config=config,
artefact=artefact,
staging_dir=staging_dir,
context=context,
)
if outcome is None:
continue
_validate_and_record_destination(
outcome.path, artefact.source, seen_destinations
)
staged_paths.append(outcome.path)
checksums[outcome.path.name] = outcome.digest
if outcome.output_key:
_validate_and_record_output(
outcome.output_key, outcome.path, seen_outputs, outputs
)
if not staged_paths:
message = "No artefacts were staged."
raise StageError(message)
staged_file_names = [path.name for path in sorted(staged_paths)]
artefact_map_json = json.dumps(
{key: path.as_posix() for key, path in sorted(outputs.items())}
)
checksum_map_json = json.dumps(dict(sorted(checksums.items())))
if colliding_keys := RESERVED_OUTPUT_KEYS & outputs.keys():
message = (
"Artefact outputs collide with reserved keys: "
f"{sorted(colliding_keys)}. Please rename these keys in your "
"artefact configuration to avoid using reserved names: "
f"{sorted(RESERVED_OUTPUT_KEYS)}."
)
raise StageError(message)
exported_outputs: dict[str, str | list[str]] = {
"artifact_dir": staging_dir.as_posix(),
"dist_dir": staging_dir.parent.as_posix(),
"staged_files": staged_file_names,
"artefact_map": artefact_map_json,
"checksum_map": checksum_map_json,
} | {key: path.as_posix() for key, path in outputs.items()}
write_github_output(github_output_file, exported_outputs)
return StageResult(staging_dir, staged_paths, outputs, checksums)

❌ New issue: Complex Method
Tip: For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed (length of output: 8419)

@leynos: I've analysed the cyclomatic complexity issue in `stage_artefacts`.

Suggested fix

Add three helper functions before `stage_artefacts`:

def _initialize_staging_dir(staging_dir: Path) -> None:
"""Prepare the staging directory by removing any existing content.
Parameters
----------
staging_dir : Path
The directory to prepare for staging artefacts.
"""
if staging_dir.exists():
shutil.rmtree(staging_dir)
staging_dir.mkdir(parents=True, exist_ok=True)
def _prepare_output_data(
staging_dir: Path,
staged_paths: list[Path],
outputs: dict[str, Path],
checksums: dict[str, str],
) -> dict[str, str | list[str]]:
"""Build the output data structure for GitHub Actions workflow export.
Parameters
----------
staging_dir : Path
The staging directory containing artefacts.
staged_paths : list[Path]
Paths to all staged artefacts.
outputs : dict[str, Path]
Mapping of output keys to artefact paths.
checksums : dict[str, str]
Mapping of artefact names to their checksum digests.
Returns
-------
dict[str, str | list[str]]
Dictionary containing all output values ready for GitHub export.
"""
staged_file_names = [path.name for path in sorted(staged_paths)]
artefact_map_json = json.dumps(
{key: path.as_posix() for key, path in sorted(outputs.items())}
)
checksum_map_json = json.dumps(dict(sorted(checksums.items())))
return {
"artifact_dir": staging_dir.as_posix(),
"dist_dir": staging_dir.parent.as_posix(),
"staged_files": staged_file_names,
"artefact_map": artefact_map_json,
"checksum_map": checksum_map_json,
} | {key: path.as_posix() for key, path in outputs.items()}
def _validate_no_reserved_key_collisions(outputs: dict[str, Path]) -> None:
"""Ensure artefact outputs don't use reserved key names.
Parameters
----------
outputs : dict[str, Path]
Mapping of output keys to artefact paths.
Raises
------
StageError
Raised when output keys collide with reserved names.
"""
if colliding_keys := RESERVED_OUTPUT_KEYS & outputs.keys():
message = (
"Artefact outputs collide with reserved keys: "
f"{sorted(colliding_keys)}. Please rename these keys in your "
"artefact configuration to avoid using reserved names: "
f"{sorted(RESERVED_OUTPUT_KEYS)}."
)
raise StageError(message)

Then replace the body of `stage_artefacts` with:

staging_dir = config.staging_dir()
context = config.as_template_context()
_initialize_staging_dir(staging_dir)
staged_paths: list[Path] = []
outputs: dict[str, Path] = {}
checksums: dict[str, str] = {}
seen_destinations: set[Path] = set()
seen_outputs: set[str] = set()
for artefact in config.artefacts:
outcome = _stage_single_artefact(
config=config,
artefact=artefact,
staging_dir=staging_dir,
context=context,
)
if outcome is None:
continue
_validate_and_record_destination(
outcome.path, artefact.source, seen_destinations
)
staged_paths.append(outcome.path)
checksums[outcome.path.name] = outcome.digest
if outcome.output_key:
_validate_and_record_output(
outcome.output_key, outcome.path, seen_outputs, outputs
)
if not staged_paths:
message = "No artefacts were staged."
raise StageError(message)
_validate_no_reserved_key_collisions(outputs)
exported_outputs = _prepare_output_data(
staging_dir, staged_paths, outputs, checksums
)
write_github_output(github_output_file, exported_outputs)
return StageResult(staging_dir, staged_paths, outputs, checksums)

AI agent prompt

This refactoring isolates initialization, validation, and output preparation concerns into focused, testable functions, reducing the complexity of the main orchestration logic.
7178137 into codex/implement-network-and-command-functions
Gates Failed
Enforce critical code health rules
(1 file with Bumpy Road Ahead)
Enforce advisory code health rules
(6 files with Complex Method, Excess Number of Function Arguments, Code Duplication)
Gates Passed
4 Quality Gates Passed
See analysis details in CodeScene
Reason for failure
| Enforce critical code health rules | Violations | Code Health Impact |
|---|---|---|
| glob_utils.py | 1 critical rule | 9.54 |

| Enforce advisory code health rules | Violations | Code Health Impact |
|---|---|---|
| test_stage_common_staging.py | 2 advisory rules | 9.10 |
| glob_utils.py | 1 advisory rule | 9.54 |
| staging.py | 1 advisory rule | 9.54 |
| config.py | 1 advisory rule | 9.69 |
| _release_upload_deps.py | 1 advisory rule | 9.69 |
| test_stage_common_config.py | 1 advisory rule | 9.69 |
Quality Gate Profile: Pay Down Tech Debt
Want more control? Customize Code Health rules or catch issues early with our IDE extension and CLI tool.
def run_cli(
    support: CycloptsSupport,
    *,
    coerce_bool: Callable[[object], bool],
    main: Callable[..., int],
    tokens: Sequence[str] | None = None,
) -> int:
    """Execute the uploader CLI using Cyclopts or ``argparse`` fallback."""
    arguments = list(tokens) if tokens is not None else None

    if support.available:
        return support.app(arguments)

    parser = ArgumentParser(description=main.__doc__ or "Upload release assets.")
    parser.add_argument("--release-tag")
    parser.add_argument("--bin-name")
    parser.add_argument("--dist-dir")
    parser.add_argument("--dry-run", action="store_true")
    args = parser.parse_args(arguments if arguments is not None else sys.argv[1:])

    release_tag = args.release_tag or os.environ.get("INPUT_RELEASE_TAG")
    bin_name = args.bin_name or os.environ.get("INPUT_BIN_NAME")
    dist_dir_value = args.dist_dir or os.environ.get("INPUT_DIST_DIR") or "dist"
    dry_run_flag = args.dry_run

    if not dry_run_flag and (env_flag := os.environ.get("INPUT_DRY_RUN")):
        dry_run_flag = coerce_bool(env_flag)

    missing = [
        label
        for label, present in (
            ("--release-tag", release_tag),
            ("--bin-name", bin_name),
        )
        if not present
    ]
    if missing:
        joined = ", ".join(missing)
        parser.exit(status=1, message=f"Missing required argument(s): {joined}\n")

    return main(
        release_tag=release_tag,
        bin_name=bin_name,
        dist_dir=Path(dist_dir_value),
        dry_run=dry_run_flag,
    )
❌ New issue: Complex Method
run_cli has a cyclomatic complexity of 11, threshold = 9
Summary
Testing
https://chatgpt.com/codex/tasks/task_e_68e85138f48883229e8082f03b90e95e
Summary by Sourcery
Refactor the staging workflow into a reusable, TOML-driven GitHub Action and accompanying Python library, simplify the release uploader CLI by centralizing argument handling with Cyclopts (and an argparse fallback), update release workflows to use the new composite action, enhance configuration validation and Windows path normalization, and expand documentation and tests to cover the new staging and upload flows.
New Features:
Enhancements:
CI:
.github/actions/stage) and add `target_key` matrix entries for Windows and macOS jobs.

Documentation:
Tests: