Conversation
Reviewer's Guide

This PR replaces platform-specific staging scripts with a reusable composite GitHub Action that reads a TOML configuration to stage and emit artefact metadata, updates workflows to invoke the new action, refactors tests to exercise the generic staging logic, and refreshes documentation accordingly.

Entity relationship diagram for the new TOML staging configuration

```mermaid
erDiagram
    COMMON {
        string bin_name
        string dist_dir
        string checksum_algorithm
        string staging_dir_template
        ArtefactConfig[] artefacts
    }
    TARGETS {
        string platform
        string arch
        string target
        string bin_ext
        string staging_dir_template
        ArtefactConfig[] artefacts
    }
    ArtefactConfig {
        string source
        boolean required
        string output
        string destination
        string[] alternatives
    }
    COMMON ||--o{ TARGETS : "has targets"
    COMMON ||--o{ ArtefactConfig : "has artefacts"
    TARGETS ||--o{ ArtefactConfig : "has artefacts"
```
File-Level Changes
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Summary by CodeRabbit
Walkthrough

Introduce a TOML-driven, cross-platform release staging system with a new stage_common library (config, pipeline, output, resolution), update the Stage action entrypoint, and rework CI workflows to consume a JSON artefact map. Add a Windows path normalisation script, expand tests extensively, adjust docs, refine Cyclopts stubs, and add a Makefile type-check target.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor GH as GitHub Actions
    participant Stage as Stage Action
    participant Conf as load_config
    participant Pipe as staging.pipeline
    participant Out as staging.output
    GH->>Stage: run (config-file, target_key)
    Stage->>Conf: load_config(Path(config-file), target_key)
    Conf-->>Stage: StagingConfig
    Stage->>Pipe: stage_artefacts(config, GITHUB_OUTPUT)
    Pipe->>Pipe: init staging dir
    loop for each artefact
        Pipe->>Pipe: resolve source (glob/direct)
        alt required missing
            Pipe-->>Stage: raise StageError
        else optional missing
            Pipe-->>Pipe: warn and skip
        end
        Pipe->>Pipe: copy artefact, checksum
    end
    Pipe->>Out: _prepare_output_data + write_github_output
    Out-->>Pipe: outputs written
    Pipe-->>Stage: StageResult
    Stage-->>GH: outputs (artefact_map, checksum_map, paths)
```

```mermaid
sequenceDiagram
    autonumber
    actor GH as GitHub Actions (Windows)
    participant NWP as normalise_windows_paths.py
    GH->>NWP: env {BINARY_PATH, LICENSE_PATH, GITHUB_OUTPUT}
    NWP->>NWP: validate env present
    NWP->>NWP: convert to PureWindowsPath
    NWP->>GITHUB_OUTPUT: append binary_path=..., license_path=...
    NWP-->>GH: exit 0
```
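The Windows normalisation flow in the second diagram can be sketched as below. This is a condensed, hypothetical version of `normalise_windows_paths.py`: the environment variable names and output keys come from the diagram, but the function name, error handling, and use of `as_posix()` for forward-slash output are assumptions.

```python
from pathlib import PureWindowsPath


def normalise(env: dict[str, str]) -> None:
    """Sketch: write forward-slash forms of Windows paths to GITHUB_OUTPUT."""
    pairs = (("BINARY_PATH", "binary_path"), ("LICENSE_PATH", "license_path"))
    for var, output_key in pairs:
        value = env.get(var)
        if not value:
            # Mirrors the "validate env present" step in the diagram.
            raise SystemExit(f"{var} is not set")
        # PureWindowsPath accepts both '\\' and '/' separators, so
        # as_posix() yields a forward-slash path in either case.
        normalised = PureWindowsPath(value).as_posix()
        with open(env["GITHUB_OUTPUT"], "a", encoding="utf-8") as handle:
            handle.write(f"{output_key}={normalised}\n")
```

Using pure paths here keeps the conversion testable on any host platform, since `PureWindowsPath` never touches the filesystem.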
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
5f49f14 to 32019ec
@coderabbitai Have the following now been resolved?
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

tests_python/test_stage_config.py — comment on lines +262 to +284:

```python
def test_load_config_requires_target_section(
    stage_common: object, workspace: Path
) -> None:
    """Missing target sections should raise ``StageError``."""
    config_file = workspace / "release-staging.toml"
    config_file.write_text(
        """\
[common]
bin_name = "netsuke"
checksum_algorithm = "sha256"
artefacts = [ { source = "LICENSE" } ]

[targets.other]
platform = "linux"
arch = "amd64"
target = "x86_64-unknown-linux-gnu"
""",
        encoding="utf-8",
    )
    with pytest.raises(stage_common.StageError) as exc:
        stage_common.load_config(config_file, "test")
    assert "Missing configuration key" in str(exc.value)
```

❌ New issue: Code Duplication
Actionable comments posted: 4
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (4)
- .github/actions/stage/scripts/stage_common/staging.py (1 hunks)
- tests_python/test_stage_artefacts.py (1 hunks)
- tests_python/test_stage_config.py (1 hunks)
- tests_python/test_stage_glob.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

⚙️ CodeRabbit configuration file

**/*.py:
- Keep cyclomatic complexity ≤ 12
- Follow single responsibility and CQRS (command/query segregation)
- Docstrings must follow the numpy style guide. Use a single-line summary for private functions and methods, and full structured docs for all public interfaces.
- Move conditionals with >2 branches to predicate/helper functions
- Avoid `eval`, `exec`, `pickle`, monkey-patching, `ctypes`, unsafe shell
- Every module must begin with a triple-quoted docstring explaining its purpose, utility, and usage, including example calls if appropriate.
- Comments and docs must follow en-GB-oxendict (-ize / -yse / -our) spelling and grammar
- Lint suppressions:
  - Blanket `# noqa`, file-level skips, and categories are forbidden
  - Only narrow in-line disables (`# noqa: XYZ`) are permitted, and must be accompanied by `FIXME:` or a ticket link, and used only as a last resort.
- Use pytest fixtures for shared setup (`conftest.py` or `fixtures/`)
- Replace duplicate tests with `@pytest.mark.parametrize`
- Prefer `pytest-mock` or `unittest.mock` for stubs/mocks
- Use `assert …, "message"` over bare asserts
- Reflect all API/behaviour changes in `docs/` and update the roadmap on completion
- Files must not exceed 400 logical lines:
  - Decompose large modules into subpackages
  - Split large `match`/`case` or dispatch tables by domain and collocate with targets if appropriate
  - Move bulky data (fixtures, templates) to external files for parsing at runtime
- Mutable defaults and shadowed built-ins are forbidden
- All code must have clear type hints using modern style (`A | B`, `list[str]`, `class Foo[A]:`, `type Bar = int`, etc.), with ABC imports drawn from the correct stdlib module.
- All path manipulation must be performed using pathlib for cross-platform safety. Do not use string manipulation for extraction of path components or filename elements.

Files:
- tests_python/test_stage_config.py
- tests_python/test_stage_glob.py
- tests_python/test_stage_artefacts.py
🧬 Code graph analysis (4)

.github/actions/stage/scripts/stage_common/staging.py (2)
- .github/actions/stage/scripts/stage_common/errors.py (1): StageError (4-5)
- .github/actions/stage/scripts/stage_common/config.py (4): ArtefactConfig (38-45), StagingConfig (49-89), staging_dir (64-66), as_template_context (73-89)

tests_python/test_stage_config.py (4)
- tests_python/conftest.py (2): stage_common (16-24), workspace (35-40)
- .github/actions/stage/scripts/stage_common/errors.py (1): StageError (4-5)
- .github/actions/stage/scripts/stage_common/environment.py (1): require_env_path (13-30)
- .github/actions/stage/scripts/stage_common/config.py (4): StagingConfig (49-89), as_template_context (73-89), load_config (92-147), staging_dir (64-66)

tests_python/test_stage_glob.py (4)
- tests_python/stage_test_helpers.py (1): write_workspace_inputs (22-34)
- tests_python/conftest.py (3): stage_common (16-24), workspace (35-40), staging_module (28-31)
- .github/actions/stage/scripts/stage_common/config.py (2): StagingConfig (49-89), ArtefactConfig (38-45)
- .github/actions/stage/scripts/stage_common/staging.py (7): stage_artefacts (198-246), _match_candidate_path (377-384), _glob_root_and_pattern (500-519), _match_glob_candidate (387-411), _contains_glob (414-417), _iter_absolute_matches (420-425), _newest_file (428-432)

tests_python/test_stage_artefacts.py (5)
- tests_python/stage_test_helpers.py (2): decode_output_file (10-19), write_workspace_inputs (22-34)
- tests_python/conftest.py (3): staging_module (28-31), stage_common (16-24), workspace (35-40)
- .github/actions/stage/scripts/stage_common/config.py (4): staging_dir (64-66), StagingConfig (49-89), ArtefactConfig (38-45), as_template_context (73-89)
- .github/actions/stage/scripts/stage_common/staging.py (11): _initialize_staging_dir (92-112), _prepare_output_data (115-165), _validate_no_reserved_key_collisions (168-195), write_github_output (466-497), stage_artefacts (198-246), _iter_staged_artefacts (279-322), StagedArtefact (84-89), _StagingEnvironment (76-80), _stage_single_artefact (325-351), _RenderAttempt (60-62), _ensure_source_available (249-276)
- .github/actions/stage/scripts/stage_common/errors.py (1): StageError (4-5)
🪛 Ruff (0.13.3)
.github/actions/stage/scripts/stage_common/staging.py
88-88: Remove quotes from type annotation
Remove quotes
(UP037)
420-420: Remove quotes from type annotation
Remove quotes
(UP037)
tests_python/test_stage_glob.py
151-151: Probable insecure usage of temporary file or directory: "/tmp/dist/*.zip"
(S108)
tests_python/test_stage_artefacts.py
68-68: Probable insecure usage of temporary file or directory: "/tmp/stage"
(S108)
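The two S108 findings flag hard-coded `/tmp` literals. Where a test genuinely touches the filesystem, pytest's `tmp_path` fixture supplies a unique per-test directory and sidesteps the warning; a minimal sketch (the test name is illustrative, not from this PR):

```python
from pathlib import Path


def test_staging_dir_is_isolated(tmp_path: Path) -> None:
    """Each test gets its own temporary directory, so no /tmp literal is needed."""
    staging_dir = tmp_path / "stage"
    staging_dir.mkdir()
    assert staging_dir.is_dir(), "Expected the staging directory to be created"
```

In these findings, however, the `/tmp/...` strings are inert test data (paths that are never created), so a narrow suppression or fixture-based path would both satisfy the linter.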
⏰ Context from checks skipped due to timeout of 120000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: release / build-windows (x86_64-pc-windows-msvc, amd64, x64, windows-x86_64)
- GitHub Check: build-test (ubuntu-latest, stable)
🔇 Additional comments (2)
tests_python/test_stage_glob.py (1)
159-159: LGTM!

The `match` parameter has been correctly added to narrow the exception check, as requested in the previous review.

tests_python/test_stage_config.py (1)

13-280: LGTM! Effective use of parametrisation to eliminate duplication.

The `ValidationTestCase` dataclass combined with `@pytest.mark.parametrize` consolidates five validation scenarios into a single test function, as requested in the PR objectives. This approach improves maintainability and reduces duplication whilst preserving clear test case identification.
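The consolidation pattern praised above can be sketched as follows. The field names and cases here are assumptions, not the actual `ValidationTestCase` definition; under pytest, the `run_all` loop becomes `@pytest.mark.parametrize("case", CASES, ids=lambda c: c.case_id)`.

```python
import dataclasses


@dataclasses.dataclass(frozen=True)
class ValidationTestCase:
    """One invalid-configuration scenario with its expected error text."""

    case_id: str        # human-readable test ID
    toml_snippet: str   # configuration fragment under test
    expected_error: str # substring expected in the raised StageError


CASES = [
    ValidationTestCase("missing-target", "[common]\n", "Missing configuration key"),
    ValidationTestCase("missing-source", "[targets.x]\n", "Missing configuration key"),
]


def run_all(validate) -> list[tuple[str, bool]]:
    """Apply ``validate`` to each case, collecting (id, passed) pairs."""
    return [(case.case_id, validate(case)) for case in CASES]
```

Keeping the scenarios as data makes each failure report its `case_id`, which is what "clear test case identification" refers to.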
.github/actions/stage/scripts/stage_common/staging.py

```python
"""Artefact staging logic shared across the CLI and composite action.

This module provides the core staging pipeline that copies artefacts from a
workspace into a staging directory, computes checksums, and exports outputs
for GitHub Actions workflows.

Usage
-----
From the CLI entry point::

    import os
    from pathlib import Path

    from stage_common import load_config, stage_artefacts

    config = load_config(Path(".github/release-staging.toml"), "linux-x86_64")
    result = stage_artefacts(config, Path(os.environ["GITHUB_OUTPUT"]))
    print(f"Staged {len(result.staged_artefacts)} artefacts.")

From the composite action::

    - name: Stage artefacts
      uses: ./.github/actions/stage
      with:
        config-file: .github/release-staging.toml
        target: linux-x86_64
"""

from __future__ import annotations

import glob
import dataclasses
import hashlib
import json
import shutil
import sys
import typing as typ
from pathlib import Path, PurePosixPath, PureWindowsPath

from .errors import StageError

if typ.TYPE_CHECKING:
    from pathlib import PurePath

    from .config import ArtefactConfig, StagingConfig

__all__ = ["StageResult", "stage_artefacts"]


RESERVED_OUTPUT_KEYS: set[str] = {
    "artifact_dir",
    "dist_dir",
    "staged_files",
    "artefact_map",
    "checksum_map",
}


@dataclasses.dataclass(slots=True)
class _RenderAttempt:
    template: str
    rendered: str


@dataclasses.dataclass(slots=True)
class StageResult:
    """Outcome of :func:`stage_artefacts`."""

    staging_dir: Path
    staged_artefacts: list[Path]
    outputs: dict[str, Path]
    checksums: dict[str, str]


@dataclasses.dataclass(slots=True)
class _StagingEnvironment:
    """Encapsulates the staging directory and template context."""

    staging_dir: Path
    context: dict[str, typ.Any]


@dataclasses.dataclass(slots=True, frozen=True)
class StagedArtefact:
    """Describe a staged artefact yielded by :func:`_iter_staged_artefacts`."""

    path: Path
    artefact: "ArtefactConfig"
    checksum: str


def _initialize_staging_dir(staging_dir: Path) -> None:
    """Create a clean staging directory ready to receive artefacts.

    Parameters
    ----------
    staging_dir : Path
        Absolute path to the staging directory that will hold staged
        artefacts.

    Examples
    --------
    >>> staging_dir = Path("/tmp/stage")
    >>> (staging_dir / "old").mkdir(parents=True, exist_ok=True)
    >>> _initialize_staging_dir(staging_dir)
    >>> staging_dir.exists()
    True
    """
    if staging_dir.exists():
        shutil.rmtree(staging_dir)
    staging_dir.mkdir(parents=True, exist_ok=True)


def _prepare_output_data(
    staging_dir: Path,
    staged_paths: list[Path],
    outputs: dict[str, Path],
    checksums: dict[str, str],
) -> dict[str, str | list[str]]:
    """Assemble workflow outputs describing the staged artefacts.

    Parameters
    ----------
    staging_dir : Path
        Directory that now contains all staged artefacts.
    staged_paths : list[Path]
        Collection of artefact paths copied into ``staging_dir``.
    outputs : dict[str, Path]
        Mapping of configured GitHub Action output keys to staged artefact
        destinations.
    checksums : dict[str, str]
        Mapping of staged artefact file names to their checksum digests.

    Returns
    -------
    dict[str, str | list[str]]
        Dictionary describing the staging results ready to be exported to the
        GitHub Actions output file.

    Examples
    --------
    >>> staging_dir = Path("/tmp/stage")
    >>> staged = [staging_dir / "bin.tar.gz"]
    >>> outputs = {"archive": staged[0]}
    >>> checksums = {"bin.tar.gz": "abc123"}
    >>> result = _prepare_output_data(staging_dir, staged, outputs, checksums)
    >>> sorted(result)
    ['archive', 'artefact_map', 'artifact_dir', 'checksum_map', 'dist_dir',
    'staged_files']
    """
    staged_file_names = [path.name for path in sorted(staged_paths)]
    artefact_map_json = json.dumps(
        {key: path.as_posix() for key, path in sorted(outputs.items())}
    )
    checksum_map_json = json.dumps(dict(sorted(checksums.items())))

    return {
        "artifact_dir": staging_dir.as_posix(),
        "dist_dir": staging_dir.parent.as_posix(),
        "staged_files": "\n".join(staged_file_names),
        "artefact_map": artefact_map_json,
        "checksum_map": checksum_map_json,
    }


def _validate_no_reserved_key_collisions(outputs: dict[str, Path]) -> None:
    """Ensure user-defined outputs avoid the reserved workflow output keys.

    Parameters
    ----------
    outputs : dict[str, Path]
        Mapping of configured GitHub Action output keys to staged artefact
        destinations.

    Raises
    ------
    StageError
        Raised when a user-defined output key overlaps with reserved keys.

    Examples
    --------
    >>> reserved = {"artifact_dir": Path("/tmp/stage/artifact")}
    >>> _validate_no_reserved_key_collisions(reserved)
    Traceback (most recent call last):
    StageError: Artefact outputs collide with reserved keys: artifact_dir
    """
    if collisions := sorted(outputs.keys() & RESERVED_OUTPUT_KEYS):
        message = (
            "Artefact outputs collide with reserved keys: "
            f"{collisions}"
        )
        raise StageError(message)


def stage_artefacts(config: StagingConfig, github_output_file: Path) -> StageResult:
    """Copy artefacts into ``config``'s staging directory.

    Parameters
    ----------
    config : StagingConfig
        Fully resolved configuration describing the artefacts to stage.
    github_output_file : Path
        Path to the ``GITHUB_OUTPUT`` file used to export workflow outputs.

    Returns
    -------
    StageResult
        Summary object describing the staging directory, staged artefacts,
        exported outputs, and checksum digests.

    Raises
    ------
    StageError
        Raised when required artefacts are missing or configuration templates
        render invalid destinations.
    """
    staging_dir = config.staging_dir()
    context = config.as_template_context()

    _initialize_staging_dir(staging_dir)

    staged_paths: list[Path] = []
    outputs: dict[str, Path] = {}
    checksums: dict[str, str] = {}

    for staged in _iter_staged_artefacts(config, staging_dir, context):
        staged_paths.append(staged.path)
        checksums[staged.path.name] = staged.checksum

        if staged.artefact.output:
            outputs[staged.artefact.output] = staged.path

    if not staged_paths:
        message = "No artefacts were staged."
        raise StageError(message)

    _validate_no_reserved_key_collisions(outputs)
    exported_outputs = _prepare_output_data(
        staging_dir, staged_paths, outputs, checksums
    )
    write_github_output(github_output_file, exported_outputs)

    return StageResult(staging_dir, staged_paths, outputs, checksums)


def _ensure_source_available(
    source_path: Path | None,
    artefact: ArtefactConfig,
    attempts: list[_RenderAttempt],
    workspace: Path,
) -> bool:
    """Return ``True`` when ``source_path`` exists, otherwise handle the miss."""
    if source_path is not None:
        return True

    if artefact.required:
        attempt_lines = ", ".join(
            f"{attempt.template!r} -> {attempt.rendered!r}" for attempt in attempts
        )
        message = (
            "Required artefact not found. "
            f"Workspace={workspace.as_posix()} "
            f"Attempts=[{attempt_lines}]"
        )
        raise StageError(message)

    warning = (
        "::warning title=Artefact Skipped::Optional artefact missing: "
        f"{artefact.source}"
    )
    print(warning, file=sys.stderr)
    return False


def _iter_staged_artefacts(
    config: StagingConfig, staging_dir: Path, context: dict[str, typ.Any]
) -> typ.Iterator[StagedArtefact]:
    """Yield :class:`StagedArtefact` entries describing staged artefacts.

    Examples
    --------
    >>> from pathlib import Path
    >>> from types import SimpleNamespace
    >>> workspace = Path('.')
    >>> artefact = SimpleNamespace(
    ...     source='missing', alternatives=[], required=False, destination=None, output=None
    ... )
    >>> staged = list(
    ...     _iter_staged_artefacts(
    ...         SimpleNamespace(
    ...             workspace=workspace,
    ...             artefacts=[artefact],
    ...             checksum_algorithm='sha256',
    ...         ),
    ...         workspace / 'stage',
    ...         {},
    ...     )
    ... )
    >>> staged
    []
    """
    env = _StagingEnvironment(staging_dir=staging_dir, context=context)

    for artefact in config.artefacts:
        source_path, attempts = _resolve_artefact_source(
            config.workspace, artefact, context
        )
        if not _ensure_source_available(
            source_path, artefact, attempts, config.workspace
        ):
            continue

        destination_path = _stage_single_artefact(
            config, env, artefact, typ.cast(Path, source_path)
        )
        digest = _write_checksum(destination_path, config.checksum_algorithm)
        yield StagedArtefact(destination_path, artefact, digest)


def _stage_single_artefact(
    config: StagingConfig,
    env: _StagingEnvironment,
    artefact: ArtefactConfig,
    source_path: Path,
) -> Path:
    """Copy ``source_path`` into ``env.staging_dir`` and return the staged path."""
    artefact_context = env.context | {
        "source_path": source_path.as_posix(),
        "source_name": source_path.name,
    }
    destination_text = (
        _render_template(destination, artefact_context)
        if (destination := artefact.destination)
        else source_path.name
    )

    destination_path = _safe_destination_path(env.staging_dir, destination_text)
    if destination_path.exists():
        destination_path.unlink()
    shutil.copy2(source_path, destination_path)
    print(
        f"Staged '{source_path.relative_to(config.workspace)}' ->"
        f" '{destination_path.relative_to(config.workspace)}'",
    )
    return destination_path


def _render_template(template: str, context: dict[str, typ.Any]) -> str:
    """Format ``template`` with ``context`` (e.g., ``_render_template('{name}', {'name': 'bob'})`` -> ``'bob'``)."""
    try:
        return template.format(**context)
    except KeyError as exc:
        message = f"Invalid template key {exc} in '{template}'"
        raise StageError(message) from exc


def _resolve_artefact_source(
    workspace: Path, artefact: ArtefactConfig, context: dict[str, typ.Any]
) -> tuple[Path | None, list[_RenderAttempt]]:
    """Return the first artefact match and attempted renders (e.g., ``_resolve_artefact_source(Path('.'), ArtefactConfig(source='a'), {})`` -> ``(Path('a'), attempts)``)."""
    attempts: list[_RenderAttempt] = []
    patterns = [artefact.source, *artefact.alternatives]
    for pattern in patterns:
        rendered = _render_template(pattern, context)
        attempts.append(_RenderAttempt(pattern, rendered))
        if (candidate := _match_candidate_path(workspace, rendered)) is not None:
            return candidate, attempts
    return None, attempts


def _match_candidate_path(workspace: Path, rendered: str) -> Path | None:
    """Return the newest path matching ``rendered`` (e.g., ``_match_candidate_path(Path('.'), 'dist/*.zip')`` -> ``Path('dist/app')``)."""
    candidate = Path(rendered)
    if _contains_glob(rendered):
        return _match_glob_candidate(workspace, candidate, rendered)

    base = candidate if candidate.is_absolute() else workspace / candidate
    return base if base.is_file() else None


def _match_glob_candidate(
    workspace: Path, candidate: Path, rendered: str
) -> Path | None:
    """Resolve the newest match for a glob ``rendered``.

    Examples
    --------
    >>> workspace = Path('.')
    >>> candidate = workspace / 'dist'
    >>> candidate.mkdir(exist_ok=True)
    >>> file = candidate / 'netsuke.txt'
    >>> _ = file.write_text('payload', encoding='utf-8')
    >>> _match_glob_candidate(workspace, Path('dist/*.txt'), 'dist/*.txt') == file
    True
    """
    if candidate.is_absolute():
        matches = _iter_absolute_matches(candidate)
    else:
        windows_candidate = PureWindowsPath(rendered)
        if windows_candidate.is_absolute():
            matches = _iter_absolute_matches(windows_candidate)
        else:
            matches = workspace.glob(rendered)
    return _newest_file(matches)


def _contains_glob(pattern: str) -> bool:
    """Return ``True`` when ``pattern`` contains glob wildcards."""
    return glob.has_magic(pattern)


def _iter_absolute_matches(candidate: "PurePath") -> typ.Iterable[Path]:
    """Yield files for an absolute glob ``candidate`` on any platform."""
    root_text, pattern = _glob_root_and_pattern(candidate)
    root = Path(root_text)
    return root.glob(pattern)


def _newest_file(candidates: typ.Iterable[Path]) -> Path | None:
    """Return the newest file from ``candidates`` (if any)."""
    files = [path for path in candidates if path.is_file()]
    return max(files, key=_mtime_key) if files else None


def _mtime_key(path: Path) -> tuple[int, str]:
    """Provide a sortable key using mtime with a stable tie-breaker (e.g., ``_mtime_key(Path('file'))`` -> ``(123, 'file')``)."""
    try:
        return (int(path.stat().st_mtime_ns), path.as_posix())
    except OSError:
        return (0, path.as_posix())


def _safe_destination_path(staging_dir: Path, destination: str) -> Path:
    """Resolve ``destination`` under ``staging_dir`` (e.g., ``_safe_destination_path(Path('/tmp'), 'bin')`` -> ``Path('/tmp/bin')``)."""
    target = (staging_dir / destination).resolve()
    staging_root = staging_dir.resolve()
    if not target.is_relative_to(staging_root):
        message = f"Destination escapes staging directory: {destination}"
        raise StageError(message)
    target.parent.mkdir(parents=True, exist_ok=True)
    return target


def _write_checksum(path: Path, algorithm: str) -> str:
    """Write the checksum sidecar for ``path`` (e.g., ``_write_checksum(Path('bin'), 'sha256')`` -> ``'abc123'``)."""
    hasher = hashlib.new(algorithm)
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            hasher.update(chunk)
    digest = hasher.hexdigest()
    checksum_path = path.with_name(f"{path.name}.{algorithm}")
    checksum_path.write_text(f"{digest} {path.name}\n", encoding="utf-8")
    return digest


def write_github_output(file: Path, values: dict[str, str | list[str]]) -> None:
    """Append ``values`` to the GitHub Actions output ``file``.

    Parameters
    ----------
    file : Path
        Target ``GITHUB_OUTPUT`` file that receives the exported values.
    values : dict[str, str | list[str]]
        Mapping of output names to values ready for GitHub Actions
        consumption.

    Examples
    --------
    >>> github_output = Path("/tmp/github_output")
    >>> write_github_output(github_output, {"name": "value"})
    >>> "name=value" in github_output.read_text()
    True
    """
    file.parent.mkdir(parents=True, exist_ok=True)
    with file.open("a", encoding="utf-8") as handle:
        for key, value in values.items():
            if isinstance(value, list):
                delimiter = f"gh_{key.upper()}"
                handle.write(f"{key}<<{delimiter}\n")
                handle.write("\n".join(value))
                handle.write(f"\n{delimiter}\n")
            else:
                escaped = (
                    value.replace("%", "%25").replace("\r", "%0D").replace("\n", "%0A")
                )
                handle.write(f"{key}={escaped}\n")


def _glob_root_and_pattern(candidate: PurePath) -> tuple[str, str]:
    """Return the filesystem root and relative glob pattern for ``candidate``.

    Windows globbing treats drive letters as part of the anchor. The relative
    pattern therefore needs to drop the ``C:\\`` prefix before invoking
    :meth:`Path.glob`. ``pathlib`` exposes the anchor and path parts so we can
    slice off the leading entry regardless of the host platform.
    """
    anchor = candidate.anchor
    if not anchor:
        message = f"Expected absolute path, received '{candidate}'"
        raise ValueError(message)

    root_text = (candidate.drive + candidate.root) or anchor or "/"
    relative_parts = candidate.parts[1:]
    pattern = (
        PurePosixPath(*relative_parts).as_posix() if relative_parts else "*"
    )
```
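The anchor/drive split that `_glob_root_and_pattern` performs can be checked directly against pathlib's pure path classes, which work identically on any host platform:

```python
from pathlib import PurePosixPath, PureWindowsPath

# On Windows, the anchor carries the drive, so the glob pattern must be
# re-expressed relative to it before calling Path.glob.
candidate = PureWindowsPath("C:/dist/*.zip")
root = candidate.drive + candidate.root                    # "C:\\"
pattern = PurePosixPath(*candidate.parts[1:]).as_posix()   # "dist/*.zip"

# On POSIX, the anchor is simply "/" and is also the first parts entry,
# so slicing parts[1:] drops it in exactly the same way.
posix = PurePosixPath("/dist/*.zip")
posix_pattern = PurePosixPath(*posix.parts[1:]).as_posix()  # "dist/*.zip"
```

Because `parts[0]` is always the anchor for an absolute pure path, slicing it off yields a pattern valid for `Path(root).glob(...)` on either platform.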
Decompose this module to comply with the 400-logical-line limit.

The file contains 518 lines, exceeding the mandated 400-logical-line threshold. Decompose the module into focused submodules within a staging package. Candidate splits:

- Extract glob/path resolution helpers (`_match_candidate_path`, `_match_glob_candidate`, `_glob_root_and_pattern`, `_newest_file`, `_mtime_key`, `_contains_glob`, `_iter_absolute_matches`) into `staging/resolution.py`.
- Move output formatting (`_prepare_output_data`, `write_github_output`, `_validate_no_reserved_key_collisions`) into `staging/output.py`.
- Retain core staging flow (`stage_artefacts`, `_iter_staged_artefacts`, `_stage_single_artefact`, `_initialize_staging_dir`) in `staging/pipeline.py`.

As per coding guidelines.
🧰 Tools
🪛 Ruff (0.13.3)
88-88: Remove quotes from type annotation
Remove quotes
(UP037)
420-420: Remove quotes from type annotation
Remove quotes
(UP037)
🤖 Prompt for AI Agents
.github/actions/stage/scripts/stage_common/staging.py lines 1-518: the module
exceeds the 400-logical-line limit and should be split into focused submodules;
extract glob/path resolution helpers (_match_candidate_path,
_match_glob_candidate, _glob_root_and_pattern, _newest_file, _mtime_key,
_contains_glob, _iter_absolute_matches) into staging/resolution.py, move
output-related functions (_prepare_output_data, write_github_output,
_validate_no_reserved_key_collisions and RESERVED_OUTPUT_KEYS) into
staging/output.py, and keep core staging flow (stage_artefacts,
_iter_staged_artefacts, _stage_single_artefact, _initialize_staging_dir and
related dataclasses) in staging/pipeline.py; update imports and type hints
across modules to reference the new package layout, ensure tests and any callers
import from staging.pipeline or staging package, and run linting to confirm the
original behavior remains unchanged.
```python
    """Describe a staged artefact yielded by :func:`_iter_staged_artefacts`."""

    path: Path
    artefact: "ArtefactConfig"
```
🧹 Nitpick | 🔵 Trivial
Remove unnecessary quotes from the type annotation.

The `from __future__ import annotations` import makes quotes redundant in type hints.

Apply this diff:

```diff
-    artefact: "ArtefactConfig"
+    artefact: ArtefactConfig
```

As per static analysis hint.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-    artefact: "ArtefactConfig"
+    artefact: ArtefactConfig
```
🧰 Tools
🪛 Ruff (0.13.3)
88-88: Remove quotes from type annotation
Remove quotes
(UP037)
🤖 Prompt for AI Agents
In .github/actions/stage/scripts/stage_common/staging.py around line 88, the
type annotation artefact: "ArtefactConfig" uses redundant quotes; with from
__future__ import annotations enabled, remove the quotes and change the
annotation to artefact: ArtefactConfig (no other behavior changes required).
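For context, the behaviour that makes the quotes redundant (PEP 563 postponed evaluation of annotations) can be sketched as follows. The class names echo the module under review, but this snippet is illustrative, not the actual code:

```python
from __future__ import annotations  # PEP 563: annotations are stored as strings

import dataclasses


# With the future import, a forward reference needs no quotes even though
# ArtefactConfig is defined further down the module.
@dataclasses.dataclass
class StagedArtefact:
    artefact: ArtefactConfig  # unquoted forward reference


@dataclasses.dataclass
class ArtefactConfig:
    source: str


item = StagedArtefact(ArtefactConfig("LICENSE"))
```

Since annotations are never evaluated at class-definition time under PEP 563, the unquoted form is safe, which is exactly what Ruff's UP037 rule enforces.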
tests_python/test_stage_artefacts.py
Outdated
"""Tests exercising artefact staging and output generation."""

from __future__ import annotations

import json
from pathlib import Path

import pytest

from stage_test_helpers import decode_output_file, write_workspace_inputs


def test_initialize_staging_dir_removes_existing_contents(
    staging_module: object, tmp_path: Path
) -> None:
    """The helper should clear any previous staging directory contents."""
    staging_dir = tmp_path / "stage"
    stale_file = staging_dir / "stale.txt"
    stale_file.parent.mkdir(parents=True, exist_ok=True)
    stale_file.write_text("old", encoding="utf-8")

    staging_module._initialize_staging_dir(staging_dir)

    assert staging_dir.exists(), "Expected staging directory to be recreated"
    assert list(staging_dir.iterdir()) == [], "Stale artefacts should be removed"


def test_prepare_output_data_returns_sorted_metadata(
    staging_module: object, tmp_path: Path
) -> None:
    """Output preparation should normalise ordering and serialise metadata."""
    staging_dir = tmp_path / "dist" / "stage"
    staged = [
        staging_dir / "b.bin",
        staging_dir / "a.txt",
    ]
    outputs = {
        "binary": staging_dir / "b.bin",
        "manual": staging_dir / "a.txt",
    }
    checksums = {"b.bin": "bbb", "a.txt": "aaa"}

    result = staging_module._prepare_output_data(
        staging_dir, staged, outputs, checksums
    )

    assert result["artifact_dir"].endswith("stage"), "Expected staging directory output"
    assert result["dist_dir"].endswith("dist"), "Expected dist directory output"
    assert result["staged_files"].splitlines() == [
        "a.txt",
        "b.bin",
    ], "Staged files should be sorted"
    artefact_map = json.loads(result["artefact_map"])
    assert list(artefact_map) == ["binary", "manual"], "Outputs should be sorted"
    checksum_map = json.loads(result["checksum_map"])
    assert list(checksum_map) == ["a.txt", "b.bin"], "Checksums should be sorted"


def test_validate_no_reserved_key_collisions_rejects_reserved_keys(
    staging_module: object
) -> None:
    """Reserved workflow keys should trigger a stage error."""
    with pytest.raises(staging_module.StageError) as exc:
        staging_module._validate_no_reserved_key_collisions(
            {"artifact_dir": Path("/tmp/stage")}
        )

    assert "collide with reserved keys" in str(exc.value)


def test_write_github_output_formats_values(
    staging_module: object, tmp_path: Path
) -> None:
    """The GitHub output helper should escape strings and stream lists."""
    output_file = tmp_path / "github" / "output.txt"
    output_file.parent.mkdir(parents=True, exist_ok=True)
    output_file.write_text("initial=value\n", encoding="utf-8")

    staging_module.write_github_output(
        output_file,
        {
            "name": "value with%percent\nand newline",
            "lines": ["one", "two"],
        },
    )

    content = output_file.read_text(encoding="utf-8")
    assert "initial=value" in content, "Existing output lines should remain"
    assert (
        "name=value with%25percent%0Aand newline" in content
    ), "String values should be escaped"
    assert (
        "lines<<gh_LINES" in content
    ), "List values should use the multi-line protocol"
    assert "one\ntwo" in content, "List payload should be preserved"


def test_stage_artefacts_exports_metadata(
    stage_common: object, workspace: Path
) -> None:
    """The staging pipeline should copy inputs, hash them, and export outputs."""
    target = "x86_64-unknown-linux-gnu"
    write_workspace_inputs(workspace, target)

    config = stage_common.StagingConfig(
        workspace=workspace,
        bin_name="netsuke",
        dist_dir="dist",
        checksum_algorithm="sha256",
        artefacts=[
            stage_common.ArtefactConfig(
                source="target/{target}/release/{bin_name}{bin_ext}",
                required=True,
                output="binary_path",
            ),
            stage_common.ArtefactConfig(
                source="target/generated-man/{target}/release/{bin_name}.1",
                required=True,
                output="man_path",
            ),
            stage_common.ArtefactConfig(
                source="LICENSE",
                required=True,
                output="license_path",
            ),
        ],
        platform="linux",
        arch="amd64",
        target=target,
    )

    github_output = workspace / "outputs.txt"
    result = stage_common.stage_artefacts(config, github_output)

    staging_dir = workspace / "dist" / "netsuke_linux_amd64"
    assert result.staging_dir == staging_dir, "StageResult must record the staging directory"
    assert staging_dir.exists(), "Expected staging directory to be created"

    staged_files = {path.name for path in result.staged_artefacts}
    assert staged_files == {"netsuke", "netsuke.1", "LICENSE"}, "Unexpected artefacts staged"
    assert set(result.outputs) == {"binary_path", "man_path", "license_path"}, "Outputs missing expected keys"
    expected_checksums = {
        "netsuke": staging_dir / "netsuke.sha256",
        "netsuke.1": staging_dir / "netsuke.1.sha256",
        "LICENSE": staging_dir / "LICENSE.sha256",
    }
    assert set(result.checksums) == set(expected_checksums), "Checksum outputs missing entries"
    for path in expected_checksums.values():
        assert path.exists(), f"Checksum file {path.name} was not written"

    outputs = decode_output_file(github_output)
    assert outputs["artifact_dir"] == staging_dir.as_posix(), "artifact_dir output should reference staging directory"
    assert outputs["binary_path"].endswith("netsuke"), "binary_path output should point to the staged executable"
    assert outputs["license_path"].endswith("LICENSE"), "license_path output should point to the staged licence"
    assert outputs["dist_dir"].endswith("dist"), "dist_dir output should reflect parent directory"
    staged_listing = outputs["staged_files"].splitlines()
    assert staged_listing == sorted(staged_listing), "Staged files output should be sorted"
    artefact_map = json.loads(outputs["artefact_map"])
    assert artefact_map["binary_path"].endswith("netsuke"), "artefact map should include the binary path"
    assert artefact_map["license_path"].endswith("LICENSE"), "artefact map should include the licence path"
    checksum_map = json.loads(outputs["checksum_map"])
    assert set(checksum_map) == {"netsuke", "netsuke.1", "LICENSE"}, "Checksum map missing entries"


def test_stage_artefacts_reinitialises_staging_dir(
    stage_common: object, workspace: Path
) -> None:
    """Running the pipeline should recreate the staging directory afresh."""
    target = "x86_64-unknown-linux-gnu"
    write_workspace_inputs(workspace, target)

    staging_dir = workspace / "dist" / "netsuke_linux_amd64"
    stale = staging_dir / "obsolete.txt"
    stale.parent.mkdir(parents=True, exist_ok=True)
    stale.write_text("stale", encoding="utf-8")

    config = stage_common.StagingConfig(
        workspace=workspace,
        bin_name="netsuke",
        dist_dir="dist",
        checksum_algorithm="sha256",
        artefacts=[
            stage_common.ArtefactConfig(
                source="target/{target}/release/{bin_name}{bin_ext}",
                required=True,
                output="binary_path",
            ),
        ],
        platform="linux",
        arch="amd64",
        target=target,
    )

    github_output = workspace / "outputs.txt"
    stage_common.stage_artefacts(config, github_output)

    assert not stale.exists(), "Previous staging artefacts should be removed"
    current_entries = {path.name for path in staging_dir.iterdir()}
    assert "obsolete.txt" not in current_entries, "Old entries must not survive reinitialisation"


def test_stage_artefacts_rejects_reserved_output_key(
    stage_common: object, workspace: Path
) -> None:
    """Configs using reserved workflow outputs should error out."""
    target = "x86_64-unknown-linux-gnu"
    write_workspace_inputs(workspace, target)

    config = stage_common.StagingConfig(
        workspace=workspace,
        bin_name="netsuke",
        dist_dir="dist",
        checksum_algorithm="sha256",
        artefacts=[
            stage_common.ArtefactConfig(
                source="LICENSE",
                required=True,
                output="artifact_dir",
            ),
        ],
        platform="linux",
        arch="amd64",
        target=target,
    )

    github_output = workspace / "outputs.txt"
    github_output.write_text("", encoding="utf-8")
    with pytest.raises(stage_common.StageError) as exc:
        stage_common.stage_artefacts(config, github_output)

    assert "collide with reserved keys" in str(exc.value)
    assert github_output.read_text(encoding="utf-8") == "", "Outputs should not be written when validation fails"


def test_stage_artefacts_appends_github_output(
    stage_common: object, workspace: Path
) -> None:
    """Writing outputs should append to the existing ``GITHUB_OUTPUT`` file."""
    target = "x86_64-unknown-linux-gnu"
    write_workspace_inputs(workspace, target)

    config = stage_common.StagingConfig(
        workspace=workspace,
        bin_name="netsuke",
        dist_dir="dist",
        checksum_algorithm="sha256",
        artefacts=[
            stage_common.ArtefactConfig(
                source="LICENSE",
                required=True,
                output="license_path",
            ),
        ],
        platform="linux",
        arch="amd64",
        target=target,
    )

    github_output = workspace / "outputs.txt"
    github_output.write_text("previous=value\n", encoding="utf-8")

    stage_common.stage_artefacts(config, github_output)

    content = github_output.read_text(encoding="utf-8")
    assert content.startswith("previous=value"), "Existing lines should remain at the top"
    assert "artifact_dir=" in content, "New outputs should be appended to the file"


def test_stage_artefacts_warns_for_optional(
    stage_common: object, workspace: Path, capfd: pytest.CaptureFixture[str]
) -> None:
    """Optional artefacts should emit a warning when absent but not abort."""
    target = "x86_64-unknown-linux-gnu"
    write_workspace_inputs(workspace, target)

    config = stage_common.StagingConfig(
        workspace=workspace,
        bin_name="netsuke",
        dist_dir="dist",
        checksum_algorithm="sha256",
        artefacts=[
            stage_common.ArtefactConfig(
                source="target/{target}/release/{bin_name}{bin_ext}",
                required=True,
            ),
            stage_common.ArtefactConfig(
                source="missing.txt",
                required=False,
                output="missing",
            ),
        ],
        platform="linux",
        arch="amd64",
        target=target,
    )

    stage_common.stage_artefacts(config, workspace / "out.txt")
    captured = capfd.readouterr()
    assert (
        "::warning title=Artefact Skipped::Optional artefact missing" in captured.err
    ), "Optional artefact warning missing"


def test_stage_artefacts_fails_with_attempt_context(
    stage_common: object, workspace: Path
) -> None:
    """Missing required artefacts should include context in the error message."""
    config = stage_common.StagingConfig(
        workspace=workspace,
        bin_name="netsuke",
        dist_dir="dist",
        checksum_algorithm="sha256",
        artefacts=[
            stage_common.ArtefactConfig(
                source="missing-{target}",
                required=True,
            ),
        ],
        platform="linux",
        arch="amd64",
        target="x86_64-unknown-linux-gnu",
    )

    with pytest.raises(stage_common.StageError) as exc:
        stage_common.stage_artefacts(config, workspace / "outputs.txt")

    message = str(exc.value)
    assert "Workspace=" in message, "Workspace context missing from error"
    assert "missing-{target}" in message, "Template pattern missing from error"
    assert (
        "missing-x86_64-unknown-linux-gnu" in message
    ), "Rendered path missing from error"


def test_iter_staged_artefacts_yields_metadata(
    stage_common: object, staging_module: object, workspace: Path
) -> None:
    """The iterator should yield dataclass entries with staged file metadata."""
    source = workspace / "LICENSE"
    source.write_text("payload", encoding="utf-8")
    artefact = stage_common.ArtefactConfig(source="LICENSE", required=True)
    config = stage_common.StagingConfig(
        workspace=workspace,
        bin_name="netsuke",
        dist_dir="dist",
        checksum_algorithm="sha256",
        artefacts=[artefact],
        platform="linux",
        arch="amd64",
        target="test",
    )

    staging_dir = config.staging_dir()
    context = config.as_template_context()
    staging_module._initialize_staging_dir(staging_dir)

    staged = list(
        staging_module._iter_staged_artefacts(config, staging_dir, context)
    )

    assert len(staged) == 1, "Expected the iterator to yield the staged artefact"
    entry = staged[0]
    assert isinstance(entry, staging_module.StagedArtefact)
    assert entry.path.exists(), "Staged artefact path should exist on disk"
    assert entry.checksum, "Iterator should include a checksum digest"
    checksum_file = entry.path.with_name(f"{entry.path.name}.sha256")
    assert checksum_file.exists(), "Checksum sidecar should be written"


def test_stage_artefacts_aligns_with_iterator(
    stage_common: object, staging_module: object, workspace: Path
) -> None:
    """Behaviourally verify the iterator matches the public staging result."""
    (workspace / "first.txt").write_text("first", encoding="utf-8")
    (workspace / "second.txt").write_text("second", encoding="utf-8")
    artefacts = [
        stage_common.ArtefactConfig(source="first.txt", required=True),
        stage_common.ArtefactConfig(source="second.txt", required=True),
    ]
    config = stage_common.StagingConfig(
        workspace=workspace,
        bin_name="netsuke",
        dist_dir="dist",
        checksum_algorithm="sha256",
        artefacts=artefacts,
        platform="linux",
        arch="amd64",
        target="behavioural",
    )

    staging_dir = config.staging_dir()
    context = config.as_template_context()
    staging_module._initialize_staging_dir(staging_dir)
    iter_names = [
        entry.path.name
        for entry in staging_module._iter_staged_artefacts(
            config, staging_dir, context
        )
    ]

    github_output = workspace / "outputs.txt"
    result = stage_common.stage_artefacts(config, github_output)

    assert iter_names == [path.name for path in result.staged_artefacts]


def test_stage_single_artefact_overwrites_existing_file(
    stage_common: object, staging_module: object, workspace: Path
) -> None:
    """The helper should replace existing staged files atomically."""
    source = workspace / "binary"
    source.write_text("new", encoding="utf-8")
    artefact = stage_common.ArtefactConfig(
        source="binary",
        destination="bin/{source_name}",
        required=True,
    )
    config = stage_common.StagingConfig(
        workspace=workspace,
        bin_name="netsuke",
        dist_dir="dist",
        checksum_algorithm="sha256",
        artefacts=[artefact],
        platform="linux",
        arch="amd64",
        target="unit",
    )

    staging_dir = config.staging_dir()
    stale = staging_dir / "bin" / "binary"
    stale.parent.mkdir(parents=True, exist_ok=True)
    stale.write_text("old", encoding="utf-8")

    context = config.as_template_context()
    env = staging_module._StagingEnvironment(
        staging_dir=staging_dir,
        context=context,
    )
    staged_path = staging_module._stage_single_artefact(
        config, env, artefact, source
    )

    assert staged_path == stale
    assert staged_path.read_text(encoding="utf-8") == "new"


def test_stage_artefacts_honours_destination_templates(
    stage_common: object, workspace: Path
) -> None:
    """Destination templates should be rendered beneath the staging directory."""
    source = workspace / "payload.bin"
    source.write_text("payload", encoding="utf-8")
    artefact = stage_common.ArtefactConfig(
        source="payload.bin",
        destination="artifacts/{bin_name}/{source_name}",
        required=True,
        output="payload_path",
    )
    config = stage_common.StagingConfig(
        workspace=workspace,
        bin_name="netsuke",
        dist_dir="dist",
        checksum_algorithm="sha256",
        artefacts=[artefact],
        platform="linux",
        arch="amd64",
        target="behavioural",
    )

    github_output = workspace / "github.txt"
    result = stage_common.stage_artefacts(config, github_output)

    staged_path = result.outputs["payload_path"]
    relative = staged_path.relative_to(result.staging_dir)
    assert relative.as_posix() == "artifacts/netsuke/payload.bin"


def test_ensure_source_available_required_error(
    stage_common: object, staging_module: object, workspace: Path
) -> None:
    """Missing required artefacts should raise a StageError with context."""
    artefact = stage_common.ArtefactConfig(source="missing.bin", required=True)
    attempts = [
        staging_module._RenderAttempt("missing.bin", "missing.bin"),
    ]

    with pytest.raises(stage_common.StageError) as exc:
        staging_module._ensure_source_available(
            None, artefact, attempts, workspace
        )

    message = str(exc.value)
    assert "Required artefact not found" in message
    assert "missing.bin" in message


def test_ensure_source_available_optional_warning(
    stage_common: object,
    staging_module: object,
    workspace: Path,
    capfd: pytest.CaptureFixture[str],
) -> None:
    """Optional artefacts should be skipped with a warning instead of failing."""
    artefact = stage_common.ArtefactConfig(source="missing.txt", required=False)

    should_stage = staging_module._ensure_source_available(
        None,
        artefact,
        [staging_module._RenderAttempt("missing.txt", "missing.txt")],
        workspace,
    )

    captured = capfd.readouterr()
    assert not should_stage
    assert "Optional artefact missing" in captured.err
Decompose this test module to comply with the 400-logical-line limit.
The file contains 528 lines, exceeding the mandated 400-logical-line threshold. Split tests into focused modules:

- Move initialization and output preparation tests (test_initialize_staging_dir_*, test_prepare_output_data_*, test_validate_*, test_write_github_output_*) into test_stage_output.py.
- Move integration tests (test_stage_artefacts_exports_metadata, test_stage_artefacts_reinitialises_*, test_stage_artefacts_rejects_*, test_stage_artefacts_appends_*, test_stage_artefacts_warns_*, test_stage_artefacts_fails_*) into test_stage_integration.py.
- Move iterator and single-artefact tests (test_iter_staged_artefacts_*, test_stage_single_artefact_*, test_ensure_source_available_*) into test_stage_helpers.py.

As per coding guidelines.
🧰 Tools
🪛 Ruff (0.13.3)
68-68: Probable insecure usage of temporary file or directory: "/tmp/stage"
(S108)
🤖 Prompt for AI Agents
tests_python/test_stage_artefacts.py lines 1-528: the file exceeds the 400
logical line limit and must be decomposed; split the tests into three new
modules as follows — create tests_python/test_stage_output.py and move tests
whose names start with test_initialize_staging_dir_*,
test_prepare_output_data_*, test_validate_no_reserved_key_collisions_*, and
test_write_github_output_* into it; create
tests_python/test_stage_integration.py and move the integration-style tests
test_stage_artefacts_exports_metadata, test_stage_artefacts_reinitialises_*,
test_stage_artefacts_rejects_*, test_stage_artefacts_appends_*,
test_stage_artefacts_warns_*, and test_stage_artefacts_fails_* into it; create
tests_python/test_stage_helpers.py and move iterator/single-artefact tests
test_iter_staged_artefacts_*, test_stage_artefacts_aligns_with_iterator,
test_stage_single_artefact_*,
test_stage_artefacts_honours_destination_templates,
test_ensure_source_available_* into it; preserve original test code unchanged
other than moving, ensure module-level imports and fixtures (stage_common,
staging_module, workspace, tmp_path, capfd, write_workspace_inputs,
decode_output_file) are present or re-imported at top of each new file, and
delete the original large file or reduce it to a small index that re-exports
tests if desired.
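However the suite is partitioned, the behaviour those tests pin down includes the checksum sidecar convention: each staged file gains a <name>.sha256 sibling holding the hex digest. A minimal sketch of that convention under stated assumptions (write_checksum_sidecar is an illustrative name, and the "<digest>  <filename>" layout mirrors sha256sum output rather than anything confirmed by the diff):

```python
import hashlib
import tempfile
from pathlib import Path


def write_checksum_sidecar(path: Path, algorithm: str = "sha256") -> Path:
    """Write <name>.<algorithm> next to *path* and return the sidecar path."""
    digest = hashlib.new(algorithm, path.read_bytes()).hexdigest()
    sidecar = path.with_name(f"{path.name}.{algorithm}")
    # Conventional "<digest>  <filename>" layout, as produced by sha256sum,
    # so the sidecar can be verified with `sha256sum -c`.
    sidecar.write_text(f"{digest}  {path.name}\n", encoding="utf-8")
    return sidecar


staged = Path(tempfile.mkdtemp()) / "netsuke"
staged.write_bytes(b"payload")
sidecar = write_checksum_sidecar(staged)
assert sidecar.name == "netsuke.sha256"
assert hashlib.sha256(b"payload").hexdigest() in sidecar.read_text(encoding="utf-8")
```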
Actionable comments posted: 2
♻️ Duplicate comments (1)
tests_python/test_stage_artefacts.py (1)
1-559: Split this oversized test module. Enforce the 400-line ceiling from the coding guidelines; this module currently spans roughly 560 lines (see Line 559) and breaches the limit. Partition the staging initialisation, integration, and helper tests into dedicated modules (e.g., test_stage_output.py, test_stage_integration.py, test_stage_helpers.py) to regain compliance. As per coding guidelines.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (7)
- .github/actions/stage/scripts/stage_common/staging/__init__.py (1 hunks)
- .github/actions/stage/scripts/stage_common/staging/output.py (1 hunks)
- .github/actions/stage/scripts/stage_common/staging/pipeline.py (1 hunks)
- .github/actions/stage/scripts/stage_common/staging/resolution.py (1 hunks)
- tests_python/conftest.py (1 hunks)
- tests_python/test_stage_artefacts.py (1 hunks)
- tests_python/test_stage_glob.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
⚙️ CodeRabbit configuration file
- Keep cyclomatic complexity ≤ 12
- Follow single responsibility and CQRS (command/query segregation)
- Docstrings must follow the numpy style guide. Use a single-line summary for private functions and methods, and full structured docs for all public interfaces.
- Move conditionals with >2 branches to predicate/helper functions
- Avoid eval, exec, pickle, monkey-patching, ctypes, unsafe shell
- Every module must begin with a triple-quoted docstring explaining its purpose, utility, and usage, including example calls if appropriate.
- Comments and docs must follow en-GB-oxendict (-ize / -yse / -our) spelling and grammar
- Lint suppressions:
  - Blanket # noqa, file-level skips, and categories are forbidden
  - Only narrow in-line disables (# noqa: XYZ) are permitted, and must be accompanied by FIXME: or a ticket link, and used only as a last resort.
- Use pytest fixtures for shared setup (conftest.py or fixtures/)
- Replace duplicate tests with @pytest.mark.parametrize
- Prefer pytest-mock or unittest.mock for stubs/mocks
- Use assert …, "message" over bare asserts
- Reflect all API/behaviour changes in docs/ and update roadmap on completion
- Files must not exceed 400 logical lines:
  - Decompose large modules into subpackages
  - Split large match/case or dispatch tables by domain and collocate with targets if appropriate
  - Move bulky data (fixtures, templates) to external files for parsing at runtime
- Mutable defaults and shadowed built-ins are forbidden
- All code must have clear type hints using modern style (A | B, list[str], class Foo[A]:, type Bar = int, etc.), with ABC imports drawn from the correct stdlib module.
- All path manipulation must be performed using pathlib for cross-platform safety. Do not use string manipulation for extraction of path components or filename elements.

Files:
- tests_python/conftest.py
- tests_python/test_stage_glob.py
- tests_python/test_stage_artefacts.py
🧬 Code graph analysis (6)
.github/actions/stage/scripts/stage_common/staging/__init__.py (2)
- .github/actions/stage/scripts/stage_common/staging/output.py (1): write_github_output (110-143)
- .github/actions/stage/scripts/stage_common/staging/pipeline.py (3): StageResult (36-42), StagedArtefact (54-59), stage_artefacts (85-134)

.github/actions/stage/scripts/stage_common/staging/output.py (2)
- .github/actions/stage/scripts/stage_common/errors.py (1): StageError (4-5)
- .github/actions/stage/scripts/stage_common/config.py (1): staging_dir (64-66)

.github/actions/stage/scripts/stage_common/staging/resolution.py (1)
- tests_python/conftest.py (1): workspace (56-61)

.github/actions/stage/scripts/stage_common/staging/pipeline.py (4)
- .github/actions/stage/scripts/stage_common/errors.py (1): StageError (4-5)
- .github/actions/stage/scripts/stage_common/staging/output.py (3): _prepare_output_data (27-77), _validate_no_reserved_key_collisions (80-107), write_github_output (110-143)
- .github/actions/stage/scripts/stage_common/staging/resolution.py (1): _match_candidate_path (108-125)
- .github/actions/stage/scripts/stage_common/config.py (4): ArtefactConfig (38-45), StagingConfig (49-89), staging_dir (64-66), as_template_context (73-89)

tests_python/test_stage_glob.py (5)
- tests_python/stage_test_helpers.py (1): write_workspace_inputs (22-34)
- tests_python/conftest.py (3): stage_common (16-24), workspace (56-61), staging_resolution (49-52)
- .github/actions/stage/scripts/stage_common/config.py (2): StagingConfig (49-89), ArtefactConfig (38-45)
- .github/actions/stage/scripts/stage_common/staging/pipeline.py (1): stage_artefacts (85-134)
- .github/actions/stage/scripts/stage_common/staging/resolution.py (1): _match_candidate_path (108-125)

tests_python/test_stage_artefacts.py (6)
- tests_python/stage_test_helpers.py (2): decode_output_file (10-19), write_workspace_inputs (22-34)
- tests_python/conftest.py (5): stage_common (16-24), staging_package (28-31), staging_pipeline (35-38), staging_output (42-45), workspace (56-61)
- .github/actions/stage/scripts/stage_common/staging/pipeline.py (9): stage_artefacts (85-134), StageResult (36-42), StagedArtefact (54-59), _initialize_staging_dir (62-82), _iter_staged_artefacts (167-210), _StagingEnvironment (46-50), _stage_single_artefact (213-240), _RenderAttempt (30-32), _ensure_source_available (137-164)
- .github/actions/stage/scripts/stage_common/staging/output.py (3): write_github_output (110-143), _prepare_output_data (27-77), _validate_no_reserved_key_collisions (80-107)
- .github/actions/stage/scripts/stage_common/config.py (4): staging_dir (64-66), StagingConfig (49-89), ArtefactConfig (38-45), as_template_context (73-89)
- .github/actions/stage/scripts/stage_common/errors.py (1): StageError (4-5)
🪛 Ruff (0.13.3)
.github/actions/stage/scripts/stage_common/staging/pipeline.py
85-85: Remove quotes from type annotation
Remove quotes
(UP037)
139-139: Remove quotes from type annotation
Remove quotes
(UP037)
168-168: Remove quotes from type annotation
Remove quotes
(UP037)
214-214: Remove quotes from type annotation
Remove quotes
(UP037)
216-216: Remove quotes from type annotation
Remove quotes
(UP037)
263-263: Remove quotes from type annotation
Remove quotes
(UP037)
tests_python/conftest.py
28-28: Unused function argument: stage_common
(ARG001)
35-35: Unused function argument: stage_common
(ARG001)
42-42: Unused function argument: stage_common
(ARG001)
49-49: Unused function argument: stage_common
(ARG001)
tests_python/test_stage_artefacts.py
87-87: Probable insecure usage of temporary file or directory: "/tmp/stage"
(S108)
@pytest.fixture
def staging_package(stage_common: object) -> object:
    """Expose the staging package for API boundary assertions."""

    return importlib.import_module("stage_common.staging")


@pytest.fixture
def staging_pipeline(stage_common: object) -> object:
    """Expose the staging pipeline module for unit-level assertions."""

    return importlib.import_module("stage_common.staging.pipeline")


@pytest.fixture
def staging_output(stage_common: object) -> object:
    """Expose the staging output helpers for direct testing."""

    return importlib.import_module("stage_common.staging.output")


@pytest.fixture
def staging_resolution(stage_common: object) -> object:
    """Expose the path resolution helpers for direct testing."""

    return importlib.import_module("stage_common.staging.resolution")
🛠️ Refactor suggestion | 🟠 Major
Consume the stage_common fixture argument.
Use the injected module (or explicitly delete it) so Ruff stops flagging ARG001 in these fixtures.
@pytest.fixture
def staging_package(stage_common: object) -> object:
    """Expose the staging package for API boundary assertions."""
-
-    return importlib.import_module("stage_common.staging")
+    del stage_common
+    return importlib.import_module("stage_common.staging")

@pytest.fixture
def staging_pipeline(stage_common: object) -> object:
    """Expose the staging pipeline module for unit-level assertions."""
-
-    return importlib.import_module("stage_common.staging.pipeline")
+    del stage_common
+    return importlib.import_module("stage_common.staging.pipeline")

@pytest.fixture
def staging_output(stage_common: object) -> object:
    """Expose the staging output helpers for direct testing."""
-
-    return importlib.import_module("stage_common.staging.output")
+    del stage_common
+    return importlib.import_module("stage_common.staging.output")

@pytest.fixture
def staging_resolution(stage_common: object) -> object:
    """Expose the path resolution helpers for direct testing."""
-
-    return importlib.import_module("stage_common.staging.resolution")
+    del stage_common
+    return importlib.import_module("stage_common.staging.resolution")
🧰 Tools
🪛 Ruff (0.13.3)
28-28: Unused function argument: stage_common
(ARG001)
35-35: Unused function argument: stage_common
(ARG001)
42-42: Unused function argument: stage_common
(ARG001)
49-49: Unused function argument: stage_common
(ARG001)
🤖 Prompt for AI Agents
In tests_python/conftest.py around lines 27 to 53, the fixtures accept the
injected stage_common argument but never use it, triggering Ruff ARG001; either
remove the unused parameter from each fixture signature or explicitly consume it
(for example assign it to a throwaway variable like _stage_common = stage_common
or call del stage_common) so the argument is considered used while keeping the
fixtures' behavior unchanged.
Add a Mermaid E-R diagram that captures the stage action's configuration schema and explain how the optional keys align with the reusable helper.
- Refactor staging helper and add tests
- Tidy staging helper lint compliance
- Refine staging helper suite: centralise environment handling, enrich
  staging diagnostics, pin the uv setup action, and expand the test coverage
  for the CLI, loader, and glob logic.
- Handle Windows drive prefixes in staging glob
- Improve absolute glob handling tests
- Document staging helpers and use direct outputs
- Add staging helper tests and cover normaliser outputs
- Simplify staging loops: extract an iterator for staged artefacts and factor
  glob matching into a helper to reduce the cyclomatic complexity flagged by
  CodeScene.
- Refine staging helpers and expand coverage
- Group staging helper parameters
- Assert glob helper error message
- Drop redundant artefact annotation
- Improve staging logging and resolution coverage
- Restore staging helper re-exports
- Refactor staging path resolution: split the Windows and glob logic in the
  resolver into reusable helpers and parameterise the duplicated Windows path
  test cases.
- Trim staging init imports
- Remove redundant Windows test parameter
Expose a make typecheck target that runs ty with cyclopts on the staging scripts. This mirrors the branch quality gate and makes it easier to rerun the type checker locally after the rebase.
Force-pushed from 17c8a8a to 3dc5403
Actionable comments posted: 6
♻️ Duplicate comments (2)
.github/actions/stage/scripts/stage_common/staging/resolution.py (1)
12-119: Collapse private helper docstrings to single-line summaries. Coding guidelines mandate single-line docstrings for private functions. The private helpers (_newest_file, _windows_root, _resolve_glob_pattern, _resolve_direct_path) contain multi-line NumPy-style docstrings with embedded examples. Collapse each to a single concise summary line and move examples to unit tests or module documentation.
Apply this diff:
```diff
 def _newest_file(candidates: typ.Iterable[Path]) -> Path | None:
-    """Return the newest file from ``candidates``.
-
-    Example:
-        >>> tmp = Path("/tmp/example")
-        >>> tmp.mkdir(exist_ok=True)
-        >>> sample = tmp / "sample.txt"
-        >>> sample.write_text("payload", encoding="utf-8")
-        >>> _newest_file([sample]) == sample
-        True
-    """
+    """Return the newest file from ``candidates`` by modification time."""
 ...
 def _windows_root(rendered: str) -> tuple[Path, tuple[str, ...]]:
-    """Return the Windows drive root and path components for ``rendered``.
-
-    Example:
-        >>> _windows_root("C:/workspace/app.exe")[0]
-        PosixPath('C:/')
-    """
+    """Return the Windows drive root and path components for ``rendered``."""
 ...
 def _resolve_glob_pattern(
     workspace: Path, rendered: str, candidate: Path
 ) -> Path | None:
-    """Resolve a glob ``rendered`` against ``workspace``.
-
-    Example:
-        >>> workspace = Path("/tmp/work")
-        >>> (workspace / "dist").mkdir(parents=True, exist_ok=True)
-        >>> target = workspace / "dist" / "netsuke.txt"
-        >>> target.write_text("payload", encoding="utf-8")
-        >>> _resolve_glob_pattern(workspace, "dist/*.txt", Path("dist/*.txt"))
-        PosixPath('/tmp/work/dist/netsuke.txt')
-    """
+    """Resolve glob pattern ``rendered`` relative to ``workspace``."""
 ...
 def _resolve_direct_path(
     workspace: Path, rendered: str, candidate: Path
 ) -> Path | None:
-    """Resolve a direct ``rendered`` path relative to ``workspace``.
-
-    Example:
-        >>> workspace = Path("/tmp/work")
-        >>> workspace.mkdir(parents=True, exist_ok=True)
-        >>> target = workspace / "netsuke.txt"
-        >>> target.write_text("payload", encoding="utf-8")
-        >>> _resolve_direct_path(workspace, "netsuke.txt", Path("netsuke.txt"))
-        PosixPath('/tmp/work/netsuke.txt')
-    """
+    """Resolve direct path ``rendered`` relative to ``workspace``."""
```

Note: `_match_candidate_path` is exported in `__all__` and may retain its example if it's treated as public API, though the underscore prefix suggests otherwise.
tests_python/conftest.py (1)
27-52: Consume the injected stage_common dependency. The fixtures accept stage_common to ensure session-scope loading but never reference it, triggering Ruff ARG001. Either delete the argument explicitly (del stage_common) or assign it to a throwaway variable to satisfy linters.
Apply this diff:
```diff
 @pytest.fixture
 def staging_package(stage_common: object) -> object:
     """Expose the staging package for API boundary assertions."""
-
+    del stage_common
     return importlib.import_module("stage_common.staging")


 @pytest.fixture
 def staging_pipeline(stage_common: object) -> object:
     """Expose the staging pipeline module for unit-level assertions."""
-
+    del stage_common
     return importlib.import_module("stage_common.staging.pipeline")


 @pytest.fixture
 def staging_output(stage_common: object) -> object:
     """Expose the staging output helpers for direct testing."""
-
+    del stage_common
     return importlib.import_module("stage_common.staging.output")


 @pytest.fixture
 def staging_resolution(stage_common: object) -> object:
     """Expose the path resolution helpers for direct testing."""
-
+    del stage_common
     return importlib.import_module("stage_common.staging.resolution")
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (24)
- `.github/actions/stage/scripts/stage.py` (1 hunk)
- `.github/actions/stage/scripts/stage_common/__init__.py` (1 hunk)
- `.github/actions/stage/scripts/stage_common/config.py` (4 hunks)
- `.github/actions/stage/scripts/stage_common/environment.py` (1 hunk)
- `.github/actions/stage/scripts/stage_common/staging/__init__.py` (1 hunk)
- `.github/actions/stage/scripts/stage_common/staging/output.py` (1 hunk)
- `.github/actions/stage/scripts/stage_common/staging/pipeline.py` (1 hunk)
- `.github/actions/stage/scripts/stage_common/staging/resolution.py` (1 hunk)
- `.github/release-staging.toml` (1 hunk)
- `.github/workflows/release.yml` (4 hunks)
- `.github/workflows/scripts/normalise_windows_paths.py` (2 hunks)
- `Makefile` (2 hunks)
- `codex/refactor-python-staging-scripts-into-github-action` (0 hunks)
- `cyclopts/__init__.pyi` (1 hunk)
- `docs/netsuke-design.md` (2 hunks)
- `scripts/upload_release_assets.py` (7 hunks)
- `tests_python/conftest.py` (3 hunks)
- `tests_python/stage_test_helpers.py` (1 hunk)
- `tests_python/test_normalise_windows_paths.py` (1 hunk)
- `tests_python/test_stage_action.py` (1 hunk)
- `tests_python/test_stage_artefacts.py` (1 hunk)
- `tests_python/test_stage_cli.py` (1 hunk)
- `tests_python/test_stage_config.py` (1 hunk)
- `tests_python/test_stage_glob.py` (1 hunk)
💤 Files with no reviewable changes (1)
- codex/refactor-python-staging-scripts-into-github-action
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py
⚙️ CodeRabbit configuration file
`**/*.py`:

- Keep cyclomatic complexity ≤ 12
- Follow single responsibility and CQRS (command/query segregation)
- Docstrings must follow the `numpy` style guide. Use a single-line summary
  for private functions and methods, and full structured docs for all public
  interfaces.
- Move conditionals with >2 branches to predicate/helper functions
- Avoid `eval`, `exec`, `pickle`, monkey-patching, `ctypes`, unsafe shell
- Every module must begin with a triple-quoted docstring explaining its
  purpose, utility, and usage, including example calls if appropriate.
- Comments and docs must follow en-GB-oxendict (-ize / -yse / -our) spelling
  and grammar
- Lint suppressions:
  - Blanket `# noqa`, file-level skips, and categories are forbidden
  - Only narrow in-line disables (`# noqa: XYZ`) are permitted, and must be
    accompanied by `FIXME:` or a ticket link, and used only as a last resort.
- Use `pytest` fixtures for shared setup (`conftest.py` or `fixtures/`)
- Replace duplicate tests with `@pytest.mark.parametrize`
- Prefer `pytest-mock` or `unittest.mock` for stubs/mocks
- Use `assert …, "message"` over bare asserts
- Reflect all API/behaviour changes in `docs/` and update roadmap on
  completion
- Files must not exceed 400 logical lines:
  - Decompose large modules into subpackages
  - Split large `match/case` or dispatch tables by domain and collocate with
    targets if appropriate
  - Move bulky data (fixtures, templates) to external files for parsing at
    runtime
- Mutable defaults and shadowed built-ins are forbidden
- All code must have clear type hints using modern style (`A | B`,
  `list[str]`, `class Foo[A]:`, `type Bar = int`, etc.), with ABC imports
  drawn from the correct stdlib module.
- All path manipulation must be performed using pathlib for cross-platform
  safety. Do not use string manipulation for extraction of path components or
  filename elements.
Files:
`tests_python/conftest.py`, `tests_python/test_stage_action.py`, `scripts/upload_release_assets.py`, `tests_python/test_stage_glob.py`, `tests_python/test_normalise_windows_paths.py`, `tests_python/test_stage_artefacts.py`, `tests_python/test_stage_cli.py`, `tests_python/test_stage_config.py`, `tests_python/stage_test_helpers.py`
docs/**/*.md
📄 CodeRabbit inference engine (AGENTS.md)
docs/**/*.md: Use docs/ markdown as the reference for requirements, dependencies, and architectural decisions
Proactively update docs/ when decisions, requirements, or dependencies change
Documentation must use en-GB-oxendict spelling and grammar (LICENSE name unchanged)
Files:
docs/netsuke-design.md
**/*.md
📄 CodeRabbit inference engine (AGENTS.md)
**/*.md: Validate Markdown with make markdownlint
Run make fmt after doc changes to format Markdown and fix tables
Validate Mermaid diagrams in Markdown using make nixie
Wrap paragraphs and bullet points at 80 columns
Wrap code blocks at 120 columns
Do not wrap tables and headings
Use dashes (-) for list bullets
Use GitHub-flavoured Markdown footnotes ([^1]) for references and footnotes
Files:
docs/netsuke-design.md
⚙️ CodeRabbit configuration file
**/*.md: * Avoid 2nd person or 1st person pronouns ("I", "you", "we")
- Use en-GB-oxendict (-ize / -yse / -our) spelling and grammar
- Headings must not be wrapped.
- Documents must start with a level 1 heading
- Headings must correctly increase or decrease by no more than one level at a time
- Use GitHub-flavoured Markdown style for footnotes and endnotes.
- Numbered footnotes must be numbered by order of appearance in the document.
Files:
docs/netsuke-design.md
docs/**/*.{rs,md}
📄 CodeRabbit inference engine (docs/rust-doctest-dry-guide.md)
docs/**/*.{rs,md}: In fenced documentation code blocks, specify the language as rust (```rust)
Use assert!, assert_eq!, and assert_ne! in doctests to verify outcomes rather than relying on panic-free execution alone
Avoid using .unwrap() or .expect() in examples; handle errors idiomatically
For fallible examples, write an explicit fn main() -> Result<...> and hide boilerplate with # lines
Alternatively, use the (()) shorthand at the end of a code block (no whitespace) to create an implicit Result-returning main
Use hidden lines (#) to hide setup, main wrappers, and non-essential use statements in doctests
Prefer no_run over ignore for examples with side effects (e.g., network, filesystem)
Use should_panic for examples intended to panic; the test should fail if it does not panic
Use compile_fail for examples that should not compile, but recognize its brittleness across compiler versions
Use edition20xx (e.g., edition2018, edition2021) on code fences when demonstrating edition-specific syntax
Avoid relying solely on #[cfg(feature = "...")] inside doctest code blocks, which can yield misleading empty-ok tests when the feature is disabled
Files:
docs/netsuke-design.md
🧬 Code graph analysis (16)
- `tests_python/conftest.py` (1)
  - `tests_python/test_stage_cli.py` (1): `workspace` (36-41)
- `.github/actions/stage/scripts/stage_common/staging/pipeline.py` (4)
  - `.github/actions/stage/scripts/stage_common/errors.py` (1): `StageError` (4-5)
  - `.github/actions/stage/scripts/stage_common/staging/output.py` (3): `_prepare_output_data` (27-77), `_validate_no_reserved_key_collisions` (80-107), `write_github_output` (110-143)
  - `.github/actions/stage/scripts/stage_common/staging/resolution.py` (1): `_match_candidate_path` (108-125)
  - `.github/actions/stage/scripts/stage_common/config.py` (4): `ArtefactConfig` (38-45), `StagingConfig` (49-89), `staging_dir` (64-66), `as_template_context` (73-89)
- `scripts/upload_release_assets.py` (1)
  - `cyclopts/__init__.pyi` (3): `App` (11-14), `Parameter` (8-9), `default` (13-13)
- `tests_python/test_stage_glob.py` (5)
  - `tests_python/stage_test_helpers.py` (1): `write_workspace_inputs` (22-34)
  - `tests_python/conftest.py` (3): `stage_common` (16-24), `workspace` (56-61), `staging_resolution` (49-52)
  - `.github/actions/stage/scripts/stage_common/config.py` (2): `StagingConfig` (49-89), `ArtefactConfig` (38-45)
  - `.github/actions/stage/scripts/stage_common/staging/pipeline.py` (1): `stage_artefacts` (85-134)
  - `.github/actions/stage/scripts/stage_common/staging/resolution.py` (1): `_match_candidate_path` (108-125)
- `.github/actions/stage/scripts/stage_common/staging/__init__.py` (2)
  - `.github/actions/stage/scripts/stage_common/staging/output.py` (1): `write_github_output` (110-143)
  - `.github/actions/stage/scripts/stage_common/staging/pipeline.py` (3): `StageResult` (36-42), `StagedArtefact` (54-59), `stage_artefacts` (85-134)
- `.github/actions/stage/scripts/stage_common/staging/resolution.py` (1)
  - `tests_python/conftest.py` (1): `workspace` (56-61)
- `tests_python/test_normalise_windows_paths.py` (1)
  - `.github/workflows/scripts/normalise_windows_paths.py` (1): `main` (27-80)
- `tests_python/test_stage_artefacts.py` (6)
  - `tests_python/stage_test_helpers.py` (2): `decode_output_file` (10-19), `write_workspace_inputs` (22-34)
  - `tests_python/conftest.py` (5): `stage_common` (16-24), `staging_package` (28-31), `staging_pipeline` (35-38), `staging_output` (42-45), `workspace` (56-61)
  - `.github/actions/stage/scripts/stage_common/staging/pipeline.py` (9): `stage_artefacts` (85-134), `StageResult` (36-42), `StagedArtefact` (54-59), `_initialize_staging_dir` (62-82), `_iter_staged_artefacts` (167-210), `_StagingEnvironment` (46-50), `_stage_single_artefact` (213-240), `_RenderAttempt` (30-32), `_ensure_source_available` (137-164)
  - `.github/actions/stage/scripts/stage_common/staging/output.py` (3): `write_github_output` (110-143), `_prepare_output_data` (27-77), `_validate_no_reserved_key_collisions` (80-107)
  - `.github/actions/stage/scripts/stage_common/config.py` (4): `staging_dir` (64-66), `StagingConfig` (49-89), `ArtefactConfig` (38-45), `as_template_context` (73-89)
  - `.github/actions/stage/scripts/stage_common/errors.py` (1): `StageError` (4-5)
- `tests_python/test_stage_cli.py` (3)
  - `scripts/upload_release_assets.py` (2): `default` (55-59), `App` (48-65)
  - `cyclopts/__init__.pyi` (2): `default` (13-13), `App` (11-14)
  - `tests_python/conftest.py` (1): `workspace` (56-61)
- `.github/actions/stage/scripts/stage_common/staging/output.py` (2)
  - `.github/actions/stage/scripts/stage_common/errors.py` (1): `StageError` (4-5)
  - `.github/actions/stage/scripts/stage_common/config.py` (1): `staging_dir` (64-66)
- `.github/actions/stage/scripts/stage.py` (2)
  - `.github/actions/stage/scripts/stage_common/environment.py` (1): `require_env_path` (13-30)
  - `.github/actions/stage/scripts/stage_common/config.py` (1): `load_config` (92-147)
- `cyclopts/__init__.pyi` (1)
  - `scripts/upload_release_assets.py` (3): `Parameter` (42-46), `App` (48-65), `default` (55-59)
- `tests_python/test_stage_config.py` (4)
  - `tests_python/conftest.py` (2): `stage_common` (16-24), `workspace` (56-61)
  - `.github/actions/stage/scripts/stage_common/errors.py` (1): `StageError` (4-5)
  - `.github/actions/stage/scripts/stage_common/environment.py` (1): `require_env_path` (13-30)
  - `.github/actions/stage/scripts/stage_common/config.py` (4): `StagingConfig` (49-89), `as_template_context` (73-89), `load_config` (92-147), `staging_dir` (64-66)
- `.github/workflows/scripts/normalise_windows_paths.py` (1)
  - `.github/actions/stage/scripts/stage.py` (1): `main` (34-64)
- `.github/actions/stage/scripts/stage_common/__init__.py` (1)
  - `.github/actions/stage/scripts/stage_common/staging/pipeline.py` (2): `StageResult` (36-42), `stage_artefacts` (85-134)
- `.github/actions/stage/scripts/stage_common/config.py` (1)
  - `.github/actions/stage/scripts/stage_common/errors.py` (1): `StageError` (4-5)
🪛 Ruff (0.13.3)
tests_python/conftest.py
28-28: Unused function argument: stage_common
(ARG001)
35-35: Unused function argument: stage_common
(ARG001)
42-42: Unused function argument: stage_common
(ARG001)
49-49: Unused function argument: stage_common
(ARG001)
.github/actions/stage/scripts/stage_common/staging/pipeline.py
85-85: Remove quotes from type annotation
Remove quotes
(UP037)
139-139: Remove quotes from type annotation
Remove quotes
(UP037)
168-168: Remove quotes from type annotation
Remove quotes
(UP037)
214-214: Remove quotes from type annotation
Remove quotes
(UP037)
216-216: Remove quotes from type annotation
Remove quotes
(UP037)
263-263: Remove quotes from type annotation
Remove quotes
(UP037)
scripts/upload_release_assets.py
51-51: Unused method argument: args
(ARG002)
51-51: Unused method argument: kwargs
(ARG002)
tests_python/test_normalise_windows_paths.py
25-25: Avoid specifying long messages outside the exception class
(TRY003)
60-60: subprocess call: check for execution of untrusted input
(S603)
tests_python/test_stage_artefacts.py
87-87: Probable insecure usage of temporary file or directory: "/tmp/stage"
(S108)
⏰ Context from checks skipped due to timeout of 120000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: release / build-windows (aarch64-pc-windows-msvc, arm64, arm64, windows-aarch64)
- GitHub Check: release / build-macos (aarch64-apple-darwin, macos-14, macos-arm64, arm64, macos-aarch64)
- GitHub Check: release / build-linux (x86_64-unknown-linux-gnu, linux-amd64, amd64)
- GitHub Check: release / build-windows (x86_64-pc-windows-msvc, amd64, x64, windows-x86_64)
- GitHub Check: release / build-linux (aarch64-unknown-linux-gnu, linux-arm64, arm64)
- GitHub Check: release / build-macos (x86_64-apple-darwin, macos-13, macos-x86_64, x86_64, macos-x86_64)
- GitHub Check: build-test (ubuntu-latest, stable)
🔇 Additional comments (14)
Makefile (1)
27-30: Keep the typecheck recipe. The uvx call binds ty with Cyclopts and scopes both the scripts directory and entrypoints, so the target is sound.
tests_python/stage_test_helpers.py (2)
10-19: LGTM! The simplified key-value parser correctly handles GitHub Actions output format with proper escape decoding and empty line filtering.
22-34: LGTM!The helper correctly populates test workspace with staged artefacts using pathlib for cross-platform safety and explicit UTF-8 encoding.
.github/workflows/scripts/normalise_windows_paths.py (2)
1-18: LGTM! The module docstring is comprehensive, documenting purpose, environment requirements, and usage pattern as required by coding guidelines.
27-80: LGTM! The main() function correctly validates environment variables, normalises Windows paths using PureWindowsPath, and writes to GITHUB_OUTPUT with appropriate error handling and exit codes.
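The core of that normalisation reduces to `pathlib.PureWindowsPath`, which parses both backslash and forward-slash separators without touching the filesystem. A minimal sketch (the function name is illustrative, not the script's actual API):

```python
from pathlib import PureWindowsPath


def to_posix(path: str) -> str:
    """Convert a Windows-style path to forward-slash form.

    PureWindowsPath understands both separators and preserves the drive
    prefix, so no string manipulation is needed.
    """
    return PureWindowsPath(path).as_posix()
```

Using a pure path class keeps the conversion safe to run on any host, which matters when the normaliser is exercised in tests on Linux runners.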
.github/actions/stage/scripts/stage_common/config.py (3)
1-16: LGTM! The module docstring comprehensively documents purpose, utility, and usage with a clear example, satisfying coding guidelines.
37-89: LGTM! The dataclasses are well-structured with appropriate docstrings. The StagingConfig methods provide clear interfaces for template rendering and staging directory resolution.
92-147: LGTM! The load_config function has comprehensive NumPy-style documentation and robust error handling for configuration validation.
tests_python/conftest.py (1)
55-61: LGTM! The workspace fixture correctly creates an isolated test workspace and sets GITHUB_WORKSPACE using monkeypatch for clean teardown.
.github/actions/stage/scripts/stage_common/staging/pipeline.py (2)
29-60: LGTM! The dataclasses are well-defined with appropriate docstrings and clear responsibilities.
85-134: LGTM! The stage_artefacts function has comprehensive NumPy-style documentation and correctly orchestrates the staging pipeline. The quoted type annotations are necessary for TYPE_CHECKING to avoid runtime circular imports.
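The circular-import pattern at issue can be reduced to a small sketch (the module path below is illustrative, mirroring the pipeline's layout). With `from __future__ import annotations`, every annotation is stored as a string and never evaluated at runtime, which is presumably why Ruff's UP037 treats the explicit quotes as redundant:

```python
from __future__ import annotations

import typing as typ

if typ.TYPE_CHECKING:
    # Evaluated only by type checkers, never at runtime, so a circular
    # runtime import of the config module is avoided. Illustrative path.
    from stage_common.config import StagingConfig


def stage(config: StagingConfig) -> bool:
    # With postponed evaluation the annotation stays a string and is
    # never resolved at runtime, so the guarded import is harmless here.
    return config is not None
```

Under this pattern the quotes around `StagingConfig` can be dropped while keeping the TYPE_CHECKING guard intact.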
.github/workflows/release.yml (2)
175-211: LGTM! The build-windows job correctly uses the stage action with target-specific keys and normalises Windows paths using the extracted script. The integration aligns with the new TOML-driven staging flow.
250-288: LGTM! The build-macos job correctly uses the stage action with target-specific keys and consumes top-level outputs directly, eliminating the need for JSON parsing. This aligns with the improved composite action interface.
.github/actions/stage/scripts/stage.py (1)
34-64: LGTM! The main() function has comprehensive NumPy-style documentation and correctly wraps config_file in Path() as defensive programming, ensuring compatibility with Cyclopts' type handling.
```python
def _render_template(template: str, context: dict[str, typ.Any]) -> str:
    """Render ``template`` with ``context`` and normalise formatting errors.

    Examples
    --------
    >>> _render_template("{name}", {"name": "netsuke"})
    'netsuke'
    """
    try:
        return template.format(**context)
    except KeyError as exc:
        message = f"Invalid template key {exc} in '{template}'"
        raise StageError(message) from exc
    except ValueError as exc:
        message = f"Template formatting error: {exc} in '{template}'"
        raise StageError(message) from exc
```
🛠️ Refactor suggestion | 🟠 Major
Collapse private helper docstring to single-line summary.
The _render_template function is private and should have a single-line docstring per coding guidelines. Move the embedded example to unit tests.
Apply this diff:
```diff
 def _render_template(template: str, context: dict[str, typ.Any]) -> str:
-    """Render ``template`` with ``context`` and normalise formatting errors.
-
-    Examples
-    --------
-    >>> _render_template("{name}", {"name": "netsuke"})
-    'netsuke'
-    """
+    """Render template string with context, normalising format errors."""
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def _render_template(template: str, context: dict[str, typ.Any]) -> str:
    """Render template string with context, normalising format errors."""
    try:
        return template.format(**context)
    except KeyError as exc:
        message = f"Invalid template key {exc} in '{template}'"
        raise StageError(message) from exc
    except ValueError as exc:
        message = f"Template formatting error: {exc} in '{template}'"
        raise StageError(message) from exc
```
🤖 Prompt for AI Agents
.github/actions/stage/scripts/stage_common/staging/pipeline.py around lines 243
to 259: the private helper _render_template currently has a multi-line docstring
with an example; replace it with a single-line summary per coding guidelines and
remove the embedded doctest example (move example into unit tests instead). Keep
the function signature and existing exception handling unchanged, update the
docstring to a concise one-liner like "Render template with context and
normalise formatting errors." and ensure formatting and linting remain valid.
```python
class App:
    """Minimal shim to surface a descriptive error when Cyclopts is absent."""

    def __init__(self, *args: object, **kwargs: object) -> None:
```
Prefix unused parameters with underscores for consistency.
Ruff flags args and kwargs as unused. The Parameter class (line 45) and the App.__call__ method (line 62) both use _args and _kwargs to signal that parameters are intentionally ignored. Apply the same convention here for consistency.
Apply this diff:
```diff
-    def __init__(self, *args: object, **kwargs: object) -> None:
+    def __init__(self, *_args: object, **_kwargs: object) -> None:
```

🤖 Prompt for AI Agents
In scripts/upload_release_assets.py at line 51, rename the unused parameters
from `args` and `kwargs` to `_args` and `_kwargs` in the App.__init__ method
signature to match the convention used elsewhere in the file (Parameter.__init__
at line 45 and App.__call__ at line 62) and silence Ruff ARG002 warnings.
📝 Committable suggestion
```python
class App:
    """Minimal shim to surface a descriptive error when Cyclopts is absent."""

    def __init__(self, *_args: object, **_kwargs: object) -> None:
        message = "Cyclopts is required for CLI usage; install it to run the script"
        self._error = RuntimeError(message)
```
🧰 Tools
🪛 Ruff (0.13.3)
51-51: Unused method argument: args
(ARG002)
51-51: Unused method argument: kwargs
(ARG002)
🤖 Prompt for AI Agents
In scripts/upload_release_assets.py around line 51, the App.__init__ signature
uses unused parameters named args and kwargs; rename them to _args and _kwargs
to match the convention used by Parameter.__init__ and App.__call__ and silence
Ruff ARG002. Update the method signature to accept _args and _kwargs and leave
the body unchanged.
| """Tests exercising artefact staging and output generation.""" | ||
|
|
||
| from __future__ import annotations | ||
|
|
||
| import json | ||
| from pathlib import Path | ||
|
|
||
| import pytest | ||
|
|
||
| from stage_test_helpers import decode_output_file, write_workspace_inputs | ||
|
|
||
|
|
||
| def test_staging_package_exports_public_api( | ||
| stage_common: object, | ||
| staging_package: object, | ||
| staging_pipeline: object, | ||
| staging_output: object, | ||
| ) -> None: | ||
| """The staging package should expose the streamlined public surface.""" | ||
|
|
||
| assert staging_package.stage_artefacts is stage_common.stage_artefacts | ||
| assert staging_package.StageResult is staging_pipeline.StageResult | ||
| assert staging_package.StageResult is stage_common.StageResult | ||
| assert staging_package.StagedArtefact is staging_pipeline.StagedArtefact | ||
| assert staging_package.write_github_output is staging_output.write_github_output | ||
| assert ( | ||
| staging_package.RESERVED_OUTPUT_KEYS | ||
| == staging_output.RESERVED_OUTPUT_KEYS | ||
| ) | ||
|
|
||
|
|
||
| def test_initialize_staging_dir_removes_existing_contents( | ||
| staging_pipeline: object, tmp_path: Path | ||
| ) -> None: | ||
| """The helper should clear any previous staging directory contents.""" | ||
|
|
||
| staging_dir = tmp_path / "stage" | ||
| stale_file = staging_dir / "stale.txt" | ||
| stale_file.parent.mkdir(parents=True, exist_ok=True) | ||
| stale_file.write_text("old", encoding="utf-8") | ||
|
|
||
| staging_pipeline._initialize_staging_dir(staging_dir) | ||
|
|
||
| assert staging_dir.exists(), "Expected staging directory to be recreated" | ||
| assert list(staging_dir.iterdir()) == [], "Stale artefacts should be removed" | ||
|
|
||
|
|
||
| def test_prepare_output_data_returns_sorted_metadata( | ||
| staging_output: object, tmp_path: Path | ||
| ) -> None: | ||
| """Output preparation should normalise ordering and serialise metadata.""" | ||
|
|
||
| staging_dir = tmp_path / "dist" / "stage" | ||
| staged = [ | ||
| staging_dir / "b.bin", | ||
| staging_dir / "a.txt", | ||
| ] | ||
| outputs = { | ||
| "binary": staging_dir / "b.bin", | ||
| "manual": staging_dir / "a.txt", | ||
| } | ||
| checksums = {"b.bin": "bbb", "a.txt": "aaa"} | ||
|
|
||
| result = staging_output._prepare_output_data( | ||
| staging_dir, staged, outputs, checksums | ||
| ) | ||
|
|
||
| assert result["artifact_dir"].endswith("stage"), "Expected staging directory output" | ||
| assert result["dist_dir"].endswith("dist"), "Expected dist directory output" | ||
| assert result["staged_files"].splitlines() == [ | ||
| "a.txt", | ||
| "b.bin", | ||
| ], "Staged files should be sorted" | ||
| artefact_map = json.loads(result["artefact_map"]) | ||
| assert list(artefact_map) == ["binary", "manual"], "Outputs should be sorted" | ||
| checksum_map = json.loads(result["checksum_map"]) | ||
| assert list(checksum_map) == ["a.txt", "b.bin"], "Checksums should be sorted" | ||
|
|
||
|
|
||
| def test_validate_no_reserved_key_collisions_rejects_reserved_keys( | ||
| staging_output: object | ||
| ) -> None: | ||
| """Reserved workflow keys should trigger a stage error.""" | ||
|
|
||
| with pytest.raises(staging_output.StageError) as exc: | ||
| staging_output._validate_no_reserved_key_collisions( | ||
| {"artifact_dir": Path("/tmp/stage")} | ||
| ) | ||
|
|
||
| assert "collide with reserved keys" in str(exc.value) | ||
|
|
||
|
|
||
| def test_write_github_output_formats_values( | ||
| staging_output: object, tmp_path: Path | ||
| ) -> None: | ||
| """The GitHub output helper should escape strings and stream lists.""" | ||
|
|
||
| output_file = tmp_path / "github" / "output.txt" | ||
| output_file.parent.mkdir(parents=True, exist_ok=True) | ||
| output_file.write_text("initial=value\n", encoding="utf-8") | ||
|
|
||
| staging_output.write_github_output( | ||
| output_file, | ||
| { | ||
| "name": "value with%percent\nand newline", | ||
| "lines": ["one", "two"], | ||
| }, | ||
| ) | ||
|
|
||
| content = output_file.read_text(encoding="utf-8") | ||
| assert "initial=value" in content, "Existing output lines should remain" | ||
| assert ( | ||
| "name=value with%25percent%0Aand newline" in content | ||
| ), "String values should be escaped" | ||
| assert ( | ||
| "lines<<gh_LINES" in content | ||
| ), "List values should use the multi-line protocol" | ||
| assert "one\ntwo" in content, "List payload should be preserved" | ||
| lines = content.splitlines() | ||
| lines_header_index = lines.index("lines<<gh_LINES") | ||
| name_index = next( | ||
| idx for idx, value in enumerate(lines) if value.startswith("name=") | ||
| ) | ||
| assert ( | ||
| lines_header_index < name_index | ||
| ), "Outputs should be written in deterministic sorted order" | ||
|
|
||
|
|
||
| def test_stage_artefacts_exports_metadata( | ||
| stage_common: object, workspace: Path | ||
| ) -> None: | ||
| """The staging pipeline should copy inputs, hash them, and export outputs.""" | ||
| target = "x86_64-unknown-linux-gnu" | ||
| write_workspace_inputs(workspace, target) | ||
|
|
||
| config = stage_common.StagingConfig( | ||
| workspace=workspace, | ||
| bin_name="netsuke", | ||
| dist_dir="dist", | ||
| checksum_algorithm="sha256", | ||
| artefacts=[ | ||
| stage_common.ArtefactConfig( | ||
| source="target/{target}/release/{bin_name}{bin_ext}", | ||
| required=True, | ||
| output="binary_path", | ||
| ), | ||
| stage_common.ArtefactConfig( | ||
| source="target/generated-man/{target}/release/{bin_name}.1", | ||
| required=True, | ||
| output="man_path", | ||
| ), | ||
| stage_common.ArtefactConfig( | ||
| source="LICENSE", | ||
| required=True, | ||
| output="license_path", | ||
| ), | ||
| ], | ||
| platform="linux", | ||
| arch="amd64", | ||
| target=target, | ||
| ) | ||
|
|
||
| github_output = workspace / "outputs.txt" | ||
| result = stage_common.stage_artefacts(config, github_output) | ||
|
|
||
| staging_dir = workspace / "dist" / "netsuke_linux_amd64" | ||
| assert result.staging_dir == staging_dir, "StageResult must record the staging directory" | ||
| assert staging_dir.exists(), "Expected staging directory to be created" | ||
|
|
||
| staged_files = {path.name for path in result.staged_artefacts} | ||
| assert staged_files == {"netsuke", "netsuke.1", "LICENSE"}, "Unexpected artefacts staged" | ||
| assert set(result.outputs) == {"binary_path", "man_path", "license_path"}, "Outputs missing expected keys" | ||
| expected_checksums = { | ||
| "netsuke": staging_dir / "netsuke.sha256", | ||
| "netsuke.1": staging_dir / "netsuke.1.sha256", | ||
| "LICENSE": staging_dir / "LICENSE.sha256", | ||
| } | ||
| assert set(result.checksums) == set(expected_checksums), "Checksum outputs missing entries" | ||
| for path in expected_checksums.values(): | ||
| assert path.exists(), f"Checksum file {path.name} was not written" | ||
|
|
||
| outputs = decode_output_file(github_output) | ||
| assert outputs["artifact_dir"] == staging_dir.as_posix(), "artifact_dir output should reference staging directory" | ||
| assert outputs["binary_path"].endswith("netsuke"), "binary_path output should point to the staged executable" | ||
| assert outputs["license_path"].endswith("LICENSE"), "license_path output should point to the staged licence" | ||
| assert outputs["dist_dir"].endswith("dist"), "dist_dir output should reflect parent directory" | ||
| staged_listing = outputs["staged_files"].splitlines() | ||
| assert staged_listing == sorted(staged_listing), "Staged files output should be sorted" | ||
| artefact_map = json.loads(outputs["artefact_map"]) | ||
| assert artefact_map["binary_path"].endswith("netsuke"), "artefact map should include the binary path" | ||
| assert artefact_map["license_path"].endswith("LICENSE"), "artefact map should include the licence path" | ||
| checksum_map = json.loads(outputs["checksum_map"]) | ||
| assert set(checksum_map) == {"netsuke", "netsuke.1", "LICENSE"}, "Checksum map missing entries" | ||
|
|
||
|
|
||
| def test_stage_artefacts_reinitialises_staging_dir( | ||
| stage_common: object, workspace: Path | ||
| ) -> None: | ||
| """Running the pipeline should recreate the staging directory afresh.""" | ||
|
|
||
| target = "x86_64-unknown-linux-gnu" | ||
| write_workspace_inputs(workspace, target) | ||
|
|
||
| staging_dir = workspace / "dist" / "netsuke_linux_amd64" | ||
| stale = staging_dir / "obsolete.txt" | ||
| stale.parent.mkdir(parents=True, exist_ok=True) | ||
| stale.write_text("stale", encoding="utf-8") | ||
|
|
||
| config = stage_common.StagingConfig( | ||
| workspace=workspace, | ||
| bin_name="netsuke", | ||
| dist_dir="dist", | ||
| checksum_algorithm="sha256", | ||
| artefacts=[ | ||
| stage_common.ArtefactConfig( | ||
| source="target/{target}/release/{bin_name}{bin_ext}", | ||
| required=True, | ||
| output="binary_path", | ||
| ), | ||
| ], | ||
| platform="linux", | ||
| arch="amd64", | ||
| target=target, | ||
| ) | ||
|
|
||
| github_output = workspace / "outputs.txt" | ||
| stage_common.stage_artefacts(config, github_output) | ||
|
|
||
| assert not stale.exists(), "Previous staging artefacts should be removed" | ||
| current_entries = {path.name for path in staging_dir.iterdir()} | ||
| assert "obsolete.txt" not in current_entries, "Old entries must not survive reinitialisation" | ||
|
|
||
|
|
||
| def test_stage_artefacts_rejects_reserved_output_key( | ||
| stage_common: object, workspace: Path | ||
| ) -> None: | ||
| """Configs using reserved workflow outputs should error out.""" | ||
|
|
||
| target = "x86_64-unknown-linux-gnu" | ||
| write_workspace_inputs(workspace, target) | ||
|
|
||
| config = stage_common.StagingConfig( | ||
| workspace=workspace, | ||
| bin_name="netsuke", | ||
| dist_dir="dist", | ||
| checksum_algorithm="sha256", | ||
| artefacts=[ | ||
| stage_common.ArtefactConfig( | ||
| source="LICENSE", | ||
| required=True, | ||
| output="artifact_dir", | ||
| ), | ||
| ], | ||
| platform="linux", | ||
| arch="amd64", | ||
| target=target, | ||
| ) | ||
|
|
||
| github_output = workspace / "outputs.txt" | ||
| github_output.write_text("", encoding="utf-8") | ||
| with pytest.raises(stage_common.StageError) as exc: | ||
| stage_common.stage_artefacts(config, github_output) | ||
|
|
||
| assert "collide with reserved keys" in str(exc.value) | ||
| assert github_output.read_text(encoding="utf-8") == "", "Outputs should not be written when validation fails" | ||
|
|
||
|
|
||
| def test_stage_artefacts_appends_github_output( | ||
| stage_common: object, workspace: Path | ||
| ) -> None: | ||
| """Writing outputs should append to the existing ``GITHUB_OUTPUT`` file.""" | ||
|
|
||
| target = "x86_64-unknown-linux-gnu" | ||
| write_workspace_inputs(workspace, target) | ||
|
|
||
| config = stage_common.StagingConfig( | ||
| workspace=workspace, | ||
| bin_name="netsuke", | ||
| dist_dir="dist", | ||
| checksum_algorithm="sha256", | ||
| artefacts=[ | ||
| stage_common.ArtefactConfig( | ||
| source="LICENSE", | ||
| required=True, | ||
| output="license_path", | ||
| ), | ||
| ], | ||
| platform="linux", | ||
| arch="amd64", | ||
| target=target, | ||
| ) | ||
|
|
||
| github_output = workspace / "outputs.txt" | ||
| github_output.write_text("previous=value\n", encoding="utf-8") | ||
|
|
||
| stage_common.stage_artefacts(config, github_output) | ||
|
|
||
| content = github_output.read_text(encoding="utf-8") | ||
| assert content.startswith("previous=value"), "Existing lines should remain at the top" | ||
| assert "artifact_dir=" in content, "New outputs should be appended to the file" | ||
|
|
||
|
|
||
| def test_stage_artefacts_warns_for_optional( | ||
| stage_common: object, workspace: Path, caplog: pytest.LogCaptureFixture | ||
| ) -> None: | ||
| """Optional artefacts should emit a warning when absent but not abort.""" | ||
| target = "x86_64-unknown-linux-gnu" | ||
| write_workspace_inputs(workspace, target) | ||
|
|
||
| config = stage_common.StagingConfig( | ||
| workspace=workspace, | ||
| bin_name="netsuke", | ||
| dist_dir="dist", | ||
| checksum_algorithm="sha256", | ||
| artefacts=[ | ||
| stage_common.ArtefactConfig( | ||
| source="target/{target}/release/{bin_name}{bin_ext}", | ||
| required=True, | ||
| ), | ||
| stage_common.ArtefactConfig( | ||
| source="missing.txt", | ||
| required=False, | ||
| output="missing", | ||
| ), | ||
| ], | ||
| platform="linux", | ||
| arch="amd64", | ||
| target=target, | ||
| ) | ||
|
|
||
| with caplog.at_level("WARNING"): | ||
| stage_common.stage_artefacts(config, workspace / "out.txt") | ||
|
|
||
| assert any( | ||
| "Optional artefact missing" in message for message in caplog.messages | ||
| ), "Optional artefact warning missing" | ||
|
|
||
|
|
||
| def test_stage_artefacts_fails_with_attempt_context( | ||
| stage_common: object, workspace: Path | ||
| ) -> None: | ||
| """Missing required artefacts should include context in the error message.""" | ||
| config = stage_common.StagingConfig( | ||
| workspace=workspace, | ||
| bin_name="netsuke", | ||
| dist_dir="dist", | ||
| checksum_algorithm="sha256", | ||
| artefacts=[ | ||
| stage_common.ArtefactConfig( | ||
| source="missing-{target}", | ||
| required=True, | ||
| ), | ||
| ], | ||
| platform="linux", | ||
| arch="amd64", | ||
| target="x86_64-unknown-linux-gnu", | ||
| ) | ||
|
|
||
| with pytest.raises(stage_common.StageError) as exc: | ||
| stage_common.stage_artefacts(config, workspace / "outputs.txt") | ||
|
|
||
| message = str(exc.value) | ||
| assert "Workspace=" in message, "Workspace context missing from error" | ||
| assert "missing-{target}" in message, "Template pattern missing from error" | ||
| assert ( | ||
| "missing-x86_64-unknown-linux-gnu" in message | ||
| ), "Rendered path missing from error" | ||
|
|
||
|
|
||
| def test_iter_staged_artefacts_yields_metadata( | ||
| stage_common: object, staging_pipeline: object, workspace: Path | ||
| ) -> None: | ||
| """The iterator should yield dataclass entries with staged file metadata.""" | ||
|
|
||
| source = workspace / "LICENSE" | ||
| source.write_text("payload", encoding="utf-8") | ||
| artefact = stage_common.ArtefactConfig(source="LICENSE", required=True) | ||
| config = stage_common.StagingConfig( | ||
| workspace=workspace, | ||
| bin_name="netsuke", | ||
| dist_dir="dist", | ||
| checksum_algorithm="sha256", | ||
| artefacts=[artefact], | ||
| platform="linux", | ||
| arch="amd64", | ||
| target="test", | ||
| ) | ||
|
|
||
| staging_dir = config.staging_dir() | ||
| context = config.as_template_context() | ||
| staging_pipeline._initialize_staging_dir(staging_dir) | ||
|
|
||
| staged = list( | ||
| staging_pipeline._iter_staged_artefacts(config, staging_dir, context) | ||
| ) | ||
|
|
||
| assert len(staged) == 1, "Expected the iterator to yield the staged artefact" | ||
| entry = staged[0] | ||
| assert isinstance(entry, staging_pipeline.StagedArtefact) | ||
| assert entry.path.exists(), "Staged artefact path should exist on disk" | ||
| assert entry.checksum, "Iterator should include a checksum digest" | ||
| checksum_file = entry.path.with_name(f"{entry.path.name}.sha256") | ||
| assert checksum_file.exists(), "Checksum sidecar should be written" | ||
|
|
||
|
|
||
| def test_stage_artefacts_aligns_with_iterator( | ||
| stage_common: object, staging_pipeline: object, workspace: Path | ||
| ) -> None: | ||
| """Behaviourally verify the iterator matches the public staging result.""" | ||
|
|
||
| (workspace / "first.txt").write_text("first", encoding="utf-8") | ||
| (workspace / "second.txt").write_text("second", encoding="utf-8") | ||
| artefacts = [ | ||
| stage_common.ArtefactConfig(source="first.txt", required=True), | ||
| stage_common.ArtefactConfig(source="second.txt", required=True), | ||
| ] | ||
| config = stage_common.StagingConfig( | ||
| workspace=workspace, | ||
| bin_name="netsuke", | ||
| dist_dir="dist", | ||
| checksum_algorithm="sha256", | ||
| artefacts=artefacts, | ||
| platform="linux", | ||
| arch="amd64", | ||
| target="behavioural", | ||
| ) | ||
|
|
||
| staging_dir = config.staging_dir() | ||
| context = config.as_template_context() | ||
| staging_pipeline._initialize_staging_dir(staging_dir) | ||
| iter_names = [ | ||
| entry.path.name | ||
| for entry in staging_pipeline._iter_staged_artefacts( | ||
| config, staging_dir, context | ||
| ) | ||
| ] | ||
|
|
||
| github_output = workspace / "outputs.txt" | ||
| result = stage_common.stage_artefacts(config, github_output) | ||
|
|
||
| assert iter_names == [path.name for path in result.staged_artefacts] | ||
|
|
||
|
|
||
| def test_stage_single_artefact_overwrites_existing_file( | ||
| stage_common: object, staging_pipeline: object, workspace: Path | ||
| ) -> None: | ||
| """The helper should replace existing staged files atomically.""" | ||
|
|
||
| source = workspace / "binary" | ||
| source.write_text("new", encoding="utf-8") | ||
| artefact = stage_common.ArtefactConfig( | ||
| source="binary", | ||
| destination="bin/{source_name}", | ||
| required=True, | ||
| ) | ||
| config = stage_common.StagingConfig( | ||
| workspace=workspace, | ||
| bin_name="netsuke", | ||
| dist_dir="dist", | ||
| checksum_algorithm="sha256", | ||
| artefacts=[artefact], | ||
| platform="linux", | ||
| arch="amd64", | ||
| target="unit", | ||
| ) | ||
|
|
||
| staging_dir = config.staging_dir() | ||
| stale = staging_dir / "bin" / "binary" | ||
| stale.parent.mkdir(parents=True, exist_ok=True) | ||
| stale.write_text("old", encoding="utf-8") | ||
|
|
||
| context = config.as_template_context() | ||
| env = staging_pipeline._StagingEnvironment( | ||
| staging_dir=staging_dir, | ||
| context=context, | ||
| ) | ||
| staged_path = staging_pipeline._stage_single_artefact( | ||
| config, env, artefact, source | ||
| ) | ||
|
|
||
| assert staged_path == stale | ||
| assert staged_path.read_text(encoding="utf-8") == "new" | ||
|
|
||
|
|
||
| def test_stage_artefacts_honours_destination_templates( | ||
| stage_common: object, workspace: Path | ||
| ) -> None: | ||
| """Destination templates should be rendered beneath the staging directory.""" | ||
|
|
||
| source = workspace / "payload.bin" | ||
| source.write_text("payload", encoding="utf-8") | ||
| artefact = stage_common.ArtefactConfig( | ||
| source="payload.bin", | ||
| destination="artifacts/{bin_name}/{source_name}", | ||
| required=True, | ||
| output="payload_path", | ||
| ) | ||
| config = stage_common.StagingConfig( | ||
| workspace=workspace, | ||
| bin_name="netsuke", | ||
| dist_dir="dist", | ||
| checksum_algorithm="sha256", | ||
| artefacts=[artefact], | ||
| platform="linux", | ||
| arch="amd64", | ||
| target="behavioural", | ||
| ) | ||
|
|
||
| github_output = workspace / "github.txt" | ||
| result = stage_common.stage_artefacts(config, github_output) | ||
|
|
||
| staged_path = result.outputs["payload_path"] | ||
| relative = staged_path.relative_to(result.staging_dir) | ||
| assert relative.as_posix() == "artifacts/netsuke/payload.bin" | ||
|
|
||
|
|
||
| def test_ensure_source_available_required_error( | ||
| stage_common: object, staging_pipeline: object, workspace: Path | ||
| ) -> None: | ||
| """Missing required artefacts should raise a StageError with context.""" | ||
|
|
||
| artefact = stage_common.ArtefactConfig(source="missing.bin", required=True) | ||
| attempts = [ | ||
| staging_pipeline._RenderAttempt("missing.bin", "missing.bin"), | ||
| ] | ||
|
|
||
| with pytest.raises(stage_common.StageError) as exc: | ||
| staging_pipeline._ensure_source_available( | ||
| None, artefact, attempts, workspace | ||
| ) | ||
|
|
||
| message = str(exc.value) | ||
| assert "Required artefact not found" in message | ||
| assert "missing.bin" in message | ||
|
|
||
|
|
||
| def test_ensure_source_available_optional_warning( | ||
| stage_common: object, | ||
| staging_pipeline: object, | ||
| workspace: Path, | ||
| caplog: pytest.LogCaptureFixture, | ||
| ) -> None: | ||
| """Optional artefacts should be skipped with a warning instead of failing.""" | ||
|
|
||
| artefact = stage_common.ArtefactConfig(source="missing.txt", required=False) | ||
|
|
||
| with caplog.at_level("WARNING"): | ||
| should_stage = staging_pipeline._ensure_source_available( | ||
| None, | ||
| artefact, | ||
| [staging_pipeline._RenderAttempt("missing.txt", "missing.txt")], | ||
| workspace, | ||
| ) | ||
|
|
||
| assert not should_stage | ||
| assert any( | ||
| "Optional artefact missing" in message | ||
| for message in caplog.messages | ||
| ) |
Split the module to stay within the 400-line limit.
Reorganise the tests into multiple modules so each Python file remains ≤400 logical lines; this file is ~559 lines, breaching the repo’s test style requirements. As per coding guidelines.
🧰 Tools
🪛 Ruff (0.13.3)
87-87: Probable insecure usage of temporary file or directory: "/tmp/stage"
(S108)
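The S108 finding flags a hardcoded `/tmp/stage` literal (the offending line falls outside this excerpt). The usual remedy is to derive scratch paths from `tempfile` (or pytest's `tmp_path`) rather than a fixed, world-shared location — a small illustration, not the repo's code:

```python
import tempfile
from pathlib import Path

# Instead of a fixed, shared path such as Path("/tmp/stage"),
# create a private directory that cannot collide with other processes.
with tempfile.TemporaryDirectory(prefix="stage-") as scratch:
    staging = Path(scratch) / "stage"
    staging.mkdir()
    (staging / "netsuke").write_text("payload", encoding="utf-8")
    assert (staging / "netsuke").read_text(encoding="utf-8") == "payload"
# The directory and its contents are removed when the context exits.
```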
| @pytest.fixture | ||
| def workspace(tmp_path: Path, monkeypatch: pytest.MonkeyPatch) -> Path: | ||
| """Provide an isolated workspace and set ``GITHUB_WORKSPACE``.""" | ||
| root = tmp_path / "workspace" | ||
| root.mkdir() | ||
| monkeypatch.setenv("GITHUB_WORKSPACE", str(root)) | ||
| return root |
🧹 Nitpick | 🔵 Trivial
Remove the duplicate workspace fixture.
Delete this local workspace fixture and rely on the shared version in tests_python/conftest.py to avoid duplication and fixture shadowing.
-@pytest.fixture
-def workspace(tmp_path: Path, monkeypatch: pytest.MonkeyPatch) -> Path:
- """Provide an isolated workspace and set ``GITHUB_WORKSPACE``."""
- root = tmp_path / "workspace"
- root.mkdir()
- monkeypatch.setenv("GITHUB_WORKSPACE", str(root))
- return root
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| @pytest.fixture | |
| def workspace(tmp_path: Path, monkeypatch: pytest.MonkeyPatch) -> Path: | |
| """Provide an isolated workspace and set ``GITHUB_WORKSPACE``.""" | |
| root = tmp_path / "workspace" | |
| root.mkdir() | |
| monkeypatch.setenv("GITHUB_WORKSPACE", str(root)) | |
| return root |
| def stage_cli(monkeypatch: pytest.MonkeyPatch) -> ModuleType: | ||
| """Import the CLI module with a stubbed :mod:`cyclopts`.""" | ||
| monkeypatch.syspath_prepend(str(SCRIPTS_DIR)) | ||
| sys.path.insert(0, str(SCRIPTS_DIR)) | ||
| monkeypatch.setitem(sys.modules, "cyclopts", ModuleType("cyclopts")) | ||
| cyclopts_module = sys.modules["cyclopts"] | ||
| cyclopts_module.App = _StubCycloptsApp # type: ignore[attr-defined] | ||
| module = importlib.import_module("stage") | ||
| monkeypatch.setitem(sys.modules, "stage", module) | ||
| return module | ||
| yield importlib.import_module("stage") | ||
| sys.path.remove(str(SCRIPTS_DIR)) | ||
| sys.modules.pop("stage", None) | ||
| sys.modules.pop("cyclopts", None) |
🛠️ Refactor suggestion | 🟠 Major
Guarantee teardown when the import fails.
Wrap the import and module stubbing in a try/finally block so sys.path and sys.modules are restored even if importlib.import_module("stage") raises, otherwise later tests inherit the mutated state.
@pytest.fixture
def stage_cli(monkeypatch: pytest.MonkeyPatch) -> ModuleType:
"""Import the CLI module with a stubbed :mod:`cyclopts`."""
- sys.path.insert(0, str(SCRIPTS_DIR))
- monkeypatch.setitem(sys.modules, "cyclopts", ModuleType("cyclopts"))
- cyclopts_module = sys.modules["cyclopts"]
- cyclopts_module.App = _StubCycloptsApp # type: ignore[attr-defined]
- yield importlib.import_module("stage")
- sys.path.remove(str(SCRIPTS_DIR))
- sys.modules.pop("stage", None)
- sys.modules.pop("cyclopts", None)
+ sys.path.insert(0, str(SCRIPTS_DIR))
+ try:
+ monkeypatch.setitem(sys.modules, "cyclopts", ModuleType("cyclopts"))
+ cyclopts_module = sys.modules["cyclopts"]
+ cyclopts_module.App = _StubCycloptsApp # type: ignore[attr-defined]
+ yield importlib.import_module("stage")
+ finally:
+ sys.path.remove(str(SCRIPTS_DIR))
+ sys.modules.pop("stage", None)
+    sys.modules.pop("cyclopts", None)
📝 Committable suggestion
| def stage_cli(monkeypatch: pytest.MonkeyPatch) -> ModuleType: | |
| """Import the CLI module with a stubbed :mod:`cyclopts`.""" | |
| monkeypatch.syspath_prepend(str(SCRIPTS_DIR)) | |
| sys.path.insert(0, str(SCRIPTS_DIR)) | |
| monkeypatch.setitem(sys.modules, "cyclopts", ModuleType("cyclopts")) | |
| cyclopts_module = sys.modules["cyclopts"] | |
| cyclopts_module.App = _StubCycloptsApp # type: ignore[attr-defined] | |
| module = importlib.import_module("stage") | |
| monkeypatch.setitem(sys.modules, "stage", module) | |
| return module | |
| yield importlib.import_module("stage") | |
| sys.path.remove(str(SCRIPTS_DIR)) | |
| sys.modules.pop("stage", None) | |
| sys.modules.pop("cyclopts", None) | |
| @pytest.fixture | |
| def stage_cli(monkeypatch: pytest.MonkeyPatch) -> ModuleType: | |
| """Import the CLI module with a stubbed :mod:`cyclopts`.""" | |
| sys.path.insert(0, str(SCRIPTS_DIR)) | |
| try: | |
| monkeypatch.setitem(sys.modules, "cyclopts", ModuleType("cyclopts")) | |
| cyclopts_module = sys.modules["cyclopts"] | |
| cyclopts_module.App = _StubCycloptsApp # type: ignore[attr-defined] | |
| yield importlib.import_module("stage") | |
| finally: | |
| sys.path.remove(str(SCRIPTS_DIR)) | |
| sys.modules.pop("stage", None) | |
| sys.modules.pop("cyclopts", None) |
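Why the `try/finally` matters can be demonstrated outside pytest: if a generator-based setup raises before its `yield`, any cleanup written after the `yield` never runs, whereas cleanup inside a `finally` still executes. A self-contained stand-in using `contextlib` (a hypothetical sketch mimicking the fixture's path mutation and failing import, not the actual test code):

```python
import contextlib
import sys

SCRIPTS_DIR = "/fake/scripts"  # stand-in for the real scripts directory


@contextlib.contextmanager
def fragile_setup():
    """Mimics the original fixture: mutate sys.path, then fail to import."""
    sys.path.insert(0, SCRIPTS_DIR)
    raise ImportError("stage module failed to import")
    yield  # never reached
    sys.path.remove(SCRIPTS_DIR)  # cleanup never reached either


@contextlib.contextmanager
def safe_setup():
    """Mimics the suggested fix: cleanup guarded by try/finally."""
    sys.path.insert(0, SCRIPTS_DIR)
    try:
        raise ImportError("stage module failed to import")
        yield
    finally:
        sys.path.remove(SCRIPTS_DIR)


results: dict[str, bool] = {}
for setup in (fragile_setup, safe_setup):
    try:
        with setup():
            pass
    except ImportError:
        pass
    leaked = SCRIPTS_DIR in sys.path
    results[setup.__name__] = leaked
    if leaked:
        sys.path.remove(SCRIPTS_DIR)

print(results)
```

The fragile variant leaves `SCRIPTS_DIR` on `sys.path` after the failed import; the guarded variant restores it, which is exactly the state leakage the review comment is guarding against.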
Summary
- `.github/actions/stage`: a reusable composite action that stages artefacts from a TOML configuration
- `release-staging.toml` added, and the release workflow updated to consume the new action outputs

Testing
- `ruff format --check .`
- `ruff check .`
- `.ty-env/bin/ty check --python .ty-env --ignore unknown-argument scripts/*.py`
- `uvx --with "cyclopts>=0.14" pytest tests_python/test_stage_common.py`

https://chatgpt.com/codex/tasks/task_e_68e78e424f20832280b781ae83f2cd17
Summary by Sourcery
Introduce a generic, reusable GitHub composite Action for staging release artefacts from a TOML configuration, replacing platform-specific scripts across release workflows.
New Features:
Enhancements:
Documentation:
Tests: