
Refactor test modules to rely on real Typer integration#85

Merged
leynos merged 7 commits into main from
codex/refactor-rust-build-release-tests-with-pytest-mock
Sep 18, 2025

Conversation

@leynos
Owner

@leynos leynos commented Sep 17, 2025

Summary

  • replace the fake Typer module used by the generate-coverage tests with targeted assertions on real Typer exit codes
  • simplify the script module loader to import real dependencies and adjust fixtures and exit checks accordingly
  • guard run_cmd patching in the rust-build-release harness and update CLI tests to exercise the Typer app via typer.testing

Testing

  • make test
  • make lint
  • make typecheck

https://chatgpt.com/codex/tasks/task_e_68cb3dc058a0832298a90dd7a95c12ec

Summary by Sourcery

Refactor test suites to rely on real Typer integration, simplify module loading, and standardize CLI testing with Typer's runner.

Enhancements:

  • Simplify the script module loader to prepend actual script and repo root paths and clear cached modules before import
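The loader simplification above can be sketched roughly as follows. This is an illustrative reconstruction, not the repository's actual loader: the function name, paths, and demo script are hypothetical.

```python
# Minimal sketch of a loader that imports a script with its real
# dependencies: prepend real paths, purge the module cache, then load
# via importlib. Names and paths here are illustrative only.
import importlib.util
import pathlib
import sys
import tempfile


def load_script_module(script_path: pathlib.Path, repo_root: pathlib.Path):
    """Load ``script_path`` as a module against real dependencies."""
    # Prepend the script directory and repo root so imports resolve
    # against the real code rather than injected stubs.
    for entry in (str(script_path.parent), str(repo_root)):
        if entry not in sys.path:
            sys.path.insert(0, entry)
    name = script_path.stem
    # Drop any cached copy so each test sees a fresh import.
    sys.modules.pop(name, None)
    spec = importlib.util.spec_from_file_location(name, script_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module


# Demonstration with a throwaway script file.
with tempfile.TemporaryDirectory() as tmp:
    script = pathlib.Path(tmp) / "run_demo.py"
    script.write_text("ANSWER = 42\n")
    mod = load_script_module(script, pathlib.Path(tmp))
    print(mod.ANSWER)  # 42
```

Clearing the cache before import matters because pytest may load the same script name under several fixtures; a stale cached module would silently carry state between tests.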

Tests:

  • Replace fake Typer and stubbed dependencies in generate-coverage and detect tests with real Typer invocations and targeted exit code assertions
  • Migrate rust-build-release tests from click.testing.CliRunner to typer.testing.CliRunner and unify runner.invoke arguments
  • Guard patch_run_cmd in the rust-build-release harness to only apply when the function exists

@sourcery-ai
Contributor

sourcery-ai Bot commented Sep 17, 2025

Reviewer's Guide

This PR refactors the generate-coverage and rust-build-release test suites to drop custom stubs for Typer, plumbum, lxml and related modules in favor of loading real dependencies. It simplifies the dynamic module loader by prepending actual paths, removes extensive fake-module setup, and updates fixtures, CLI invocations and exit assertions to use real Typer integration.

File-Level Changes

Change Details Files
Simplify dynamic module loader to import real dependencies
  • Prepend script and project root to sys.path instead of injecting fake modules
  • Remove all fake plumbum, lxml and typer stub classes and imports
  • Load modules via importlib.util.spec_from_file_location and execute them directly
.github/actions/generate-coverage/tests/test_scripts.py
Streamline run_rust and run_python fixtures
  • Drop stub argument maps and return ModuleType directly
  • Update fixture signatures to reflect simplified loader
.github/actions/generate-coverage/tests/test_scripts.py
Unify exit code assertions across tests
  • Replace direct exc.value.code checks with getattr logic for Exit or code attributes
  • Use real typer.Exit exception in coverage percent and detect tests
.github/actions/generate-coverage/tests/test_scripts.py
.github/actions/generate-coverage/tests/test_detect.py
Remove manual fake Typer injection in detect tests
  • Delete _FakeExit and _FAKE_TYPER setup in sys.modules
  • Rely on real typer module and its Exit behavior
.github/actions/generate-coverage/tests/test_detect.py
Switch to typer.testing.CliRunner and update CLI tests
  • Import CliRunner from typer.testing instead of click.testing
  • Add prog_name parameter and assert on stderr for error messages
.github/actions/rust-build-release/tests/test_action_setup.py
Guard run_cmd patching in harness factory
  • Wrap patch_run_cmd call in hasattr check to avoid errors on modules without run_cmd
.github/actions/rust-build-release/tests/conftest.py
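The run_cmd guard described in the last row might look like this minimal sketch. The harness class and factory names are hypothetical; only the hasattr-guard pattern is taken from the change description.

```python
# Sketch of conditional patching: patch run_cmd only when the target
# module actually defines it, so the harness factory stays generic.
import types


class Harness:
    def __init__(self, module: types.ModuleType) -> None:
        self.module = module
        self.patched = False

    def patch_run_cmd(self) -> None:
        # Replace run_cmd with a no-op stub for the duration of the test.
        self.module.run_cmd = lambda *args, **kwargs: None
        self.patched = True


def make_harness(module: types.ModuleType) -> Harness:
    harness = Harness(module)
    # Guard: modules without run_cmd are left untouched, avoiding an
    # AttributeError when patching modules that never shell out.
    if hasattr(module, "run_cmd"):
        harness.patch_run_cmd()
    return harness


with_cmd = types.ModuleType("with_cmd")
with_cmd.run_cmd = lambda *args: args
without_cmd = types.ModuleType("without_cmd")
print(make_harness(with_cmd).patched)    # True
print(make_harness(without_cmd).patched)  # False
```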

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

@coderabbitai
Contributor

coderabbitai Bot commented Sep 17, 2025

Note

Reviews paused

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Summary by CodeRabbit

  • Tests
    • Improved reliability of coverage and script tests by using real dependencies and Typer-based exit handling with robust exit code assertions.
    • Updated CLI tests to use Typer’s runner, set explicit programme name, and verify errors via stderr.
    • Refined test harness to conditionally patch command execution only when applicable, reducing unnecessary mocking.
  • Chores
    • Cleaned up unused imports and streamlined type hints for modules, improving test clarity and maintainability.

Walkthrough

Replace mocked Typer and stubbed dependencies in tests with real Typer and real imports. Unify exit handling by extracting exit codes from Exit exceptions. Simplify module loading in coverage tests. Add conditional run_cmd patching in rust-build-release test harness. Switch CLI tests to Typer’s CliRunner and assert errors on stderr.

Changes

Cohort / File(s) Summary of changes
Generate coverage tests — detect
.github/actions/generate-coverage/tests/test_detect.py
Remove Typer mocking; use real typer.Exit; extract exit code via exit_code or code; drop unused imports.
Generate coverage tests — scripts harness
.github/actions/generate-coverage/tests/test_scripts.py
Refactor _load_module to import with real deps; change return type to ModuleType; update fixtures (run_rust_module, run_python_module) to use new loader; adjust test type hints and assertions to typer.Exit with robust exit-code extraction; remove stubs for cargo/python.
Rust build release — test harness
.github/actions/rust-build-release/tests/conftest.py
Call harness.patch_run_cmd() only when target module defines run_cmd; retain behaviour otherwise.
Rust build release — CLI tests
.github/actions/rust-build-release/tests/test_action_setup.py
Replace Click CliRunner with Typer CliRunner; pass prog_name="action-setup"; assert errors on stderr; reformat invocations.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant T as Test
  participant L as _load_module
  participant I as Python Import System
  participant S as scripts/run_*
  Note over T,L: New flow: import with real dependencies
  T->>L: request module "run_rust" / "run_python"
  L->>I: purge cache for target and coverage_parsers
  L->>I: import scripts.<name>
  I-->>S: load module with real deps
  S-->>L: module object
  L-->>T: return ModuleType
  T->>S: invoke CLI via Typer
  S-->>T: raise typer.Exit (exit_code/code)
  T->>T: assert extracted exit code
sequenceDiagram
  autonumber
  participant TH as Test Harness (module_harness)
  participant M as Target Module
  participant H as Harness
  Note over TH,M: Conditional run_cmd patching
  TH->>M: inspect for attribute run_cmd
  alt run_cmd present
    TH->>H: patch_run_cmd()
    H-->>TH: patched
  else run_cmd absent
    TH->>TH: skip patching
  end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

Tests shed their mocks, step into light,
Typer now guides each exit right.
Harness listens: patch if told,
Otherwise, leaves the run_cmd cold.
Rust and Python, side by side,
Real deps in tow, they pass with pride.

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name Status Explanation
Title Check ✅ Passed The title is concise and accurate: it summarises the primary change of refactoring test modules to use real Typer integration across the generate-coverage, detect and rust-build-release tests. It stays short and free of file lists or emojis, and need not enumerate secondary adjustments such as loader simplification or run_cmd guarding, which keeps the merge history readable.
Docstring Coverage ✅ Passed Docstring coverage is 88.46% which is sufficient. The required threshold is 80.00%.
Description Check ✅ Passed The PR description directly references the key changes shown in the file summaries (replacing the fake Typer module, simplifying the script loader to use real dependencies, and guarding run_cmd patching), so it is on-topic and aligns with the raw_summary details about tests, fixtures, and Typer exit handling. The description is sufficiently related for this lenient check.

Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey there - I've reviewed your changes - here's some feedback:

  • Consolidate the repeated exit code extraction logic (getattr on exit_code or code) into a small helper or assertion function to reduce boilerplate and improve readability across tests.
  • Since you’re using typer.testing.CliRunner with prog_name in multiple tests, consider defining a default runner (or fixture) that sets the prog_name once to avoid repeating it in each invocation.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- Consolidate the repeated exit code extraction logic (getattr on `exit_code` or `code`) into a small helper or assertion function to reduce boilerplate and improve readability across tests.
- Centralize the module‐loading monkeypatch logic (syspath prepends and module cleanup) into a shared fixture in conftest to DRY up setup and simplify individual test modules.
- Since you’re using `typer.testing.CliRunner` with `prog_name` in multiple tests, consider defining a default runner (or fixture) that sets the `prog_name` once to avoid repeating it in each invocation.

## Individual Comments

### Comment 1
<location> `.github/actions/generate-coverage/tests/test_scripts.py:481-485` </location>
<code_context>


-def test_lcov_file_missing(tmp_path: Path, run_rust_module: types.ModuleType) -> None:
+def test_lcov_file_missing(tmp_path: Path, run_rust_module: ModuleType) -> None:
     """Non-existent file triggers ``SystemExit``."""
-    with pytest.raises(SystemExit) as excinfo:
+    with pytest.raises(run_rust_module.typer.Exit) as excinfo:
         run_rust_module.get_line_coverage_percent_from_lcov(tmp_path / "nope.lcov")
-    assert excinfo.value.code == 1
+    exit_code = getattr(excinfo.value, "exit_code", None) or getattr(
+        excinfo.value, "code", None
+    )
</code_context>

<issue_to_address>
**suggestion (testing):** Consider adding a test for when the lcov file exists but is empty.

Adding a test for an empty lcov file will help verify that the function handles this edge case correctly and returns the appropriate result or exit code.
</issue_to_address>

### Comment 2
<location> `.github/actions/generate-coverage/tests/test_scripts.py:518` </location>
<code_context>


-def test_cobertura_detail(tmp_path: Path, run_python_module: types.ModuleType) -> None:
+def test_cobertura_detail(tmp_path: Path, run_python_module: ModuleType) -> None:
     """``get_line_coverage_percent_from_cobertura`` handles per-line detail."""
     xml = tmp_path / "cov.xml"
</code_context>

<issue_to_address>
**suggestion (testing):** Consider adding a test for malformed Cobertura XML files.

Tests for malformed lcov files exist, but similar coverage for Cobertura XML is missing. Adding tests for cases like missing elements or invalid structure would improve error handling in the parser.
</issue_to_address>

### Comment 3
<location> `.github/actions/generate-coverage/tests/test_detect.py:30` </location>
<code_context>
+def test_invalid_format(tmp_path: Path, capsys: pytest.CaptureFixture[str]) -> None:
</code_context>

<issue_to_address>
**suggestion (testing):** Consider testing for valid formats with empty or malformed output files.

Currently, only error handling for the 'lcov' format is tested. Please add cases for empty and malformed output files to improve coverage of error scenarios.
</issue_to_address>

### Comment 4
<location> `.github/actions/rust-build-release/tests/test_action_setup.py:104` </location>
<code_context>
+    assert "contains invalid characters" in result.stderr


 def test_script_validate_step_reports_error() -> None:
</code_context>

<issue_to_address>
**suggestion (testing):** Consider adding assertions to 'test_script_validate_step_reports_error'.

The test only has a 'pass' statement. Please add assertions to check the expected error, or remove the test if it's unnecessary.

Suggested implementation:

```python
def test_script_validate_step_reports_error() -> None:
    result = runner.invoke(
        action_setup_module.app,
        ["validate-step", "invalid step"],
        prog_name="action-setup",
    )
    assert result.exit_code != 0
    assert "contains invalid characters" in result.stderr

```

If `runner` and `action_setup_module` are not available in the scope of this test, you will need to import or define them as in the previous test. Also, ensure that `"validate-step"` and `"invalid step"` are the correct arguments for triggering the error you want to test.
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
.github/actions/generate-coverage/tests/test_scripts.py (1)

206-209: Fix exit-code extraction (0 is falsy).

Use None checks rather than or.

Apply this diff:

-    assert (
-        getattr(excinfo.value, "exit_code", None)
-        or getattr(excinfo.value, "code", None)
-    ) == 1
+    code = getattr(excinfo.value, "exit_code", None)
+    if code is None:
+        code = getattr(excinfo.value, "code", None)
+    assert code == 1
📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cb82169 and 113f5f6.

📒 Files selected for processing (4)
  • .github/actions/generate-coverage/tests/test_detect.py (2 hunks)
  • .github/actions/generate-coverage/tests/test_scripts.py (11 hunks)
  • .github/actions/rust-build-release/tests/conftest.py (1 hunks)
  • .github/actions/rust-build-release/tests/test_action_setup.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
.github/actions/rust-build-release/tests/test_action_setup.py (1)
.github/actions/rust-build-release/tests/conftest.py (1)
  • action_setup_module (203-209)
.github/actions/generate-coverage/tests/test_scripts.py (2)
.github/actions/rust-build-release/tests/conftest.py (1)
  • _load_module (50-67)
.github/actions/generate-coverage/scripts/coverage_parsers.py (1)
  • get_line_coverage_percent_from_lcov (92-123)
⏰ Context from checks skipped due to timeout of 120000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Sourcery review
  • GitHub Check: python-tests (ubuntu-latest)
  • GitHub Check: python-tests (windows-latest)
🔇 Additional comments (8)
.github/actions/rust-build-release/tests/test_action_setup.py (3)

11-11: Switch to Typer’s CliRunner is correct.

Import CliRunner from typer.testing to align with Typer’s API.


76-88: Invoke with prog_name to stabilise usage text.

Passing prog_name="action-setup" is appropriate for consistent error/help output.


95-101: Assert error on stderr (correct stream).

Validly check result.stderr for the error text; Typer emits errors on stderr.

.github/actions/rust-build-release/tests/conftest.py (1)

118-120: Guard run_cmd patching to avoid AttributeError.

Patch run_cmd only when present on the target module to keep the harness generic.

.github/actions/generate-coverage/tests/test_scripts.py (4)

16-17: Type-hint ModuleType under TYPE_CHECKING only.

Keep runtime clean while preserving types.


173-173: Use the new loader consistently.

Load run_rust via _load_module across Windows‑specific tests to exercise real deps.

Also applies to: 195-195, 216-216, 239-239, 252-252


458-460: LCOV helpers: behaviour assertions read cleanly.

Tests for zero/missing/malformed data match parser semantics; keep as is.

Also applies to: 467-472, 474-479


512-516: Python/Cobertura tests align with real imports.

Fixtures and assertions are coherent with the new import strategy.

Also applies to: 518-541, 544-551, 554-561

@leynos
Owner Author

leynos commented Sep 17, 2025

@coderabbitai pause

@coderabbitai
Copy link
Copy Markdown
Contributor

coderabbitai Bot commented Sep 17, 2025

✅ Actions performed

Reviews paused.

@leynos leynos force-pushed the codex/refactor-rust-build-release-tests-with-pytest-mock branch from 8b87b76 to 4856575 on September 18, 2025 at 12:29
@leynos leynos merged commit ee76025 into main Sep 18, 2025
4 of 8 checks passed
@leynos leynos deleted the codex/refactor-rust-build-release-tests-with-pytest-mock branch September 18, 2025 12:31