fix: return non-zero exit code when collection errors occur#132

Merged
hughhan1 merged 1 commit into main from fix/collection-error-exit-code
Jan 4, 2026
Conversation

Owner

@hughhan1 hughhan1 commented Jan 4, 2026

Summary

  • The run_tests() function now returns exit code 1 when collection errors occur
  • Previously, errors were displayed but execution continued if some tests were collected
  • This matches the behavior of the main CLI entry point (main_cli_with_args)

Test plan

  • Added integration test for collection errors with enum subclass scenario
  • All existing tests pass (102 Rust tests, 48 Python collection integration tests)

The run_tests() function now returns exit code 1 when there are collection errors, matching the behavior of the main CLI entry point. Previously, collection errors were displayed but execution would continue if some tests were successfully collected, potentially masking issues like test classes that inherit from enum.Enum or have syntax errors.
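The fix can be sketched as follows. This is an illustrative reconstruction, not the project's actual code: the `CollectionOutcome` struct and its fields are hypothetical, and only the behavior described above (any collection error forces exit code 1, even when some tests were collected) is taken from the PR.

```rust
// Hypothetical types standing in for the runner's real collection result.
struct CollectionOutcome {
    collected: Vec<String>, // tests that were successfully collected
    errors: Vec<String>,    // collection errors (e.g. a test class inheriting enum.Enum)
}

fn run_tests(outcome: &CollectionOutcome) -> i32 {
    for err in &outcome.errors {
        eprintln!("collection error: {err}");
    }
    // Before this fix, execution continued whenever at least one test was
    // collected, so a partial collection could still exit 0 and mask the
    // error. Now any collection error short-circuits with exit code 1,
    // matching main_cli_with_args.
    if !outcome.errors.is_empty() {
        return 1;
    }
    // ... run `outcome.collected` and derive the exit code from results ...
    0
}

fn main() {
    let outcome = CollectionOutcome {
        collected: vec!["test_ok".into()],
        errors: vec!["test class inherits from enum.Enum".into()],
    };
    // Exits non-zero despite one test having been collected.
    std::process::exit(run_tests(&outcome));
}
```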
@hughhan1 hughhan1 merged commit b9c9e85 into main Jan 4, 2026
25 checks passed
@hughhan1 hughhan1 deleted the fix/collection-error-exit-code branch January 4, 2026 02:11

github-actions Bot commented Jan 4, 2026

Benchmark Comparison Report

Comparing current results against baseline from main branch.

| Repository | Benchmark | Baseline | Current | Change | Status |
| --- | --- | --- | --- | --- | --- |
| click | CLI startup time | 36.9ms | 37.4ms | +1.3% | [-] |
| click | Test discovery performance | 45.3ms | 45.4ms | +0.1% | [-] |
| flask | CLI startup time | 35.4ms | 35.1ms | -1.0% | [-] |
| flask | Test discovery performance | 43.3ms | 45.1ms | +4.2% | [-] |
| flask | Test execution with pytest runner | 1.72s | 1.75s | +1.4% | [-] |
| more-itertools | CLI startup time | 31.7ms | 31.9ms | +0.4% | [-] |
| more-itertools | Test discovery performance | 43.2ms | 44.0ms | +1.9% | [-] |
| more-itertools | Test execution with native runner | 3.28s | 3.40s | +3.7% | [-] |
| more-itertools | Test execution with pytest runner | 3.90s | 3.04s | -21.9% | [++] |
| pydantic | CLI startup time | 28.6ms | 33.5ms | +17.0% | [!!] |
| pydantic | Test discovery performance | 127.2ms | 141.9ms | +11.6% | [!!] |

Summary

  • Total benchmarks: 11
  • Regressions (>5.0% slower): 2
  • Improvements (>5.0% faster): 1
  • No significant change: 8

Performance regressions detected!
Please review the results above to ensure these are expected.

Legend

  • [++] Significant improvement (>10.0% faster)
  • [+] Minor improvement (5.0-10.0% faster)
  • [-] No significant change (±5.0%)
  • [!] Minor regression (5.0-10.0% slower)
  • [!!] Significant regression (>10.0% slower)
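The legend's bucketing can be sketched as a small classifier. This is an assumption about how the bot derives its status markers, not its actual implementation; the `percent_change` and `status` names are hypothetical, and only the 5%/10% thresholds come from the legend above. Boundary handling at exactly ±5% or ±10% is a guess.

```rust
// Percent change relative to baseline: positive means slower (regression).
fn percent_change(baseline: f64, current: f64) -> f64 {
    (current - baseline) / baseline * 100.0
}

// Map a percent change onto the legend's status markers.
// Exact behavior at the 5%/10% boundaries is an assumption.
fn status(change: f64) -> &'static str {
    if change <= -10.0 {
        "[++]" // significant improvement (>10.0% faster)
    } else if change <= -5.0 {
        "[+]"  // minor improvement (5.0-10.0% faster)
    } else if change < 5.0 {
        "[-]"  // no significant change (within ±5.0%)
    } else if change < 10.0 {
        "[!]"  // minor regression (5.0-10.0% slower)
    } else {
        "[!!]" // significant regression (>10.0% slower)
    }
}

fn main() {
    // pydantic CLI startup time: 28.6ms baseline -> 33.5ms current.
    let change = percent_change(28.6, 33.5);
    println!("{}", status(change)); // classified as [!!]
}
```

For example, the more-itertools pytest-runner change (3.90s down to 3.04s) lands in the `[++]` bucket, matching the table above.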
