feat(testsuite): introduce pytest-rerunfailures plugin #948
silvi-t wants to merge 1 commit into Kuadrant:main from
Conversation
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (70)
✅ Files skipped from review due to trivial changes (37)
🚧 Files skipped from review as they are similar to previous changes (29)
📝 Walkthrough
Adds pytest rerun support: installs pytest-rerunfailures, enables
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
rect rgba(200,200,255,0.5)
actor Developer
end
rect rgba(200,255,200,0.5)
participant Makefile
participant Pytest as "pytest (+rerun plugin)"
end
rect rgba(255,200,200,0.5)
participant Test as "Test Function"
participant Conftest as "conftest.py (collector)"
participant JUnit as "JUnit XML"
end
Developer->>Makefile: invoke `make test`
Makefile->>Pytest: run `python -m pytest --reruns 3 ...`
Pytest->>Test: execute test
alt test fails
Test-->>Pytest: failure
Pytest->>Conftest: emit report (failed attempt)
Conftest-->>Pytest: record attempt metadata
Pytest->>Pytest: plugin schedules rerun (up to 3)
Pytest->>Test: rerun test
end
Pytest->>Conftest: final teardown report
Conftest->>JUnit: write aggregated rerun messages -> `user_properties`
```
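The retry flow in the diagram can be approximated as a plain-Python loop. This is a rough sketch of the semantics only, not pytest-rerunfailures' actual implementation; `run_with_reruns` and its return values are illustrative names, not the plugin's API:

```python
import time


def run_with_reruns(test_fn, reruns=3, reruns_delay=0.0):
    """Run test_fn; on failure, sleep reruns_delay seconds and retry,
    up to `reruns` extra attempts. Returns (total_runs, failure_messages)
    on eventual success, mirroring what a conftest collector could record."""
    failures = []
    for attempt in range(reruns + 1):
        try:
            test_fn()
            return attempt + 1, failures
        except AssertionError as exc:
            failures.append(str(exc))  # per-attempt message, as the conftest aggregates
            if attempt < reruns:
                time.sleep(reruns_delay)
    raise AssertionError(f"failed after {reruns + 1} attempts: {failures}")


# A test that fails once (e.g. a rate-limit counter not yet reset), then passes.
state = {"calls": 0}


def flaky_test():
    state["calls"] += 1
    if state["calls"] < 2:
        raise AssertionError("expected 429, got 200")


runs, failures = run_with_reruns(flaky_test, reruns=3, reruns_delay=0.0)
print(runs, failures)  # 2 ['expected 429, got 200']
```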
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related issues
Possibly related PRs
Suggested labels
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ Passed checks (5 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
testsuite/tests/singlecluster/extensions/plan_policy/test_plan_policy.py (1)
81-90: ⚠️ Potential issue | 🟠 Major

Do not rerun the daily-limit parametrisation after only 25 seconds.

Line 90 applies the same 25-second retry delay to every plan, but `bronze` uses the `daily` limit and this test consumes the counter before asserting `429`. If that parametrisation fails after quota exhaustion, the rerun can start with the counter still exhausted and fail immediately on the expected `200` requests. Either split the daily plan into a `reruns=0` test, or disable reruns for this parametrised test.

Safer minimal fix:

```diff
-@pytest.mark.flaky(reruns=3, reruns_delay=25)
+@pytest.mark.flaky(reruns=0)
 def test_plan_policy(client, user_with_plan, allowed_requests):
```

As per coding guidelines, add `@pytest.mark.flaky(reruns=3, reruns_delay=<window+5>)` to rate limit tests that exhaust a counter and assert 429, setting the delay to the rate limit window + 5 seconds.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testsuite/tests/singlecluster/extensions/plan_policy/test_plan_policy.py` around lines 81 - 90, The flaky retry on the parametrised rate-limit test causes false failures for plans that exhaust a daily counter (e.g., PLANS['bronze']); update the pytest.mark.flaky on this test (the decorator applied above the parametrisation) to avoid short reruns: either remove/disable retries by setting reruns=0, or split the daily-limit case into its own test with no retries; if you keep retries for other plans follow the guideline and set reruns_delay to the rate-limit window + 5 seconds (use the plan window value from PLANS for the affected plan) so the rerun won’t start while the counter is still exhausted.

testsuite/tests/singlecluster/gateway/mtls/test_mtls_behaviour.py (1)
37-92: ⚠️ Potential issue | 🟠 Major

Disable reruns for these state-mutating mTLS tests.

These tests change the shared Kuadrant mTLS configuration, so retrying after a mid-test failure can start the next attempt from an altered control-plane state. Prefer opting them out of global reruns unless each attempt is fully reset before retry.

Suggested fix:

```diff
 @pytest.mark.parametrize("component", component_cases, indirect=True)
-@pytest.mark.flaky(reruns=3, reruns_delay=15)
+@pytest.mark.flaky(reruns=0)
 def test_requests_succeed_when_mtls_disabled(
@@
 @pytest.mark.parametrize("component", component_cases, indirect=True)
-@pytest.mark.flaky(reruns=3, reruns_delay=15)
+@pytest.mark.flaky(reruns=0)
 def test_requests_succeed_when_mtls_enabled(
@@
 @pytest.mark.parametrize("component", component_cases, indirect=True)
-@pytest.mark.flaky(reruns=3, reruns_delay=15)
+@pytest.mark.flaky(reruns=0)
 def test_requests_still_succeed_after_mtls_disabled_again(
```

As per coding guidelines, add `@pytest.mark.flaky(reruns=0)` to tests that delete or modify module-scoped fixtures to prevent rerun failures.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testsuite/tests/singlecluster/gateway/mtls/test_mtls_behaviour.py` around lines 37 - 92, The three tests test_requests_succeed_when_mtls_disabled, test_requests_succeed_when_mtls_enabled, and test_requests_still_succeed_after_mtls_disabled_again mutate shared mTLS state and should opt out of reruns; replace their `@pytest.mark.flaky`(reruns=3, reruns_delay=15) decorators with `@pytest.mark.flaky`(reruns=0) (remove reruns_delay) so each test will not be retried and cannot start a rerun from a modified control-plane state.

testsuite/tests/singlecluster/limitador/section/test_multiple_same_listener.py (1)
21-26: ⚠️ Potential issue | 🟡 Minor

Use the 5s counter window for this rerun delay.

This test exhausts the `Limit(3, "5s")` counter, so the delay should be 5 + 5 = 10 seconds rather than 15.

Suggested fix:

```diff
-@pytest.mark.flaky(reruns=3, reruns_delay=15)
+@pytest.mark.flaky(reruns=3, reruns_delay=10)
 def test_two_limits_targeting_one_gateway_listener(client):
```

As per coding guidelines, add `@pytest.mark.flaky(reruns=3, reruns_delay=<window+5>)` to rate limit tests that exhaust a counter and assert 429, setting the delay to the rate limit window + 5 seconds.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testsuite/tests/singlecluster/limitador/section/test_multiple_same_listener.py` around lines 21 - 26, The flaky rerun delay for test_two_limits_targeting_one_gateway_listener is too long; update the pytest.mark.flaky decorator on the test function test_two_limits_targeting_one_gateway_listener to use reruns_delay=10 (the 5s rate-limit window + 5s buffer) instead of 15 so the reruns align with the exhausted Limit(3, "5s") counter.

testsuite/tests/singlecluster/limitador/storage/db/test_redis_cached.py (1)
21-26: ⚠️ Potential issue | 🟡 Minor

`reruns=0` on `test_basic` deviates from the rate-limit guideline.

`test_basic` is a classic "exhaust counter → assert 429" rate-limit test. As per coding guidelines, such tests should use `@pytest.mark.flaky(reruns=3, reruns_delay=<window+5>)` so a stale counter from a previous attempt has time to reset, rather than disabling reruns entirely. `reruns=0` is intended for tests that delete/modify a module-scoped fixture, which is not the case here (the `storage`/`limitador`/`rate_limit` fixtures are left intact). Consider switching to the delayed-rerun form using the configured `LIMIT` window.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testsuite/tests/singlecluster/limitador/storage/db/test_redis_cached.py` around lines 21 - 26, The test_basic decorator currently disables reruns; change it to use flaky with delayed reruns to avoid stale counters: replace `@pytest.mark.flaky`(reruns=0) on test_basic with `@pytest.mark.flaky`(reruns=3, reruns_delay=LIMIT.window + 5) (use the LIMIT/window constant referenced in the test file) so the test will retry up to 3 times with a delay of the rate-limit window plus 5 seconds between attempts.
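The window-plus-five rule these comments keep citing can be written as a tiny helper. This is a sketch only (the testsuite itself hard-codes the computed numbers in each marker), and it assumes a seconds-only Limitador-style duration string such as `"10s"`:

```python
def rerun_delay(window: str) -> int:
    """Rate-limit window + 5 s buffer, per the coding guideline.
    Only seconds-suffixed windows like "10s" are handled in this sketch."""
    if not window.endswith("s") or not window[:-1].isdigit():
        raise ValueError(f"unsupported window: {window!r}")
    return int(window[:-1]) + 5


# The marker would then read, for example:
#   @pytest.mark.flaky(reruns=3, reruns_delay=rerun_delay("10s"))
print([rerun_delay(w) for w in ("5s", "10s", "15s")])  # [10, 15, 20]
```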
🧹 Nitpick comments (2)
testsuite/tests/singlecluster/limitador/test_basic_limit.py (1)
12-18: Use parameter-specific rerun delays for the shorter windows.

The function-level `reruns_delay=20` is correct for `15s`, but the `10s` and `5s` cases only need 15s and 10s respectively. Moving the flaky marks to each `pytest.param` keeps reruns aligned without adding avoidable CI wait time.

♻️ Proposed refactor:

```diff
 @pytest.fixture(
     scope="module",
     params=[
-        pytest.param(Limit(2, "15s"), id="2 requests every 15 sec", marks=[pytest.mark.smoke]),
-        pytest.param(Limit(5, "10s"), id="5 requests every 10 sec"),
-        pytest.param(Limit(3, "5s"), id="3 request every 5 sec"),
+        pytest.param(
+            Limit(2, "15s"),
+            id="2 requests every 15 sec",
+            marks=[pytest.mark.smoke, pytest.mark.flaky(reruns=3, reruns_delay=20)],
+        ),
+        pytest.param(
+            Limit(5, "10s"),
+            id="5 requests every 10 sec",
+            marks=[pytest.mark.flaky(reruns=3, reruns_delay=15)],
+        ),
+        pytest.param(
+            Limit(3, "5s"),
+            id="3 request every 5 sec",
+            marks=[pytest.mark.flaky(reruns=3, reruns_delay=10)],
+        ),
     ],
 )
@@
 @pytest.mark.parametrize("rate_limit", ["route", "gateway"], indirect=True)
-@pytest.mark.flaky(reruns=3, reruns_delay=20)
 def test_limit(client, limit):
```

As per coding guidelines, `testsuite/tests/**/*.py`: add `@pytest.mark.flaky(reruns=3, reruns_delay=<window+5>)` to rate limit tests that exhaust a counter and assert 429, setting the delay to the rate limit window + 5 seconds.

Also applies to: 32-34
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testsuite/tests/singlecluster/limitador/test_basic_limit.py` around lines 12 - 18, The parametrized rate-limit cases should use per-parameter flaky rerun delays instead of a single function-level reruns_delay; update each pytest.param that constructs Limit(...) to include pytest.mark.flaky with reruns=3 and reruns_delay set to the limit window + 5s (so Limit(2, "15s") gets reruns_delay=20, Limit(5, "10s") gets 15, Limit(3, "5s") gets 10), moving/removing any global `@pytest.mark.flaky` on the test and apply the same per-param flaky marks to the other pytest.param occurrences that exhaust counters/assert 429.

Makefile (1)
12: `--reruns 3` now applies to the `disruptive` target too, which is probably undesirable.

Disruptive tests (pod rollouts, storage restarts, etc.) deliberately perturb cluster state; automatically rerunning them on failure can mask genuine regressions and extend their already-long runtime. The `collect` target already overrides `PYTEST` to drop `--reruns`; consider doing the same for `disruptive` (or making `--reruns 3` opt-in per target) so disruptive runs stay deterministic.

♻️ Possible approach:

```diff
 disruptive: poetry-no-dev ## Run disruptive tests
-	$(PYTEST) -m 'disruptive' $(flags) testsuite/tests/
+	$(PYTEST) --reruns 0 -m 'disruptive' $(flags) testsuite/tests/
```

(Individual disruptive tests such as `test_durability` already set `@pytest.mark.flaky(reruns=0)`, but blanketing the target is safer and avoids relying on every disruptive test being annotated.)

Also applies to: 65-66
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Makefile` at line 12, The PYTEST variable currently includes a global "--reruns 3" which also affects the disruptive target; update the Makefile so the disruptive target overrides PYTEST (like the collect target does) to remove "--reruns 3" (or set it to empty/omit reruns) so disruptive runs remain deterministic; locate the PYTEST definition and the disruptive target (and related lines ~65-66) and add a target-specific PYTEST assignment without "--reruns 3" (or make reruns opt-in per target) to ensure only non-disruptive runs use retry behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@testsuite/tests/conftest.py`:
- Around line 90-95: In _collect_rerun_attempt, avoid overwriting
report.longrepr with the plain string; remove the assignment "report.longrepr =
message" and only record the formatted message into item.rerun_messages (already
done by "item.rerun_messages = getattr(item, 'rerun_messages', []) +
[message]"), and ensure any downstream writing to user_properties or RP
properties uses item.rerun_messages rather than replacing report.longrepr so
structured longrepr objects remain intact for other reporters/plugins.
- Around line 67-87: The docstring for _format_failure_message is inaccurate and
the current traceback filter is too broad: instead of excluding any frame with
"site-packages", change the filter to only drop pytest/pluggy internal frames
(e.g., check for substrings like "/_pytest/" or "pluggy") when iterating
call.excinfo.traceback so third‑party library frames are preserved; also update
the function docstring to match the new behavior (or, if you prefer to keep the
current broad behavior, update the docstring to state that frames under
"site-packages" are removed). Ensure you modify the loop that builds tb_lines
and the docstring text accordingly in _format_failure_message.
- Around line 97-105: The slicing logic using rerun_prev_output is incorrect
because report.sections contains only the current attempt's sections; replace
the slice with storing the full captured_output for each attempt: compute
captured_output from report.sections as currently done, then append
captured_output directly to item.rerun_outputs (e.g. item.rerun_outputs =
getattr(item, "rerun_outputs", []) + [captured_output]) and remove any use or
assignment of item.rerun_prev_output and the prev_output variable.
In `@testsuite/tests/singlecluster/gateway/scaling/test_auto_scale_gateway.py`:
- Around line 55-58: The test decorator on test_auto_scale_gateway currently
uses pytest.mark.flaky(reruns=3, reruns_delay=10) which can mask scaling
behavior because the module-scoped hpa/gateway deployment is mutated across
retries; change the flaky marker to pytest.mark.flaky(reruns=0) (remove
reruns_delay) so the test is not retried, preserving the initial pre-scale
assertions and ensuring gateway.deployment.wait_for_replicas(2) reflects a true
scaling event triggered by the test rather than a prior run.
In `@testsuite/tests/singlecluster/gateway/scaling/test_manual_scale_gateway.py`:
- Line 13: The flaky marker on this stateful test should not allow retries
because the test mutates gateway.deployment (the gateway's replica count) and
does not restore it; update the pytest.mark.flaky decorator in
test_manual_scale_gateway.py to use reruns=0 (i.e.,
`@pytest.mark.flaky`(reruns=0)) so the test will not be retried, or alternatively
restore the original gateway.deployment.replicas before any retry—locate the
decorator above the test function and change reruns accordingly.
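The conftest behaviour these prompts revolve around (dropping pytest/pluggy internals from the traceback while keeping third-party frames, and surfacing per-attempt messages through JUnit `user_properties`) can be approximated with two pure helpers. The filtering substrings follow the review's suggestion, but the helper names and the `rerun_N` property naming are assumptions, not the PR's actual code:

```python
def filter_internal_frames(frame_paths):
    """Drop pytest/pluggy internal frames but keep third-party library
    frames, as the review suggests for _format_failure_message."""
    return [p for p in frame_paths if "/_pytest/" not in p and "pluggy" not in p]


def attach_rerun_messages(user_properties, rerun_messages):
    """Append one ("rerun_N", message) pair per failed attempt; pytest's
    junitxml plugin serialises user_properties as <property> elements."""
    for i, message in enumerate(rerun_messages, start=1):
        user_properties.append((f"rerun_{i}", message))
    return user_properties


frames = [
    "site-packages/_pytest/python.py",        # pytest internal: dropped
    "site-packages/pluggy/_callers.py",       # pluggy internal: dropped
    "site-packages/requests/sessions.py",     # third-party frame: preserved
    "testsuite/tests/test_limit.py",          # test code: preserved
]
print(filter_internal_frames(frames))
print(attach_rerun_messages([], ["attempt 1 failed", "attempt 2 failed"]))
```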
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c9e17c82-006e-44da-ac36-846cb7163067
📒 Files selected for processing (68)
- CLAUDE.md
- Makefile
- pyproject.toml
- testsuite/component_metadata.py
- testsuite/kubernetes/__init__.py
- testsuite/tests/conftest.py
- testsuite/tests/kuadrantctl/cli/test_simple_limit.py
- testsuite/tests/multicluster/coredns/two_clusters/test_authoritative_record_removal.py
- testsuite/tests/multicluster/coredns/two_clusters/test_update_secondary.py
- testsuite/tests/multicluster/global_rate_limiting/test_global_rate_limit.py
- testsuite/tests/multicluster/load_balanced/test_change_default_geo.py
- testsuite/tests/multicluster/load_balanced/test_change_strategy.py
- testsuite/tests/singlecluster/authorino/identity/api_key/test_reconciliation.py
- testsuite/tests/singlecluster/authorino/operator/sharding/test_preexisting_auth.py
- testsuite/tests/singlecluster/defaults/merge/rate_limit/same_target/test_ab_strategy.py
- testsuite/tests/singlecluster/defaults/merge/rate_limit/same_target/test_ba_startegy.py
- testsuite/tests/singlecluster/defaults/merge/rate_limit/test_default_merge.py
- testsuite/tests/singlecluster/defaults/merge/rate_limit/test_default_replace.py
- testsuite/tests/singlecluster/defaults/test_basic_rate_limit.py
- testsuite/tests/singlecluster/defaults/test_section_targeting.py
- testsuite/tests/singlecluster/egress/test_egress.py
- testsuite/tests/singlecluster/extensions/plan_policy/test_plan_policy.py
- testsuite/tests/singlecluster/gateway/dnspolicy/test_dnspolicy_removal.py
- testsuite/tests/singlecluster/gateway/mtls/test_mtls_behaviour.py
- testsuite/tests/singlecluster/gateway/reconciliation/change_targetref/test_update_authpolicy_target_ref.py
- testsuite/tests/singlecluster/gateway/reconciliation/change_targetref/test_update_dnspolicy_target_ref.py
- testsuite/tests/singlecluster/gateway/reconciliation/change_targetref/test_update_ratelimitpolicy_target_ref.py
- testsuite/tests/singlecluster/gateway/reconciliation/change_targetref/test_update_tlspolicy_target_ref.py
- testsuite/tests/singlecluster/gateway/reconciliation/listeners/test_gateway_basic_listeners.py
- testsuite/tests/singlecluster/gateway/reconciliation/listeners/test_gateway_listeners_dns.py
- testsuite/tests/singlecluster/gateway/reconciliation/test_affected_by.py
- testsuite/tests/singlecluster/gateway/scaling/test_auto_scale_gateway.py
- testsuite/tests/singlecluster/gateway/scaling/test_manual_scale_gateway.py
- testsuite/tests/singlecluster/identical_hostnames/auth/test_auth_on_gw_and_route.py
- testsuite/tests/singlecluster/identical_hostnames/auth/test_auth_on_routes.py
- testsuite/tests/singlecluster/identical_hostnames/rlp/test_rlp_on_gw_and_route.py
- testsuite/tests/singlecluster/identical_hostnames/rlp/test_rlp_on_routes.py
- testsuite/tests/singlecluster/limitador/collisions/test_collisions.py
- testsuite/tests/singlecluster/limitador/method/test_route_subset_method.py
- testsuite/tests/singlecluster/limitador/section/test_listener.py
- testsuite/tests/singlecluster/limitador/section/test_multiple_same_listener.py
- testsuite/tests/singlecluster/limitador/section/test_multiple_same_rule.py
- testsuite/tests/singlecluster/limitador/section/test_route_rule.py
- testsuite/tests/singlecluster/limitador/storage/db/test_redis.py
- testsuite/tests/singlecluster/limitador/storage/db/test_redis_cached.py
- testsuite/tests/singlecluster/limitador/storage/disk/test_disk.py
- testsuite/tests/singlecluster/limitador/test_basic_limit.py
- testsuite/tests/singlecluster/limitador/test_multiple_iterations.py
- testsuite/tests/singlecluster/limitador/tokens_rate_limit/basic/test_trlp.py
- testsuite/tests/singlecluster/limitador/tokens_rate_limit/basic/test_trlp_streaming.py
- testsuite/tests/singlecluster/limitador/tokens_rate_limit/multiple_iterations/test_multiple_trlp_iterations.py
- testsuite/tests/singlecluster/limitador/tokens_rate_limit/multiple_iterations/test_multiple_trlp_stream_iterations.py
- testsuite/tests/singlecluster/observability/test_kuadrant_policy_metrics_lifecycle.py
- testsuite/tests/singlecluster/overrides/merge/rate_limit/same_target/test_ab_strategy.py
- testsuite/tests/singlecluster/overrides/merge/rate_limit/same_target/test_ba_startegy.py
- testsuite/tests/singlecluster/overrides/merge/rate_limit/test_override_merge.py
- testsuite/tests/singlecluster/overrides/merge/rate_limit/test_override_replace.py
- testsuite/tests/singlecluster/overrides/test_basic_rate_limit.py
- testsuite/tests/singlecluster/overrides/test_multiple_rate_limit.py
- testsuite/tests/singlecluster/overrides/test_section_targeting.py
- testsuite/tests/singlecluster/reconciliation/test_httproute_delete.py
- testsuite/tests/singlecluster/test_rate_limit_anonymous.py
- testsuite/tests/singlecluster/test_rate_limit_authz.py
- testsuite/tests/singlecluster/tracing/control_plane/test_control_plane_errors.py
- testsuite/tests/singlecluster/tracing/control_plane/test_control_plane_lifecycle.py
- testsuite/tests/singlecluster/ui/console_plugin/policies/dnstls/test_tls_policy.py
- testsuite/tests/singlecluster/ui/console_plugin/policies/test_rate_limit_policy.py
- testsuite/tests/singlecluster/ui/console_plugin/policies/test_token_rate_limit_policy.py
539a2d8 to 2fc29bc (Compare)
2fc29bc to 952aaba (Compare)
Actionable comments posted: 1
♻️ Duplicate comments (1)
testsuite/tests/conftest.py (1)
90-98: ⚠️ Potential issue | 🟡 Minor

Keep `report.longrepr` structured for other reporters.

Line 97 still replaces pytest’s structured `longrepr` with a plain string. The formatted message is already stored in `item.rerun_messages`, so removing this assignment preserves compatibility without losing the Report Portal metadata.

Proposed fix:

```diff
 def _collect_rerun_attempt(item, call, report):
     """Record failure message and captured output for one rerun attempt."""
     message = _format_failure_message(call, report)
-    if report.failed and call.excinfo:
-        exc = call.excinfo.value
-        result = getattr(exc, "result", None)
-        if result is not None and callable(getattr(result, "actions", None)):
-            report.longrepr = message
     item.rerun_messages = getattr(item, "rerun_messages", []) + [message]
```

To check for local consumers that may inspect `longrepr` structurally:

```shell
#!/bin/bash
# Description: Search local pytest reporting hooks and longrepr consumers.
# Expected: Any custom reporter using structured longrepr attributes should be
# reviewed before replacing longrepr with str.
rg -nP -C3 '\blongrepr\b|pytest_runtest_logreport|pytest_report_teststatus' --glob '*.py'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testsuite/tests/conftest.py` around lines 90 - 98, In _collect_rerun_attempt, stop overwriting pytest's structured report.longrepr with a plain formatted string: remove the assignment "report.longrepr = message" (or guard it so it only runs if report.longrepr is None) and leave the rest intact so the human-friendly message remains in item.rerun_messages (via _format_failure_message) while preserving the original structured longrepr for other reporters and consumers.
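The compatibility concern can be seen with minimal stand-in classes. Pytest's real structured object is an exception representation carrying a `reprcrash` attribute; the classes below are simplified placeholders, not pytest's types:

```python
class ReprCrash:
    """Stand-in for the reprcrash attribute reporters read."""
    def __init__(self, path, lineno, message):
        self.path, self.lineno, self.message = path, lineno, message


class LongRepr:
    """Stand-in for pytest's structured longrepr object."""
    def __init__(self, reprcrash):
        self.reprcrash = reprcrash


structured = LongRepr(ReprCrash("test_x.py", 42, "assert 429 == 200"))
print(structured.reprcrash.lineno)      # attribute access works: 42

# What overwriting longrepr with a formatted message produces:
flattened = "AssertionError: assert 429 == 200"
print(hasattr(flattened, "reprcrash"))  # False: downstream attribute access breaks
```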
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.claude/rules/test-reruns.md:
- Around line 7-13: The assertion in test_policy_deletion treats policy.exists()
as a boolean but exists() returns a tuple; update the test to assert the first
element of the tuple (the committed flag) instead of the full tuple — e.g.,
check policy.exists()[0] or use KubernetesObject.committed if available; modify
the assertion in test_policy_deletion so it asserts not policy.exists()[0] (or
not policy.committed) to correctly verify deletion.
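The pitfall this prompt describes is plain tuple truthiness. A minimal stand-in `Policy` (hypothetical class, mirroring only the reported `(committed, ...)` return shape of `exists()`) shows why the original assertion always passes:

```python
class Policy:
    """Hypothetical stand-in: exists() returns a (committed, raw) tuple."""
    def __init__(self, committed):
        self._committed = committed

    def exists(self):
        return (self._committed, {"metadata": {}})


deleted = Policy(committed=False)
print(bool(deleted.exists()))   # True: a non-empty tuple is always truthy
print(deleted.exists()[0])      # False: the committed flag is what to assert
assert not deleted.exists()[0]  # the corrected deletion check
```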
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c6948430-4679-4345-b84c-3c576815d3be
📒 Files selected for processing (69)
- .claude/rules/test-reruns.md
- CLAUDE.md
- Makefile
- pyproject.toml
- testsuite/component_metadata.py
- testsuite/kubernetes/__init__.py
- testsuite/tests/conftest.py
- testsuite/tests/kuadrantctl/cli/test_simple_limit.py
- testsuite/tests/multicluster/coredns/two_clusters/test_authoritative_record_removal.py
- testsuite/tests/multicluster/coredns/two_clusters/test_update_secondary.py
- testsuite/tests/multicluster/global_rate_limiting/test_global_rate_limit.py
- testsuite/tests/multicluster/load_balanced/test_change_default_geo.py
- testsuite/tests/multicluster/load_balanced/test_change_strategy.py
- testsuite/tests/singlecluster/authorino/identity/api_key/test_reconciliation.py
- testsuite/tests/singlecluster/authorino/operator/sharding/test_preexisting_auth.py
- testsuite/tests/singlecluster/defaults/merge/rate_limit/same_target/test_ab_strategy.py
- testsuite/tests/singlecluster/defaults/merge/rate_limit/same_target/test_ba_startegy.py
- testsuite/tests/singlecluster/defaults/merge/rate_limit/test_default_merge.py
- testsuite/tests/singlecluster/defaults/merge/rate_limit/test_default_replace.py
- testsuite/tests/singlecluster/defaults/test_basic_rate_limit.py
- testsuite/tests/singlecluster/defaults/test_section_targeting.py
- testsuite/tests/singlecluster/egress/test_egress.py
- testsuite/tests/singlecluster/extensions/plan_policy/test_plan_policy.py
- testsuite/tests/singlecluster/gateway/dnspolicy/test_dnspolicy_removal.py
- testsuite/tests/singlecluster/gateway/mtls/test_mtls_behaviour.py
- testsuite/tests/singlecluster/gateway/reconciliation/change_targetref/test_update_authpolicy_target_ref.py
- testsuite/tests/singlecluster/gateway/reconciliation/change_targetref/test_update_dnspolicy_target_ref.py
- testsuite/tests/singlecluster/gateway/reconciliation/change_targetref/test_update_ratelimitpolicy_target_ref.py
- testsuite/tests/singlecluster/gateway/reconciliation/change_targetref/test_update_tlspolicy_target_ref.py
- testsuite/tests/singlecluster/gateway/reconciliation/listeners/test_gateway_basic_listeners.py
- testsuite/tests/singlecluster/gateway/reconciliation/listeners/test_gateway_listeners_dns.py
- testsuite/tests/singlecluster/gateway/reconciliation/test_affected_by.py
- testsuite/tests/singlecluster/gateway/scaling/test_auto_scale_gateway.py
- testsuite/tests/singlecluster/gateway/scaling/test_manual_scale_gateway.py
- testsuite/tests/singlecluster/identical_hostnames/auth/test_auth_on_gw_and_route.py
- testsuite/tests/singlecluster/identical_hostnames/auth/test_auth_on_routes.py
- testsuite/tests/singlecluster/identical_hostnames/rlp/test_rlp_on_gw_and_route.py
- testsuite/tests/singlecluster/identical_hostnames/rlp/test_rlp_on_routes.py
- testsuite/tests/singlecluster/limitador/collisions/test_collisions.py
- testsuite/tests/singlecluster/limitador/method/test_route_subset_method.py
- testsuite/tests/singlecluster/limitador/section/test_listener.py
- testsuite/tests/singlecluster/limitador/section/test_multiple_same_listener.py
- testsuite/tests/singlecluster/limitador/section/test_multiple_same_rule.py
- testsuite/tests/singlecluster/limitador/section/test_route_rule.py
- testsuite/tests/singlecluster/limitador/storage/db/test_redis.py
- testsuite/tests/singlecluster/limitador/storage/db/test_redis_cached.py
- testsuite/tests/singlecluster/limitador/storage/disk/test_disk.py
- testsuite/tests/singlecluster/limitador/test_basic_limit.py
- testsuite/tests/singlecluster/limitador/test_multiple_iterations.py
- testsuite/tests/singlecluster/limitador/tokens_rate_limit/basic/test_trlp.py
- testsuite/tests/singlecluster/limitador/tokens_rate_limit/basic/test_trlp_streaming.py
- testsuite/tests/singlecluster/limitador/tokens_rate_limit/multiple_iterations/test_multiple_trlp_iterations.py
- testsuite/tests/singlecluster/limitador/tokens_rate_limit/multiple_iterations/test_multiple_trlp_stream_iterations.py
- testsuite/tests/singlecluster/observability/test_kuadrant_policy_metrics_lifecycle.py
- testsuite/tests/singlecluster/overrides/merge/rate_limit/same_target/test_ab_strategy.py
- testsuite/tests/singlecluster/overrides/merge/rate_limit/same_target/test_ba_startegy.py
- testsuite/tests/singlecluster/overrides/merge/rate_limit/test_override_merge.py
- testsuite/tests/singlecluster/overrides/merge/rate_limit/test_override_replace.py
- testsuite/tests/singlecluster/overrides/test_basic_rate_limit.py
- testsuite/tests/singlecluster/overrides/test_multiple_rate_limit.py
- testsuite/tests/singlecluster/overrides/test_section_targeting.py
- testsuite/tests/singlecluster/reconciliation/test_httproute_delete.py
- testsuite/tests/singlecluster/test_rate_limit_anonymous.py
- testsuite/tests/singlecluster/test_rate_limit_authz.py
- testsuite/tests/singlecluster/tracing/control_plane/test_control_plane_errors.py
- testsuite/tests/singlecluster/tracing/control_plane/test_control_plane_lifecycle.py
- testsuite/tests/singlecluster/ui/console_plugin/policies/dnstls/test_tls_policy.py
- testsuite/tests/singlecluster/ui/console_plugin/policies/test_rate_limit_policy.py
- testsuite/tests/singlecluster/ui/console_plugin/policies/test_token_rate_limit_policy.py
✅ Files skipped from review due to trivial changes (34)
- testsuite/tests/multicluster/coredns/two_clusters/test_authoritative_record_removal.py
- testsuite/tests/singlecluster/defaults/merge/rate_limit/same_target/test_ab_strategy.py
- testsuite/tests/kuadrantctl/cli/test_simple_limit.py
- testsuite/tests/singlecluster/authorino/operator/sharding/test_preexisting_auth.py
- testsuite/tests/singlecluster/limitador/method/test_route_subset_method.py
- testsuite/tests/singlecluster/limitador/test_basic_limit.py
- testsuite/tests/singlecluster/gateway/reconciliation/listeners/test_gateway_listeners_dns.py
- testsuite/tests/singlecluster/gateway/reconciliation/test_affected_by.py
- testsuite/tests/singlecluster/ui/console_plugin/policies/test_token_rate_limit_policy.py
- testsuite/tests/singlecluster/limitador/collisions/test_collisions.py
- testsuite/tests/singlecluster/gateway/dnspolicy/test_dnspolicy_removal.py
- testsuite/tests/singlecluster/identical_hostnames/rlp/test_rlp_on_gw_and_route.py
- testsuite/tests/singlecluster/tracing/control_plane/test_control_plane_lifecycle.py
- testsuite/tests/singlecluster/limitador/tokens_rate_limit/multiple_iterations/test_multiple_trlp_stream_iterations.py
- testsuite/tests/multicluster/load_balanced/test_change_strategy.py
- testsuite/tests/singlecluster/gateway/reconciliation/listeners/test_gateway_basic_listeners.py
- testsuite/tests/singlecluster/egress/test_egress.py
- testsuite/tests/singlecluster/observability/test_kuadrant_policy_metrics_lifecycle.py
- testsuite/tests/singlecluster/test_rate_limit_authz.py
- testsuite/tests/singlecluster/gateway/scaling/test_manual_scale_gateway.py
- testsuite/tests/singlecluster/gateway/scaling/test_auto_scale_gateway.py
- testsuite/tests/singlecluster/identical_hostnames/rlp/test_rlp_on_routes.py
- testsuite/tests/singlecluster/tracing/control_plane/test_control_plane_errors.py
- testsuite/tests/multicluster/coredns/two_clusters/test_update_secondary.py
- testsuite/tests/singlecluster/defaults/merge/rate_limit/test_default_merge.py
- testsuite/tests/singlecluster/ui/console_plugin/policies/test_rate_limit_policy.py
- testsuite/tests/singlecluster/overrides/merge/rate_limit/same_target/test_ab_strategy.py
- testsuite/tests/singlecluster/limitador/section/test_multiple_same_listener.py
- testsuite/tests/singlecluster/limitador/storage/db/test_redis.py
- testsuite/tests/singlecluster/limitador/storage/disk/test_disk.py
- testsuite/tests/singlecluster/reconciliation/test_httproute_delete.py
- testsuite/tests/singlecluster/limitador/storage/db/test_redis_cached.py
- pyproject.toml
- testsuite/tests/multicluster/load_balanced/test_change_default_geo.py
🚧 Files skipped from review as they are similar to previous changes (31)
- testsuite/tests/singlecluster/overrides/test_basic_rate_limit.py
- testsuite/tests/singlecluster/defaults/merge/rate_limit/same_target/test_ba_startegy.py
- testsuite/tests/multicluster/global_rate_limiting/test_global_rate_limit.py
- testsuite/tests/singlecluster/defaults/merge/rate_limit/test_default_replace.py
- testsuite/tests/singlecluster/ui/console_plugin/policies/dnstls/test_tls_policy.py
- testsuite/tests/singlecluster/gateway/reconciliation/change_targetref/test_update_ratelimitpolicy_target_ref.py
- testsuite/tests/singlecluster/limitador/test_multiple_iterations.py
- testsuite/tests/singlecluster/identical_hostnames/auth/test_auth_on_routes.py
- testsuite/tests/singlecluster/defaults/test_basic_rate_limit.py
- testsuite/tests/singlecluster/limitador/section/test_route_rule.py
- testsuite/tests/singlecluster/overrides/test_section_targeting.py
- testsuite/tests/singlecluster/defaults/test_section_targeting.py
- testsuite/tests/singlecluster/gateway/reconciliation/change_targetref/test_update_dnspolicy_target_ref.py
- testsuite/tests/singlecluster/gateway/reconciliation/change_targetref/test_update_authpolicy_target_ref.py
- testsuite/tests/singlecluster/gateway/reconciliation/change_targetref/test_update_tlspolicy_target_ref.py
- testsuite/tests/singlecluster/overrides/merge/rate_limit/same_target/test_ba_startegy.py
- testsuite/tests/singlecluster/limitador/tokens_rate_limit/basic/test_trlp_streaming.py
- testsuite/tests/singlecluster/test_rate_limit_anonymous.py
- testsuite/tests/singlecluster/limitador/section/test_multiple_same_rule.py
- testsuite/tests/singlecluster/limitador/tokens_rate_limit/multiple_iterations/test_multiple_trlp_iterations.py
- testsuite/tests/singlecluster/overrides/merge/rate_limit/test_override_replace.py
- testsuite/tests/singlecluster/extensions/plan_policy/test_plan_policy.py
- testsuite/tests/singlecluster/gateway/mtls/test_mtls_behaviour.py
- testsuite/tests/singlecluster/identical_hostnames/auth/test_auth_on_gw_and_route.py
- Makefile
- testsuite/component_metadata.py
- testsuite/tests/singlecluster/overrides/merge/rate_limit/test_override_merge.py
- testsuite/tests/singlecluster/limitador/section/test_listener.py
- testsuite/tests/singlecluster/authorino/identity/api_key/test_reconciliation.py
- CLAUDE.md
- testsuite/tests/singlecluster/overrides/test_multiple_rate_limit.py
I was scanning through tests that meet the criteria for needing a flaky marker and noticed the test in
Signed-off-by: Silvia Tarabova <starabov@redhat.com>
952aaba to c52bb03
Good catch! I had Claude scan through the rate limit tests to add the flaky markers, and it missed this one. Added
```python
assert response.status_code == 200
```

```python
@pytest.mark.flaky(reruns=3, reruns_delay=10)
```
`reruns=3` is already applied from the pytest arg in the `Makefile`, so what about only specifying the delay? This applies to all the marks, not just this one.
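Since the `Makefile` already passes `--reruns 3` globally, the mark could carry only the delay. Below is a sketch of the reviewer's suggestion; the test name and body are hypothetical, and whether a delay-only marker inherits the CLI rerun count depends on the installed pytest-rerunfailures version, so this should be verified against the plugin's docs before adopting it:

```python
import pytest

# Sketch of the suggestion: keep only the delay in the marker and let the
# global `--reruns 3` from the Makefile supply the retry count.
# (Hypothetical test; not from the PR diff.)
@pytest.mark.flaky(reruns_delay=10)
def test_rate_limit_status():
    ...
```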
Important
This PR modifies shared/core testsuite code that could potentially affect multiple test areas. 2 reviewers should review this PR to ensure adequate coverage.
Description
- Introduced the `pytest-rerunfailures` plugin to automatically retry failed tests (default `--reruns 3`), reducing flaky test noise in CI
- Marked tests with `@pytest.mark.flaky` to either disable reruns (`reruns=0`) for tests that modify module-scoped fixtures, or add an appropriate `reruns_delay` for rate limit tests
- Made `KubernetesObject.commit()` resilient to reruns by falling back to `apply()` when a resource already exists

Per-test rerun markers
`pytest-rerunfailures` only reruns the test function, not module-scoped fixtures. Some tests need special handling:

- `@pytest.mark.flaky(reruns=0)` disables reruns for tests that delete or modify module-scoped resources (e.g., delete a policy, change a targetRef, roll out a deployment). The fixture is not recreated between reruns, so the test cannot recover.
- `@pytest.mark.flaky(reruns=3, reruns_delay=X)` adds a delay between reruns for rate limit tests. When a test exhausts its counter and fails, the counter is still active in Limitador, so a rerun that starts immediately would hit `429` on the very first request. The delay is set to `window + 5s` so the counter fully resets before the next attempt.

Changes
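For illustration, the two per-test marker patterns might look like this (the test names and the 60s `RATE_LIMIT_WINDOW` are hypothetical, not taken from the PR diff):

```python
import pytest

RATE_LIMIT_WINDOW = 60  # assumed rate limit window, in seconds

# The test deletes a module-scoped resource; the fixture is not recreated
# between reruns, so a retry cannot pass and reruns are disabled.
@pytest.mark.flaky(reruns=0)
def test_policy_deletion():
    ...

# Rate limit test: the Limitador counter from a failed attempt is still
# active, so wait out the window (plus a 5s buffer) before the rerun.
@pytest.mark.flaky(reruns=3, reruns_delay=RATE_LIMIT_WINDOW + 5)
def test_rate_limit_window():
    ...
```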
New Plugin & Configuration
- `pytest-rerunfailures` dependency in `pyproject.toml`
- `--reruns 3` default added to the `PYTEST` command in `Makefile`

Core Infrastructure
- `KubernetesObject.commit()` in `testsuite/kubernetes/__init__.py` updated to catch `AlreadyExists` and fall back to `apply()`, so module-scoped resources survive reruns
- `_format_failure_message()` helper in `testsuite/tests/conftest.py` that produces clean tracebacks by filtering out `site-packages` frames and extracting stderr from `OpenShiftPythonException`
- `_collect_rerun_attempt()` and `_write_rerun_properties()` in `testsuite/tests/conftest.py` to record per-attempt failure messages and isolated captured output as `__rp_rerun_*` user properties at teardown
- `pytest_runtest_makereport` hook to call the new rerun helpers on `"call"`/`"setup"` failures and at `"teardown"`

Bug Fix
- `testsuite/component_metadata.py`: `container.image.split("@", 1)[0]` now correctly removes the `@sha256:...` suffix

Documentation
- `CLAUDE.md` updated with guidelines for `@pytest.mark.flaky` usage

Summary by CodeRabbit
New Features
Documentation
Tests
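As a closing illustration, the rerun-resilient `commit()` behavior described under Core Infrastructure can be sketched with stand-in classes (all names below are hypothetical stand-ins; the real change lives in `testsuite/kubernetes/__init__.py` and deals with actual Kubernetes API errors):

```python
class AlreadyExistsError(Exception):
    """Stand-in for the API error raised when the resource already exists."""


class KubernetesObjectSketch:
    def __init__(self):
        self.exists = False
        self.applied = False

    def create(self):
        # Mimics a create call that conflicts on the second attempt.
        if self.exists:
            raise AlreadyExistsError(type(self).__name__)
        self.exists = True

    def apply(self):
        self.applied = True

    def commit(self):
        try:
            self.create()
        except AlreadyExistsError:
            # The resource survived an earlier (failed) attempt; reconcile
            # it with apply() so the rerun can proceed instead of erroring.
            self.apply()


obj = KubernetesObjectSketch()
obj.commit()  # first attempt creates the resource
obj.commit()  # a rerun finds it already exists and falls back to apply()
```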