[WIP] Increase pod-network-availability timeout to 15 minutes #31027
Neha-dot-Yadav wants to merge 1 commit into openshift:main
Conversation
@Neha-dot-Yadav: This pull request references Jira Issue OCPBUGS-83579, which is invalid.
The bug has been updated to refer to the pull request using the external bug tracker.
CodeRabbit: no actionable comments were generated in the recent review.

Walkthrough: the polling interval in the pod network availability check was increased from 1s to 5s, and the maximum wait duration was extended from 300 seconds to 15 minutes.

Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~2 minutes. All 10 pre-merge checks passed.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: Neha-dot-Yadav. It still needs approval from an approver for each of the affected files.
🧹 Nitpick comments (1)
pkg/monitortests/network/disruptionpodnetwork/monitortest.go (1)
181-181: Timeout bump looks reasonable; consider a longer poll interval.

The 15-minute timeout comfortably covers the observed ~12-minute PowerVS container startup, and is consistent with neighboring timeouts in `disruptionserviceloadbalancer/monitortest.go` (20m/60m). One minor consideration: polling every 1 second for up to 15 minutes can issue ~900 `EndpointSlices.List` calls against the apiserver in the worst case. A slightly larger interval (e.g., 5s) would meaningfully reduce API load while barely affecting detection latency, matching patterns used elsewhere in the package (e.g., the 15s/20m poll in `Cleanup` at line 308).

♻️ Optional tweak

```diff
-	err = wait.PollUntilContextTimeout(ctx, 1*time.Second, 15*time.Minute, true, pna.serviceHasEndpoints)
+	err = wait.PollUntilContextTimeout(ctx, 5*time.Second, 15*time.Minute, true, pna.serviceHasEndpoints)
```
@Neha-dot-Yadav: No Jira issue is referenced in the title of this pull request.
Force-pushed from 9a07cca to 1913f76; required tests were scheduled.
@Neha-dot-Yadav: The following tests failed. Full PR test history and the PR dashboard are linked from the bot comment.
The pod-network-availability monitor tests are consistently failing on PowerVS infrastructure during the collection phase. Containers are stuck in ContainerCreating state, causing timeouts.
Failing tests:
[Monitor:pod-network-availability][Jira:"Network / ovn-kubernetes"] monitor test pod-network-availability collection
[Monitor:pod-network-availability][Jira:"Network / ovn-kubernetes"] monitor test pod-network-availability cleanup
Root Cause
Analysis of deployment timestamps from test runs shows that containers are taking approximately 12 minutes to start on PowerVS, exceeding the current 5-minute (300 second) timeout.
Evidence from deployment timestamps:
4.22 Test Run:
pod-network-to-host-network-disruption-poller:
Started: 2026-04-15T06:03:17Z
Finished: 2026-04-15T06:15:15Z
Duration: ~12 minutes
host-network-to-host-network-disruption-poller:
Started: 2026-04-15T06:03:17Z
Finished: 2026-04-15T06:15:15Z
Duration: ~12 minutes
Test run links:
4.22: https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/logs/periodic-ci-openshift-multiarch-main-nightly-4.22-ocp-e2e-ovn-powervs-capi-multi-p-p/2044264624701837312/
4.21: https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/logs/periodic-ci-openshift-multiarch-main-nightly-4.21-ocp-e2e-ovn-powervs-capi-multi-p-p/2043404000346247168/
Workaround:
This PR increases the service endpoint wait timeout from 5 minutes to 15 minutes to accommodate the slower container startup times observed on PowerVS infrastructure.
Related
Bug: https://redhat.atlassian.net/browse/OCPBUGS-83579
Similar fix: #29970 (OCPBUGS-58354)