e2e node: bump all nodes ready timeout #116947
Conversation
Please note that we're already in Test Freeze. Fast forwards are scheduled to happen every 6 hours, whereas the most recent run was: Mon Mar 27 10:25:06 UTC 2023.
/sig node
Force-pushed from 9b7a366 to 86e1f4b
Did something recently start flaking that makes us need this change?
@rphillips can you provide more explanation of why it is needed?
@bobbypage @SergeyKanzhelev I updated the description to explain why. OpenShift is seeing variability across various clouds and networking providers and is carrying a 7-minute patch like this one. Thoughts on including this upstream? Note: we are not seeing issues in upstream k8s.
> @bobbypage @SergeyKanzhelev I updated the description to explain why. OpenShift is seeing variability across various clouds and networking providers and is carrying a 7-minute patch like this one. Thoughts on including this upstream?
> Note: we are not seeing issues in upstream k8s.
I'm fine with this.
/lgtm
/approve
LGTM label has been added. Git tree hash: a89e65964c13cb18b5ae514d6f487d0c0f7234f9
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: dims, mrunalp, rphillips, SergeyKanzhelev. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
The Kubernetes project has merge-blocking tests that are currently too flaky to consistently pass. This bot retests PRs for certain kubernetes repos according to the following rules:
You can:
/retest
What type of PR is this?
/kind flake
What this PR does / why we need it:
This PR bumps the node ready timeout from 3 minutes to 7 minutes.
Edit 4/26/2023 - In OpenShift we see large variability across cloud providers in how long it takes for all nodes to become ready. We are carrying a patch that bumps the timeout to 7 minutes, which seems to cover all the providers we are testing.
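For context, a minimal sketch of what such a timeout bump can look like in Go e2e test code is shown below. The package, function, and constant names (`waitForAllNodesReady`, `allNodesReadyTimeout`) are hypothetical and are not taken from this PR's diff; the sketch only illustrates raising an all-nodes-ready poll timeout from 3 to 7 minutes using client-go.

```go
package e2enode

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	clientset "k8s.io/client-go/kubernetes"
)

// allNodesReadyTimeout was previously 3 * time.Minute; bumped to tolerate
// slower clouds and networking providers (hypothetical name, for illustration).
const allNodesReadyTimeout = 7 * time.Minute

// waitForAllNodesReady polls until every node reports a Ready condition of
// True, or the timeout expires.
func waitForAllNodesReady(ctx context.Context, c clientset.Interface) error {
	return wait.PollImmediateWithContext(ctx, 10*time.Second, allNodesReadyTimeout,
		func(ctx context.Context) (bool, error) {
			nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
			if err != nil {
				return false, err
			}
			for _, node := range nodes.Items {
				if !isNodeReady(&node) {
					return false, nil
				}
			}
			return true, nil
		})
}

// isNodeReady reports whether the node's Ready condition is True.
func isNodeReady(node *v1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodeReady {
			return cond.Status == v1.ConditionTrue
		}
	}
	return false
}
```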
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: