OCPBUGS-18954: Ensure status reporter caches exit if they don't sync #285
Conversation
@JoelSpeed: This pull request references Jira Issue OCPBUGS-18954, which is invalid. The bug has been updated to refer to the pull request using the external bug tracker.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/jira refresh
@JoelSpeed: This pull request references Jira Issue OCPBUGS-18954, which is valid. The bug has been moved to the POST state. 3 validations were run on this bug. Requesting review from QA contact.
Force-pushed from 88c73fb to e05ec26
/retest
/retest
/retest
Makes sense to me; I have a question about one of your comments, though.
Regarding "If we start seeing this timeout hit repeatedly": how will we know? I'm just curious what the signal from CI will look like.
/lgtm
/approve
CI already has a check that will report if a pod has restarted many times, so if it becomes an issue and the pod is restarting every 30 seconds, the existing suite should tell us.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: elmiko. The full list of commands accepted by this bot can be found here. The pull request process is described here.
@JoelSpeed: all tests passed! Full PR test history. Your PR dashboard.
@JoelSpeed: Jira Issue OCPBUGS-18954: All pull requests linked via external trackers have merged. Jira Issue OCPBUGS-18954 has been moved to the MODIFIED state.
Fix included in accepted release 4.15.0-0.nightly-2023-09-27-073353
/cherry-pick release-4.14
@JoelSpeed: new pull request created: #292
We are sometimes seeing that the Cluster Autoscaler status never gets reported. Looking at the logic, it's possible here that we wait forever for the informers to sync; if they never actually sync, the status won't get reported.
I'm not sure why they wouldn't sync, but having a timeout seems a reasonable way to prevent this infinite hang.
If we start seeing this timeout hit repeatedly, then my theory is correct, and we can then investigate why the informers aren't syncing correctly.
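For illustration, here is a minimal sketch of the kind of bounded cache-sync wait described above, using client-go's cache.WaitForCacheSync with a timeout-scoped context. The package name, function name, and the specific timeout are assumptions for the example, not the operator's actual code.

```go
// Hypothetical sketch only: the package, function, and timeout below are
// illustrative assumptions, not the operator's actual identifiers.
package statusreporter

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/tools/cache"
)

// waitForCachesOrExit waits for the given informer caches to sync, but gives up
// after syncTimeout instead of blocking forever. Returning an error lets the
// caller exit, so the pod restarts rather than silently never reporting status.
func waitForCachesOrExit(ctx context.Context, syncTimeout time.Duration, synced ...cache.InformerSynced) error {
	ctx, cancel := context.WithTimeout(ctx, syncTimeout)
	defer cancel()

	if !cache.WaitForCacheSync(ctx.Done(), synced...) {
		return fmt.Errorf("informer caches did not sync within %v", syncTimeout)
	}
	return nil
}
```

With something along these lines, a sync failure surfaces as an exit and pod restart, which is exactly the restart signal the review discussion above says CI would pick up.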