
OCPBUGS-18954: Ensure status reporter caches exit if they don't sync #285

Merged

Conversation

JoelSpeed
Contributor

We are sometimes seeing that the Cluster Autoscaler status never gets reported. Looking at the logic, it's possible here that we wait forever for the informers to sync; if they never actually sync, the status is never reported.
I'm not sure why they wouldn't sync, but adding a timeout seems a reasonable way to prevent this infinite hang.

If we start seeing this timeout hit repeatedly, then my theory is correct, and we can then investigate why the informers aren't syncing correctly.
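For reference, the change described here amounts to bounding the wait for informer cache sync. Below is a minimal sketch of that pattern, assuming plain client-go informers; the function name, signature, and timeout handling are illustrative assumptions, not the operator's actual code:

```go
package reporter

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/tools/cache"
)

// waitForSyncWithTimeout bounds the wait for informer cache sync. If the
// caches never sync, it returns an error instead of blocking forever, so the
// status reporter can exit (and the pod restart) rather than hang silently.
// (Hypothetical helper for illustration only.)
func waitForSyncWithTimeout(parent context.Context, timeout time.Duration, synced ...cache.InformerSynced) error {
	ctx, cancel := context.WithTimeout(parent, timeout)
	defer cancel()

	// WaitForCacheSync returns false when the stop channel closes first,
	// i.e. when the timeout (or parent cancellation) fires before syncing.
	if !cache.WaitForCacheSync(ctx.Done(), synced...) {
		return fmt.Errorf("timed out after %s waiting for informer caches to sync", timeout)
	}
	return nil
}
```

With a bound like this, a reporter whose caches never sync surfaces an error and exits instead of hanging, turning the failure into a visible restart.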

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Sep 14, 2023
@openshift-ci-robot
Contributor

@JoelSpeed: This pull request references Jira Issue OCPBUGS-18954, which is invalid:

  • expected the bug to target the "4.15.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

We are sometimes seeing that the Cluster Autoscaler status never gets reported. Looking at the logic, it's possible here that we wait forever for the informers to sync; if they never actually sync, the status is never reported.
I'm not sure why they wouldn't sync, but adding a timeout seems a reasonable way to prevent this infinite hang.

If we start seeing this timeout hit repeatedly, then my theory is correct, and we can then investigate why the informers aren't syncing correctly.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot openshift-ci-robot added the jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. label Sep 14, 2023
@JoelSpeed
Contributor Author

/jira refresh

@openshift-ci-robot openshift-ci-robot added jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. and removed jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. labels Sep 14, 2023
@openshift-ci-robot
Contributor

@JoelSpeed: This pull request references Jira Issue OCPBUGS-18954, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.15.0) matches configured target version for branch (4.15.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @sunzhaohua2

In response to this:

/jira refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@JoelSpeed
Contributor Author

/retest

2 similar comments
@JoelSpeed
Contributor Author

/retest

@JoelSpeed
Contributor Author

/retest

Contributor

@elmiko elmiko left a comment


makes sense to me, i have a question about one of your comments though.

If we start seeing this timeout hit repeatedly,

how will we know? i'm just curious what the signal from ci will look like.

/lgtm
/approve

@JoelSpeed
Contributor Author

how will we know? i'm just curious what the signal from ci will look like.

CI already has a check that reports when a pod has restarted many times, so if this becomes an issue and the pod is restarting every 30 seconds, the existing suite should tell us.

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Sep 20, 2023
@openshift-ci
Contributor

openshift-ci bot commented Sep 20, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: elmiko

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 20, 2023
@openshift-ci
Contributor

openshift-ci bot commented Sep 20, 2023

@JoelSpeed: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@openshift-merge-robot openshift-merge-robot merged commit 377435e into openshift:master Sep 20, 2023
11 checks passed
@openshift-ci-robot
Contributor

@JoelSpeed: Jira Issue OCPBUGS-18954: All pull requests linked via external trackers have merged:

Jira Issue OCPBUGS-18954 has been moved to the MODIFIED state.

In response to this:

We are sometimes seeing that the Cluster Autoscaler status never gets reported. Looking at the logic, it's possible here that we wait forever for the informers to sync; if they never actually sync, the status is never reported.
I'm not sure why they wouldn't sync, but adding a timeout seems a reasonable way to prevent this infinite hang.

If we start seeing this timeout hit repeatedly, then my theory is correct, and we can then investigate why the informers aren't syncing correctly.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-merge-robot
Contributor

Fix included in accepted release 4.15.0-0.nightly-2023-09-27-073353

@JoelSpeed
Contributor Author

/cherry-pick release-4.14

@openshift-cherrypick-robot

@JoelSpeed: new pull request created: #292

In response to this:

/cherry-pick release-4.14

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
