OCPBUGS-22476: Explicitly degrade the cluster when conditions are not met #183
Conversation
@gnufied: This pull request references Jira Issue OCPBUGS-22476, which is invalid:
Comment The bug has been updated to refer to the pull request using the external bug tracker. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: gnufied The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing |
Previously we relied on the return value of the sync function, but this creates a problem when no checks are performed: in that case we simply return nil, which causes the Degraded condition to be removed from the cluster. In truth, the cluster should remain degraded.
Force-pushed from ac2c02f to 172f36a
@@ -449,7 +450,26 @@ func (c *VSphereController) updateConditions(
	}

	updateFuncs := []v1helpers.UpdateStatusFunc{}

	degradeCond := operatorapi.OperatorCondition{
		Type: "VMwareVSphereOperatorController" + operatorapi.OperatorStatusTypeDegraded,
Note to reviewers: I am using a different name than the default name + xxxxDegraded, because it appears that that name gets clobbered by the library-go sync function and defaults to false if no error is thrown during the sync.
I made the decision to explicitly degrade the cluster (even though I had to use a different name), because otherwise we would have to cache the last check result somewhere (we don't perform full checks every time sync is called). In the past, caching previous check results has caused problems, and I wanted the code to avoid relying on extra state if we can help it.
Please record this as a prominent comment in the code.
I think it's OK to add a separate Degraded condition that captures the check result. However, it's very hard for me to verify that updateConditions is called with the right arguments in all code paths, so that the condition is not accidentally cleared. IMO the operator's complexity has reached a state where some refactoring / simplification is necessary (as a separate PR / tech-debt epic).
Added a comment. This is usually the only function that updates conditions in this controller, so we should be okay.
@@ -449,7 +450,26 @@ func (c *VSphereController) updateConditions(
	}

	updateFuncs := []v1helpers.UpdateStatusFunc{}

	degradeCond := operatorapi.OperatorCondition{
		Type: "VMwareVSphereOperatorController" + operatorapi.OperatorStatusTypeDegraded,
Maybe VMwareVSphere(Operator?)CheckDegraded would be better, to clearly distinguish it from VMwareVSphereControllerDegraded.
ack
fixed.
Force-pushed from 78f0b59 to 572b465
/jira refresh
@gnufied: This pull request references Jira Issue OCPBUGS-22476, which is valid. The bug has been moved to the POST state. 3 validation(s) were run on this bug
No GitHub users were found matching the public email listed for the QA contact in Jira (wduan@redhat.com), skipping review request. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/retest
/lgtm
/retest-required
/retest-required
/retest
@gnufied: The following test failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Merged 1a09062 into openshift:master
@gnufied: Jira Issue OCPBUGS-22476: All pull requests linked via external trackers have merged: Jira Issue OCPBUGS-22476 has been moved to the MODIFIED state. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[ART PR BUILD NOTIFIER] This PR has been included in build ose-vmware-vsphere-csi-driver-operator-container-v4.15.0-202311272131.p0.g1a09062.assembly.stream for distgit ose-vmware-vsphere-csi-driver-operator.
Fix included in accepted release 4.15.0-0.nightly-2023-11-28-101923 |
/cherry-pick release-4.14 |
@gnufied: new pull request created: #194 In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.