Bug 1698562: status: introduce ingresscontroller degraded condition #283
Conversation
Introduce degraded condition computation for ingresscontroller. For now, degraded only considers failed deployments, which seems like a conservative bare minimum indicator of a degraded state. Ensure that ingresscontroller deployments have a useful progressDeadlineSeconds so that degraded deployments are actually detected in a useful timeframe. Refactor clusteroperator degraded status to account for ingresscontroller degraded conditions.
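The "failed deployments" signal described above can be sketched as follows. This is a simplified illustration, not the operator's actual code: the stand-in `deploymentCondition` type and the `deploymentDegraded` helper are assumptions, though the `Progressing=False` / `ProgressDeadlineExceeded` condition it checks is the real signal the Kubernetes deployment controller emits once progressDeadlineSeconds elapses without progress.

```go
package main

import "fmt"

// deploymentCondition is a simplified stand-in for the fields of
// appsv1.DeploymentCondition that matter here.
type deploymentCondition struct {
	condType string // e.g. "Available", "Progressing"
	status   string // "True" or "False"
	reason   string
}

// deploymentDegraded reports whether a deployment looks failed: either the
// rollout exceeded its progress deadline, or minimum availability was lost.
// The exact set of conditions checked is an assumption for illustration.
func deploymentDegraded(conds []deploymentCondition) bool {
	for _, c := range conds {
		if c.condType == "Progressing" && c.status == "False" &&
			c.reason == "ProgressDeadlineExceeded" {
			return true
		}
		if c.condType == "Available" && c.status == "False" {
			return true
		}
	}
	return false
}

func main() {
	// Still available, but the rollout stalled past the deadline.
	stalled := []deploymentCondition{
		{condType: "Available", status: "True"},
		{condType: "Progressing", status: "False", reason: "ProgressDeadlineExceeded"},
	}
	fmt.Println(deploymentDegraded(stalled)) // true
}
```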
@ironcladlou: This pull request references a valid Bugzilla bug. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
assets/router/deployment.yaml
Outdated
@@ -4,6 +4,7 @@ kind: Deployment
 apiVersion: apps/v1
 # name and namespace are set at runtime.
 spec:
+  progressDeadlineSeconds: 120
This might be too long, but 120 seems conservative and we can adjust if necessary.
Here's an example of the degraded condition on the ingresscontroller:
And here's the corresponding degraded condition on the clusteroperator:
The ingresscontroller and operator are still available in this case because minimum deployment availability is maintained. |
return operatorv1.OperatorCondition{
	Type:   operatorv1.OperatorStatusTypeDegraded,
	Status: operatorv1.ConditionFalse,
	Reason: "DeploymentAvailable",
`DeploymentAvailable` could be misleading if the deployment is still progressing and not available. Do we need an explicit reason when the degraded condition is false?
Good point, I removed the reason entirely
}
conditions := r.computeOperatorStatusConditions([]configv1.ClusterOperatorStatusCondition{},
	namespace, tc.allIngressesAvailable, oldVersions, reportedVersions)
actual := computeOperatorProgressingCondition(tc.allIngressesAvailable, oldVersions, reportedVersions, tc.curVersions.operator, tc.curVersions.operand)
conditionsCmpOpts := []cmp.Option{
	cmpopts.IgnoreFields(configv1.ClusterOperatorStatusCondition{}, "LastTransitionTime", "Reason", "Message"),
	cmpopts.EquateEmpty(),
	cmpopts.SortSlices(func(a, b configv1.ClusterOperatorStatusCondition) bool { return a.Type < b.Type }),
No longer need `cmpopts.SortSlices` or `cmpopts.EquateEmpty`.
Fixed
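For readers unfamiliar with the go-cmp options being discussed: `cmpopts.SortSlices` makes the comparison order-insensitive and `cmpopts.IgnoreFields` excludes volatile fields. A stdlib-only sketch of the same idea (the `condition` type and `equalIgnoringOrder` helper are illustrative stand-ins, not the PR's code):

```go
package main

import (
	"fmt"
	"sort"
)

// condition is a minimal stand-in for
// configv1.ClusterOperatorStatusCondition, keeping only the fields compared.
type condition struct {
	Type, Status string
}

// equalIgnoringOrder mimics what cmpopts.SortSlices achieves in the real
// test: sort both slices by Type, then compare element-wise. Fields like
// LastTransitionTime are simply omitted from the stand-in type, which is
// the effect cmpopts.IgnoreFields has in the real comparison.
func equalIgnoringOrder(a, b []condition) bool {
	if len(a) != len(b) {
		return false
	}
	byType := func(s []condition) {
		sort.Slice(s, func(i, j int) bool { return s[i].Type < s[j].Type })
	}
	byType(a)
	byType(b)
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}

func main() {
	got := []condition{{"Progressing", "False"}, {"Available", "True"}}
	want := []condition{{"Available", "True"}, {"Progressing", "False"}}
	fmt.Println(equalIgnoringOrder(got, want)) // true: order is ignored
}
```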
assets/router/deployment.yaml
Outdated
@@ -4,6 +4,7 @@ kind: Deployment
 apiVersion: apps/v1
 # name and namespace are set at runtime.
 spec:
+  progressDeadlineSeconds: 120
This reminds me that we need to set the readiness probe to use `/healthz/ready`. I'm nervous about setting the progressing deadline significantly lower than the default, especially if we start using `/healthz/ready`. Are we confident that the initial sync will finish in time on large clusters?
The default is 600 seconds, which seems too long. Do you agree? Would some e2e test warn us if we chose a value that's too short on average?
> The default is 600 seconds, which seems too long. Do you agree?

That's what I'm wondering.

> Would some e2e test warn us if we chose a value that's too short on average?

Not if the E2E tests are not representative of production clusters. Moreover, while we may be fine now, I intend to fix the readiness check to use `/healthz/ready`, which will cause the deployment not to be ready until the router has synced routes, which I could see taking more than 120 seconds on burdened clusters with many routes.
Progress deadline is usually measured in 10-20m.
This is way too low.
We're also not setting readiness endpoints correctly; fixing that and changing back to 10m in a followup.
Nevermind, fixed here instead
type conditions struct {
	degraded, progressing, available bool
}

func TestComputeOperatorProgressingCondition(t *testing.T) {
So we're losing unit-test coverage of `computeOperatorAvailableCondition` and `computeOperatorDegradedCondition`, but I suppose they are sufficiently covered by E2E tests.
They currently seem simple enough that unit test coverage would be more code than it's worth given e2e, IMO.
If we start introducing additional inputs to the formulas, unit tests may become useful...
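If those unit tests do come back, the table-driven shape already used in the test file extends naturally. A self-contained sketch, where the `progressing` function is a hypothetical stand-in for `computeOperatorProgressingCondition`, not its actual logic:

```go
package main

import "fmt"

// progressing is an illustrative stand-in: assume the operator reports
// Progressing=True when not all ingresses are available or when reported
// versions lag the desired versions.
func progressing(allIngressesAvailable, oldVersions bool) bool {
	return !allIngressesAvailable || oldVersions
}

func main() {
	// Table-driven cases, mirroring the style of the surrounding test file.
	cases := []struct {
		name      string
		available bool
		oldVers   bool
		want      bool
	}{
		{"steady state", true, false, false},
		{"ingress rolling out", false, false, true},
		{"version bump in flight", true, true, true},
	}
	for _, tc := range cases {
		if got := progressing(tc.available, tc.oldVers); got != tc.want {
			fmt.Printf("%s: got %v, want %v\n", tc.name, got, tc.want)
		}
	}
	fmt.Println("all cases checked")
}
```

Adding an input to the formula then means adding a struct field and a few rows, which is where unit tests start paying for themselves over E2E runs.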
/lgtm
Tests are still going, might as well roll the followups into this one. /hold
Set a more conservative deadline and use the correct readiness endpoint.
/hold cancel
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ironcladlou, Miciah

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
/retest

Please review the full test history for this PR and help us cut down flakes.
@ironcladlou: All pull requests linked via external trackers have merged. The Bugzilla bug has been moved to the MODIFIED state. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.