Expands IngressController status conditions #224
Force-pushed from e2323e2 to f05a840.
/retest
```go
var conditions []operatorv1.OperatorCondition
conditions = computeIngressStatusConditions(conditions, &appsv1.Deployment{}, false)
for _, c := range conditions {
	updated.Status.Conditions = append(updated.Status.Conditions, c)
}
```
Why not directly assign the slice value from `computeIngressStatusConditions` to `updated.Status.Conditions`?

```go
updated.Status.Conditions = computeIngressStatusConditions(conditions, &appsv1.Deployment{}, false)
```
Alternatively, would it make sense to call `syncIngressControllerStatus` instead?

In fact, `syncIngressControllerStatus` could subsume `enforceEffectiveIngressDomain` almost entirely: If `ic.Status.Domain` were empty, then `enforceEffectiveIngressDomain` would call `syncIngressControllerStatus` with a nil deployment pointer (we know there cannot be a deployment if there is no domain). `syncIngressControllerStatus` would need to (1) take an `ingressConfig`; (2) check if `deployment` were nil, in which case it would not set `updated.Status.Selector` or `updated.Status.AvailableReplicas`; and (3) check if `ic.Status.Domain` were empty, in which case `syncIngressControllerStatus` would proceed to do the defaulting, uniqueness check, and updating or reporting that `enforceEffectiveIngressDomain` does now. (We could do similar with `enforceEffectiveEndpointPublishingStrategy`, but we can save that refactoring for another PR.) What do you think?
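The merged flow described above can be sketched as follows. This is a minimal, self-contained illustration with locally defined stand-in types, not the operator's actual code; the `syncStatus` signature, the `apps.`-prefix defaulting, and all field names are assumptions, and the uniqueness check is elided.

```go
package main

import "fmt"

// Minimal stand-ins for the real API types; field names are illustrative.
type IngressController struct {
	Domain string // desired domain (may be empty)
	Status Status
}

type Status struct {
	Domain            string
	Selector          string
	AvailableReplicas int32
}

type Deployment struct {
	Selector          string
	AvailableReplicas int32
}

// syncStatus sketches the merged flow: default the status domain when it is
// unset, and skip the deployment-derived fields when deployment is nil
// (i.e. when there cannot be a deployment yet).
func syncStatus(ic *IngressController, clusterDomain string, deployment *Deployment) {
	if ic.Status.Domain == "" {
		// Defaulting step that enforceEffectiveIngressDomain does today
		// (uniqueness check elided in this sketch).
		domain := ic.Domain
		if domain == "" {
			domain = "apps." + clusterDomain
		}
		ic.Status.Domain = domain
	}
	if deployment != nil {
		ic.Status.Selector = deployment.Selector
		ic.Status.AvailableReplicas = deployment.AvailableReplicas
	}
}

func main() {
	ic := &IngressController{}
	// nil deployment: only the domain gets set.
	syncStatus(ic, "example.com", nil)
	fmt.Println(ic.Status.Domain, ic.Status.Selector == "")
}
```

With a nil deployment only the domain is defaulted; once a deployment exists, a second call fills in the selector and replica count without re-running the defaulting.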
> Alternatively, would it make sense to call `syncIngressControllerStatus` instead?

Thanks for the review. I was thinking about using `syncIngressControllerStatus`. However, it performs a status update, as does `enforceEffectiveIngressDomain`, and I thought multiple status update calls would be suboptimal. With the change recommendations that you outline, using `syncIngressControllerStatus` sounds sensible. I'll start working through the changes.
@Miciah I implemented some of your suggestions above. I keep hitting e2e failures when I move the domain uniqueness logic from `enforceEffectiveIngressDomain` to `syncIngressControllerStatus`:
```text
2019-05-03T11:43:31.721-0700 INFO operator log/log.go:26 started zapr logger
=== RUN   TestCreateIngressControllerThenSecret
--- FAIL: TestCreateIngressControllerThenSecret (32.82s)
    certificate_publisher_test.go:69: failed to observe reconciliation of ingresscontroller: timed out waiting for the condition
=== RUN   TestCreateSecretThenIngressController
--- FAIL: TestCreateSecretThenIngressController (62.82s)
    certificate_publisher_test.go:160: failed to observe updated global secret: timed out waiting for the condition
=== RUN   TestOperatorAvailable
--- FAIL: TestOperatorAvailable (12.57s)
    operator_test.go:102: did not get expected available condition: timed out waiting for the condition
=== RUN   TestDefaultIngressControllerExists
--- PASS: TestDefaultIngressControllerExists (2.48s)
=== RUN   TestIngressControllerControllerCreateDelete
--- FAIL: TestIngressControllerControllerCreateDelete (62.80s)
    operator_test.go:159: failed to reconcile IngressController openshift-ingress-operator/test: timed out waiting for the condition
```
/test e2e-aws-operator

/refresh

/test e2e-aws-operator

/refresh

Still not seeing a new run since yesterday.

Need to determine whether these are necessary for 4.1.

/hold
Force-pushed from f05a840 to dac8850.
Force-pushed from dac8850 to 9ec2fe3.
In #224 (comment), I suggested that `syncIngressControllerStatus` could do the unique-domain check itself. Adding to that, Dan suggested in chat today that `syncIngressControllerStatus` should also look up the deployment. With those two changes, the only parameter for `syncIngressControllerStatus` would be the ingress controller, which would make the callers more uniform and consolidate the status computation logic. Do these changes seem reasonable, and if so, could we incorporate them into this PR, or would it be better to put them in a separate PR?
```go
func computeIngressStatusConditions(oldConditions []operatorv1.OperatorCondition, deployment *appsv1.Deployment,
	uniqueDomain bool) []operatorv1.OperatorCondition {
	oldDegradedCondition := getIngressDegradedCondition(oldConditions)
	oldProgressingCondition := getIngressProgressingCondition(oldConditions)
	oldAvailableCondition := getIngressAvailableCondition(oldConditions)
```
I don't know that having separate `getIngressDegradedCondition`, `getIngressProgressingCondition`, and `getIngressAvailableCondition` functions is important for readability, and it means more code and more looping. What do you think of using a single loop in `computeIngressStatusConditions` to get all three values, similar to how the DNS operator does it? https://github.com/openshift/cluster-dns-operator/blob/540ab8bca50b50880a4eb44feaddee8352f565bf/pkg/operator/controller/status.go#L202-L212
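A single-loop lookup of this kind could look like the following. This is a self-contained sketch with a locally defined condition type and a hypothetical `splitConditions` helper, not the operator's or the DNS operator's actual code.

```go
package main

import "fmt"

// OperatorCondition is a minimal stand-in for operatorv1.OperatorCondition.
type OperatorCondition struct {
	Type   string
	Status string
}

// splitConditions walks the slice once and picks out the Degraded,
// Progressing, and Available conditions, replacing three separate
// getIngress*Condition helpers (and three loops) with one pass.
func splitConditions(conditions []OperatorCondition) (degraded, progressing, available *OperatorCondition) {
	for i := range conditions {
		switch conditions[i].Type {
		case "Degraded":
			degraded = &conditions[i]
		case "Progressing":
			progressing = &conditions[i]
		case "Available":
			available = &conditions[i]
		}
	}
	return
}

func main() {
	conds := []OperatorCondition{
		{Type: "Available", Status: "True"},
		{Type: "Progressing", Status: "False"},
	}
	d, p, a := splitConditions(conds)
	// Degraded is absent, so d is nil; the other two are found.
	fmt.Println(d == nil, p.Status, a.Status)
}
```

A missing condition simply comes back as nil, which the caller can treat the same way the per-condition helpers would have.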
@Miciah I was going back and forth on whether to have separate functions for each condition or to loop through the conditions in a single function. I thought the former would be easier to read and understand. I will update the PR to use a single function.
> Do these changes seem reasonable, and if so, could we incorporate them into this PR, or would it be better to put them in a separate PR?
They do make sense. I'm working on updating the PR.
Force-pushed from efbfdb0 to ea636f8.
Doesn't

I don't believe so. If
Force-pushed from ea636f8 to 4d60a3e.
Consider condition sets produced by the following:

```go
computeLoadBalancerStatus // returns LoadBalancer* conditions
```

For now, let's say that some subset S of those conditions must be "True" for the ingress controller to be considered available, for example `[LoadBalancerReady=True,DeploymentReady=True,DNSReady=True]`. Given a set of the union of those conditions filtered by `Status=True`, indexed by ingress controller, you can compute availability of the ingress controller by checking membership of the set. New criteria can later be added by introducing new conditions to the set of availability-influencing conditions. The same methodology could be applied to another meta-condition like Degraded or Progressing.
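The set-membership idea above can be sketched in a few lines. This is a self-contained illustration; the condition names come from the comment itself, and `computeAvailable` and its map-based set representation are assumptions, not code from this PR.

```go
package main

import "fmt"

// availabilityConditions is the subset S of condition types that must all
// be "True" for an ingress controller to be considered available.
var availabilityConditions = []string{"LoadBalancerReady", "DeploymentReady", "DNSReady"}

// computeAvailable takes the set of condition types currently at Status=True
// for one ingress controller and checks membership for each required type.
// Adding a new availability criterion is just appending to the slice above.
func computeAvailable(trueConditions map[string]bool) bool {
	for _, t := range availabilityConditions {
		if !trueConditions[t] {
			return false
		}
	}
	return true
}

func main() {
	trueSet := map[string]bool{
		"LoadBalancerReady": true,
		"DeploymentReady":   true,
		"DNSReady":          true,
	}
	fmt.Println(computeAvailable(trueSet)) // all required conditions present

	delete(trueSet, "DNSReady")
	fmt.Println(computeAvailable(trueSet)) // a required condition is missing
}
```

The same shape would work for a Degraded or Progressing meta-condition by swapping in a different slice of influencing condition types.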
@Miciah I updated the control flow of the main
If there's a part of status processing that's mutations for things like domain and publishing, I think that should be its own reconciler or something. I think one of the outcomes of those mutations would need to be a condition indicating we're "admitting" the ingress controller according to constraints (e.g. it must have a unique domain). Finally, the main reconciler should be ignoring (and not receiving events for) ingress controllers which haven't been admitted, giving it some bedrock assumptions to stand on. There has to be a trust boundary, and decoupling lets us further disentangle the admission/reconciling/status trifecta.
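The trust boundary described above could be expressed as a simple predicate that the main reconciler checks (or uses in an event filter) before doing any work. This is a self-contained sketch; the "Admitted" condition name and the stand-in condition type are illustrative, not from this PR.

```go
package main

import "fmt"

// OperatorCondition is a minimal stand-in for the real API type.
type OperatorCondition struct {
	Type   string
	Status string
}

// isAdmitted reports whether a hypothetical "Admitted" condition is True.
// The main reconciler could skip (and filter events for) ingress
// controllers for which this returns false, so it can assume things like
// a unique, defaulted domain have already been enforced.
func isAdmitted(conditions []OperatorCondition) bool {
	for _, c := range conditions {
		if c.Type == "Admitted" {
			return c.Status == "True"
		}
	}
	// No Admitted condition yet: the admission reconciler has not run.
	return false
}

func main() {
	fmt.Println(isAdmitted([]OperatorCondition{{Type: "Admitted", Status: "True"}}))
	fmt.Println(isAdmitted(nil))
}
```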
Right, that makes sense. Might be worth a comment since the logic is a little hairy.
Yeah, this is what I had in mind earlier.
Should we call:

```go
if !IsStatusDomainSet(ingress) {
	if err := r.syncIngressControllerStatus(ingress, ingressConfig); err != nil {
		// ...
	} else if // ...
}
if IsStatusDomainSet(ingress) {
	if err := r.syncIngressControllerStatus(ingress, ingressConfig); err != nil {
		// ...
```
Yeah, separating that logic out would make everything a lot more comprehensible.
I don't think this PR makes the situation worse; should we tackle this refactoring into separate controllers in a follow-up? |
On second look, I believe the above statement is incorrect because of the

```go
if !IsStatusDomainSet(ingress) {
	if err := r.syncIngressControllerStatus(ingress, ingressConfig); err != nil {
		// ...
	} else if // ...
```

I amend my earlier suggestion as follows:

```go
if !IsStatusDomainSet(ingress) {
	if err := r.syncIngressControllerStatus(ingress, ingressConfig); err != nil {
		// ...
	}
}
if IsStatusDomainSet(ingress) {
	// ...
	if err := r.enforceEffectiveEndpointPublishingStrategy(ingress, infraConfig); err != nil {
		// ...
	} else if // ...
}
if err := r.syncIngressControllerStatus(ingress, ingressConfig); err != nil {
	// ...
```
/test e2e-aws
Force-pushed from d882325 to 7aef7fc.
/test e2e-aws
Force-pushed from 7aef7fc to e0540c6.
[APPROVALNOTIFIER] This PR is **NOT APPROVED**

This pull-request has been approved by: danehans

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files.

Approvers can indicate their approval by writing
Force-pushed from 9be57c3 to 68be75f.
Force-pushed from 68be75f to 075f970.
/test e2e-aws-operator
Force-pushed from f5fb727 to 206c1be.
Force-pushed from 206c1be to 92ef7b9.
@danehans: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I think this one is obsolete now.

/close
@ironcladlou: Closed this PR. In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Expands the `IngressController` `Available` condition to be based on the `Available` status of `IngressController` dependent resources (i.e. `DNS`) and the `Deployment` status condition.