Bug 1883772: pkg/operator/clustermembercontroller: resync every minute #457
Conversation
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
@hexfusion: This pull request references Bugzilla bug 1883772, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. 3 validation(s) were run on this bug.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: hexfusion, retroflexer. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Seems harmless to resync periodically, but I'm curious about what's going on here. Are you saying the operator remains alive but has missed some edge event that should trigger a sync but doesn't? If the process is killed and restarted, I'd expect a resync. But are you saying the process is killed and not restarted, and, having missed the event, we never catch up?
I am saying the operator process is killed and a new process is started, but it appears we leaked an event and then did not retry.
Hm, wouldn't the resync happen on startup then? The logic should be level-driven, after all.
It should, yes, and in this case it did.
But should we consider the leak a follow-up?
Are you saying that, had the process actively resynced every minute, the issue would have resolved before the process was killed? I'm not sure of the timing details here (e.g. how long it took before the process was nuked, at which point the immediate resync fixed things).
Well, I am saying that once the event was leaked the controller would not resync. The net result was that the cluster failed and never scaled this member. Timing:
- original operator process failure
- new operator process a few seconds later
- scale of the other two members
- cache is synced, but no event was triggered to attempt scale-up of the 3rd, so it seems we leaked one?
/retest
Please review the full test history for this PR and help us cut down flakes.
2 similar comments
@hexfusion: The following test failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/retest
Please review the full test history for this PR and help us cut down flakes.
@hexfusion: All pull requests linked via external trackers have merged: Bugzilla bug 1883772 has been moved to the MODIFIED state.
/cherry-pick release-4.5
@hexfusion: new pull request created: #458
In certain circumstances, such as a transient client failure followed by the operator process being killed (leader election lost), the operator can get into a situation where it forgets to retry scaling members.
In this case we see the caches sync, but scaling is not retried.
This PR ensures that we revisit membership periodically, so that in the case of a leaked event or an unexpected cache state we don't leave any remaining work undone.