[release-4.11] OCPBUGS-3490: OVN-Kubernetes: Prefer oldest nodes #1641
Conversation
Sometimes the number of masters changes, like when running the etcd test "etcd [apigroup:config.openshift.io] is able to vertically scale up and down with a single node". This leads to problems like:

I0909 11:16:02.221234 1 ovn_kubernetes.go:938] Waiting to complete OVN bootstrap: found (4) master nodes out of (3) expected: timing out in 235 seconds

ovsdb-server only ever wants an odd number of members to ensure consensus in RAFT clusters. If we have 4 members and one of them is dead (like when the 4th one gets deleted) the RAFT cluster gets a bit unhappy.

The CNO currently renders the ovnkube master pods with the IP addresses of all master nodes, regardless of how many control plane nodes were actually requested at install time. That's not cool. Don't do that.

Instead, take the oldest master nodes (sorted by creation time) as the RAFT cluster members. Tell any NB/SB containers that aren't in the list to do nothing for a really long time (to prevent CrashLoopBackOff due to early exits from the container script) and not join the cluster. If this really is a master replacement, then the cluster will shift over to the new master when the original one is finally removed.

Signed-off-by: Dan Williams <dcbw@redhat.com>
(cherry picked from commit c0c317e)
(cherry picked from commit 9d22f87)
When the postStart hooks fail, kubelet kills the DB containers with a 30s grace period. If the DBs started at different times (because they're on different nodes, have different kubelets, etc.) they may not have enough runtime overlap to establish the RAFT cluster before one or more of them get killed by kubelet.

First, make the postStart scripts wait longer by retrying their operations more times until the cluster is established. Second, wrap the IPsec enable/disable in a retry loop too, and make it exit with an error if it fails instead of ignoring the problem. Third, add an IPsec check to the SB postStart to wait a bit more for the SB cluster to establish, if needed.

(cherry picked from commit d994351)
@kyrtapz: No Bugzilla bug is referenced in the title of this pull request. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@kyrtapz: GitHub didn't allow me to request PR reviews from the following users: kyrtapz. Note that only openshift members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/retest
@kyrtapz: This pull request references Jira Issue OCPBUGS-3490, which is invalid:
The bug has been updated to refer to the pull request using the external bug tracker. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/jira refresh
@kyrtapz: This pull request references Jira Issue OCPBUGS-3490, which is valid. The bug has been moved to the POST state. 6 validations were run on this bug.
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-hold
/retest
/retest
/retest
/retest
/retest
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: dcbw, kyrtapz. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/label backport-risk-assessed
/label cherry-pick-approved
/retest-required
@kyrtapz: The following test failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest-required
@kyrtapz: All pull requests linked via external trackers have merged: Jira Issue OCPBUGS-3490 has been moved to the MODIFIED state. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This cherry picks the improvements to ovn-k startup from #1579
/cc @martinkennelly @dcbw