
OCPBUGS-4820: Controller version mismatch causing degradation during upgrades #3738

Merged

Conversation

djoshy
Contributor

@djoshy djoshy commented Jun 8, 2023

Added a new function called getOperatorNodeName() and fleshed out the findEtcdLeader stub in the controller that helps us queue control plane nodes for updates. I'll do a subsequent commit to bump the timeout, but I want to run a few tests with the queuing in place first.

- Description for the changelog
controller: defer update of node running operator
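
For illustration, here is a minimal standalone sketch of what a getOperatorNodeName() helper could look like. The namespace, label selector, and use of a bare clientset are assumptions made for this example, not necessarily the PR's actual implementation.

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// getOperatorNodeName returns the name of the node currently running the
// machine-config-operator pod, so the controller can defer that node's update
// until last. (Hypothetical sketch, not the merged code.)
func getOperatorNodeName(ctx context.Context, client kubernetes.Interface) (string, error) {
	// Assumed namespace and label selector for the operator deployment.
	pods, err := client.CoreV1().Pods("openshift-machine-config-operator").List(ctx, metav1.ListOptions{
		LabelSelector: "k8s-app=machine-config-operator",
	})
	if err != nil {
		return "", err
	}
	for _, pod := range pods.Items {
		if pod.Status.Phase == corev1.PodRunning && pod.Spec.NodeName != "" {
			return pod.Spec.NodeName, nil
		}
	}
	return "", fmt.Errorf("no running machine-config-operator pod found")
}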

@openshift-ci-robot openshift-ci-robot added jira/severity-moderate Referenced Jira bug's severity is moderate for the branch this PR is targeting. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. labels Jun 8, 2023
@openshift-ci-robot
Contributor

@djoshy: This pull request references Jira Issue OCPBUGS-4820, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.14.0) matches configured target version for branch (4.14.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @rioliu-rh

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

Fleshed out the findEtcdLeader stub in the controller that queues control plane nodes for updates. I'll do a subsequent commit to bump the timeout, but I want to run a few tests first with the queuing in place.

- Description for the changelog
controller: defer update of etcd leader node

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot openshift-ci-robot added the jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. label Jun 8, 2023
@openshift-ci-robot
Contributor

@djoshy: This pull request references Jira Issue OCPBUGS-4820, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.14.0) matches configured target version for branch (4.14.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @rioliu-rh

In response to this:

Fleshed out the findEtcdLeader stub in the controller that queues control plane nodes for updates. I'll do a subsequent commit for bumping the timeout, but want to run a few tests first with the queuing in place.

- Description for the changelog
controller: defer update of etcd leader node

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci openshift-ci bot requested a review from rioliu-rh June 8, 2023 15:38
@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jun 8, 2023
@openshift-ci
Contributor

openshift-ci bot commented Jun 8, 2023

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@openshift-ci-robot
Contributor

@djoshy: This pull request references Jira Issue OCPBUGS-4820, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.14.0) matches configured target version for branch (4.14.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @rioliu-rh

In response to this:

Fleshed out the findEtcdLeader stub in the controller that queues control plane nodes for updates. I'll do a subsequent commit for bumping the timeout, but want to run a few tests with the queuing in place first.

- Description for the changelog
controller: defer update of etcd leader node

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@djoshy
Contributor Author

djoshy commented Jun 8, 2023

/test e2e-gcp-op
/test unit
/test verify

@djoshy
Contributor Author

djoshy commented Jun 8, 2023

/test e2e-gcp-op
/test unit
/test verify

@djoshy
Contributor Author

djoshy commented Jun 8, 2023

/test e2e-gcp-op
/test unit
/test verify

@djoshy
Contributor Author

djoshy commented Jun 8, 2023

/retest-required

@djoshy
Contributor Author

djoshy commented Jun 8, 2023

/test verify

@djoshy
Contributor Author

djoshy commented Jun 8, 2023

/retest-required

-func (ctrl *Controller) getCurrentEtcdLeader(candidates []*corev1.Node) (*corev1.Node, error) {
-	return nil, nil
+// getCurrentEtcdLeaderName fetches the name of the current node running the machine-config-operator pod
+func (ctrl *Controller) getCurrentEtcdLeaderName() (string, error) {
Member

But if it's about the MCO pod now, why not rename the function too? The etcd leader can be on a different node than the operator pod, unless I'm missing something...

Member

There is an interesting topic here though in that it might actually be advantageous to us to keep the etcd leader and the operator and controller pods all co-located at least in a steady state, because that node should be upgraded last.

Contributor Author

@djoshy djoshy Jun 9, 2023

Hmm, yeah, I can see why a name change to just getting the operator pod node might be more accurate! The way I understood the etcd landscape is: there are two leader elections at startup, one for the operator and one for the controller, and therefore there are two etcd leaders (just in two different elections). Is that not the case?

Contributor Author

I like the idea of co-locating them too - I'm not sure why they were split up, but I can dig into the costs/benefits of that!

Member

Ah... so very briefly, etcd is a replicated key-value store that is the data-storage heart of Kubernetes. Underneath, etcd uses raft, which is a generic consensus protocol and has the concept of a "leader".

I think what you're conflating here is that our operator and controller pods also use a "leader election" protocol, via the apiserver (which uses etcd, which uses raft) - but this is at an entirely different level.
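
To make the distinction concrete, below is a minimal sketch of apiserver-based leader election using client-go's leaderelection package, which is the kind of election the operator and controller pods run. The lease name, namespace, identity handling, and timings are placeholder assumptions, not the MCO's actual configuration; etcd's raft leader is a separate concept living a layer below the apiserver.

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

// runWithLeaderElection blocks until this replica acquires (or permanently
// loses) an apiserver-side Lease lock, then invokes run while it holds it.
func runWithLeaderElection(ctx context.Context, client kubernetes.Interface, identity string, run func(context.Context)) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "example-controller-lock",           // assumed lease name
			Namespace: "openshift-machine-config-operator", // assumed namespace
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: identity},
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second, // example values, not MCO defaults
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: run,
			OnStoppedLeading: func() { klog.Infof("%s lost the leader lease", identity) },
		},
	})
}

Whichever replica currently holds the lease is the "leader" of that deployment; it has nothing to do with which etcd member is the raft leader.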

Contributor Author

Gotcha, makes sense - I'll update the function name to be more accurate. I'll also make a follow-up spike/task to explore putting the pods and the etcd-leader on the same node. Thanks!

@openshift-ci-robot
Contributor

@djoshy: This pull request references Jira Issue OCPBUGS-4820, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.14.0) matches configured target version for branch (4.14.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @rioliu-rh

In response to this:

Fleshed out the findEtcdLeader stub in the controller that queues control plane nodes for updates. I'll do a subsequent commit for bumping the timeout, but want to run a few tests with the queuing in place first.

- Description for the changelog
controller: defer update of node running operator

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@sergiordlr

sergiordlr commented Jun 13, 2023

Verified using IPI on AWS.

To verify the order used to update the master nodes, we used the cordon command and deleted the machine-config-operator pod so that it was rescheduled onto each master node in turn, then forced an update of the master MCP by applying a config.

The expected result is that the master nodes are updated alphabetically by the zone they are in (us-east-2a -> us-east-2b -> us-east-2c), except that the node hosting the machine-config-operator pod should be the last one to be updated.

These were the results:

ORDER       NAME                                         ZONE         CURRENT CONFIG                                     DESIRED CONFIG
  3     ip-10-0-156-188.us-east-2.compute.internal   us-east-2a   rendered-master-92d3aaf3a44e51a3bd2abdaf61d44ab7   rendered-master-92d3aaf3a44e51a3bd2abdaf61d44ab7  (OPERATOR POD)
  1     ip-10-0-189-166.us-east-2.compute.internal   us-east-2b   rendered-master-92d3aaf3a44e51a3bd2abdaf61d44ab7   rendered-master-92d3aaf3a44e51a3bd2abdaf61d44ab7
  2     ip-10-0-214-203.us-east-2.compute.internal   us-east-2c   rendered-master-92d3aaf3a44e51a3bd2abdaf61d44ab7   rendered-master-92d3aaf3a44e51a3bd2abdaf61d44ab7

ORDER       NAME                                         ZONE         CURRENT CONFIG                                     DESIRED CONFIG
  1      ip-10-0-156-188.us-east-2.compute.internal   us-east-2a   rendered-master-e476a272b5cafa6789be3b5e71221320   rendered-master-e476a272b5cafa6789be3b5e71221320 
  3      ip-10-0-189-166.us-east-2.compute.internal   us-east-2b   rendered-master-e476a272b5cafa6789be3b5e71221320   rendered-master-e476a272b5cafa6789be3b5e71221320 (OPERATOR POD)
  2      ip-10-0-214-203.us-east-2.compute.internal   us-east-2c   rendered-master-e476a272b5cafa6789be3b5e71221320   rendered-master-e476a272b5cafa6789be3b5e71221320

ORDER       NAME                                         ZONE         CURRENT CONFIG                                     DESIRED CONFIG
  1      ip-10-0-156-188.us-east-2.compute.internal   us-east-2a   rendered-master-92d3aaf3a44e51a3bd2abdaf61d44ab7   rendered-master-92d3aaf3a44e51a3bd2abdaf61d44ab7 
  2      ip-10-0-189-166.us-east-2.compute.internal   us-east-2b   rendered-master-92d3aaf3a44e51a3bd2abdaf61d44ab7   rendered-master-92d3aaf3a44e51a3bd2abdaf61d44ab7 
  3      ip-10-0-214-203.us-east-2.compute.internal   us-east-2c   rendered-master-92d3aaf3a44e51a3bd2abdaf61d44ab7   rendered-master-92d3aaf3a44e51a3bd2abdaf61d44ab7 (OPERATOR POD)

The results match the expected behavior.

We have run the following tests to make sure that the order used to update MCPs is not broken with this PR:

  • "[sig-mco] MCO Author:sregidor-Longduration-NonPreRelease-High-49568-Check nodes updating order maxUnavailable=1 [Serial]"
  • "[sig-mco] MCO Author:sregidor-Longduration-NonPreRelease-High-49672-Check nodes updating order maxUnavailable>1 [Serial]"

They both passed.

We add the qe-approved label

/label qe-approved

Thank you very much!

@openshift-ci openshift-ci bot added the qe-approved Signifies that QE has signed off on this PR label Jun 13, 2023
@djoshy djoshy force-pushed the sync-pool-timeout-fix branch 2 times, most recently from 072b13a to bf9d7be on June 15, 2023 17:36
@djoshy
Contributor Author

djoshy commented Jun 15, 2023

/retest-required

@djoshy djoshy marked this pull request as ready for review June 20, 2023 14:17
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jun 20, 2023
@djoshy
Contributor Author

djoshy commented Jun 20, 2023

/retest

@djoshy
Contributor Author

djoshy commented Jun 20, 2023

/retest-required

@djoshy
Contributor Author

djoshy commented Jun 22, 2023

/retest-required

@openshift-ci-robot
Contributor

@djoshy: This pull request references Jira Issue OCPBUGS-4820, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.14.0) matches configured target version for branch (4.14.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @rioliu-rh

In response to this:

Added a new function called getOperatorNodeName() and fleshed out the findEtcdLeader stub in the controller that helps us queue control plane nodes for updates. I'll do a subsequent commit to bump the timeout, but I want to run a few tests with the queuing in place first.

- Description for the changelog
controller: defer update of node running operator

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

// nolint:unparam
func (ctrl *Controller) filterControlPlaneCandidateNodes(pool *mcfgv1.MachineConfigPool, candidates []*corev1.Node, capacity uint) ([]*corev1.Node, uint, error) {
	if len(candidates) <= 1 {
		return candidates, capacity, nil
	}
-	etcdLeader, err := ctrl.getCurrentEtcdLeader(candidates)
+	operatorNodeName, err := ctrl.getOperatorNodeName()
	if err != nil {
		glog.Warningf("Failed to find current etcd leader (continuing anyways): %v", err)
Contributor

I think we need the message updated to mention the operator pod.

glog.Infof("Deferring update of etcd leader: %s", node.Name)
if node.Name == operatorNodeName {
ctrl.eventRecorder.Eventf(pool, corev1.EventTypeNormal, "DeferringOperatorNodeUpdate", "Deferring update of machine config operator node %s", node.Name)
glog.Infof("Deferring update of machine config operator node: %s", node.Name)
Contributor

Let's move to using klog, as that is what we will be using from now on in the MCO for logging; see #3734.
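
For reference, a rough sketch of what the suggested glog-to-klog swap looks like with the standard k8s.io/klog/v2 package; the helper names here are made up for the example, and only the package change matters.

import "k8s.io/klog/v2"

// klog's printf-style helpers mirror glog's, so call sites change only in the
// package name: glog.Infof -> klog.Infof, glog.Warningf -> klog.Warningf, etc.
func logDeferredOperatorNode(nodeName string) {
	klog.Infof("Deferring update of machine config operator node: %s", nodeName)
}

func logOperatorNodeLookupFailure(err error) {
	klog.Warningf("Failed to find the node running the operator pod (continuing anyway): %v", err)
}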

@@ -812,7 +827,7 @@ func (optr *Operator) syncRequiredMachineConfigPools(_ *renderConfig) error {
	}
	// If we don't account for pause here, we will spin in this loop until we hit the 10 minute timeout because paused pools can't sync.
	if pool.Spec.Paused {
-		return false, fmt.Errorf("Required MachineConfigPool '%s' is paused and can not sync until it is unpaused", pool.Name)
+		return false, fmt.Errorf("error required pool %s is paused and cannot sync until it is unpaused", pool.Name)
Contributor

s/pool/MachineConfigPool/ for verbose messaging
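
Applied literally, the suggestion would make the new error read roughly like this (illustrative wording only, not the final diff):

	return false, fmt.Errorf("error required MachineConfigPool %s is paused and cannot sync until it is unpaused", pool.Name)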

@djoshy
Contributor Author

djoshy commented Jun 27, 2023

Rebased and made suggested changes @sinnykumari thanks! (:

Contributor

@sinnykumari sinnykumari left a comment

Nice work David!
/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Jun 27, 2023
@openshift-ci
Contributor

openshift-ci bot commented Jun 27, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: djoshy, sinnykumari

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jun 27, 2023
@openshift-ci
Contributor

openshift-ci bot commented Jun 27, 2023

@djoshy: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name                      Commit    Details   Required   Rerun command
ci/prow/okd-scos-e2e-aws-ovn   192b802   link      false      /test okd-scos-e2e-aws-ovn

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@openshift-merge-robot openshift-merge-robot merged commit fc79ca7 into openshift:master Jun 27, 2023
12 of 13 checks passed
@openshift-ci-robot
Contributor

@djoshy: Jira Issue OCPBUGS-4820: All pull requests linked via external trackers have merged:

Jira Issue OCPBUGS-4820 has been moved to the MODIFIED state.

In response to this:

Added a new function called getOperatorNodeName() and fleshed out the findEtcdLeader stub in the controller that helps us queue control plane nodes for updates. I'll do a subsequent commit to bump the timeout, but I want to run a few tests with the queuing in place first.

- Description for the changelog
controller: defer update of node running operator

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@djoshy djoshy deleted the sync-pool-timeout-fix branch September 12, 2023 17:32