
Bug 2051457: CCM PodDisruptionBudgets #174

Merged
merged 16 commits on Apr 14, 2022

Conversation

lobziik
Contributor

@lobziik lobziik commented Feb 23, 2022

This PR introduces PodDisruptionBudgets for cloud-controller-manager pods. It includes:

  • resource apply logic for the PodDisruptionBudget type
  • a couple of new labels for all CCM/CNM pods, carrying the platform name as value, namely:
    • infrastructure.openshift.io/cloud-controller-manager - for CCM
    • infrastructure.openshift.io/cloud-node-manager - for CNM
  • PodDisruptionBudget addition logic for deployments on all platforms, except the single-node OCP topology
  • extended resourceapply to recreate DaemonSets and Deployments if their selectors have changed
  • refactored the resourceapply tests to use envtest and Gomega matchers
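A PDB produced by this logic might look roughly like the following sketch. The object name, namespace, and minAvailable value are assumptions for illustration; only the infrastructure.openshift.io/cloud-controller-manager selector label (with a platform-name value such as Azure) is confirmed by this PR:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: cloud-controller-manager          # assumed name
  namespace: openshift-cloud-controller-manager   # assumed namespace
spec:
  minAvailable: 1
  selector:
    matchLabels:
      infrastructure.openshift.io/cloud-controller-manager: Azure  # platform name value
```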

@lobziik lobziik changed the title CCM PodDisruptionBudgets Bug 2051457: CCM PodDisruptionBudgets Feb 23, 2022
@openshift-ci openshift-ci bot added bugzilla/severity-medium Referenced Bugzilla bug's severity is medium for the branch this PR is targeting. bugzilla/valid-bug Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting. labels Feb 23, 2022
@openshift-ci
Contributor

openshift-ci bot commented Feb 23, 2022

@lobziik: This pull request references Bugzilla bug 2051457, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.11.0) matches configured target release for branch (4.11.0)
  • bug is in the state NEW, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST, POST)

Requesting review from QA contact:
/cc @sunzhaohua2

In response to this:

Bug 2051457: CCM PodDisruptionBudgets

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@lobziik
Contributor Author

lobziik commented Feb 23, 2022

/cc @JoelSpeed

@openshift-ci
Contributor

openshift-ci bot commented Feb 23, 2022

@lobziik: This pull request references Bugzilla bug 2051457, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.11.0) matches configured target release for branch (4.11.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST, POST)

Requesting review from QA contact:
/cc @sunzhaohua2

In response to this:

Bug 2051457: CCM PodDisruptionBudgets

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Contributor

@JoelSpeed JoelSpeed left a comment


/hold

We cannot update the label selectors on any deployment for a platform that has already GA'd, e.g. AzureStackHub or IBM Cloud. We need to make this change in a non-breaking way, or teach our apply logic to detect a label selector change and delete and recreate the deployment

spec:
  selector:
    matchLabels:
      app: azure-cloud-controller-manager
      k8s-app: azure-cloud-controller-manager
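The detect-and-recreate idea can be sketched in plain Go using only the standard library. The real implementation in this PR works through the controller-runtime client; the function name and the map-based comparison here are hypothetical simplifications:

```go
package main

import (
	"fmt"
	"reflect"
)

// selectorChanged reports whether the desired label selector differs from the
// one already stored on the cluster. Selectors are compared as plain label
// maps; any difference means the object must be deleted and recreated,
// because Deployment/DaemonSet selectors are immutable after creation.
func selectorChanged(existing, desired map[string]string) bool {
	return !reflect.DeepEqual(existing, desired)
}

func main() {
	existing := map[string]string{"k8s-app": "azure-cloud-controller-manager"}
	desired := map[string]string{
		"k8s-app": "azure-cloud-controller-manager",
		"infrastructure.openshift.io/cloud-controller-manager": "Azure",
	}
	if selectorChanged(existing, desired) {
		fmt.Println("selector changed: delete and recreate the Deployment")
	} else {
		fmt.Println("selector unchanged: apply in place")
	}
}
```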
Contributor


This change will break cluster upgrades; selectors on Deployments are immutable once created. Is there a way we can test this? Will we need to delete and recreate to achieve this?

Contributor Author

@lobziik lobziik Mar 11, 2022


0e963af - recreate logic with envtest based tests, ptal

pkg/cloud/cloud.go
}

func getPDB(config config.OperatorConfig) (*policyv1.PodDisruptionBudget, error) {
minAvailable := intstr.FromInt(1)
Contributor


Change this to maxUnavailable maybe? Then it would work even with a single replica?

Contributor Author


If we strictly constrain the number of CCM replicas to two, then yes, it might work with a single replica.

However, I don't like this approach; I feel it would be harder to understand and troubleshoot.

Contributor


maxUnavailable tends to be more widely recommended, as it is more flexible to replica-count changes than minAvailable and is in general safer.

In my experience, minAvailable is more likely to cause an issue than maxUnavailable. IMO this should be maxUnavailable even if we have 3 replicas. Yes, it slows down rollouts, but that's not necessarily a bad thing for such an important component
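For reference, the two spellings under debate are shown below with illustrative values. With two replicas both permit the same single voluntary disruption, but with a single replica minAvailable: 1 allows none while maxUnavailable: 1 still allows one, which is why maxUnavailable behaves better if the replica count changes:

```yaml
# Option A: minAvailable (what the PR currently uses)
spec:
  minAvailable: 1

# Option B: maxUnavailable (what the review suggests)
spec:
  maxUnavailable: 1
```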

pkg/cloud/common/resources.go
@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Feb 23, 2022
@lobziik
Contributor Author

lobziik commented Mar 11, 2022

/test unit

@openshift-ci
Contributor

openshift-ci bot commented Mar 11, 2022

@lobziik: This pull request references Bugzilla bug 2051457, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.11.0) matches configured target release for branch (4.11.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST, POST)

Requesting review from QA contact:
/cc @sunzhaohua2

In response to this:

Bug 2051457: CCM PodDisruptionBudgets

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@lobziik
Contributor Author

lobziik commented Mar 14, 2022

/retest

Contributor

@JoelSpeed JoelSpeed left a comment


Please review my nits within the apply section and apply them to the rest of the code there as well

pkg/controllers/resourceapply/resourceapply.go
Comment on lines 187 to 188
err = client.Delete(ctx, existing)
if err != nil && !apierrors.IsNotFound(err) {
Contributor


inline this please

pkg/controllers/resourceapply/resourceapply.go
err = client.Delete(ctx, existing)
if err != nil && !apierrors.IsNotFound(err) {
	recorder.Event(existing, corev1.EventTypeWarning, "Deletion failed", err.Error())
	return false, err
Contributor


Please wrap this error to provide additional context

Contributor


Please fix this comment

pkg/controllers/resourceapply/resourceapply.go

@alexander-demicev alexander-demicev left a comment


/approve

pkg/cloud/alibaba/alibaba.go
@openshift-ci
Contributor

openshift-ci bot commented Mar 18, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: alexander-demichev

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 18, 2022
Contributor

@elmiko elmiko left a comment


this generally looks good to me, +1

i will leave the lgtm label for @JoelSpeed to make sure all his concerns are addressed

Contributor

@JoelSpeed JoelSpeed left a comment


Have you tested the latest iteration of this PR to verify that it still works as expected on upgrade, e.g. on Azure?

@lobziik
Contributor Author

lobziik commented Mar 18, 2022

/test e2e-azure-upgrade
/test e2e-aws-upgrade

@lobziik
Contributor Author

lobziik commented Mar 18, 2022

Have you tested the latest iteration of this PR to verify that it still works as expected on upgrade, e.g. on Azure?

Didn't test it manually. There is a whole bunch of envtest-based tests: https://github.com/openshift/cluster-cloud-controller-manager-operator/pull/174/files#diff-a9b533cdc5d42f96ee87c67af940c0e07dbffc2744eaec276d67b103c851960bR286 IMHO that should be sufficient.

@JoelSpeed
Contributor

Didn't test it manually. There is a whole bunch of envtest-based tests: https://github.com/openshift/cluster-cloud-controller-manager-operator/pull/174/files#diff-a9b533cdc5d42f96ee87c67af940c0e07dbffc2744eaec276d67b103c851960bR286 IMHO that should be sufficient.

I'd prefer to see some logs from an actual cluster with some manual testing before we merge this, would be good to create a cluster and then upgrade to this PR. I think cluster bot can do this for you, but it would be good to check the logs and see that the CCMs are up throughout

@lobziik
Contributor Author

lobziik commented Mar 21, 2022

/retest

@lobziik
Contributor Author

lobziik commented Mar 21, 2022

Tested on Azure manually; steps:

  • installed nightly build via cluster bot
  • applied FG for engaging CCCMO
  • scaled down CVO after migration
  • checked labels
  • replaced CCCMO image with quay.io/dmoiseev/cluster-cloud-controller-manager-operator:ccm-pdb - built from this PR
  • respective log:
dmoiseev@dmoiseev-mac ~ $ oc logs -n openshift-cloud-controller-manager-operator cluster-cloud-controller-manager-operator-5bdbf4f775-jsq8q cluster-cloud-controller-manager -f
I0321 16:15:06.740290       1 request.go:665] Waited for 1.036040111s due to client-side throttling, not priority and fairness, request: GET:https://api-int.ci-ln-mrqp8qk-1d09d.ci.azure.devcluster.openshift.com:6443/apis/operators.coreos.com/v1?timeout=32s
I0321 16:15:08.993975       1 logr.go:249] CCMOperator/controller-runtime/metrics "msg"="Metrics server is starting to listen"  "addr"=":9258"
I0321 16:15:08.994284       1 logr.go:249] CCMOperator/setup "msg"="starting manager"
I0321 16:15:08.994486       1 internal.go:362] CCMOperator "msg"="Starting server" "addr"={"IP":"::","Port":9258,"Zone":""} "kind"="metrics" "path"="/metrics"
I0321 16:15:08.994532       1 internal.go:362] CCMOperator "msg"="Starting server" "addr"={"IP":"127.0.0.1","Port":9259,"Zone":""} "kind"="health probe"
I0321 16:15:08.994552       1 leaderelection.go:248] attempting to acquire leader lease openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-leader...
I0321 16:18:06.840485       1 leaderelection.go:258] successfully acquired lease openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-leader
I0321 16:18:06.840763       1 controller.go:178] CCMOperator/controller/clusteroperator "msg"="Starting EventSource" "reconciler group"="config.openshift.io" "reconciler kind"="ClusterOperator" "source"="kind source: *v1.ClusterOperator"
I0321 16:18:06.840807       1 controller.go:178] CCMOperator/controller/clusteroperator "msg"="Starting EventSource" "reconciler group"="config.openshift.io" "reconciler kind"="ClusterOperator" "source"="kind source: *v1.Infrastructure"
I0321 16:18:06.840830       1 controller.go:178] CCMOperator/controller/clusteroperator "msg"="Starting EventSource" "reconciler group"="config.openshift.io" "reconciler kind"="ClusterOperator" "source"="kind source: *v1.FeatureGate"
I0321 16:18:06.840849       1 controller.go:178] CCMOperator/controller/clusteroperator "msg"="Starting EventSource" "reconciler group"="config.openshift.io" "reconciler kind"="ClusterOperator" "source"="kind source: *v1.KubeControllerManager"
I0321 16:18:06.840872       1 controller.go:178] CCMOperator/controller/clusteroperator "msg"="Starting EventSource" "reconciler group"="config.openshift.io" "reconciler kind"="ClusterOperator" "source"="channel source: 0xc0004810e0"
I0321 16:18:06.840908       1 controller.go:186] CCMOperator/controller/clusteroperator "msg"="Starting Controller" "reconciler group"="config.openshift.io" "reconciler kind"="ClusterOperator"
I0321 16:18:06.942863       1 controller.go:220] CCMOperator/controller/clusteroperator "msg"="Starting workers" "reconciler group"="config.openshift.io" "reconciler kind"="ClusterOperator" "worker count"=1
I0321 16:18:07.147634       1 resourceapply.go:173] Deployment need to be recreated with new parameters
I0321 16:18:07.282870       1 resourceapply.go:258] DaemonSet need to be recreated with new parameters
  • checked labels again

    • selectors are correct
    • labels on pods are correct
    • no additional daemonsets/deployments/replicasets were detected
  • pdb is there

Found issue:

  • CCM has 1 replica; the respective field is missing, will add in the next commit

@JoelSpeed

For convenient obtaining of the platform name as a string across the operator.
The infrastructure.openshift.io/cloud-controller-manager label contains the
platform name and is intended to be used as a selector for PodDisruptionBudgets.

For consistency, the infrastructure.openshift.io/cloud-node-manager
label was introduced for DaemonSets
Expected to fail; the respective resourceapply changes for handling Deployments are coming in the next commit
…ed selectors

Selectors are immutable for Deployments and DaemonSets; if one was changed, delete the old object and create a new one, with prior server-side validation
DRY principle does not work well in this world, sorry
@JoelSpeed
Contributor

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Apr 13, 2022
Contributor

@elmiko elmiko left a comment


there are several usages of fmt.Errorf using the %v format for errors, i think we should convert these to %w, but i don't think it's a blocker here.
/lgtm

@JoelSpeed
Contributor

/hold cancel
/retest-required

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 14, 2022
@openshift-bot

/retest-required

Please review the full test history for this PR and help us cut down flakes.


@openshift-ci
Contributor

openshift-ci bot commented Apr 14, 2022

@lobziik: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/e2e-openstack-ccm 7488432 link false /test e2e-openstack-ccm
ci/prow/e2e-azure-ccm 7488432 link false /test e2e-azure-ccm
ci/prow/e2e-gcp-ccm-install 7488432 link false /test e2e-gcp-ccm-install
ci/prow/e2e-vsphere-ccm 7488432 link false /test e2e-vsphere-ccm
ci/prow/e2e-gcp-ccm 7488432 link false /test e2e-gcp-ccm

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@openshift-bot

/retest-required

Please review the full test history for this PR and help us cut down flakes.


@openshift-merge-robot openshift-merge-robot merged commit 4fbdd53 into openshift:master Apr 14, 2022
@openshift-ci
Contributor

openshift-ci bot commented Apr 14, 2022

@lobziik: All pull requests linked via external trackers have merged:

Bugzilla bug 2051457 has been moved to the MODIFIED state.

In response to this:

Bug 2051457: CCM PodDisruptionBudgets

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
