Fix tolerations #182

Merged

Conversation

ravisantoshgudimetla
Contributor

In 4.1, we have taint-based evictions enabled. This significantly changes the logic of the node lifecycle controller in the kube-controller-manager.

Historically, the node lifecycle controller would directly evict pods from nodes whose Ready condition was False or Unknown, after a pod eviction timeout set by the --pod-eviction-timeout flag on the kube-controller-manager. This setting applied to all pods cluster-wide, and the default was 5m.

With taint-based evictions, the node lifecycle controller instead taints such a node with the node.kubernetes.io/unreachable and/or node.kubernetes.io/not-ready taints with the NoExecute effect. On its own, this would result in the immediate eviction of all pods that don't tolerate those taints, breaking the old pod eviction timeout behavior. Enter the DefaultTolerationSeconds mutating admission plugin.

https://github.com/openshift/origin/blob/master/vendor/k8s.io/kubernetes/plugin/pkg/admission/defaulttolerationseconds/admission.go

This plugin adds default NoExecute tolerations for the node.kubernetes.io/unreachable and node.kubernetes.io/not-ready taints with a tolerationSeconds of 5m (300s), as long as no such toleration is already specified in the pod spec. This restores the old pod eviction timeout behavior.
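Concretely, a pod created with no such tolerations comes out of admission with roughly the following in its spec (a sketch of the plugin's defaulting behavior; the 300s values are the upstream defaults and change if the tunables mentioned below are adjusted):

# Sketch: tolerations injected by the DefaultTolerationSeconds plugin
# when the pod spec declares neither of them itself.
tolerations:
- key: "node.kubernetes.io/not-ready"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300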

One of the intended effects of this change is to make the pod eviction timeout a pod-level property. Different applications require different timeouts depending on their design, and controlling this at the cluster level was not optimal. The side effect is that we won't allow pods to be scheduled onto nodes that have disk, memory, or CPU pressure.

The DefaultTolerationSeconds plugin has flags that allow adjusting the default tolerationSeconds:
https://github.com/openshift/origin/blob/master/vendor/k8s.io/kubernetes/plugin/pkg/admission/defaulttolerationseconds/admission.go#L34-L40

We might expose these tunables for user control in the future, and we do not want the cluster control plane components to be subject to them.

Thus, this PR explicitly defines NoExecute tolerations for the node.kubernetes.io/unreachable and node.kubernetes.io/not-ready taints with a tolerationSeconds generically appropriate for cluster components.
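As a sketch, explicitly pinning those tolerations in a component manifest looks like the following; the 120s value mirrors the manifest excerpt discussed in the review below and is illustrative, not a mandated default:

# Sketch: explicit tolerations so the component is not subject to the
# admission plugin's (potentially tunable) defaults.
tolerations:
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 120
- key: "node.kubernetes.io/not-ready"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 120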

Once these changes are in across all components, this e2e will enforce it going forward: openshift/origin#22752

Please refer to this doc if you have questions about which tolerations you can use:

https://docs.google.com/document/d/1W449BfB5la9NC7pcDxkovgzlBlHMy5Lkj0-WXWjdiTU/edit#

@openshift-ci-robot openshift-ci-robot added the size/S Denotes a PR that changes 10-29 lines, ignoring generated files. label May 3, 2019
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 120
- key: "node.kubernetes.io/not-ready"
Contributor

The CVO must be schedulable on masters so that it can drive installation even when the node is NotReady (i.e., not schedulable), because it creates the operators that bring up the network and the apiserver.

Contributor Author

To be clear, are you saying we should tolerate both the unschedulable and NotReady taints?

Contributor

I want to say yes. ;)

We want to make sure the CVO gets scheduled even when the node is not ready, because networking isn't set up yet.

Contributor

The CVO must always get a chance to land on any master.

Contributor Author

Including memory and disk pressure? I assume not.

Contributor Author

Updated with unschedulable and network-unavailable tolerations

Contributor

What about not-ready NoSchedule?

- key: "node.kubernetes.io/network-unavailable"
operator: Exists
effect: "NoSchedule"
- key: "node.kubernetes.io/not-ready"
Contributor Author

Added it here.
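Taken together, the toleration set being converged on for the CVO in this thread looks roughly like this (a sketch assembled from the review comments and the partial manifest excerpts above; the merged manifest is authoritative):

# Sketch: NoExecute tolerations with an explicit timeout, plus NoSchedule
# tolerations so the CVO can land on a master that is unschedulable,
# has no networking yet, or is not ready.
tolerations:
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 120
- key: "node.kubernetes.io/not-ready"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 120
- key: "node.kubernetes.io/unschedulable"
  operator: "Exists"
  effect: "NoSchedule"
- key: "node.kubernetes.io/network-unavailable"
  operator: "Exists"
  effect: "NoSchedule"
- key: "node.kubernetes.io/not-ready"
  operator: "Exists"
  effect: "NoSchedule"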

@abhinavdahiya
Contributor

/approve

will let @smarterclayton take a look too.

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label May 6, 2019
@ravisantoshgudimetla
Contributor Author

/test e2e-aws

@abhinavdahiya
Contributor

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label May 6, 2019
@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: abhinavdahiya, ravisantoshgudimetla

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

@ravisantoshgudimetla
Contributor Author

Flake:

May 07 02:07:38.234 W ns/openshift-machine-config-operator pod/etcd-quorum-guard-855976c44d-mgm6x 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable. (109 times)

/cc @RobertKrawitz

@ravisantoshgudimetla
Contributor Author

/retest

@ravisantoshgudimetla
Contributor Author

7f4-jkjht Stopping container packageserver
May 07 04:08:28.391 W ns/openshift-operator-lifecycle-manager pod/packageserver-78f48b54d6-mx9dw Readiness probe failed: Get https://10.130.0.86:5443/healthz: dial tcp 10.130.0.86:5443: connect: connection refused (2 times)
May 07 04:08:29.157 W ns/openshift-etcd pod/etcd-member-ip-10-0-160-97.ec2.internal node/ip-10-0-160-97.ec2.internal container=etcd-metrics container restarted
May 07 04:08:29.598 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-160-97.ec2.internal pods/kube-apiserver-ip-10-0-160-97.ec2.internal container=\"kube-apiserver-7\" is not ready" to "" (2 times)

Seems to be an issue with etcd communication for the apiserver.

/retest

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.
