Fix tolerations #50
Conversation
/retest
/retest
/test e2e-aws
Thanks, Ravi. /lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: jmencak, ravisantoshgudimetla. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/test e2e-aws-upgrade
/retest Please review the full test history for this PR and help us cut down flakes.
/retest
In 4.1, we have taint-based evictions enabled. This significantly changes the logic in the node lifecycle controller in the kube-controller-manager.
Historically, the node lifecycle controller would directly evict pods from nodes that had a Ready condition of False or Unknown after a pod eviction timeout set by the --pod-eviction-timeout flag on the kube-controller-manager. This setting applied to all pods cluster-wide. The default was 5m.
With taint-based evictions, all the node lifecycle controller does is taint the node with the node.kubernetes.io/unreachable and/or node.kubernetes.io/not-ready taints with the NoExecute effect. This would normally result in the immediate eviction of all pods that don't tolerate those taints, breaking the old behavior with pod eviction timeouts. Enter the DefaultTolerationSeconds mutating admission plugin.
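Concretely, the taints placed on the Node object look roughly like this (a sketch following the standard Node spec; the controller applies one or both depending on whether the Ready condition is False or Unknown):

```yaml
# Taints the node lifecycle controller adds to an unhealthy node.
spec:
  taints:
  - key: node.kubernetes.io/not-ready    # Ready condition is False
    effect: NoExecute
  - key: node.kubernetes.io/unreachable  # Ready condition is Unknown
    effect: NoExecute
```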
https://github.com/openshift/origin/blob/master/vendor/k8s.io/kubernetes/plugin/pkg/admission/defaulttolerationseconds/admission.go
This plugin will add a default NoExecute toleration for the node.kubernetes.io/unreachable and node.kubernetes.io/not-ready taints with a tolerationSeconds of 5m (300s), as long as no such toleration is already specified in the pod spec. This restores the old pod eviction timeout behavior.
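For a pod that specifies no such tolerations, the plugin's mutation is equivalent to adding the following to the pod spec:

```yaml
tolerations:
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300
```

With these in place, a pod on a not-ready or unreachable node is evicted only after 300 seconds, matching the old --pod-eviction-timeout default of 5m.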
One of the intended effects of this change is to make the pod eviction timeout a pod-level property. Different applications require different timeouts depending on their design, and controlling it at the cluster level, as before, was not optimal. The side-effect is that we won't allow pods to be scheduled onto nodes that have disk, memory, or CPU pressure.
The DefaultTolerationSeconds plugin has a flag that allows adjusting the defaults for tolerationSeconds:
https://github.com/openshift/origin/blob/master/vendor/k8s.io/kubernetes/plugin/pkg/admission/defaulttolerationseconds/admission.go#L34-L40
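As a reference sketch, these defaults correspond to the following kube-apiserver flags (both default to 300 seconds); values shown are the defaults, not a recommendation:

```shell
kube-apiserver \
  --default-not-ready-toleration-seconds=300 \
  --default-unreachable-toleration-seconds=300
```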
We might expose this tunable for user control in the future and do not want the cluster control plane components to be subject to it.
Thus, this PR explicitly defines a NoExecute toleration for the node.kubernetes.io/unreachable and node.kubernetes.io/not-ready taints with a tolerationSeconds value generically appropriate for cluster components.
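A minimal sketch of what such an explicit toleration looks like in a component's pod template (the 120-second value here is illustrative only, not necessarily the value this PR chooses):

```yaml
tolerations:
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 120   # illustrative value
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 120   # illustrative value
```

Because the toleration is already present in the pod spec, the DefaultTolerationSeconds plugin leaves it untouched, so the component is insulated from any future changes to the cluster-wide defaults.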
Once these changes land across all components, this e2e test will enforce it:
openshift/origin#22752
Please refer to this doc if you have questions about which tolerations you can use:
https://docs.google.com/document/d/1W449BfB5la9NC7pcDxkovgzlBlHMy5Lkj0-WXWjdiTU/edit#
/cc @sjenning @smarterclayton @derekwaynecarr