Fix tolerations #182
Conversation
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 120
- key: "node.kubernetes.io/not-ready"
The CVO must be scheduled on a master so that it can drive installation even when the node is NotReady, i.e. not schedulable, because it creates the operators that bring up the network and apiserver.
To be clear, are you saying we should tolerate both the unschedulable and NotReady taints?
I want to say yes. ;)
We want to make sure the CVO gets scheduled even when the node is not ready, because networking isn't set up yet.
The CVO must always get a chance to land on any master.
Including memory and disk pressures? I assume not.
Updated with unschedulable and network-unavailable tolerations.
What about not-ready NoSchedule?
- key: "node.kubernetes.io/network-unavailable"
  operator: Exists
  effect: "NoSchedule"
- key: "node.kubernetes.io/not-ready"
Added it here.
/approve will let @smarterclayton take a look too.
/test e2e-aws
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: abhinavdahiya, ravisantoshgudimetla. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/retest Please review the full test history for this PR and help us cut down flakes.
Flake:
/cc @RobertKrawitz
/retest
Seems to be an issue with etcd communication for the apiserver. /retest
/retest Please review the full test history for this PR and help us cut down flakes.
In 4.1, we have taint-based evictions enabled. This changes logic in the node lifecycle controller in the kube-controller-manager significantly.
Historically, the node lifecycle controller would directly evict pods from a node that had a Ready condition of False or Unknown after a pod eviction timeout set by the --pod-eviction-timeout flag on the kube-controller-manager. This setting applied to all pods cluster-wide; the default was 5m.
With taint-based evictions, all the node lifecycle controller does is taint the node with the node.kubernetes.io/unreachable and/or node.kubernetes.io/not-ready taints with the NoExecute effect. This would normally result in the immediate eviction of all pods that don't tolerate those taints, breaking the old pod-eviction-timeout behavior. Enter the DefaultTolerationSeconds mutating admission plugin.
https://github.com/openshift/origin/blob/master/vendor/k8s.io/kubernetes/plugin/pkg/admission/defaulttolerationseconds/admission.go
This plugin adds a default NoExecute toleration for the node.kubernetes.io/unreachable and/or node.kubernetes.io/not-ready taints with a tolerationSeconds of 5m (300s), as long as no such toleration is already specified in the pod spec. This restores the old pod eviction timeout behavior.
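As an illustration (this snippet is not from the PR itself), the defaulted tolerations the plugin injects when a pod spec declares none look roughly like this:

```yaml
# Tolerations as injected by the DefaultTolerationSeconds admission plugin
# when the pod spec specifies no toleration for these taints.
# 300s (5m) is the plugin's default tolerationSeconds.
tolerations:
- key: "node.kubernetes.io/not-ready"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300
```

Because the plugin only fills in these tolerations when none are present, a pod that declares its own NoExecute tolerations keeps full control of its eviction timeout.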
One of the intended effects of this change is to make the pod eviction timeout a pod-level property. Different applications require different timeouts depending on their design, and controlling it at the cluster level was not optimal. The side effect is that we won't allow pods to be scheduled onto nodes that have disk, memory, or CPU pressure.
The DefaultTolerationSeconds plugin has a flag that allows adjusting the defaults for tolerationSeconds:
https://github.com/openshift/origin/blob/master/vendor/k8s.io/kubernetes/plugin/pkg/admission/defaulttolerationseconds/admission.go#L34-L40
We might expose this tunable for user control in the future and do not want the cluster control plane components to be subject to it.
Thus, this PR explicitly defines a NoExecute toleration for the node.kubernetes.io/unreachable and/or node.kubernetes.io/not-ready taints with a tolerationSeconds generically appropriate for cluster components.
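A sketch of such an explicit toleration block; the 120s value is taken from the diff earlier in this conversation and is illustrative, not a mandated default:

```yaml
# Explicit NoExecute tolerations for a control-plane component.
# Because these are present in the pod spec, the DefaultTolerationSeconds
# plugin will not overwrite them with its own (tunable) defaults.
# 120s is the value used in this PR's diff, shown here for illustration.
tolerations:
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 120
- key: "node.kubernetes.io/not-ready"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 120
```

Pinning the value in the manifest insulates the component from any future user-facing tunable for the plugin's defaults.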
Once these changes are in across all components, this e2e will enforce it going forward:
openshift/origin#22752
Please refer to this doc if you have questions about which tolerations you can use:
https://docs.google.com/document/d/1W449BfB5la9NC7pcDxkovgzlBlHMy5Lkj0-WXWjdiTU/edit#