kubeadm: run kube-proxy on non-master tainted nodes #65931
Conversation
@neolit123: GitHub didn't allow me to request PR reviews from the following users: discordianfish, mxey. Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Not sure here, let's check this up before merging wrt CriticalAddonsOnly
@@ -104,8 +104,6 @@ spec:
      tolerations:
      - key: CriticalAddonsOnly
I think we should make this accept everything, with `Exists` only with a key, or have both this and an `Exists`-only thing. @timothysc WDYT?
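For reference, the "`Exists` only with a key" idea above relies on the fact that a toleration with `operator: Exists` and no `key` matches every taint. A minimal sketch (not part of this PR's diff):

```yaml
# Sketch only: a key-less toleration with operator Exists tolerates
# all taints, letting the DaemonSet pods schedule onto any node.
tolerations:
- operator: Exists
```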
the explanations i found on CriticalAddonsOnly are kind of vague. to my understanding these two lines would help run the daemonset on a node that already has critical pods.
but this thread suggested that "priority and preemption" (beta in 1.11) handles this "transparently"?
#57659 (comment)
https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
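For context, the priority-based alternative referenced above is expressed in the pod template rather than through a toleration. A rough sketch, assuming the built-in `system-node-critical` PriorityClass:

```yaml
# Sketch: mark the pod as node-critical via pod priority, so the
# scheduler may preempt lower-priority pods to make room for it,
# instead of relying on the CriticalAddonsOnly toleration.
spec:
  priorityClassName: system-node-critical
```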
+1 re make it land everywhere.
/retest
/test pull-kubernetes-bazel-build
        operator: Exists
      - key: {{ .MasterTaintKey }}
        effect: NoSchedule
      - operator: Exists
You need to add back the CriticalAddons toleration b/c it's used by the kubelet and the scheduler - https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
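A toleration for that taint, per the linked guaranteed-scheduling doc, takes roughly this form (sketch):

```yaml
# Sketch: tolerate the CriticalAddonsOnly taint, which is applied to a
# node while critical add-on pods are being rescheduled onto it.
tolerations:
- key: CriticalAddonsOnly
  operator: Exists
```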
@neolit123 - let's add
labels:
scheduler.alpha.kubernetes.io/critical-pod: ""
priorityClassName: system-node-critical
to prevent more PRs. Then lgtm
/approve
/cc @kubernetes/sig-release-bugs for whoever is the v1.11 release manager.
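Putting the suggestions together, the DaemonSet's pod template would gain roughly the following (a sketch; field placement follows standard DaemonSet manifests and is not copied from this PR's diff):

```yaml
# Sketch: combine the critical-pod annotation, the node-critical
# priority class, and a catch-all toleration in the pod template.
template:
  metadata:
    annotations:
      scheduler.alpha.kubernetes.io/critical-pod: ""
  spec:
    priorityClassName: system-node-critical
    tolerations:
    - operator: Exists
```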
kube-proxy should be able to run on all nodes, independent of the taint of such nodes. This restriction was previously removed in bb28449 but then was brought back in d194926. Also, annotate with scheduler.alpha.kubernetes.io/critical-pod: "" and add a class in the template spec: priorityClassName: system-node-critical
updated (hope i got it right).
/test pull-kubernetes-e2e-gce
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: neolit123, timothysc The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
[MILESTONENOTIFIER] Milestone Pull Request Needs Approval @neolit123 @timothysc @kubernetes/sig-cluster-lifecycle-misc @kubernetes/sig-release-misc Action required: This pull request must have the Pull Request Labels
/test all [submit-queue is verifying that this PR is safe to merge]
/test pull-kubernetes-e2e-kops-aws
Automatic merge from submit-queue (batch tested with PRs 65931, 65705, 66033). If you want to cherry-pick this change to another branch, please follow the instructions here.
@neolit123: The following test failed, say /retest to rerun it:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Thanks!
What this PR does / why we need it:
kube-proxy should be able to run on all nodes, independent
of the taint of such nodes.
This restriction was previously removed in bb28449 but
then was brought back in d194926.
/cc @kubernetes/sig-cluster-lifecycle-pr-reviews
/cc @luxas @detiber @dixudx @discordianfish @mxey
/kind bug
/area kube-proxy
/area kubeadm
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes kubernetes/kubeadm#699
Special notes for your reviewer:
we are removing the requirement again, but please have a look at all the implications here.
hopefully we don't have to bring it back again.
Release note: