
taint node with PreferNoSchedule until it gets drained #19

Closed
damoon opened this issue May 7, 2018 · 4 comments · Fixed by #250

@damoon
Contributor

damoon commented May 7, 2018

When multiple nodes are waiting to restart, pods get moved more often than needed, because they can be scheduled onto the node that will be rebooted next.
Tainting nodes that are waiting to be rebooted with PreferNoSchedule makes the scheduler prefer placing pods onto already rebooted nodes.

link: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
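
For illustration, here is a minimal client-go sketch of what applying such a taint could look like. This is not kured's actual implementation; the taint key `example.com/reboot-pending` and the function name are hypothetical placeholders for the example.

```go
// Sketch: add a PreferNoSchedule taint to a node that is waiting to reboot.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func addPreferNoScheduleTaint(ctx context.Context, client kubernetes.Interface, nodeName string) error {
	node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}

	taint := corev1.Taint{
		Key:    "example.com/reboot-pending", // hypothetical key, not kured's
		Effect: corev1.TaintEffectPreferNoSchedule,
	}

	// Skip the update if the taint is already present.
	for _, t := range node.Spec.Taints {
		if t.Key == taint.Key && t.Effect == taint.Effect {
			return nil
		}
	}

	node.Spec.Taints = append(node.Spec.Taints, taint)
	_, err = client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	if err := addPreferNoScheduleTaint(context.Background(), client, "worker-1"); err != nil {
		fmt.Println("failed to taint node:", err)
	}
}
```

The taint would be removed again once the node has been drained and rebooted, so the preference only applies while a reboot is pending.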

@github-actions

This issue was automatically considered stale due to lack of activity. Please update it and/or join our slack channels to promote it, before it automatically closes (in 7 days).

@damoon
Contributor Author

damoon commented Nov 29, 2020

@weaveworksbot I am still interested in this.

@bboreham
Contributor

bboreham commented Dec 7, 2020

Isn't there a danger that every node ends up waiting to reboot (e.g. if Ubuntu pushes out a security patch), so no pods could be scheduled at all?

@damoon
Contributor Author

damoon commented Dec 7, 2020

The taint is PreferNoSchedule, not NoSchedule.
The scheduler will try to avoid placing pods on these nodes, but will still do so if no other placement is possible.
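
To make the distinction concrete, a small sketch using the constants from k8s.io/api/core/v1 (the taint key is again a hypothetical placeholder):

```go
package main

import corev1 "k8s.io/api/core/v1"

// softTaint only biases the scheduler away from the node; pods can still
// land there if no other node has capacity. This is what the proposal uses.
var softTaint = corev1.Taint{
	Key:    "example.com/reboot-pending", // hypothetical key
	Effect: corev1.TaintEffectPreferNoSchedule,
}

// hardTaint would block all new pods without a matching toleration,
// which is the failure mode described above when every node is pending.
var hardTaint = corev1.Taint{
	Key:    "example.com/reboot-pending",
	Effect: corev1.TaintEffectNoSchedule,
}

func main() {}
```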
