taint node with PreferNoSchedule until it gets drained #19
This issue was automatically considered stale due to lack of activity. Please update it and/or join our slack channels to promote it, before it automatically closes (in 7 days).
@weaveworksbot I am still interested in this.
Isn't there a danger that every node ends up waiting to reboot (e.g. if Ubuntu pushes out a security patch), so no pods can get scheduled anywhere?
The taint is PreferNoSchedule, not NoSchedule, so scheduling never fails outright; the scheduler only tries to avoid the tainted nodes.
Without the taint, when multiple nodes are waiting to restart, pods get moved more often than necessary, because an evicted pod can land on the node that will be rebooted next.
Applying PreferNoSchedule to nodes awaiting a reboot makes the scheduler prefer placing pods onto nodes that have already been rebooted.
link: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
link: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
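For anyone wanting to experiment, here is a minimal sketch of what this could look like with client-go: add a PreferNoSchedule taint to a node that is pending a reboot. The taint key `weave.works/kured-reboot-pending` is a hypothetical name chosen for illustration, not something the project defines.

```go
// Sketch: taint a node with PreferNoSchedule while it waits to be rebooted.
// Assumes kubeconfig at the default location; taint key is hypothetical.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

const taintKey = "weave.works/kured-reboot-pending" // hypothetical key

func main() {
	nodeName := os.Args[1]

	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Skip if the taint is already present.
	for _, t := range node.Spec.Taints {
		if t.Key == taintKey {
			fmt.Println("taint already present")
			return
		}
	}

	// PreferNoSchedule only discourages new pods: existing pods keep running,
	// and scheduling still succeeds if no untainted node fits.
	node.Spec.Taints = append(node.Spec.Taints, corev1.Taint{
		Key:    taintKey,
		Effect: corev1.TaintEffectPreferNoSchedule,
	})

	if _, err := client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Printf("tainted node %s with %s:PreferNoSchedule\n", nodeName, taintKey)
}
```

The reboot daemon would remove the taint again after the node comes back up, so the soft repulsion only lasts while a reboot is actually pending.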