“Safely Drain A Node” does not explain how to handle DaemonSet pods #39816
cc @liggitt
Related Doc - https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/#use-kubectl-drain-to-remove-a-node-from-service
from linked page https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#drain Does that explain it?
DaemonSet pods are not safer to skip draining; they are just more difficult to drain effectively. The skew/upgrade doc indicates pods must be removed from a node before upgrading, because kubelet does not test or guarantee compatibility of on-disk structures between minor versions. That guidance applies to all types of pods, including DaemonSet pods, and from an upgrade perspective it doesn't especially matter how the pods are removed. The page that describes running the drain command walks through steps to remove pods, but apparently does not provide instructions for effectively removing DaemonSet pods. That's the issue to fix (either by documenting effective steps, or by improving the drain command or the DaemonSet controller, or both).
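The behavior being described can be sketched as follows (the node name is hypothetical):

```shell
# Without --ignore-daemonsets, drain refuses to proceed when
# DaemonSet-managed pods are present on the node:
kubectl drain node-1

# With the flag, drain evicts the other pods but leaves DaemonSet pods
# in place -- evicting them would be pointless, since the DaemonSet
# controller would immediately recreate them on the same node:
kubectl drain node-1 --ignore-daemonsets
```

So the drain command only ever sidesteps DaemonSet pods; it never actually removes them from the node.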
I think the key question that Safely Drain A Node must answer is: how do we stop DaemonSets from replacing Pods after we remove them, and before we have finished maintenance? We can document this, but it might be a bit of a pain: I think we'd need to look at labelling nodes and changing the scheduling rules for each DaemonSet.
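A sketch of that label-and-reschedule approach, assuming a hypothetical `maintenance` label key and a hypothetical DaemonSet named `fluentd`:

```shell
# Mark the node as under maintenance:
kubectl label node node-1 maintenance=true

# Add a node-affinity rule to each DaemonSet so it avoids labelled nodes;
# the DaemonSet controller then removes its pod from node-1 and does not
# recreate it there until the label is removed:
kubectl patch daemonset fluentd -n kube-system --type merge -p '
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: maintenance
                operator: DoesNotExist
'
```

The pain point is the per-DaemonSet patching: every DaemonSet in the cluster needs the rule, and it has to be reverted (or the label removed) when maintenance finishes.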
/triage accepted
/priority backlog
IMO: not quite important-longterm, but close
/lifecycle frozen
/retitle “Safely Drain A Node” does not explain how to handle DaemonSet pods
This issue has not been updated in over 1 year, and should be re-triaged. For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The draining guide explicitly ignores DaemonSet pods (the documented command passes --ignore-daemonsets).
I wonder why DaemonSet pods are safer to skip draining compared to other pods from Deployments or StatefulSets (or regular pods). There should be no difference, compatibility-wise, in those pods' ability to survive an in-place kubelet MINOR upgrade.
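Part of the answer is that eviction simply doesn't work for DaemonSet pods the way it does for other workloads. A quick illustration (pod and namespace names hypothetical):

```shell
# Delete a DaemonSet-managed pod; the DaemonSet controller recreates it
# on the SAME node almost immediately, because a DaemonSet must run a
# pod on every matching node:
kubectl delete pod fluentd-abc12 -n kube-system

# Watch the replacement come back on the node you were trying to empty.
# A Deployment's replacement pod, by contrast, can be scheduled onto any
# other node, which is why drain can meaningfully evict it:
kubectl get pods -n kube-system -o wide --watch
```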