`kubectl drain` should not, in theory, kill the kured pod, since it is deployed as a DaemonSet; we rely on this behaviour because once the drain completes we still need to command the reboot. We have however seen the kured pod killed during drain when the embedded version of `kubectl` is too far from the server version (specifically, kubectl 1.7.x against a 1.9.x server), resulting in a never-ending cycle of lock/drain/restart/unlock without the reboot actually occurring.
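A rough sketch of the sequence we depend on, in Go (the command invocations, flags, and the `KURED_NODE_ID` environment variable are illustrative assumptions here, not a copy of kured's actual code):

```go
// Sketch only: shows why the kured pod must survive its own node's drain.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func drainAndReboot(node string) error {
	// Drain the node; --ignore-daemonsets is what should leave this pod alone.
	drain := exec.Command("kubectl", "drain", node, "--ignore-daemonsets", "--force")
	if out, err := drain.CombinedOutput(); err != nil {
		return fmt.Errorf("drain of %s failed: %v: %s", node, err, out)
	}
	// This line is only reached if the pod survived the drain of its own node.
	log.Printf("drain of %s complete, commanding reboot", node)
	return exec.Command("systemctl", "reboot").Run()
}

func main() {
	// Node name assumed to be injected via the downward API (illustrative).
	if err := drainAndReboot(os.Getenv("KURED_NODE_ID")); err != nil {
		log.Fatal(err)
	}
}
```

The whole premise is that the drain returns control to this pod; if the pod is evicted mid-drain, the reboot line is never reached and the lock/drain/restart/unlock cycle described above begins.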
Possible fixes:
- Detect client/server version mismatch on startup and refuse to operate (we should probably warn on this anyway); see the first sketch after this list.
- Ignore TERM signals after drain commences, and set a `terminationGracePeriodSeconds` long enough for the drain to complete (problem: how long is long enough?).
- Catch TERM during drain and print a warning message.
- Catch TERM during drain, and stash some information in the lock so that we don't cycle endlessly on restart; the second sketch after this list covers these TERM-handling variants.
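For the first fix, a minimal sketch using client-go's discovery client; the hard-coded embedded kubectl version and the naive minor-version comparison are assumptions for illustration only:

```go
// Sketch of a startup version-skew check, assuming client-go is available.
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

// Version of the kubectl binary baked into the image (illustrative value).
const embeddedKubectlMinor = "7"

func checkVersionSkew() error {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return err
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return err
	}
	server, err := dc.ServerVersion()
	if err != nil {
		return err
	}
	// Naive comparison for illustration; a real check would parse and
	// compare semantic versions and allow one minor version of skew.
	if server.Minor != embeddedKubectlMinor {
		return fmt.Errorf("server %s.%s too far from embedded kubectl minor %s",
			server.Major, server.Minor, embeddedKubectlMinor)
	}
	return nil
}

func main() {
	if err := checkVersionSkew(); err != nil {
		log.Fatalf("refusing to operate: %v", err)
	}
}
```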
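For the TERM-related fixes, a sketch that intercepts SIGTERM while the drain runs; the function names are hypothetical, and the fourth fix would extend the signal branch to annotate the lock:

```go
// Sketch of deferring/logging SIGTERM while a drain is in progress.
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
)

// withTermDeferred runs fn while SIGTERM is intercepted; a TERM received
// mid-operation is logged (and could be recorded in the reboot lock)
// instead of terminating the process immediately.
func withTermDeferred(fn func() error) error {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM)
	defer signal.Stop(sigs)

	done := make(chan error, 1)
	go func() { done <- fn() }()

	for {
		select {
		case sig := <-sigs:
			log.Printf("received %v during drain; continuing until drain completes", sig)
			// The fourth fix would also annotate the lock here so a
			// restarted pod knows the previous drain was interrupted.
		case err := <-done:
			return err
		}
	}
}

func main() {
	err := withTermDeferred(func() error {
		// Placeholder for the real drain invocation.
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```

Note that this only helps within `terminationGracePeriodSeconds`: once that elapses the kubelet sends SIGKILL, which cannot be caught, which is exactly the "how long is long enough?" problem.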