Detect when drain operation kills kured pod #14

awh opened this Issue Apr 16, 2018 · 1 comment



awh commented Apr 16, 2018

kubectl drain should not, in theory, kill the kured pod, since kured runs as a DaemonSet; we rely on this behaviour because, once the drain operation is complete, we need to command the reboot. However, we have seen the kured pod killed during drain when the embedded version of kubectl is too different from the server's (specifically, kubectl 1.7.x against server 1.9.x), resulting in a never-ending cycle of lock/drain/restart/unlock without the reboot actually occurring.
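The mismatch described above could be caught up front by comparing minor versions. A minimal Go sketch, assuming kubectl's documented support window of ±1 minor version of client/server skew; `minorVersion` and `skewTooLarge` are hypothetical helpers, not part of kured:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorVersion extracts the Kubernetes minor version from a string
// like "v1.9.3". Real version strings can carry suffixes (e.g.
// "v1.9.3-gke.0"); this parsing is a deliberate simplification.
func minorVersion(v string) (int, error) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("unparseable version %q", v)
	}
	return strconv.Atoi(parts[1])
}

// skewTooLarge reports whether client and server differ by more than
// one minor version, the skew kubectl officially supports.
func skewTooLarge(client, server string) (bool, error) {
	c, err := minorVersion(client)
	if err != nil {
		return false, err
	}
	s, err := minorVersion(server)
	if err != nil {
		return false, err
	}
	d := c - s
	if d < 0 {
		d = -d
	}
	return d > 1, nil
}

func main() {
	// The combination from this report: kubectl 1.7.x against server 1.9.x.
	bad, _ := skewTooLarge("v1.7.6", "v1.9.3")
	ok, _ := skewTooLarge("v1.9.1", "v1.9.3")
	fmt.Println(bad, ok) // true false
}
```

On startup, kured could fetch both version strings, run this check, and refuse to operate (or at least log a warning) when the skew is outside the supported window.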

Possible fixes:

  • Detect client/server version mismatch on startup and refuse to operate (we should probably warn on this anyway)
  • Ignore TERM signals after drain commences, and have a long enough terminationGracePeriodSeconds that we can complete (problem: how long is long enough?)
  • Catch TERM during drain and print a warning message
  • Catch TERM during drain, and stash some information in the lock so that we don't cycle endlessly on restart
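The "ignore TERM after drain commences" option could look roughly like the sketch below. `withSigtermIgnored` is a hypothetical wrapper, not kured's actual API; it diverts SIGTERM to a channel for the duration of the drain and restores default handling afterwards:

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
	"time"
)

// withSigtermIgnored runs fn while SIGTERM is diverted to a channel
// and discarded, then restores default signal handling. kured would
// wrap its drain invocation in something like this so that a drain
// cannot kill the pod before the reboot is commanded.
func withSigtermIgnored(fn func()) {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM) // catch SIGTERM instead of dying
	done := make(chan struct{})
	go func() {
		for {
			select {
			case <-sigs:
				fmt.Fprintln(os.Stderr, "ignoring SIGTERM during drain")
			case <-done:
				return
			}
		}
	}()
	fn()
	close(done)
	signal.Stop(sigs) // restore default SIGTERM behaviour
}

func main() {
	withSigtermIgnored(func() {
		// Simulate the drain being TERMed mid-flight.
		syscall.Kill(syscall.Getpid(), syscall.SIGTERM)
		time.Sleep(100 * time.Millisecond) // let the signal arrive
		fmt.Println("drain completed despite SIGTERM")
	})
}
```

Note this only addresses the symptom: terminationGracePeriodSeconds still bounds how long Kubernetes waits before escalating to SIGKILL, which cannot be caught, so the "how long is long enough?" question remains.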


neumanndaniel commented Apr 24, 2018

Related to this: it would be good if you could maintain a section in the documentation recording which image version ships which kubectl version.

Additionally, it would be great if the latest image were tagged latest.
