Fine control over when Pods in a StatefulSet are restarted #1424

Closed
ANeumann82 opened this issue Mar 13, 2020 · 0 comments · Fixed by #1483

What would you like to be added:
A way to control when Pods from a StatefulSet or Deployment are restarted. The default should stay as is, but it should be possible to prevent automatic restarts of Pods if they are not required.

Why is this needed:
At the moment, the enhancer adds the annotation `kudo.dev/last-plan-execution-uid: 4f668a1a-0226-49a0-8354-c3e32e36440b` to `StatefulSet.spec.template.annotations`, which always triggers a restart of all Pods, even if no other attribute of the pod template was changed.
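
For illustration, a minimal sketch of where that annotation ends up, assuming a generic StatefulSet (the resource names, image, and UID value are placeholders). Any change under `spec.template` alters the pod template, so the StatefulSet controller rolls every Pod:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra              # placeholder name
spec:
  replicas: 30
  template:
    metadata:
      annotations:
        # Stamped by the enhancer on every plan execution. Because it sits in
        # the pod template, a new UID changes the template and the StatefulSet
        # controller restarts every Pod, even if nothing else changed.
        kudo.dev/last-plan-execution-uid: 4f668a1a-0226-49a0-8354-c3e32e36440b
    spec:
      containers:
        - name: cassandra      # placeholder container
          image: cassandra     # placeholder image
```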

This was added to make sure that the Pods are always restarted, but it has negative implications for large installations of an operator. For example, a Cassandra StatefulSet whose NODE_COUNT changes from 30 to 31 will trigger a restart of all 30 existing Pods, interrupting the running C* cluster more than required.
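
To make the Cassandra example concrete: a scale-up that only changes the replica count does not touch the pod template on its own, so by itself it would not cause a rolling restart; it is the freshly stamped UID that forces one. A sketch of the resulting spec (values are placeholders):

```yaml
spec:
  replicas: 31                 # was 30 -- changing replicas alone leaves the pod template untouched
  template:
    metadata:
      annotations:
        # A new UID is written on every plan execution, so the pod template
        # still changes and all 30 existing Pods are restarted along with the scale-up.
        kudo.dev/last-plan-execution-uid: <new-uid-for-this-plan-run>
```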
