What would you like to be added:
A way to control when Pods from a StatefulSet or Deployment are restarted. The default should stay as is, but it should be possible to prevent automatic restarts of Pods if they are not required.
Why is this needed:
At the moment, the enhancer adds the annotation `kudo.dev/last-plan-execution-uid: 4f668a1a-0226-49a0-8354-c3e32e36440b` to `StatefulSet.spec.template.metadata.annotations`, which always triggers a restart of all Pods, even if no other attribute of the pod template was changed.
This was added to make sure that the Pods are always restarted, but it has negative implications for large installations of an operator. For example, a Cassandra StatefulSet whose NODE_COUNT changes from 30 to 31 will trigger a restart of all 30 existing Pods, interrupting the running C* cluster more than necessary.
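To illustrate, this is roughly what the rendered StatefulSet looks like after a plan execution (a minimal sketch; only the fields relevant to the restart behavior are shown, and the names besides the annotation key are placeholders):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra          # placeholder name
spec:
  template:
    metadata:
      annotations:
        # The enhancer rewrites this value on every plan execution.
        # Any change to spec.template counts as a pod-template change,
        # so the StatefulSet controller rolls all existing Pods,
        # even when nothing else in the template differs.
        kudo.dev/last-plan-execution-uid: 4f668a1a-0226-49a0-8354-c3e32e36440b
```

A scale-only change (e.g. bumping `spec.replicas` from 30 to 31) would normally just create one new Pod; it is the simultaneous annotation update that forces the full rolling restart.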