Why it happened:
For the upcoming release we chose a very simple strategy: restart everything (#1031).
For every plan executed, we always restart everything, because right now we cannot tell whether a restart is needed (we don't know, for example, whether a parameter change touched a ConfigMap that would require the StatefulSet to be restarted). This is not an ideal solution, though; we should be able to understand the dependencies and restart only when necessary.
How to reproduce:
With a 3-broker cluster, scaling to 5 brokers used to just add the 2 new brokers and wait for them to become ready. Now, when we run:

```
k kudo update --instance=kafka -p BROKER_COUNT=5
Instance: instance default/kafka has updated parameters from map[] to map[BROKER_COUNT:5]
InstanceController: Going to start execution of plan deploy on instance default/kafka
```

the old pods are restarted right after the new pods are up.
A PodDisruptionBudget helps, but the Pods are still restarted, which makes the StatefulSet update much slower.
This also applies when the deploy plan modifies just a Service, because the resource enhancement sets the last-plan-execution-uid annotation on the StatefulSet's pod template.
It would be nice if we only updated last-plan-execution-uid on the template when we know the Pods need to be restarted, but figuring that out automatically might be complex.
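For context, the mechanism here is the standard Kubernetes rolling-update trigger: any change to a field under the StatefulSet's pod template (including an annotation) makes the controller roll every Pod. A minimal sketch of what the enhanced resource ends up looking like (the resource name and the `kudo.dev/` annotation prefix are illustrative; the issue only names `last-plan-execution-uid`):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka-kafka        # illustrative name
spec:
  template:
    metadata:
      annotations:
        # Updated on every plan execution. Because it sits inside the pod
        # template, any change forces a rolling restart of all Pods,
        # even when nothing else in the template changed.
        kudo.dev/last-plan-execution-uid: "a1b2c3d4"
    spec:
      containers:
        - name: broker
          image: kafka:2.3.0
```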
Maybe we could leave this to operator developers in some way? They should know which variables should trigger a Pod restart, whether the variable is used in the StatefulSet, a ConfigMap, or somewhere else.
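One possible shape for that, purely as a sketch: let operator developers annotate parameters with whether changing them must roll the Pods. The `restart` field below does not exist in KUDO; it only illustrates the idea.

```yaml
# Hypothetical params.yaml extension -- `restart` is NOT a real KUDO
# field. It marks which parameter changes require a Pod restart, so the
# controller could skip bumping last-plan-execution-uid otherwise.
apiVersion: kudo.dev/v1beta1
parameters:
  - name: BROKER_COUNT
    default: "3"
    restart: false   # scaling only adds/removes replicas
  - name: LOG_RETENTION_HOURS
    default: "168"
    restart: true    # rendered into the broker ConfigMap; brokers must reload it
```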