feat(upgrade): support parallel/faster upgrades for node daemonset #230
Conversation
For ZFSPV, all the node daemonset pods can go into the terminating state at the same time, since ZFSPV does not require any minimum availability of those pods. Changing maxUnavailable to 100% so that Kubernetes can upgrade all the daemonset pods in parallel. Signed-off-by: Pawan <pawan@mayadata.io>
lgtm
@pawanpraka1 -- has an upgrade from a previously installed setup to this new operator.yaml been tested with these changes? In the past I have seen that when upgrade options are changed, we might have to delete some objects and then patch.
@kmova, I have tested the upgrade on my Rancher setup and did not face any issues. Could you elaborate on the issue -- which object do we need to delete, and why?
Usually, changing the default value of the upgrade strategy type requires removing that JSON object from the deployment spec and recreating it. I had faced this issue when changing from rollingUpgrade to recreate. In this case, since the strategy type is not changing, it should be fine.
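The manual fix described above can be sketched with `kubectl patch`: remove the now-invalid nested strategy object, then set the new type. This is a hypothetical illustration, not a command from this PR -- the namespace and daemonset name (`openebs-zfs-node`) are assumptions, and `OnDelete` is used because DaemonSets support `RollingUpdate`/`OnDelete` rather than `Recreate`:

```shell
# Remove the stale rollingUpdate object left over from the old strategy
# (names and namespace are assumptions for illustration).
kubectl -n kube-system patch daemonset openebs-zfs-node --type=json \
  -p='[{"op": "remove", "path": "/spec/updateStrategy/rollingUpdate"}]'

# Then switch the strategy type.
kubectl -n kube-system patch daemonset openebs-zfs-node \
  -p='{"spec":{"updateStrategy":{"type":"OnDelete"}}}'
```

Since this PR keeps the strategy type as a rolling update and only changes the budget, no such deletion should be needed here.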
Signed-off-by: Pawan pawan@mayadata.io
Why is this PR required? What issue does it fix?:
For ZFSPV, all the node daemonset pods can go into the terminating state at
the same time since it does not need any minimum availability of those pods.
What this PR does?:
Changing maxUnavailable to 100% so that K8s can upgrade all the daemonset
pods in parallel. Also added labels to all the pods.
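The change above amounts to raising the DaemonSet's rolling-update budget so every node pod can be replaced at once. A minimal sketch of the relevant spec fragment (the DaemonSet name is an assumption for illustration, not taken from this PR):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: openebs-zfs-node   # assumed name; see operator.yaml for the real one
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      # Allow 100% of the node pods to be unavailable during an upgrade,
      # so Kubernetes terminates and replaces all of them in parallel
      # (the default, 1, would roll them one node at a time).
      maxUnavailable: 100%
```

This is safe here because the ZFSPV node plugin has no minimum-availability requirement: a node pod being briefly down only delays volume operations on that node, it does not break running workloads.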