What is the desired behavior of helm regarding manual changes? #2070
Our ideal case is that Helm upgrades would essentially work like Kubernetes apply, but at the release level instead of at the individual resource level. We are gradually getting closer to that (moving) target. During an upgrade, the chart is rendered into resource definitions. Then a patch set is calculated by comparing those resources to the existing release's resources. Then the patch set is sent to the Kubernetes API server. As with Git, Helm will try to merge the new changes without touching unrelated changes you made with `kubectl`. So, for example, if you deploy with replica count 4 and then use `kubectl` to scale down to 2, a subsequent `helm upgrade` that does not itself change the replica count will leave it at 2. As a consequence of this, it is not safe to assume that you can use `helm upgrade` to reset resources to a known state and wipe out manual changes.

/cc @adamreese
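The merge behavior described above can be sketched in a few lines. This is an illustrative model only, not Helm's actual implementation; the dict-based `diff`/`upgrade` helpers and the field names are invented for the example:

```python
def diff(old_manifest, new_manifest):
    """Fields whose values changed between the two rendered chart manifests."""
    return {k: v for k, v in new_manifest.items() if old_manifest.get(k) != v}

def upgrade(live, old_manifest, new_manifest):
    """Apply only the chart's own changes on top of the live object."""
    patch = diff(old_manifest, new_manifest)
    merged = dict(live)
    merged.update(patch)  # fields the chart did not change are left alone
    return merged

# Deployed with replicas=4, then manually scaled to 2 with kubectl:
old_manifest = {"image": "app:1.0", "replicas": 4}
live_object = {"image": "app:1.0", "replicas": 2}
# The upgrade only bumps the image tag; replicas is not in the patch:
new_manifest = {"image": "app:1.1", "replicas": 4}
print(upgrade(live_object, old_manifest, new_manifest))
# {'image': 'app:1.1', 'replicas': 2} -- the manual kubectl change survives
```

The key point the sketch captures is that the patch is computed between the *two rendered charts*, not between the new chart and the live cluster state, which is why out-of-band edits persist.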
What happens if a bad release goes out? I've encountered a situation where a value was typoed and Kubernetes rejected the patch set, but somehow Helm thought the change went through, and adjusting the value didn't fix the deployment.
@technosophos: RE #2070 (comment) -- is there a way to debug-output the patch set to get more insight into what's actually being applied? It's a bit scary for the chart's debug output to desync from what's actually happening in the cluster. I had assumed a `kubectl apply` of the whole object.
I believe @adamreese is working on that. There's also the
We're hitting this with these versions:
We've made
Changes were not reverted by the next deploy. This is especially concerning with container image tags. Essentially what that boils down to is that if someone makes a change with `kubectl`, it will persist across subsequent Helm deploys. @technosophos are you suggesting that we do a forced upgrade?
@technosophos I think I understand the logic: Helm keeps a history of the changes and does not override changes made manually with `kubectl`. But that in fact prevents using Helm to establish a known state based on the values passed to Helm charts.
You mean like `--force`?
@bacongobbler `--force` did not apply the values as I expected it would.
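The confusion here falls directly out of the two-way merge model: when the chart is unchanged between releases, the computed patch is empty, so manual edits persist. A replace-style upgrade, which is what one might expect `--force` to do (its exact behavior has varied across Helm versions), instead overwrites the live object with the rendered manifest. A contrasting sketch with invented helper names:

```python
def merge_upgrade(live, old_manifest, new_manifest):
    """Two-way merge: only fields the chart itself changed are patched."""
    patch = {k: v for k, v in new_manifest.items() if old_manifest.get(k) != v}
    return {**live, **patch}

def replace_upgrade(live, old_manifest, new_manifest):
    """Replace-style upgrade: live state is discarded outright."""
    return dict(new_manifest)

chart = {"image": "app:1.0", "replicas": 4}    # unchanged between releases
live = {"image": "app:hotfix", "replicas": 2}  # manual kubectl edits

print(merge_upgrade(live, chart, chart))    # empty patch: manual edits persist
print(replace_upgrade(live, chart, chart))  # everything reset to chart values
```

Under the merge model, redeploying an unchanged chart is a no-op against the live object, which is exactly why it cannot be used to "reset" a cluster to the chart's values.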
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
For my use case this helps
I hope Helm would not revert changes to the replica count, as this would break horizontal pod autoscalers.
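Under the merge semantics described earlier in the thread, this is what happens when a chart leaves `replicas` unmanaged: the upgrade patch never mentions the field, so the value set by the horizontal pod autoscaler survives. A sketch with illustrative dicts rather than real API objects:

```python
def upgrade(live, old_manifest, new_manifest):
    """Patch = fields the chart changed; everything else is left alone."""
    patch = {k: v for k, v in new_manifest.items() if old_manifest.get(k) != v}
    return {**live, **patch}

# The HPA has scaled the live Deployment to 7 replicas.
live = {"image": "app:1.0", "replicas": 7}

# The chart omits replicas entirely, so no upgrade patch can touch it:
print(upgrade(live, {"image": "app:1.0"}, {"image": "app:1.1"}))
# {'image': 'app:1.1', 'replicas': 7} -- the autoscaler's value is kept
```

A chart that *does* pin `replicas` would only reset it when the pinned value changes between releases, so the omit-the-field pattern is the safer way to coexist with an HPA.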
While dealing with #1844, the question from the subject came up.
Our position in this is best described in the #1844 (comment) by @wjkohnen: