Error when updating Statefulsets #2149
Comments
When you say "seem to work", do you mean that you have checked with […]
See this explanation in the Kubernetes documentation: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/ It states: […]
The changes made were to spec.template.containers.resources, so we would expect no errors. We received the error only on the first attempt; subsequent attempts were verified to stick. P.S. What's the recommended way of updating StatefulSets managed by Helm with changes beyond containers and replicas?
Also, curious about the above ^
Also interested. What is the process when I do need to update a […]
Or does 1.7 change the semantics?
Same problem here, but when updating the "replicas" value: the StatefulSet doesn't get updated.
Same problem here, but with Kubernetes 1.8.x this should be allowed: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
I had a similar error when adding a 2nd container (k8s 1.7.8). I solved it by manually deleting the StatefulSet without deleting its running pods; after that, running helm upgrade succeeded and it automatically started rolling the old pods.
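The workaround described above can be sketched as follows. Release, chart, and StatefulSet names are placeholders; on kubectl versions older than 1.20 the flag is `--cascade=false` rather than `--cascade=orphan`:

```shell
# Delete only the StatefulSet object; orphan its pods so they keep running
kubectl delete statefulset my-sts --cascade=orphan

# Re-run the upgrade; Helm recreates the StatefulSet with the new spec,
# and the controller adopts the still-running pods and rolls them
# according to the update strategy
helm upgrade my-release ./my-chart
```

Because the pods are orphaned rather than deleted, the application stays up while the StatefulSet object is replaced.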
FWIW, still seeing this on k8s 1.8.5 & 1.8.6 with Helm 2.7.2 and 2.8.0-rc.1.
With k8s 1.9 I get the error every time I try to update my chart ([…])
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
What's the recommended way of updating StatefulSets managed by Helm with changes beyond containers and replicas?
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
I had this problem with 2.8.2, then upgraded to 2.9.1 and had no issue.
@MikeSchuette thanks for your answer. I will give it a try :)
I just faced the same issue with 2.10.0, and it seems the only way I found to fix it is @balboah's solution ([…])
You may want to review the […]
Still seeing this with image changes.
Seeing inconsistent behavior when upgrading a StatefulSet on Helm 3.2.0. Changes to […] were dropped; as a result, the StatefulSet relaunched the pods in a broken state because they referenced a Secret that no longer existed. I was able to clean up the errant volume/volumeMount content that Helm had left behind with a […]
Kubernetes only allows a small subset of changes to a StatefulSet once it has been created. We cannot update values that Kubernetes considers immutable. This was pointed out earlier by @technosophos in #2149 (comment); I highly recommend giving that document a read-through.

For those interested in more destructive options (i.e. deleting and re-creating the resource on an upgrade), please have a look at #7431 and the discussion in #7082.

This is intentional behaviour on the Kubernetes side, so there's nothing Helm can do about it. I'm closing this as an intentional design choice by Kubernetes. Thanks!
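For context, a sketch of which in-place StatefulSet updates the API server accepts. Names are placeholders, and the exact list of mutable fields varies by Kubernetes version:

```shell
# Allowed: spec.replicas and spec.template (plus updateStrategy and, on
# newer versions, a few other fields) can be changed in place
kubectl scale statefulset web --replicas=5
kubectl set image statefulset/web nginx=nginx:1.25

# Rejected: most other spec fields are immutable after creation; the API
# server answers with the same "Forbidden" error Helm surfaces in this issue
kubectl patch statefulset web -p '{"spec":{"serviceName":"changed"}}'
```

Helm only forwards the update to the API server, so any change outside the mutable subset is rejected regardless of how the manifest is applied.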
As mentioned in the above steps, there was no error; instead the change was silently dropped. This resulted in the cluster falling out of sync with the chart content, which then caused further issues. Additionally, this was apparently a legal change, as the missing change could be applied manually with a […]
@bacongobbler Hoping you will reopen this. I'm experiencing the same edge case mentioned by @mariusgrigoriu and @nickbp. Specifically, even though Helm v3 fails to upgrade the release for the reasons you described in #2149 (comment), subsequent updates produce no error or failure. If I attempt the invalid update via […], would you expect Helm v3 to only fail the first time and never again?

My concern here is that Helm may actually be marking the release as successful internally in its Secrets state when it was not. To be clear: the issue here is NOT that Helm fails to upgrade an immutable spec field (I understand Helm is limited by the Kubernetes API); the issue is that the failure appears to be dropped/swallowed after the first occurrence.
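One way to check whether a failure was swallowed is to compare Helm's recorded state against the live cluster. A sketch, assuming Helm 3 and a placeholder release name:

```shell
# Inspect what Helm recorded for each revision; a swallowed failure would
# show up as a revision marked "deployed" despite the API rejection
helm history my-release

# Compare the manifest Helm believes is deployed against the live objects;
# any silently dropped change appears here as a diff
helm get manifest my-release | kubectl diff -f -
```

If `helm history` shows the revision as deployed while `kubectl diff` still reports differences, the release state and the cluster have diverged.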
I'll re-open the ticket here for further discussion. I do not have any more information to share other than what's been shared in previous comments in this thread. If you've found a case that may cause a bug, we'd appreciate a fix, or at the very least a test case which others can use to reproduce the issue to determine a fix. Thanks. |
I managed to solve this issue by deleting and purging my helm release ([…]), then redeploying ([…]).
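For reference, the delete-and-purge approach in Helm 2 era syntax (release and chart names are placeholders). Note that unlike the `--cascade` workaround earlier in this thread, this also deletes the running pods:

```shell
# Helm 2: remove the release and its stored history
helm delete --purge my-release
# (Helm 3 equivalent: helm uninstall my-release)

# Reinstall from the chart
helm install --name my-release ./my-chart   # Helm 2 syntax
```

This trades downtime for simplicity, so it is only suitable where the StatefulSet's workload can tolerate being recreated from scratch.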
Could Helm delete the sts automatically without removing its pods (with a flag, such as […])?
This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs. |
The first time I'm updating a StatefulSet I get an error like:

```
Error: UPGRADE FAILED: StatefulSet.apps "eerie-tortoise-consul" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas' are forbidden.
```

Subsequent attempts seem to work with no error. An easy way to repro for us: […]
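A minimal repro along the lines described in this thread; the chart path and value names are hypothetical:

```shell
# Install a chart that renders a StatefulSet (Helm 2 era syntax,
# matching the versions reported below)
helm install ./chart-with-statefulset --name my-release

# Change something under spec.template (e.g. container resources),
# then upgrade; on affected versions the first attempt fails with the
# "Forbidden" error even though the change is legal
helm upgrade my-release ./chart-with-statefulset \
  --set resources.requests.cpu=200m
```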
```
Client: &version.Version{SemVer:"v2.2.3", GitCommit:"1402a4d6ec9fb349e17b912e32fe259ca21181e3", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.2.3", GitCommit:"1402a4d6ec9fb349e17b912e32fe259ca21181e3", GitTreeState:"clean"}
```
Logs