Error: UPGRADE FAILED: no resource with the name "anything_goes" found #3275
Completely removing the release from Helm works. But why can't Helm just overwrite whatever is currently installed? Aren't we living in a declarative world with Kubernetes?
Just got the same thing... quite new for me, and it seems to be a new issue. Deleting the resource will fix it.
I had this problem - it was due to a PersistentVolume that I'd created. To resolve, I deleted the PV and PVC, then re-ran the upgrade, which succeeded.
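That recovery path can be sketched as a short shell sequence; the release, PVC, and chart names below are hypothetical placeholders, not taken from the thread:

```shell
# Hypothetical names; substitute your own release, PVC, and chart path.
RELEASE="myapp"
PVC="myapp-data"
CHART="./myapp-chart"

# Remove the claim (and, if it is not reclaimed automatically, the bound PV)
# that Helm has lost track of.
kubectl delete pvc "$PVC"

# Then re-run the upgrade; with the orphaned object gone, Helm can
# recreate it as part of the new release revision.
helm upgrade "$RELEASE" "$CHART"
```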
I got the feeling it might be related to a bad PV... but then the error is quite misleading! Just tried with 2.7.1 with no luck... [main] 2017/12/21 15:30:48 Starting Tiller v2.7.1 (tls=false)
Seems it gets confused when doing two releases at the same time... I just reapplied the same config twice... [tiller] 2017/12/21 15:50:46 preparing update for xxx
Might be related to #2941.
As said in the other thread, one of the ways to fix the issue was to delete the buggy configmaps... seems to do it for me currently.
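For context: Tiller (Helm 2) stores each release revision as a configmap in `kube-system`, named `<release>.v<revision>` and labelled `OWNER=TILLER`. Assuming that default storage backend, the buggy configmaps can be located and removed roughly like this (the release name and revision number are hypothetical):

```shell
RELEASE="myapp"     # hypothetical release name
BAD_REV="7"         # hypothetical revision stuck in a bad state

# List Tiller's per-revision configmaps for this release.
kubectl -n kube-system get configmap -l "OWNER=TILLER,NAME=${RELEASE}"

# Delete the configmap for the bad revision, then retry the upgrade.
kubectl -n kube-system delete configmap "${RELEASE}.v${BAD_REV}"
```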
That is all fine and dandy. Until that time when you have to delete something critical from a production namespace. Which, coincidentally, happened to me just now. :c
I've faced the issue as well when we upgrade a release if there are multiple API objects in a single template file.
Same problem. Everything was just fine yesterday and I did multiple upgrades. Today I just added a new yaml with a new resource and the upgrade failed with this error. The interesting thing is, helm created the new resource anyway. Update: I manually deleted the new resource and the upgrade then succeeded.
I was having this exact error. It looks like the issue is related to templates with multiple API objects similar to what @amritb saw. In my case, I had a template that had multiple API objects that could be toggled on and off similar to:
Breaking that into its own template file and cleaning up the orphaned objects that helm created and forgot about resolved the issue for me. It sounds like there is a bug in how helm retrieves the previous config if the number of objects per template changes between releases.
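For illustration, a minimal sketch of the template shape described above: one file emitting several API objects, each behind its own values flag (all names and values here are invented):

```yaml
{{- if .Values.exampleConfig.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  key: value
{{- end }}
---
{{- if .Values.exampleJob.enabled }}
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  template:
    spec:
      containers:
        - name: main
          image: busybox
          command: ["true"]
      restartPolicy: Never
{{- end }}
```

When one of these flags flips between releases, the number of objects rendered from the file changes, which matches the reported trigger for the bug.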
Adding another datapoint: I appear to be having the exact same issue as @awwithro. We're using a jinja loop to create multiple cronjobs via a template, and when a new upgrade caused this loop to fill in an additional cronjob, we ran into the bug. Seemed to trigger #2941 as well (or possibly one bug causes the other), and deleting the zombie configmaps fixes it.
Just ran into this even without using any configmaps.
Some extra color for anyone who may be stuck: this seems to be in line with others' evidence that deletion is the only way to solve it right now 😕
Also running across this =\
I also needed to delete affected resources. Not good for a production environment =_(
I'm seeing something I think is similar. The problem appears to be that a resource created by a previous failed upgrade is no longer in Helm's record of the release.
The same problem using helm. But the configmap exists in the cluster. Here is the configmap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "proxy.fullname" . }}-config
  # namespace: {{ .Release.Namespace }} # I've tried adding and removing it
  labels: # labels are the same as the labels from $ kubectl describe configmap bunny-proxy-config
    app: {{ template "proxy.name" . }}
    chart: {{ template "proxy.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
  asd: qwe
```
I deleted the release and re-installed it again. Also, I was using an old apiVersion, which I updated. Right now it seems like everything is working.
This is really easy to reproduce; it happens if there is an error in the manifest. Say we have resource1 and resource2, and resource2 depends on the first. When we upgrade the release, resource1 is created (e.g. PV & PVC), but resource2 fails. After this, only deleting resource1 helps, as helm always reports a problem on upgrade (PersistentVolume with name ... not found).
We had the same issue (the resource that got us was Secrets). Removing the new secrets and re-deploying fixed it. Do note that because of the failures, we now have 11 different releases showing when we list them.
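The pile-up of revisions can be inspected with `helm history`; a sketch, with a hypothetical release name:

```shell
RELEASE="myapp"   # hypothetical release name

# Show every stored revision, including the FAILED ones left behind
# by the bad upgrades.
helm history "$RELEASE"

# In Helm 2, Tiller only prunes old revisions if it was initialized
# with a cap, e.g.: helm init --history-max 10
```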
helm/helm#3275 has caused serious amounts of downtime in the last week or so for various deployments at Berkeley, so let's recommend people use the last known version that did not have these issues.
This has pretty much made helm unusable for regular production deploys for us :( We're currently investigating doing things like passing --dry-run to helm and piping it to kubectl apply... Since this seems to affect only a subset of users, I'm unsure what it is that we are doing wrong :(
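That workaround would look roughly like the following; note that `helm template` (which renders locally, without Tiller) gives cleaner output to pipe than `--dry-run`. The chart path, release name, and namespace are hypothetical:

```shell
CHART="./mychart"   # hypothetical chart path

# Render the chart locally and hand the manifests straight to kubectl,
# bypassing Tiller's release bookkeeping entirely.
helm template "$CHART" --name myapp --namespace prod | kubectl apply -f -
```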
After tailing the tiller logs, I found that tiller was trying to update an old release at the same time. Deleting the old configmap for s2osf.v10 and then upgrading worked.
I am hitting this issue too. I tried adding a subchart with a deployment in my chart; the first upgrade succeeded, but subsequent upgrades fail with this error. The helm tiller log just shows the same error. Anyone experiencing this too?
* Add test for rolling back from a FAILED deployment
* Update naming of release variables: use the same naming as the rest of the file
* Update rollback test: add logging; verify other release names are not changed
* fix(tiller): supersede multiple deployments. There are cases when multiple revisions of a release have been marked with DEPLOYED status. This makes sure any previous deployment will be set to SUPERSEDED when doing rollbacks. Closes helm#2941 helm#3513 helm#3275
Hi, I also have this problem and cannot solve it. Could you give me some pointers?
See #1193 (comment).
Just confirming I am witnessing the same issue, and the cause is as indicated earlier. Added a new secret and referenced it in a volume (invalid syntax). The upgrade failed, and subsequent upgrades failed with the error above. Listing secrets showed it had been created. Manually deleted the secret and the upgrade went through successfully.
Same, @thedumbtechguy. I run into this issue routinely. It's especially fun when Helm decides you need to delete all your secrets, configmaps, roles, etc. Upgrading becomes a game of whack-a-mole with an ever-increasing list of arguments to kubectl delete. I should have thrown in the towel on this sisyphean task months ago, but it's too late for that now. Sure hope this and the dozens of similar issues can be fixed!
I've been using helm for one week and have already faced everything outlined here: https://medium.com/@7mind_dev/the-problems-with-helm-72a48c50cb45. A lot needs fixing here.
I experienced the same with Helm v2.10. I already had a chart deployed, then added another configMap to the chart. It reported that the deployment failed because it couldn't find configMap "blah". I ran a dry run to verify the configMap was indeed being rendered, and it was. Checked the configMaps in the cluster and found it there. Deleted the blah configMap, re-ran the upgrade, and it worked.
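That debugging sequence, sketched with a hypothetical release and chart (the configMap name "blah" is from the comment above):

```shell
RELEASE="myapp"     # hypothetical release name
CHART="./mychart"   # hypothetical chart path

# 1. Render without applying, to confirm the configMap is in the output.
helm upgrade "$RELEASE" "$CHART" --dry-run --debug | grep -n "blah"

# 2. Check whether the object already exists in the cluster despite the
#    failed upgrade.
kubectl get configmap blah

# 3. Delete the orphaned configMap and retry the upgrade.
kubectl delete configmap blah
helm upgrade "$RELEASE" "$CHART"
```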
#5460 should better clarify the error message going forward. |
Fair point.
Keep up the good work helm team. |
In case this is a big deal to anyone else, thought I'd point out that #4871 should fix these issues. Note that it appears it still hasn't been approved by the Helm team, and there were some concerns about the automatic deletion of resources. Just mentioning it in case anyone wants to build it from source and give it a try.
Having the same issue, and the only workaround seems to be deleting the release.
A less destructive option is doing a rollback to the most recently deployed revision.
Any idea if this is going to be in the next release, and if so, when it is coming?
#5460 was merged 2 months ago, which means it should be in helm 2.14.0. |
I fixed the issue by deleting the resources the failed upgrade had left behind and running the upgrade again.
We ran into this issue in PROD, when a requirement of our umbrella helm chart added a configmap based on a conditional. For us the workaround was to delete the newly added configmap and re-run the upgrade.
For us, a simple rollback to the current revision has always worked.
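A sketch of that rollback workaround; the release name, chart path, and revision number are hypothetical:

```shell
RELEASE="myapp"   # hypothetical release name

# Find the revision currently marked DEPLOYED.
helm history "$RELEASE"

# Roll back to that same revision (say it was 12). This rewrites the
# stored release record, which clears the "no resource with the name"
# state without deleting anything from the cluster.
helm rollback "$RELEASE" 12

# Subsequent upgrades should then go through.
helm upgrade "$RELEASE" ./mychart
```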
@tobypeschel, do you have an idea how your fix works?
Hi,
We are constantly hitting a problem that manifests itself with this error, for example:

Error: UPGRADE FAILED: no resource with the name "site-ssl" found

It can appear after any innocuous update to a template. Could you please help me understand the problem? What causes those messages to appear?
I've been unsuccessful in triaging the issue further; it may happen at any time, and I haven't really found a pattern yet.
Perhaps, there is a problem with how we deploy?
helm upgrade hmmmmm /tmp/dapp-helm-chart-20171219-20899-1ppm74grrwrerq --set global.namespace=hmm --set global.env=test --set global.erlang_cookie=ODEzMTBlZjc5ZGY5NzQwYTM3ZDkwMzEx --set global.tests=no --set global.selenium_tests=no --namespace hmm --install --timeout 300
Helm: v2.7.2, v2.6.2; Kubernetes: v1.7.6, v1.8.5. I've tried every possible combination of these four versions; none of them work.