app-name has no deployed releases #5595
Hi - can you give some more detail on how you're deploying? Are you using […]? If this is the case, a […]
Hi, sorry for the missing info. The question is why Helm cannot recover after 3 consecutive failures.
The only way to sort that without deleting the release is to add […]
Yes, of course, to […]
Does it mean that […]? BTW, I really don't want to end up with deleted production services, so the […], and do you really think that it is not an issue?
See #3208
Cannot agree more. Our production is experiencing the same error. Deleting the chart is not an option, and forcing the install seems dangerous. This error is still present with Helm 3, so it would be good to include a fix or a safer workaround.
It can be fixed by removing `"status": "deployed"` in storage.go:136. See 638229c. I will fix the pull request when I have time.
The code in place was originally correct; removing […]. If you can provide the output of […]
I'm encountering this issue when deploying for the first time to a new cluster. Should I use […]?
I encountered this issue when I deleted the previous release without using […].
Helm version: […]
I am also encountering this issue.
@bacongobbler Reproduction seems really easy: […]
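The reproduction described throughout the thread boils down to something like the following sketch (the release and chart names are placeholders, not from the original report):

```shell
# 1. First install fails (e.g. a bad image tag or failing readiness probe),
#    leaving the release in a "failed" or "pending-install" state.
helm install myapp ./mychart --wait

# 2. Fix the chart and try again: the upgrade is rejected because
#    no revision of the release ever reached "deployed" status.
helm upgrade myapp ./mychart
# -> Error: UPGRADE FAILED: "myapp" has no deployed releases
```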
Seems the `--atomic` flag may be a way forward in my CI/CD scenario. Since it cleans out the initial failing release completely, as if it never happened, I don't hit this issue on the next attempt.
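A sketch of that `--atomic` approach (release and chart names are placeholders). On a failed first install, `--atomic` deletes the release; on a failed upgrade, it rolls back:

```shell
# If this install fails, --atomic removes the failed release,
# so the next pipeline run starts from a clean slate instead of
# hitting "has no deployed releases".
helm upgrade --install myapp ./mychart --atomic --timeout 5m
```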
Same here, I don't see how using […] Update: BTW, in my case the release is failing because of: […] even though I haven't changed anything in the Grafana values.
@alex88 can you provide the output from […]?
@bacongobbler sure. I would really love to see this fixed, as I'm really cautious of using Helm after having lost persistent volumes a couple of times (probably my fault, though).
Basically, I've tried multiple times to run the upgrade to change some env variables; since the env variables changed anyway despite the deploy error, I kept doing so, ignoring the error.
How did you get into a state where every release has failed? Where are releases 1, 2, and 3?
Changing env variables (I had to make multiple changes) and running an upgrade every time; it was changing the env variables, but I had no idea how to fix the persistent volume error. Update: BTW, I'm using […].
Regarding previous releases, Helm probably keeps only 10 of them.
Helm 3: I am having a similar issue while upgrading Istio. The release failed, and now I cannot redeploy it even though a small error in the templates is fixed. I can't delete the production release, since that would also delete the ELB associated with the istio-ingress service.
Is there any future work to change the logic when the initial release ends up in a failed state?
What do I have to do if downtime is not acceptable? […]
Actually - never mind. For those affected by this, there is one solution: delete the history record from Kubernetes manually. It's stored as a Secret. If I delete the offending […]
@AirbornePorcine - Can you please elaborate on the changes required in kubernetes to delete the pending-install entries . |
@tarunnarang0201 Helm creates a Kubernetes Secret for each deploy, in the same namespace you deployed to. You'll see it's of type 'helm.sh/release.v1' and named something like 'sh.helm.release.v1.release-name.v1'. You just have to delete the most recent secret (look at the 'v1' suffix in the example; it's incremented for each deploy), and that seemed to unblock things for me.
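Based on that description, the manual cleanup might look like this (namespace, release name, and revision number are placeholders; verify which revision is the newest before deleting anything):

```shell
# List Helm's release records in the namespace
# (Helm 3 labels them with owner=helm)
kubectl get secrets -n my-namespace -l owner=helm

# Delete only the most recent record (highest .vN suffix);
# this removes the stuck history entry, not any workload resources.
kubectl delete secret sh.helm.release.v1.my-release.v3 -n my-namespace
```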
@AirbornePorcine thanks!
@AirbornePorcine @tarunnarang0201 @ninja- You can also just patch the status label, especially if you don't have any previous DEPLOYED releases. For Helm 3, see my comment at #5595 (comment). For more details and instructions for Helm 2, see my comment at #5595 (comment).
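For Helm 3, the label-patching variant might look like the sketch below. The secret name and namespace are placeholders, and relying on the `status` label being what Helm's storage queries check is an assumption about Helm 3's release records:

```shell
# Mark the stuck release record as "deployed" so the next
# upgrade finds a deployed revision and proceeds.
kubectl patch secret sh.helm.release.v1.my-release.v1 -n my-namespace \
  --type=merge -p '{"metadata":{"labels":{"status":"deployed"}}}'
```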
This conversation is too long, and each comment has a different solution. What's the conclusion? We are using Terraform, by the way, with the latest Helm provider. So should we use […]?
@xbmono The conversation is long because there are […]
If you are at a "has no deployed releases" error, I'm not sure […]. A sensible suggestion can probably only be given […]
My summary of possible options: […]
Since this is a closed issue, I suspect there is a root cause that would be good to debug and document in a different, more specific ticket anyway.
@chadlwilson Thanks for your response, but […]
We are using Terraform, and our environments get deployed every hour automatically by Jenkins. With Terraform I can't use […]. In the Terraform code I have set […].
So I wonder if it's to do with […]. With Helm v2, if the deployment failed and the developers fixed it, the next deployment would upgrade the failed deployment.
The […]. In any case, I don't have issues doing […]. And as I said above, if the error you are stuck at is […]
No, there was no typo. Also, this happens regardless of whether it is the first deployment or a later one. As I mentioned, we are using […]. It seems that if the timeout is reached and the deployment isn't successful, […]. So if we remove […].
Workaround: I found another solution. For those who have the same problem and want their automation to work as nicely as it used to, here is my workaround: […]
Alternatively, this worked for me: […]
Fixed by […]
I am not sure why this is closed; I've just hit it with a brand new Helm 3.3.4. If the initial install fails, a second `helm upgrade --install --force` still shows the same error. All those workarounds work, but they are manual; they don't help when you want completely, 100% automatic CI/CD, where you can simply push a fix to trigger another deployment without manual cleanup. Has anyone thought of simply adding a flag indicating that this is the first release, so it should be safe to just delete it automatically? Or adding something like `--force-delete-on-failure`? Ignoring the problem is not going to help.
@nick4fake AFAIK it was closed by PR #7653. @yinzara might be able to provide more details.
It was a decision by the maintainers not to allow overwriting a pending-upgrade release. But your statement that all the workarounds fail in a CI/CD pipeline is not true. The last suggested workaround could be added as a build step before running your helm upgrade (I also would not use --force in a CI/CD pipeline). It has the same effect as what you've suggested, except that it deletes the release right before you install the next release instead of immediately afterwards, allowing you to debug the cause of the failure.
I have also used the following in my automated build to uninstall any "pending" releases before I run my upgrade command (make sure to set the NS_NAME environment variable to the namespace you're deploying to):

```bash
#!/usr/bin/env bash
# Collect the names of releases stuck in pending-install
RELEASES=$(helm list --namespace "$NS_NAME" --pending --output json \
  | jq -r '.[] | select(.status == "pending-install") | .name')
if [[ -n "$RELEASES" ]]; then
  helm delete --namespace "$NS_NAME" $RELEASES
fi
```
@yinzara thank you for the snippet, it is very helpful for those finding this thread. My point is still valid, though: it is not safe to simply delete a release. Why can't Helm force-upgrade a release if a single resource fails? Replacing the release with a new version seems a better solution than full deletion. I might not understand some core fundamentals of Helm (like how it manages state), so it might not be possible, but I still don't understand why it is better to force users to intervene manually if the first installation fails. I mean, just check this discussion thread; people still face the issue. What do you think about adding some additional information to the Helm error message, with a link to this thread plus some suggestions on what to do?
@nick4fake I think you're mixing up "failed" with "pending-install". The library maintainers agree with you about failed releases; that's why they accepted my PR. A "failed" release CAN be upgraded. That's what my PR did. If a release fails because one of its resources failed, you can just upgrade that release (i.e. `upgrade --install` works too) and it will not give the "app-name" has no deployed releases error.
You're talking about a "pending-install" release. The maintainers do not think it is safe to allow you to upgrade a pending-install release (forced or otherwise), as it could still be in progress, or be in a partially complete state that they don't feel can be resolved automatically. My PR originally allowed this, and the maintainers asked me to remove it.
If you find your releases in this state, you might want to reconsider your deployment configuration. This should never happen in a properly configured CI/CD pipeline: an install should either fail or succeed; "pending" implies it was cancelled while still processing.
I am not a maintainer, so my opinion on your suggestion is irrelevant; however, I do not find any mention in the codebase of a GitHub issue that's actually printed in an error message, so I'm betting they won't allow that, but you're welcome to put together a PR and see :-)
That being said, I don't agree that your point is still valid. My suggestion may remove the pending release, but @abdennour's suggestion right before yours is just to delete the Secret that describes the pending-install release. If you do that, you're not deleting any of the resources from the release, and you can then upgrade it.
+1 to this. We still have to google around to find this thread to understand what a […] is.
I had issues with […]
For Helm 3, it can be solved through a patch: […]
For Helm 2, it can be solved through: […]
The patch does not work anymore: […]
and: […]
The […]
Good luck editing that base64 and replacing it. We found no solution but to delete the secret.
Something like this worked for us: […]
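A hedged sketch of the "delete the secret" route in script form, for automation (NS and RELEASE are placeholder environment variables; this removes only the newest release record, not any workload resources, and assumes Helm 3's `owner`/`name` labels on its release Secrets):

```shell
#!/usr/bin/env bash
# Find the newest Helm release record for $RELEASE in $NS and delete it.
kubectl get secrets -n "$NS" -l "owner=helm,name=$RELEASE" \
  --sort-by=.metadata.creationTimestamp -o name \
  | tail -n 1 \
  | xargs -r kubectl delete -n "$NS"
```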
Output of helm version: […]
Output of kubectl version: […]
Cloud Provider/Platform (AKS, GKE, Minikube etc.): Amazon
What is happening:
After a few broken deployments, Helm (or Tiller) is broken, and all subsequent deployments (no matter if fixed or still broken) end with the following error:
app-name has no deployed releases
How to reproduce:
We have […], but I think it is not relevant.
Path a: […]
Path b: […]