
app-name has no deployed releases #5595

Closed · sta-szek opened this issue Apr 12, 2019 · 123 comments

Comments

@sta-szek

Output of helm version:

$ helm version 
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}

Output of kubectl version:

$ kubectl version 
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:25:46Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.): Amazon

What is happening:
After a few broken deployments, helm (or tiller) is broken and all subsequent deployments (no matter whether they are fixed or still broken) end with the following error: app-name has no deployed releases

How to reproduce:
We have

spec:
  revisionHistoryLimit: 1

but I think it is not relevant.

Path a:

  1. Deploy any service - it works.
  2. Break it, e.g. by making containers exit after startup, so the whole deployment is broken.
  3. Repeat this exactly 3 times.
  4. All subsequent deployments fail with the error, no matter whether they are fixed or broken.

Path b:

  1. Deploy a broken service - see step 2 above.
  2. All subsequent deployments fail with the error, no matter whether they are fixed or broken (a CLI sketch of this follows below).
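
A minimal CLI sketch of that reproduction, assuming Helm 2 syntax, that the chart path and image tags are placeholders, and that deployments are run with --wait so crashing pods mark the release as FAILED:

# 1. first install succeeds and is marked DEPLOYED
helm upgrade --install my-app ./my-chart --set image.tag=good --wait --timeout 120

# 2. deploy a broken revision (container exits after startup) a few times;
#    --wait times out, so each revision is recorded as FAILED
for i in 1 2 3; do
  helm upgrade --install my-app ./my-chart --set image.tag=crashing --wait --timeout 120
done

# 3. even a fixed chart is now rejected
helm upgrade --install my-app ./my-chart --set image.tag=good --wait --timeout 120
# Error: UPGRADE FAILED: "my-app" has no deployed releases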
@reschex

reschex commented Apr 18, 2019

Hi - can you give some more detail on how you're deploying? Are you using helm upgrade --install by any chance? And if so, what is the state of the deployment when it's broken (helm ls) - presumably it is Failed?

If this is the case, a helm delete --purge <deployment> should do the trick.
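
For reference, a sketch of that check and cleanup (Helm 2 syntax; the release name my-app is a placeholder). Note this deletes the release's Kubernetes resources as well, which, as discussed below, is often not acceptable in production:

helm ls --all my-app         # confirm the release is stuck in FAILED
helm delete --purge my-app   # removes the release, its resources, and its history so a fresh install can proceed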

@sta-szek
Author

Hi, sorry for the missing info.
Yes, I am using helm upgrade --install.
And yes, the deployment stays in Failed forever.
Unfortunately, helm delete --purge <deployment> is not an option here at all. I cannot just delete production services because of that :)

The question is why helm cannot recover after 3 consecutive failures.

@rimusz
Contributor

rimusz commented Apr 18, 2019

the only way to sort that without deleting the release is to add --force

@sta-szek
Author

sta-szek commented Apr 18, 2019

--force to what? To helm upgrade --install?
And if yes, then it means that the above issue is actually expected behaviour and we should use --force with every deployment? -- and if so, does that mean it will deploy broken releases forcibly?

@rimusz
Contributor

rimusz commented Apr 18, 2019

yes, of course to helm upgrade --install :)
and yes you should use --force with every deployment

@sta-szek
Author

sta-szek commented Apr 18, 2019

Does it mean that --force will deploy broken releases forcibly as well? I mean, if a pod is broken and restarting all the time, will it delete old pods and schedule new ones?
--force    force resource update through delete/recreate if needed
What is the delete condition? Can you elaborate on how it works exactly? The description is definitely too short for such a critical flag - I expect it does thousands of things under the hood.

BTW I really don't want to end up with deleted production services, so the --force flag is not an option for me.

And do you really think that this is not an issue?
Even the error message is wrong:
app-name has no deployed releases
It states that there are no deployed releases, while there is one, but in state Failed, and helm does not even try to fix it :( -- by fixing I mean just please try to deploy it, instead of giving up at the very beginning.

@AmazingTurtle

See #3208

@bakayolo

bakayolo commented Nov 8, 2019

Cannot agree more. Our production is experiencing the same error, so deleting the chart is not an option, and forcing the install seems dangerous. This error is still present with Helm 3, so it might be good to include a fix or a safer workaround.

@johannges

It can be fixed by removing "status": "deployed" from the query in storage.go:136.

See: 638229c

I will fix up the pull request when I have time.

@bacongobbler
Member

bacongobbler commented Nov 12, 2019

The code in place was originally correct. Removing status: deployed from the query results in Helm finding the latest release to upgrade from, regardless of the state it is currently in, which could lead to unintended results. It circumvents the problem temporarily, but it introduces much bigger issues further down the road.

If you can provide the output of helm history when you hit this bug, that would be helpful. It would help determine how one ends up in a situation where the release ledger has no releases in the "deployed" state.
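
For anyone unsure how to gather that, a minimal sketch (the release name and namespace are placeholders):

helm history <release-name>                  # Helm 2
helm history <release-name> -n <namespace>   # Helm 3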

@bastoche

I'm encountering this issue when deploying for the first time to a new cluster. Should I use --force too?

@japzio

japzio commented Nov 25, 2019

I encountered this issue when I deleted the previous release without using the --purge option.

helm delete --purge <release-name>

Helm Version

Client: &version.Version{SemVer:"v2.15.X"}
Server: &version.Version{SemVer:"v2.15.X"}

@tomaustin700

I am also encountering this issue.

@henrikb123

henrikb123 commented Dec 2, 2019

@bacongobbler
I hit this with helm3. The history is completely empty when this happens, although the broken k8s resources have been there since attempt 1.

Reproduction seems really easy:

  1. helm upgrade --install "something with a pod that has a container that exits with error"
  2. correct what caused the container to exit, e.g. a value with an invalid arg for the executable inside the container, and try again
    -> Error: UPGRADE FAILED: "foo" has no deployed releases

@henrikb123

It seems the --atomic flag may be a way forward in my (CI/CD) scenario. Since it cleans out the initial failing release completely, as if it never happened, I don't hit this issue on the next attempt.
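
A sketch of what that looks like (Helm 3 syntax; the release name, chart path, and timeout are placeholders):

# --atomic deletes a failed first install (and rolls back a failed upgrade),
# so no FAILED/pending record is left behind to trip up the next run
helm upgrade --install my-app ./my-chart --atomic --timeout 5m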

@alex88

alex88 commented Dec 9, 2019

Same here. I don't see how using delete or --force can be advised, especially when there are persistent volumes in place. I've already lost all my grafana dashboards because of this once; not doing it again :)

Update: btw, in my case the release is failing because of:

Upgrade "grafana" failed: cannot patch "grafana" with kind PersistentVolumeClaim: PersistentVolumeClaim "grafana" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims

even though I haven't changed anything in the grafana values

@bacongobbler
Member

@alex88 can you provide the output from helm history? I need to know how others are hitting this case so we can try to nail down the root cause and find a solution.

@alex88

alex88 commented Dec 10, 2019

@bacongobbler sure, I would really love to see this fixed, as I'm really cautious about using helm after having lost persistent volumes a couple of times (probably my fault, though)

REVISION	UPDATED                 	STATUS	CHART        	APP VERSION	DESCRIPTION
4       	Wed Dec  4 02:45:59 2019	failed	grafana-4.1.0	6.5.0      	Upgrade "grafana" failed: cannot patch "grafana" with kind PersistentVolumeClaim: PersistentVolumeClaim "grafana" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
5       	Mon Dec  9 12:27:22 2019	failed	grafana-4.1.0	6.5.0      	Upgrade "grafana" failed: cannot patch "grafana" with kind PersistentVolumeClaim: PersistentVolumeClaim "grafana" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
6       	Mon Dec  9 12:33:54 2019	failed	grafana-4.1.0	6.5.0      	Upgrade "grafana" failed: cannot patch "grafana" with kind PersistentVolumeClaim: PersistentVolumeClaim "grafana" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
7       	Mon Dec  9 12:36:02 2019	failed	grafana-4.1.0	6.5.0      	Upgrade "grafana" failed: cannot patch "grafana" with kind PersistentVolumeClaim: PersistentVolumeClaim "grafana" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
8       	Mon Dec  9 13:06:55 2019	failed	grafana-4.1.0	6.5.0      	Upgrade "grafana" failed: cannot patch "grafana" with kind PersistentVolumeClaim: PersistentVolumeClaim "grafana" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
9       	Mon Dec  9 13:38:19 2019	failed	grafana-4.1.0	6.5.0      	Upgrade "grafana" failed: cannot patch "grafana" with kind PersistentVolumeClaim: PersistentVolumeClaim "grafana" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
10      	Mon Dec  9 13:38:51 2019	failed	grafana-4.1.0	6.5.0      	Upgrade "grafana" failed: cannot patch "grafana" with kind PersistentVolumeClaim: PersistentVolumeClaim "grafana" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
11      	Mon Dec  9 13:41:30 2019	failed	grafana-4.1.0	6.5.0      	Upgrade "grafana" failed: cannot patch "grafana" with kind PersistentVolumeClaim: PersistentVolumeClaim "grafana" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
12      	Mon Dec  9 13:56:01 2019	failed	grafana-4.1.0	6.5.0      	Upgrade "grafana" failed: cannot patch "grafana" with kind PersistentVolumeClaim: PersistentVolumeClaim "grafana" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
13      	Mon Dec  9 15:15:05 2019	failed	grafana-4.1.0	6.5.0      	Upgrade "grafana" failed: cannot patch "grafana" with kind PersistentVolumeClaim: PersistentVolumeClaim "grafana" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims

Basically, I tried multiple times to run the upgrade to change some env variables; since the env variables were changed anyway despite the deploy error, I kept doing so and ignored the error.

@bacongobbler
Member

bacongobbler commented Dec 10, 2019

How did you get into a state where every release has failed? Where are releases 1, 2, and 3?

@alex88

alex88 commented Dec 10, 2019

How did you get into a state where every release has failed? Where are releases 1, 2, and 3?

I was changing env variables (I had to make multiple changes) and running an upgrade every time; it kept changing the env variables, but I had no idea how to fix the persistent volume error.

Update: btw I'm using

version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"}

Regarding the previous releases: helm probably keeps only 10 of them.

@ajitchahal

Helm3: I am having a similar issue while upgrading istio. The release failed, and now I can not redeploy it even though a small error in the templates has been fixed. I can't delete the production release, since that would also delete the ELB associated with the istio-ingress service.

@HamzaZo

HamzaZo commented Dec 17, 2019

Is there any future work to change the logic when the initial release ends up in a failed state?

@com30n

com30n commented Jan 2, 2020

What do I have to do if downtime is not acceptable?

% helm upgrade prometheus-thanos --namespace metrics -f values.yaml . 
Error: UPGRADE FAILED: "prometheus-thanos" has no deployed releases
% helm install --atomic prometheus-thanos --namespace metrics -f values.yaml .                                                                                                               
Error: cannot re-use a name that is still in use
% helm version
version.BuildInfo{Version:"v3.0.1", GitCommit:"7c22ef9ce89e0ebeb7125ba2ebf7d421f3e82ffa", GitTreeState:"clean", GoVersion:"go1.13.4"}

@AirbornePorcine

Actually - nevermind. For those affected by this, there is one solution: delete the history record from kubernetes manually. It's stored as a secret. If I delete the offending pending-install state entry, then I can successfully run upgrade --install again!

@tarunnarang0201

@AirbornePorcine - Can you please elaborate on the changes required in kubernetes to delete the pending-install entries?

@AirbornePorcine

@tarunnarang0201 Helm creates a kubernetes secret for each deploy in the same namespace you deployed to. You'll see it's of type 'helm.sh/release.v1' and named something like 'sh.helm.release.v1.release-name.v1'. You just have to delete the most recent secret (note the 'v1' suffix in the example; it's incremented for each deploy), and that seemed to unblock things for me.
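
A sketch of that cleanup as commands (the namespace, release name, and revision number are placeholders; the labels are the ones Helm 3 puts on its release secrets):

# list the release secrets and their revisions
kubectl -n <namespace> get secrets -l owner=helm,name=<release-name>

# delete the one for the most recent (failed/pending) revision
kubectl -n <namespace> delete secret sh.helm.release.v1.<release-name>.v<latest-revision>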

@ninja-

ninja- commented Aug 13, 2020

@AirbornePorcine thanks!

@carlosdoordash

carlosdoordash commented Aug 13, 2020

@AirbornePorcine @tarunnarang0201 @ninja- You can also just patch the status label ... especially, if you don't have any previous DEPLOYED releases.

For Helm 3, see my comment at #5595 (comment)

For more details and instructions for Helm 2, see my comment at #5595 (comment)

@xbmono

xbmono commented Sep 1, 2020

This conversation is too long... and each comment has a different solution... so what's the conclusion?
We'd been using old helm 2.12 and never had issues, but now with v3.2.4 a previously failed deployment fails with this error.

We are using Terraform, by the way, with the latest helm provider. So should we use --force or --replace?

@chadlwilson

@xbmono The conversation is long because:

  • there are quite a number of reasons your release can get into this state
  • this was possible on Helm 2 as well, and the solutions that worked there and on Helm 3 are different
  • there are different paths users in this issue took to get there
  • there are different options depending on what you are trying to do, and whether you are willing to risk/tolerate loss of PVCs and various possible combinations of downtime

If you are at a "has no deployed releases" error, I'm not sure either install --replace or upgrade --install --force will help you on its own.

A sensible suggestion can probably only be given:

  • if you supply the helm history for the release so people can see what has happened (see the sketch below)
  • if you share the original reason for the failure/what you did to get there - and whether you feel that the original problem has been addressed
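
A sketch of how to gather that state (the release name and namespace are placeholders; the labels are the ones Helm 3 puts on its release secrets):

helm history my-app -n my-namespace
helm ls -a -n my-namespace
kubectl get secrets -n my-namespace -l owner=helm,name=my-app --show-labels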

My summary of possible options:

  • if you don't care about the existing k8s resources or downtime at all, helm uninstall && helm install may be an option
  • if it's a first-time chart install that failed, you can probably just delete the release secret metadata and helm install again. You may need to clean up k8s resources manually if cruft got left behind due to the failure, depending on whether you used --atomic etc.
  • if you abandoned a --waited install part way through and the helm history shows the last release in pending-install, you can delete the most recent release secret metadata or patch the release status
  • in certain other combinations of scenarios, it may also be possible to patch the release status of one or more of the release secrets and see if a subsequent upgrade can proceed; however, to my knowledge, most of these cases were addressed by fix(helm): allow a previously failed release to be upgraded #7653 (to ensure there is a deployed release somewhere in the history to go back to), so I'd be surprised if this was useful now

Since this is a closed issue, I suspect there is a root cause that would be good to debug and document in a different, more specific ticket anyway.

@xbmono

xbmono commented Sep 1, 2020

@chadlwilson Thanks for your response.

helm history returns no rows!

Error: release: not found

but helm list returns the failed deployment

M:\>helm3 list -n cluster171
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS  CHART                           APP VERSION
cluster171      cluster171      1               2020-09-01 04:45:26.108606381 +0000 UTC failed    mychart-prod-0.2.0-alpha.10    1.0

We are using Terraform and our environments get deployed every hour automatically by Jenkins. With Terraform I can't run helm upgrade myself; that's what the helm provider does.

In the terraform code I set force_update to true - no luck - and then I set replace to true, again no luck:

resource "helm_release" "productStack" {
  name = "${var.namespace}"
  namespace = "${var.namespace}"
  chart = "${var.product_stack}"
  force_update = true//"${var.helm_force_update}"
  max_history = 10
  replace = true

  wait = true
  timeout = "${var.timeout_in_seconds}"

}

So I wonder if it's to do with wait=true? The reason the previous deployment failed was that the cluster wasn't able to communicate with the docker repository, so the timeout was reached and the status became failed. We fixed the issue and the pods restarted successfully. Obviously helm delete works, but if I were to do this each time, neither my managers nor the developers would be happy.

With helm v2, if the deployment failed and the developers fixed it, the next deployment would upgrade the failed deployment.

@chadlwilson

M:\>helm3 list -n cluster171
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS  CHART                           APP VERSION
cluster171      cluster171      1               2020-09-01 04:45:26.108606381 +0000 UTC failed    mychart-prod-0.2.0-alpha.10    1.0

The helm history failure seems odd (typo? missed namespace? wrong helm version?), but given it's revision 1 in the list above, it seems you are trying to do a first-time installation of a new chart and that installation has failed. If you are trying to unblock things, you can probably delete the release secret metadata as above, or patch its status, and try again. The failure may indicate that the metadata is in a bad state from the perspective of either Helm or the Helm Terraform provider, but not how it got there.

In any case, I don't have issues doing upgrade over failed first-time deploys with Helm 3.2.1 since #7653 was merged. You might want to double-check the specific Helm version the provider is actually using? It's also possible it may be to do with the way the Helm Terraform provider figures out the state of the release after an install failure. I don't have any experience with that provider, and personally am not in favour of wrapping Helm with another declarative abstraction such as TF because I find it even more opaque when things go wrong, but you might want to dig further there all the same.

In any case, as I said above, if the error you are stuck at is has no deployed releases after a failed first-time deployment, I don't think either replace nor force are likely to help you resurrect the situation without some other intervention and it would be best to debug it further and have any conversation elsewhere, as going back and forth on this old closed ticket with 51 participants doesn't seem so productive for all concerned.

@xbmono

xbmono commented Sep 3, 2020

No, there was no typo. Also, this happens regardless of whether it is the first deployment or a later one.

As I mentioned, we are using the --wait option to wait for the deployment in Jenkins and then to report whether the deployment failed or not.

It seems that if the timeout is reached and the deployment isn't successful, helm marks the deployment as failed and there is no way to recover other than manually deleting that release. And we don't want to delete the release automatically either, because that's scary.

So if we remove the --wait option, helm will mark the deployment as successful regardless.

Workaround:

Now I found another solution. For those who have the same problem and want their automation to work as nicely as it used to, here is my workaround (a script sketch follows this list):

  • Remove the --wait option from the helm deploy
  • Use this command to retrieve the list of deployments in the namespace you are deploying to: kubectl get deployments -n ${namespace} -o jsonpath='{range .items[*].metadata}{.name}{","}{end}'
  • You can use split to turn the comma-separated list above into an array
  • Then you can run multiple commands in parallel (we use Jenkins so it's easy to do so), as kubectl rollout status deployment ${deploymentName} --watch=true --timeout=${timeout} -n ${namespace}
  • If, after the timeout (for example 7m, meaning 7 minutes), the deployment is still not successful, the command exits with an error
  • Problem solved.
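
A bash sketch of that workaround; the namespace, release name, chart path, and timeout are placeholders:

#!/usr/bin/env bash
NS=my-namespace
TIMEOUT=7m

# deploy WITHOUT --wait so helm records the release as deployed
helm upgrade --install my-app ./my-chart -n "$NS"

# watch each Deployment's rollout ourselves and fail if any of them time out
DEPLOYMENTS=$(kubectl get deployments -n "$NS" -o jsonpath='{range .items[*].metadata}{.name}{","}{end}')
IFS=',' read -ra NAMES <<< "$DEPLOYMENTS"

pids=()
for d in "${NAMES[@]}"; do
  [[ -z "$d" ]] && continue
  kubectl rollout status deployment "$d" --watch=true --timeout="$TIMEOUT" -n "$NS" &
  pids+=("$!")
done

status=0
for pid in "${pids[@]}"; do
  wait "$pid" || status=1   # remember if any rollout failed or timed out
done
exit "$status"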

@LasTshaMAN

Actually - nevermind. For those affected by this, there is one solution: delete the history record from kubernetes manually. It's stored as a secret. If I delete the offending pending-install state entry, then I can successfully run upgrade --install again!

Alternatively, this worked for me:

helm uninstall {{release name}} -n {{namespace}}

@abdennour

Fixed by: kubectl -n $namespace delete secret -lstatus=pending-upgrade
Now run helm again.

@nick4fake

nick4fake commented Oct 8, 2020

I am not sure why this is closed - I've just hit it with brand new Helm 3.3.4. If the initial install fails, a second helm upgrade --install --force still shows the same error. All those workarounds work, but they are manual; they don't help when you want completely, 100% automatic CI/CD where you can simply push the fix to trigger another deployment without doing manual cleanup.

Has anyone thought of simply adding a flag indicating that this is the first release, so it should be safe to just delete it automatically? Or adding something like "--force-delete-on-failure"? Ignoring the problem is not going to help.

@hickeyma
Contributor

hickeyma commented Oct 8, 2020

@nick4fake AFAIK it was closed by PR #7653. @yinzara might be able to provide more details.

@yinzara
Contributor

yinzara commented Oct 8, 2020

It was a decision by the maintainers not to allow overwriting a pending-upgrade release. But your statement that none of the workarounds work in a CI/CD pipeline is not true. The last suggested workaround could be added as a build step before running your helm upgrade (I also would not use --force in a CI/CD pipeline). It has the same effect as what you've suggested, except that it deletes the release right before you install the next one instead of immediately afterwards, allowing you to debug the cause of the failure.

@yinzara
Contributor

yinzara commented Oct 8, 2020

I have also used the following in my automated build to uninstall any "pending" releases before I run my upgrade command (make sure to set the NS_NAME environment variable to the namespace you're deploying to):

#!/usr/bin/env bash
RELEASES=$(helm list --namespace $NS_NAME --pending --output json | jq -r '.[] | select(.status=="pending-install")|.name')
if [[ ! -z "$RELEASES" ]]; then
  helm delete --namespace $NS_NAME $RELEASES
fi

@nick4fake

@yinzara thank you for the snippet, it is very helpful for those finding this thread.

My point is still valid - it is not safe to simply delete a release. Why can't Helm force-upgrade a release if a single resource fails? Replacing the release with a new version seems like a better solution than full deletion. I might not understand some core fundamentals of Helm (like how it manages state), so it might not be possible, but I still don't understand why it is better to force users to intervene manually if the first installation fails.

I mean, just check this discussion thread - people still face the issue. What do you think about adding some additional information to the Helm error message, with a link to this thread plus some suggestions on what to do?

@yinzara
Contributor

yinzara commented Oct 10, 2020

@nick4fake I think you're mixing up "failed" with "pending-install".

The library maintainers agree with you about failed releases, that's why they accepted my PR.

A "failed" release CAN be upgraded. That's what my PR did. If a release fails because one of its resources failed, you can just upgrade that release (i.e. upgrade --install works too) and it will not give the "app-name has no deployed releases" error.

You're talking about a "pending-install" release. The maintainers do not think it is safe to allow you to upgrade a pending-install release (forced or otherwise) as it could possibly be in progress still or be in a partially complete state that they don't feel can be resolved automatically. My PR originally allowed this state and the maintainers asked me to remove it.

If you find your releases in this state, you might want to reconsider your deployment configuration. This should never happen in a properly configured CI/CD pipeline: it should either fail or succeed. "Pending" implies the install was cancelled while it was still processing.

I am not a maintainer, so my opinion on your suggestion is irrelevant; however, I do not find any mention in the codebase of a GitHub issue that's actually printed in an error message, so I'm betting they won't allow that, but you're welcome to put together a PR and see :-)

@yinzara
Contributor

yinzara commented Oct 10, 2020

That being said, I don't agree that your point is still valid. My suggestion removes the pending release; however, @abdennour's suggestion right before yours is just to delete the secret that describes the pending-install release. If you do that, you're not deleting any of the resources from the release and you can upgrade the release.

@omnibs

omnibs commented Oct 13, 2020

What do you think about possibly adding some additional information to Helm error message with link to this thread + some suggestions on what to do?

+1 to this. We still have to google around to find this thread and understand what a pending-install release is before we can begin to reason about this error message.

@sajtrus

sajtrus commented Oct 20, 2020

I had issues with helm upgrade and it led me here. It was solved by adding -n <namespace>. Maybe it will help someone out there.

@Jenishk56

Jenishk56 commented Oct 27, 2020

For Helm 3, it could be solved through a patch:
kubectl -n <namespace> patch secret <release-name>.<version> --type=merge -p '{"metadata":{"labels":{"status":"deployed"}}}'

The release-name and version can be seen from kubectl get secrets -n <namespace> | grep helm

@polatsinan

For Helm 2, it can be solved through:
kubectl -n kube-system patch configmap release-name.v123 --type=merge -p '{"metadata":{"labels":{"STATUS":"DEPLOYED"}}}'

@wind57

wind57 commented Oct 20, 2021

the patch does not work anymore:

kubectl describe secret sh.helm.release.v1.test-1.v1
Name:         sh.helm.release.v1.test-1.v1
Namespace:    default
Labels:       modifiedAt=1634763779
              name=test-1
              owner=helm
              status=test
              version=1
Annotations:  <none>

Type:  helm.sh/release.v1

Data
====
release:  3100 bytes

and :

helm ls
NAME  	NAMESPACE	REVISION	UPDATED                             	STATUS  	CHART        	APP VERSION
test-1	default  	1       	2021-10-20 17:02:59.333187 -0400 EDT	deployed	mychart-0.1.0	1.16.0

the status is now elsewhere:

kubectl get secret sh.helm.release.v1.test-1.v1 -o json | jq .data.release | tr -d '"' | base64 -d | base64 -d | gzip -d | jq | grep status

good luck editing that base64 and replacing it.

We found no solution but to delete the secret.

@markdingram

markdingram commented Aug 5, 2022

something like this worked for us:


$ kubectl get secret sh.helm.release.v1.xxx.v123 -o json > secret.orig.json
$ cat secret.orig.json | jq -r '.data.release' | base64 -d | base64 -d | gzip -d > secret.json

$ vi secret.json    # <- change the status field to "deployed"

$ cat secret.json | gzip | base64 | base64 > secret.encoded
$ jq -r '.data.release = $input' secret.orig.json --rawfile input secret.encoded > secret.new.json
$ kubectl apply -f secret.new.json
