Helm3 deletes configmaps when using hooks on success by default #8694

Closed
Jinkxed opened this issue Sep 3, 2020 · 17 comments

@Jinkxed

Jinkxed commented Sep 3, 2020

Use case: We are trying to ensure our configmaps have been installed / updated prior to running database migrations that rely on said configmaps. We run migrations via a job that runs with hooks.

Output of helm version:

version.BuildInfo{Version:"v3.3.0", GitCommit:"8a4aeec08d67a7b84472007529e8097ec3742105", GitTreeState:"dirty", GoVersion:"go1.14.7"}

Output of kubectl version:

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-21T14:50:54Z", GoVersion:"go1.14.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.11-eks-065dce", GitCommit:"065dcecfcd2a91bd68a17ee0b5e895088430bd05", GitTreeState:"clean", BuildDate:"2020-07-16T01:44:47Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.):

EKS 1.15

Create a job with the following annotations:

    meta.helm.sh/release-name: "%{namespace}-%{application}"
    meta.helm.sh/release-namespace: "%{namespace}"
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "10"

Create a configmap with the following annotations:

    meta.helm.sh/release-name: "%{namespace}-%{application}"
    meta.helm.sh/release-namespace: "%{namespace}"
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "1"

Create a deployment with NO annotations around hooks, as we want them to deploy last (they are web servers).

The configmap will persist through and be available for the job, but as soon as it's completed the configmaps are deleted and are no longer available for the web server deployments. As far as I understand from the hooks documentation, not specifying a hook-delete-policy of hook-succeeded should ensure the configmaps persist until the next deploy/upgrade?

This definitely seems to be the case for jobs using the same settings. The jobs will persist after completing, but our configmaps seem to disappear no matter what hook-delete-policy I use.

The reason why we are having to use hooks in our configmaps is we noticed that if we change an environment variable in the configmap the job with hooks would use the previous configmap on deploy.

This also creates another scenario where if we don't use hooks on the configmaps, on new installations the job tries to run before they are created and fails.
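
For concreteness, the hook pair described above looks roughly like the sketch below (names, image, command and data are hypothetical; the annotations are the ones listed above):

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                          # hypothetical name
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "1"                # lower weight, so it is created before the job
data:
  DATABASE_HOST: db.example.internal        # hypothetical value
---
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate                          # hypothetical name
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "10"               # higher weight, so it runs after the configmap exists
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: example/app:latest                             # hypothetical image
          command: [ "bundle", "exec", "rake", "db:migrate" ]   # hypothetical command
          envFrom:
            - configMapRef:
                name: app-config

The web-server Deployment carries none of these annotations, so it is installed as a regular release resource after the hooks complete.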

@bacongobbler
Member

It sounds like you're looking for the hook-weight annotation. This lets you run your hooks in a defined order, so that one hook executes before another.

See https://helm.sh/docs/topics/charts_hooks/#writing-a-hook for more information on hook weights.

@Jinkxed
Author

Jinkxed commented Sep 15, 2020

Hey @bacongobbler, I did try hook weights in a number of different configurations. The core issue here is that Helm is deleting the configmaps even when they should persist.

So using weights really had no effect.

@heydonovan

I have the exact same issue. The documentation isn't clear on how to go about not deleting those resources. Here is another person with the same problem: #2622 (comment)

Very open to seeing if we are going about this the wrong way, but I don't see how one could keep those resources after a successful job. We thought about just hacking around it with a separate configmap template just for migration jobs. This person looks to have similar thinking: https://stackoverflow.com/a/59425059

@Jinkxed
Author

Jinkxed commented Sep 22, 2020

So I was able to get this working with a bit of a hack.

Here's what I did.

On your configmap add the lines:

    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "1"

Next, on your job (we do this for migrations):

    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "2"

Now this is the part that is the hack, because without it the configmap will auto-delete itself and not be available for the other deployments that need it.

We have a deployment we call "console", which allows us to run rake tasks or investigate things easily for each service. It's basically just a pod that uses:

          command: [ "/bin/bash", "-c", "--" ]
          args: [ "while true; do sleep 30; done;" ]

to stay running permanently. On this deployment AND the PDB, add the lines:

    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "3"

Binding the configmap to this hook which runs forever makes it so the configmap never auto deletes and is always available.
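
Roughly, that console piece of the workaround looks like the following sketch (hypothetical names and image; the point is that the Deployment, and its PDB, carry the same hook annotations and weight "3"):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console                               # hypothetical name
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "3"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: console
  template:
    metadata:
      labels:
        app: console
    spec:
      containers:
        - name: console
          image: example/app:latest            # hypothetical image
          command: [ "/bin/bash", "-c", "--" ]
          args: [ "while true; do sleep 30; done;" ]
          envFrom:
            - configMapRef:
                name: app-config               # hypothetical; the hook configmap from above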

I tried about 20 different ways to not have to do something like this, but this was the simplest and does exactly what I needed, as well as fixing the issue of new configmaps not being available for jobs prior to them starting.

Hopefully this helps someone else.

  • Edited to add: the annotations for the console need to be on the PDB as well.

@heydonovan

@sc-chad Actually, just tried again this morning, and not specifying helm.sh/hook-delete-policy caused the configmap to stick around after the job completed. Here is what I've got:

    jobAnnotations:
      helm.sh/hook: "pre-install,pre-upgrade"
      helm.sh/hook-weight: "-10"
      helm.sh/hook-delete-policy: "hook-succeeded"
    configmapAnnotations:
      helm.sh/hook: "pre-install,pre-upgrade"
      helm.sh/hook-weight: "-20"
$ helm version --short
v3.3.1+g249e521

$ kubectl version --client --short
Client Version: v1.18.8
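
(Presumably those annotation maps are rendered into the hook manifests with the usual toYaml pattern, roughly like the sketch below, assuming jobAnnotations is a values key as shown above; the job name and spec are hypothetical.)

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate                        # hypothetical name
  annotations:
    {{- toYaml .Values.jobAnnotations | nindent 4 }}
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: example/app:latest       # hypothetical image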

@Jinkxed
Author

Jinkxed commented Sep 22, 2020

@heydonovan that's interesting. I'm on 3.3.0; I wonder if there is a difference in behavior or if it's because you are using hook-succeeded.

You might test, if you haven't already, what happens when that job fails, as "hook-succeeded" is fine until the job fails for some reason and breaks your next deployment. The reason is that the job will persist and you'll get "job already exists" the next time you deploy.

Also, the docs say: if no hook deletion policy annotation is specified, the before-hook-creation behavior applies by default.

I tried your method and my configmap still auto-deleted, but I can't use the "hook-succeeded" policy, so that may be why.

One thing I didn't note above in my configuration is the console pod has a PDB as well, which needs the same annotations as the deployment.

@Jinkxed
Author

Jinkxed commented Sep 22, 2020

And now, with a different app that has 2 jobs, it's auto-deleting again even with my above setup on the latest Helm version. Back to the drawing board.

@Jinkxed
Author

Jinkxed commented Sep 22, 2020

So I finally gave up. Using the exact same annotations from one app to another caused different behavior. I decided to go with a much simpler method.

This won't ensure migrations are completed prior to the app's deploy, but at least it's a workaround until hooks get some more love.

---
apiVersion: batch/v1
kind: Job
metadata:
  name: "db-migrate-job-{{ now | date "20060102150405" }}"

This gives each job a unique name so you won't hit the immutability issues, and you can delete all the pre-hook and deletion-policy annotations as you won't need them.

You will likely have to clean up the original job that didn't use a unique name after, but that's it.
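
Fleshed out a bit, the timestamped job is just a plain (non-hook) Job; a sketch with hypothetical container details:

---
apiVersion: batch/v1
kind: Job
metadata:
  name: "db-migrate-job-{{ now | date "20060102150405" }}"   # unique name per deploy
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: example/app:latest                             # hypothetical image
          command: [ "bundle", "exec", "rake", "db:migrate" ]   # hypothetical command
          envFrom:
            - configMapRef:
                name: app-config                                # now a plain release-managed configmap

Since neither the job nor the configmap is a hook anymore, both are managed as ordinary release resources; the trade-off, as noted above, is that the migration is no longer guaranteed to run before the app deployment.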

@github-actions

This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.

@github-actions github-actions bot added the Stale label Dec 22, 2020
@Nastik-kum

Facing pretty much the same issue - some secrets with the following hook annotations are sometimes deleted during upgrade and the new one is not created.

annotations:
    "helm.sh/hook": pre-install, pre-upgrade
    "helm.sh/hook-weight": "-3"
    "helm.sh/hook-delete-policy": before-hook-creation

The strangest thing is that the second try always finishes successfully.

@github-actions

This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.

@blastik

blastik commented Oct 6, 2021

@bacongobbler can this issue be reopened? We are facing pretty much the same issue.

@silvez-dh

facing the same issue here w/ v3.5.4

@mabdh

mabdh commented Aug 10, 2022

@bacongobbler I am also facing the same issue here. Looking at the code, is the delete hook operation blocking?
If not, maybe there is a race condition between the creation of the configmap hook and the deletion of the previous one, so Helm could be deleting (asking kube to delete) the newly created one. Maybe we need to add a WaitForDelete?

@mabdh

mabdh commented Aug 10, 2022

Facing pretty much the same issue - some secrets with the following hook annotations are sometimes deleted during upgrade and the new one is not created.

annotations:
    "helm.sh/hook": pre-install, pre-upgrade
    "helm.sh/hook-weight": "-3"
    "helm.sh/hook-delete-policy": before-hook-creation

The strangest thing is that the second try always finishes successfully.

I second this; we need to upgrade several times until the state is consistent.

Here are the details:

We are on helm v3.5.3

configmap and secret hook annotation:

annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-5"

migration job hook annotation:

annotations:    
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation

Behaviour:
1st upgrade
Initial state:

  • Config exists
  • App pod exists

Hook running:

  • Config removed
  • Config added
  • Migration pod running
  • Migration terminated
  • Config removed

Final state:

  • Config does not exist
  • Old pods are running
  • Some new pods are created with CreateContainerConfigError

2nd upgrade
Initial state:

  • Config does not exist
  • Old app pods are running, new app pods are created with CreateContainerConfigError

Hook running:

  • Config removed
  • Config added
  • Migration pod running
  • Migration terminated
  • Config removed

Final state:

  • Config does not exist
  • Old pods are terminated
  • Some new app pods are running, the other new app pods are created with CreateContainerConfigError

3rd upgrade
Initial state:

  • Config does not exist
  • Some new app pods are running, some new app pods are created with CreateContainerConfigError

Hook running:

  • Config removed
  • Config added
  • Migration pod running
  • Migration terminated
  • Config removed

Final state:

  • Config does not exist
  • All new app pods are running

4th upgrade
Initial state:

  • Config does not exist
  • All new app pods are running

Hook running:

  • Config removed
  • Config added
  • Migration pod running
  • Migration terminated
  • Config removed

Final state:

  • Config exists
  • All new app pods are running (no-op)

@Jinkxed
Author

Jinkxed commented Aug 10, 2022

@bacongobbler Can we get this reopened please?

@hickeyma hickeyma removed the Stale label Aug 10, 2022
@unamashana

Perhaps it will stop working again soon, but adding the keep resource policy to the ConfigMap does the trick for now. The ConfigMap isn't deleted after the job, and it's recreated on the next deployment.

    "helm.sh/hook-delete-policy": before-hook-creation 
    "helm.sh/resource-policy": keep

angdraug added a commit to angdraug/mastodon-chart that referenced this issue Jan 15, 2023
pre-install and pre-upgrade hooks run before the persistent ConfigMap
resources are installed. As suggested in helm/helm#8694, create a hook
with lower hook-weight and resource-policy=keep to make the same
ConfigMap available in pre- hooks.
neggles pushed a commit to neggles/mastodon-chart that referenced this issue Feb 9, 2023
dictvm pushed a commit to dictvm/mastodon-chart that referenced this issue Feb 10, 2023
neggles pushed a commit to neggles/mastodon-chart that referenced this issue Mar 18, 2023