
Support helm.Chart with helm hooks #555

Closed
gobengo opened this issue May 4, 2019 · 26 comments
Labels
kind/enhancement Improvements or new features resolution/fixed This issue was fixed

Comments

@gobengo

gobengo commented May 4, 2019

Pulumi program: https://gist.github.com/gobengo/590a88e3b951fa3f66c8d8e963bf34a0

I am trying to use the latest wordpress chart (version 5.9.0) to test out pulumi-kubernetes' helm.Chart. The pulumi-kubernetes docs for Helm use a much earlier version.

const wordpressChart = new k8s.helm.v2.Chart(`${namespaceName}-wordpress`, {
	namespace: namespaceName,
	repo: "stable",
	version: "5.9.0",
	chart: "wordpress",
	transformations: [
		addNamespaceTransformation(namespaceName),
		removeHelmHooksTransformation()
	]
});

However, when I run pulumi up, it fails because this helm chart includes a helm.sh/hook: test-success annotation. The mariadb chart it composes has a test hook as well, so there are two that fail.

Pulumi seems to be unaware of hook semantics (understandable). The end result is that the hook Pods sometimes get created at the same time as, or before, the app Pods they depend on (i.e. mariadb), so they fail with exit code 1, which causes the pulumi deployment to fail as well.

I was able to work around this by providing a transformation created with

function removeHelmHooksTransformation() {
	return (o: any) => {
		if ( ! o) return
		if (o.metadata && o.metadata.annotations && o.metadata.annotations['helm.sh/hook']) {
			console.log('skipping helm hook', o.metadata)
			// transforms in nodejs expects you to mutate input object, not return a new one :/
			// https://github.com/pulumi/pulumi-kubernetes/blob/4b01b5114df3045cecefd2ff3e2f2ed64430e3dd/sdk/nodejs/yaml/yaml.ts#L2214
			for (const key in o) {
				delete o[key]
			}
			Object.assign(o, {
				kind: "List",
				apiVersion: "v1",
				metadata: {},
				items: [],
			})
			return
		}
		return o
	}
}

Ideally, pulumi-kubernetes would be aware of helm hook semantics and would, e.g., wait for other Pods to be deployed and ready before creating Pods with the helm.sh/hook: test-success annotation. In lieu of full semantic support, it would be nice to maybe just not deploy pods with that annotation at all (and warn with a link to this issue or a more appropriate one).

@gobengo
Author

gobengo commented May 4, 2019

Worth noting that #511 would be at least some way of supporting these charts without modification, but of course with the overhead of running and configuring Tiller securely.

@hausdorff
Contributor

This might involve a helm provider proper. Should we do this now, or should we be waiting for Helm 3?

@lukehoban
Member

@hausdorff @lblackstone What's the proposal for addressing this?

@lukehoban lukehoban added this to the 0.26 milestone Jul 25, 2019
@lblackstone
Member

I'm not very familiar with Helm hooks, but it looks like they support a pretty big list of scenarios: https://github.com/helm/helm/blob/master/docs/charts_hooks.md#the-available-hooks

AFAICT, Pulumi can natively handle all of the lifecycle hooks (pre/post/crd install hooks), so we probably just need to specially handle the test-success and test-failure annotations.

For now, I'm inclined to skip deploying Pods with the helm.sh/hook: test-success or helm.sh/hook: test-failure annotation -- these Pods are intended to be a short-lived test, and aren't part of the application.
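For illustration, a narrower variant of the original workaround that blanks out only the test-hook resources might look like the following. This is a sketch, not an official approach; note that the helm.sh/hook annotation can hold a comma-separated list of hook names.

// Sketch: replace only Helm *test* hook resources with an empty v1 List,
// leaving other hook annotations (pre-install, post-install, ...) untouched.
function skipHelmTestHooksTransformation() {
  const testHooks = new Set(["test-success", "test-failure"])
  return (o: any) => {
    const hook = o?.metadata?.annotations?.["helm.sh/hook"]
    // The annotation may list several hooks, comma-separated.
    if (typeof hook === "string" && hook.split(",").some(h => testHooks.has(h.trim()))) {
      for (const key in o) {
        delete o[key]
      }
      Object.assign(o, { kind: "List", apiVersion: "v1", metadata: {}, items: [] })
    }
  }
}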

@hausdorff
Contributor

hausdorff commented Jul 29, 2019

I'm not sure that we'll ever support lifecycle hooks, but certainly not in Q3, so I'll remove the label. In the meantime, we can commit to skipping the test hooks and documenting that our Helm support is limited to the semantics of helm template. I'll file a separate issue for that, which we will close in Q3.

@Place1

Place1 commented Jun 11, 2020

The workaround posted in this issue no longer works for me.

I get an error saying there's no metadata.name on the List.

After adding a metadata.name to the List, I get an error: apiVersion does not have both a group and a version: "v1"

@lblackstone
Member

@Place1 I have a PR open to fix the regression you reported in #1156. The workaround will be valid again once that merges.

@shinebayar-g

Looks like ingress-nginx uses helm hooks to deploy its admission webhook. Because of this limitation, we cannot deploy ingress-nginx.

@draconisNoctis

draconisNoctis commented Jan 6, 2021

@shinebayar-g
Although the ingress-nginx documentation says that only Helm v3 is supported, a workaround is to use k8s.helm.v2.Chart.
The Helm 2 code path doesn't support hooks, so the annotations are simply ignored.
Pulumi's Helm 3 support doesn't execute them either, and it removes any entry carrying the annotation before the transformations are applied, so it isn't even possible to strip the annotation in a transformation.
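For reference, that workaround might look like the following minimal sketch; the repo URL is the standard ingress-nginx chart repository, an assumption rather than something stated in this comment.

import * as k8s from "@pulumi/kubernetes"

// Sketch of the v2.Chart workaround: the Helm 2 code path ignores hook
// annotations, so the hook resources render as ordinary objects.
const nginx = new k8s.helm.v2.Chart("ingress-nginx", {
  chart: "ingress-nginx",
  fetchOpts: {
    repo: "https://kubernetes.github.io/ingress-nginx", // assumed repo URL
  },
})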

@lblackstone
Member

@shinebayar-g @draconisNoctis We fixed a bug in the v3 SDK in #1335 that was likely the cause of the issues you were seeing. Can you give it a try with the v2.8.0 k8s provider and let me know if you're still having problems? Thanks!

@shinebayar-g

shinebayar-g commented Feb 18, 2021

@lblackstone I deployed ingress-nginx chart as k8s.helm.v3.Chart resource today. It worked like a charm. All ingress-nginx resources with helm.sh/hook annotations deployed successfully. Thank you.


I did another test today: I upgraded the nginx chart from version 3.17.0 to 3.23.0 to exercise the upgrade scenario. Everything went smoothly there as well.

@bob-bins

bob-bins commented Feb 20, 2021

The transformation function in the original comment no longer works. Here's an updated one (in TypeScript) in case someone else needs it.

import * as k8s from "@pulumi/kubernetes"
import * as pulumi from "@pulumi/pulumi"

const removeHelmHooksTransformation = (
  o: pulumi.ResourceTransformationArgs
): pulumi.ResourceTransformationResult => {
  if (o.props?.metadata?.annotations?.["helm.sh/hook"]) {
    const {
      "helm.sh/hook": junk,
      "helm.sh/hook-delete-policy": junk2,
      ...validAnnotations
    } = o.props.metadata.annotations
    return {
      props: {
        ...o.props,
        metadata: {
          ...o.props.metadata,
          annotations: validAnnotations,
        },
      },
      opts: o.opts,
    }
  }
  return o
}

// example use-case
const ingressNginx = new k8s.yaml.ConfigFile(
  "ingress-nginx",
  {
    file:
      "https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/aws/deploy.yaml",
  },
  {
    provider: eksCluster.provider,
    transformations: [removeHelmHooksTransformation],
  }
)

@maxromanovsky

maxromanovsky commented Mar 3, 2021

Is it possible to suppress Diagnostics messages in pulumi up caused by Helm hooks, such as this one:

  kubernetes:rbac.authorization.k8s.io/v1:RoleBinding (ingress-nginx-internal/ingress-nginx-internal-admission):
    warning: This resource contains Helm hooks that are not currently supported by Pulumi. The resource will be created, but any hooks will not be executed. Hooks support is tracked at https://github.com/pulumi/pulumi-kubernetes/issues/555

@bob-bins

bob-bins commented Mar 4, 2021

@maxromanovsky If you're looking to suppress only those messages, I don't think that's possible. But you can use the transformation function I posted above to remove the helm hook annotations so it won't be an issue.

@maxromanovsky

@bob-bins thank you for the tip! I thought there would be a more idiomatic approach :)

Anyway, here's a transformation very similar to @bob-bins', but in Python:

def remove_helm_hooks(obj):
    if 'metadata' in obj and 'annotations' in obj['metadata']:
        for key in list(obj['metadata']['annotations'].keys()):
            if key in ['helm.sh/hook', 'helm.sh/hook-delete-policy']:
                del obj['metadata']['annotations'][key]

@gerrywastaken

gerrywastaken commented Mar 30, 2021

> I'm not sure that we'll ever support lifecycle hooks

@hausdorff In that case, isn't it better for the docs and errors to advise against using Chart when hooks are involved, and instead push people towards more appropriate methods of adding a package of resources? Blindly ignoring/removing hooks probably isn't a great idea, but at the moment it's the only solution presented.

p.s. an alternative transform to remove those annotations (typescript noob so use with caution):

// A transformation to remove Helm hooks. Pulumi does not support running them and will complain if they exist.
// See: https://github.com/pulumi/pulumi-kubernetes/issues/555#issuecomment-810383577
function removeHelmHooksTransformation(o: pulumi.ResourceTransformationArgs): pulumi.ResourceTransformationResult {
  if (o.props?.metadata?.annotations?.["helm.sh/hook"]) {
    delete o.props.metadata.annotations['helm.sh/hook']
    delete o.props.metadata.annotations['helm.sh/hook-delete-policy']
  }

  return o
}
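Wiring it up might look like this minimal sketch, assuming the function above is in scope; the chart, version, and repo values are illustrative placeholders echoing the WordPress example at the top of the thread.

import * as k8s from "@pulumi/kubernetes"

// Illustrative only: chart coordinates are placeholders.
const wordpress = new k8s.helm.v3.Chart(
  "wordpress",
  {
    chart: "wordpress",
    version: "5.9.0",
    fetchOpts: { repo: "https://charts.bitnami.com/bitnami" }, // assumed repo
  },
  // Passed as a resource option, matching the ResourceTransformation signature above.
  { transformations: [removeHelmHooksTransformation] }
)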

@shinebayar-g

shinebayar-g commented Apr 26, 2021

> @shinebayar-g @draconisNoctis We fixed a bug in the v3 SDK in #1335 that was likely the cause of the issues you were seeing. Can you give it a try with the v2.8.0 k8s provider and let me know if you're still having problems? Thanks!

Hi @lblackstone, can this issue be considered fixed now? If so, it would be awesome to remove the warnings introduced in #1454.

@lblackstone
Member

We're planning to take another look at this soon, and will add toggles for the warnings if we aren't able to support all of the hooks.

@bartsmykla

@lblackstone Have you maybe had another look at this issue? :-)

@lblackstone
Member

lblackstone commented Aug 19, 2021

Hey, sorry for the delayed response here. I just opened #1682 to give users the option to disable these warnings.
The suppressHelmHookWarnings option is available now in https://github.com/pulumi/pulumi-kubernetes/releases/tag/v3.6.1

Relatedly, we're hard at work on a new approach to managing Helm resources with Pulumi that will also support all hooks. If you're interested, the work in progress can be found in #1677. Feedback is welcome, so chime in on that PR if you have comments on the approach!

@viveklak viveklak added the resolution/fixed This issue was fixed label Sep 14, 2021
@viveklak
Contributor

#1677 has been merged and Helm Release support is in public preview with v3.7.0 of the Kubernetes provider/sdk. As discussed in https://www.pulumi.com/blog/full-access-to-helm-features-through-new-helm-release-resource-for-kubernetes/, hooks are a good reason to consider Helm Release over the chart resource. Closing.
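For anyone migrating, here is a minimal Release sketch of the WordPress chart from the top of this thread; the repo URL and namespace are illustrative assumptions. Unlike the Chart resource, the hooks here execute under Helm itself.

import * as k8s from "@pulumi/kubernetes"

// Sketch: the same chart via Release, where Helm runs the hooks natively.
const wordpress = new k8s.helm.v3.Release("wordpress", {
  chart: "wordpress",
  repositoryOpts: {
    repo: "https://charts.bitnami.com/bitnami", // assumed repo
  },
  namespace: "wordpress", // assumed namespace
})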

@JoaRiski

Slightly related, I've opened an issue over at helm (helm/helm#10153) asking about the future of helm hooks overall, given that their non-declarative nature doesn't really fit the rest of the kubernetes ecosystem.

@JordanP01

Hello, I'm trying to deploy Velero with Pulumi:

helm repo: https://vmware-tanzu.github.io/helm-charts
version: 4.1.1

But I get these warnings:

kubernetes:core/v1:ServiceAccount (velero/velero-server-upgrade-crds):
    warning: This resource contains Helm hooks that are not currently supported by Pulumi. The resource will be created, but any hooks will not be executed. Hooks support is tracked at https://github.com/pulumi/pulumi-kubernetes/issues/555 -- This warning can be disabled by setting the PULUMI_K8S_SUPPRESS_HELM_HOOK_WARNINGS environment variable

  kubernetes:rbac.authorization.k8s.io/v1:ClusterRole (velero-upgrade-crds):
    warning: This resource contains Helm hooks that are not currently supported by Pulumi. The resource will be created, but any hooks will not be executed. Hooks support is tracked at https://github.com/pulumi/pulumi-kubernetes/issues/555 -- This warning can be disabled by setting the PULUMI_K8S_SUPPRESS_HELM_HOOK_WARNINGS environment variable

  kubernetes:batch/v1:Job (velero/velero-upgrade-crds):
    warning: This resource contains Helm hooks that are not currently supported by Pulumi. The resource will be created, but any hooks will not be executed. Hooks support is tracked at https://github.com/pulumi/pulumi-kubernetes/issues/555 -- This warning can be disabled by setting the PULUMI_K8S_SUPPRESS_HELM_HOOK_WARNINGS environment variable

  kubernetes:velero.io/v1:VolumeSnapshotLocation (velero/default):
    warning: This resource contains Helm hooks that are not currently supported by Pulumi. The resource will be created, but any hooks will not be executed. Hooks support is tracked at https://github.com/pulumi/pulumi-kubernetes/issues/555 -- This warning can be disabled by setting the PULUMI_K8S_SUPPRESS_HELM_HOOK_WARNINGS environment variable

  kubernetes:velero.io/v1:BackupStorageLocation (velero/default):
    warning: This resource contains Helm hooks that are not currently supported by Pulumi. The resource will be created, but any hooks will not be executed. Hooks support is tracked at https://github.com/pulumi/pulumi-kubernetes/issues/555 -- This warning can be disabled by setting the PULUMI_K8S_SUPPRESS_HELM_HOOK_WARNINGS environment variable

  kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding (velero-upgrade-crds):
    warning: This resource contains Helm hooks that are not currently supported by Pulumi. The resource will be created, but any hooks will not be executed. Hooks support is tracked at https://github.com/pulumi/pulumi-kubernetes/issues/555 -- This warning can be disabled by setting the PULUMI_K8S_SUPPRESS_HELM_HOOK_WARNINGS environment variable

Do you have any suggestions for making it work?

Thanks in advance for your help.

@omidraha

Hi, I want to use the aws-efs-csi-driver helm chart from here:

https://artifacthub.io/packages/helm/aws-efs-csi-driver/aws-efs-csi-driver

But I received this warning:

Diagnostics:
  kubernetes:storage.k8s.io/v1:CSIDriver (efs.csi.aws.com):
    warning: This resource contains Helm hooks that are not currently supported by Pulumi. The resource will be created, but any hooks will not be executed. Hooks support is tracked at https://github.com/pulumi/pulumi-kubernetes/issues/555 -- 

Info

kubernetes.helm.v3.Chart(
    f"efs-csi-driver{DEPLOY_NAME_PREFIX}",
    kubernetes.helm.v3.ChartOpts(
        chart="aws-efs-csi-driver",
        version="2.4.7",
        fetch_opts=kubernetes.helm.v3.FetchOpts(
            repo="https://kubernetes-sigs.github.io/aws-efs-csi-driver/"
        ),
        namespace=namespace.metadata.name,
        values={
            "logLevel": "debug",
            "replicaCount": "1",
            "region": "us-west-2",
        },
    ),
    pulumi.ResourceOptions(
        provider=provider,
        parent=namespace,
    ),
)

@usrbinkat

@omidraha

I see you are using the kubernetes.helm.v3.Chart resource, which does not have any support for Helm hooks.

Take a look at the kubernetes.helm.v3.Release resource instead.

Read about both in the Pulumi docs.
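For example, the Chart from the previous comment could be expressed as a Release roughly like this; a TypeScript sketch of the Python snippet above, with a namespace literal standing in for the namespace resource as an assumption.

import * as k8s from "@pulumi/kubernetes"

// Hypothetical port of the aws-efs-csi-driver Chart above to a Release.
const efsCsiDriver = new k8s.helm.v3.Release("efs-csi-driver", {
  chart: "aws-efs-csi-driver",
  version: "2.4.7",
  repositoryOpts: {
    repo: "https://kubernetes-sigs.github.io/aws-efs-csi-driver/",
  },
  namespace: "efs-csi", // assumption: the original used a namespace resource
  values: {
    logLevel: "debug",
    replicaCount: "1",
    region: "us-west-2",
  },
})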

@iyobo

iyobo commented Dec 2, 2023

The Helm Release resource is pretty "black boxy" in its approach to Helm-generated resources, which sometimes causes all sorts of issues with regard to deterministic Pulumi state.
