Using helmCharts doesn't work with charts that use .Release fields #4303

Closed
anvouk opened this issue Nov 22, 2021 · 11 comments
Labels
  area/helm
  kind/bug: Categorizes issue or PR as related to a bug.
  lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  triage/resolved: Indicates an issue has been resolved or doesn't need to be resolved.

Comments

@anvouk

anvouk commented Nov 22, 2021

What happened:
Deploying kustomization.yaml with helmCharts doesn't set .Release properties.

This breaks some popular charts in ways that are very hard to debug (for example, the Bitnami Kafka Helm chart, which relies on .Release.Namespace for cluster connections).

What you expected to happen:
When launching kubectl kustomize <FOLDER> --enable-helm, the .Release properties should have the correct values rather than the defaults.

Failing that, provide a way to set them manually inside kustomization.yaml.

How to reproduce it (as minimally and precisely as possible):

  1. Download, unzip, and deploy the following example with kubectl kustomize test --enable-helm | kubectl apply -f -: broken-example
  2. The pod ends up with the wrong .Release.Name and .Release.Namespace (see the template sketch below):
        env:
        - name: release-name
          value: RELEASE-NAME # <-- wrong, should be `test`
        - name: release-namespace
          value: default  # <-- wrong, should be `test`
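
For reference, the chart's deployment template is roughly equivalent to the following hypothetical reconstruction (names and structure inferred from the rendered output later in this thread):

    # templates/deployment.yaml (hypothetical reconstruction)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test
      labels:
        app: test
    spec:
      selector:
        matchLabels:
          app: test
      template:
        metadata:
          labels:
            app: test
        spec:
          containers:
          - name: test-container
            image: busybox
            command: ["sleep", "infinity"]
            env:
            - name: release-name
              value: {{ .Release.Name }}       # renders as RELEASE-NAME when no release name is given
            - name: release-namespace
              value: {{ .Release.Namespace }}  # renders as "default" when no namespace is given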

Anything else we need to know?:
This issue was previously opened under kubectl: kubernetes/kubectl#1151.

Environment:

  • Kubernetes client and server versions (use kubectl version):
    kubectl:
      Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:41:28Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
      Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
    helm: version.BuildInfo{Version:"v3.7.0", GitCommit:"eeac83883cb4014fe60267ec6373570374ce770b", GitTreeState:"clean", GoVersion:"go1.16.8"}
  • Cloud provider or hardware configuration:
    minikube version: v1.24.0 commit: 76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b
  • OS (e.g: cat /etc/os-release):
    NAME="Ubuntu" VERSION="18.04.5 LTS (Bionic Beaver)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 18.04.5 LTS" VERSION_ID="18.04" HOME_URL="https://www.ubuntu.com/" SUPPORT_URL="https://help.ubuntu.com/" BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" VERSION_CODENAME=bionic UBUNTU_CODENAME=bionic
@anvouk anvouk added the kind/bug Categorizes issue or PR as related to a bug. label Nov 22, 2021
@k8s-ci-robot
Contributor

@anvouk: This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Nov 22, 2021
@stephen-dexda

At least for the broken namespace value, explicitly setting helmCharts.*.namespace worked around the issue for me (see #3815).

I also had a problem with the release name being set to release-name for at least one of my charts, which was worked around by explicitly setting helmCharts.*.releaseName.
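
For reference, a minimal sketch of that workaround (the chart name my-chart is hypothetical):

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    namespace: test
    helmCharts:
      - name: my-chart    # hypothetical chart name
        releaseName: test # becomes .Release.Name in the rendered templates
        namespace: test   # becomes .Release.Namespace in the rendered templates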

@brunocascio

Do we have any update on this?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 7, 2022
@natasha41575
Contributor

/assign

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 5, 2022
@KnVerey
Contributor

KnVerey commented Aug 31, 2022

Since Kustomize does not actually perform a Helm release as part of processing, we use helm template under the hood (not, for example, helm install). Is there a helm template invocation that works with these charts?
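
For context, helm template accepts both values directly, so something like this (chart path illustrative) renders with .Release.Name and .Release.Namespace both set to test:

    helm template test ./charts/test --namespace test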

At least for the broken namespace value, explicitly setting helmCharts.*.namespace worked around the issue for me (see #3815).

I also had a problem with the release name being set to release-name for at least one of my charts, which was worked around by explicitly setting helmCharts.*.releaseName.

This sounds very reasonable to me. Could this be considered a viable solution if we document it?

@KnVerey KnVerey added triage/needs-information Indicates an issue needs more information in order to work on it. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Aug 31, 2022
@brianpursley
Member

brianpursley commented Aug 31, 2022

According to this:

// ReleaseName replaces RELEASE-NAME in chart template output,
// making a particular inflation of a chart unique with respect to
// other inflations of the same chart in a cluster. It's the first
// argument to the helm `install` and `template` commands, i.e.
// helm install {RELEASE-NAME} {chartName}
// helm template {RELEASE-NAME} {chartName}
// If omitted, the flag --generate-name is passed to 'helm template'.
ReleaseName string `json:"releaseName,omitempty" yaml:"releaseName,omitempty"`
// Namespace set the target namespace for a release. It is .Release.Namespace
// in the helm template
Namespace string `json:"namespace,omitempty" yaml:"namespace,omitempty"`

it looks like you are supposed to be able to set helmCharts.*.releaseName and helmCharts.*.namespace, like this:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: test
helmCharts:
  - name: test
    releaseName: foo
    namespace: bar

However, doing this causes the helm template command to be constructed incorrectly, and I get this error:

Error: Error: expected at most two arguments, unexpected arguments: foo
: unable to run: 'helm template foo --namespace bar /home/bpursley/Downloads/test/charts/test --values /tmp/kustomize-helm-57038941/test-kustomize-values.yaml --release-name foo' with env=[HELM_CONFIG_HOME=/tmp/kustomize-helm-57038941/helm HELM_CACHE_HOME=/tmp/kustomize-helm-57038941/helm/.cache HELM_DATA_HOME=/tmp/kustomize-helm-57038941/helm/.data] (is 'helm' installed?)

It looks like Helm 3 made a breaking change that moved the release name from a flag to a positional argument (helm/helm#6019).
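
For comparison, a sketch of the two syntaxes (chart path illustrative):

    # Helm 2: release name passed via the --name flag
    helm template ./charts/test --name foo
    # Helm 3: release name is the first positional argument
    helm template foo ./charts/test --namespace bar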

I think I can make a change to the code to get it working with Helm 3.

@KnVerey @natasha41575 Does Kustomize need to support both Helm 2 and Helm 3?


@KnVerey @natasha41575 Never mind my question; I see it says Helm v3 is required.

I will submit a PR with a fix for this


No code change needed, I think it is working as expected if you specify releaseName and namespace.

@brianpursley
Member

Actually, after looking at this again, it seems to be working fine. I don't know why I got that error about two arguments before; maybe I had made a change and forgot to back it out before running that.

@anvouk if you specify releaseName and namespace in your kustomization.yaml, like this, it should work:

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: test
helmCharts:
  - name: test
    releaseName: foo
    namespace: bar
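
To render this locally, something like the following should work (assuming the kustomization.yaml is in the current directory):

    kubectl kustomize . --enable-helm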

Output:

apiVersion: v1
kind: Namespace
metadata:
  name: test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test
  name: test
  namespace: test
spec:
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
      namespace: test
    spec:
      containers:
      - command:
        - sleep
        - infinity
        env:
        - name: release-name
          value: foo
        - name: release-namespace
          value: bar
        image: busybox
        name: test-container

@KnVerey
Contributor

KnVerey commented Sep 6, 2022

Thanks @brianpursley! Since these values are already propagated correctly from our helmCharts.*.releaseName and helmCharts.*.namespace fields, I'm going to close this as resolved. Feel free to reopen if I've misunderstood.

/triage resolved

@KnVerey KnVerey closed this as completed Sep 6, 2022
@k8s-ci-robot k8s-ci-robot added the triage/resolved Indicates an issue has been resolved or doesn't need to be resolved. label Sep 6, 2022
@KnVerey KnVerey removed the triage/needs-information Indicates an issue needs more information in order to work on it. label Sep 6, 2022
@anvouk
Author

anvouk commented Sep 6, 2022

@KnVerey I think we should reopen this issue (or open a new one).

Some reasons that came to mind for why leaving things as they are is a bad idea:

  1. Having to manually specify the namespace field for every chart is tedious and error-prone: there is already a top-level namespace field, so why do I have to keep telling every chart its namespace? It's against DRY (see the sketch below).
  2. There is no painless way to identify problems caused by the missing fields: in my case, I found this problem only after scaling my Kafka replicas from 1 to 3 (they use the .Release.Namespace field to communicate with each other), and it took a while to trace the problem back to this cause.
  3. I see no reason why helmCharts.*.namespace and helmCharts.*.releaseName should default to default and RELEASE-NAME respectively when better defaults could be taken from the kustomization.
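
For illustration, a hypothetical sketch of such defaults (not current behavior): the top-level namespace would flow into each chart's .Release.Namespace unless explicitly overridden:

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    namespace: test
    helmCharts:
      - name: kafka # hypothetical: with inherited defaults, .Release.Namespace
                    # would be "test" without repeating namespace per chart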
