HelmChartInflationGenerator ignores releaseNamespace #4593

Open
in-cloud-opensource opened this issue Apr 20, 2022 · 18 comments
Labels: good first issue, help wanted, kind/bug, triage/accepted

Comments

@in-cloud-opensource

Describe the bug

According to the documentation of the Helm inflator, it should be possible to set the release namespace for the Helm template.
When running kustomize build --enable-helm . with the kustomization.yaml below, the releaseNamespace is ignored.

Files that can reproduce the issue

Example:

kustomization.yaml

helmChartInflationGenerator:
- chartName: nats
  chartVersion: v0.13.1
  chartRepoUrl: https://nats-io.github.io/k8s/helm/charts
  releaseName: nats
  releaseNamespace: XXXXXX

Expected output

Expected output, e.g. for the ConfigMap, with namespace set to XXXXXX:

apiVersion: v1
data:
  nats.conf: |
    # NATS Clients Port
    port: 4222

    # PID file shared with configuration reloader.
    pid_file: "/var/run/nats/nats.pid"

    ###############
    #             #
    # Monitoring  #
    #             #
    ###############
    http: 8222
    server_name:$POD_NAME
    lame_duck_duration: 120s
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/instance: nats
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nats
    app.kubernetes.io/version: 2.7.2
    helm.sh/chart: nats-0.13.1
  name: nats-config
  namespace: XXXXXX

Actual output

apiVersion: v1
data:
  nats.conf: |
    # NATS Clients Port
    port: 4222

    # PID file shared with configuration reloader.
    pid_file: "/var/run/nats/nats.pid"

    ###############
    #             #
    # Monitoring  #
    #             #
    ###############
    http: 8222
    server_name:$POD_NAME
    lame_duck_duration: 120s
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/instance: nats
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nats
    app.kubernetes.io/version: 2.7.2
    helm.sh/chart: nats-0.13.1
  name: nats-config
  namespace: default
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: nats
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nats
    app.kubernetes.io/version: 2.7.2
    helm.sh/chart: nats-0.13.1
  name: nats
  namespace: default
spec:
  clusterIP: None
  ports:
  - name: client
    port: 4222
  - name: cluster
    port: 6222
  - name: monitor
    port: 8222
  - name: metrics
    port: 7777
  - name: leafnodes
    port: 7422
  - name: gateways
    port: 7522
  selector:
    app.kubernetes.io/instance: nats
    app.kubernetes.io/name: nats
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nats-box
    chart: nats-0.13.1
  name: nats-box
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nats-box
  template:
    metadata:
      labels:
        app: nats-box
    spec:
      containers:
      - command:
        - tail
        - -f
        - /dev/null
        env:
        - name: NATS_URL
          value: nats
        image: natsio/nats-box:0.8.1
        imagePullPolicy: IfNotPresent
        name: nats-box
        resources: null
        volumeMounts: null
      volumes: null
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/instance: nats
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nats
    app.kubernetes.io/version: 2.7.2
    helm.sh/chart: nats-0.13.1
  name: nats
  namespace: default
spec:
  podManagementPolicy: Parallel
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: nats
      app.kubernetes.io/name: nats
  serviceName: nats
  template:
    metadata:
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: "7777"
        prometheus.io/scrape: "true"
      labels:
        app.kubernetes.io/instance: nats
        app.kubernetes.io/name: nats
    spec:
      containers:
      - command:
        - nats-server
        - --config
        - /etc/nats-config/nats.conf
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: SERVER_NAME
          value: $(POD_NAME)
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CLUSTER_ADVERTISE
          value: $(POD_NAME).nats.$(POD_NAMESPACE).svc.cluster.local
        image: nats:2.7.2-alpine
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - nats-server -sl=ldm=/var/run/nats/nats.pid
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 8222
          initialDelaySeconds: 10
          periodSeconds: 60
          successThreshold: 1
          timeoutSeconds: 5
        name: nats
        ports:
        - containerPort: 4222
          name: client
        - containerPort: 7422
          name: leafnodes
        - containerPort: 7522
          name: gateways
        - containerPort: 6222
          name: cluster
        - containerPort: 8222
          name: monitor
        - containerPort: 7777
          name: metrics
        resources: {}
        startupProbe:
          failureThreshold: 30
          httpGet:
            path: /
            port: 8222
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        volumeMounts:
        - mountPath: /etc/nats-config
          name: config-volume
        - mountPath: /var/run/nats
          name: pid
      - command:
        - nats-server-config-reloader
        - -pid
        - /var/run/nats/nats.pid
        - -config
        - /etc/nats-config/nats.conf
        image: natsio/nats-server-config-reloader:0.6.2
        imagePullPolicy: IfNotPresent
        name: reloader
        resources: null
        volumeMounts:
        - mountPath: /etc/nats-config
          name: config-volume
        - mountPath: /var/run/nats
          name: pid
      - args:
        - -connz
        - -routez
        - -subz
        - -varz
        - -prefix=nats
        - -use_internal_server_id
        - http://localhost:8222/
        image: natsio/prometheus-nats-exporter:0.9.1
        imagePullPolicy: IfNotPresent
        name: metrics
        ports:
        - containerPort: 7777
          name: metrics
        resources: {}
      shareProcessNamespace: true
      terminationGracePeriodSeconds: 120
      volumes:
      - configMap:
          name: nats-config
        name: config-volume
      - emptyDir: {}
        name: pid
  volumeClaimTemplates: null
---
apiVersion: v1
kind: Pod
metadata:
  annotations:
    helm.sh/hook: test
  labels:
    app.kubernetes.io/instance: nats
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nats
    app.kubernetes.io/version: 2.7.2
    helm.sh/chart: nats-0.13.1
  name: nats-test-request-reply
spec:
  containers:
  - command:
    - /bin/sh
    - -ec
    - |
      nats reply -s nats://$NATS_HOST:4222 'name.>' --command "echo 1" &
    - |
      "&&"
    - |
      name=$(nats request -s nats://$NATS_HOST:4222 name.test '' 2>/dev/null)
    - |
      "&&"
    - |
      [ $name = test ]
    env:
    - name: NATS_HOST
      value: nats
    image: synadia/nats-box
    name: nats-box
  restartPolicy: Never

Kustomize version
{Version:kustomize/v4.5.3 GitCommit:de6b9784912a5c1df309e6ae9152b962be4eba47 BuildDate:2022-03-24T20:51:20Z GoOs:linux GoArch:amd64}

Platform
Linux

Additional context

It seems that the ReleaseNamespace argument is missing from the mapping between HelmChartArgs and HelmChart; see

func makeHelmChartFromHca(old *HelmChartArgs) (c HelmChart) {
	c.Name = old.ChartName
	c.Version = old.ChartVersion
	c.Repo = old.ChartRepoURL
	c.ValuesFile = old.Values
	c.ValuesInline = old.ValuesLocal
	c.ValuesMerge = old.ValuesMerge
	c.ReleaseName = old.ReleaseName
	return
}
@in-cloud-opensource in-cloud-opensource added the kind/bug Categorizes issue or PR as related to a bug. label Apr 20, 2022
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Apr 20, 2022
@kevin-hanselman

This just bit me too. I know "+1" comments are frowned upon, but I think this is a serious enough bug to give this page a bump.

@brahama

brahama commented Jun 5, 2022

Same thing happening here.

{Version:kustomize/v4.5.5 GitCommit:daa3e5e2c2d3a4b8c94021a7384bfb06734bcd26 BuildDate:2022-05-20T20:21:22Z GoOs:darwin GoArch:amd64}

The redis Helm chart from Bitnami.

How can I bypass this with kustomize? Basically the replicas fail because they try to connect to the master in the default namespace instead:

- name: REDIS_MASTER_HOST
  value: {{ template "common.names.fullname" . }}-master-0.{{ template "common.names.fullname" . }}-headless.{{ .Release.Namespace }}.svc.{{ .Values.clusterDomain }}

Cheers

@jberger

jberger commented Jun 9, 2022

This bug causes kustomize to create an invalid certificate for a webhook in the banzaicloud kafka operator. It broke a LOT of stuff. A LOT. And because of its location inside a certificate in a Secret, it was not easy to debug down to this, so please help the next user out and fix this soon. https://github.com/banzaicloud/koperator/blob/v0.21.2/charts/kafka-operator/templates/operator-deployment-with-webhook.yaml#L7

@jberger

jberger commented Jun 9, 2022

And now I'm seeing that this isn't a recent issue; judging from #3815 it has been broken for over a year. If this cannot be made to work, at least remove the documentation for the releaseNamespace and extraArgs fields here: https://kubectl.docs.kubernetes.io/references/kustomize/builtins/#_helmchartinflationgenerator_

@jberger

jberger commented Jun 9, 2022

To the future reader: the only mechanism that works is the generator-file approach, which is woefully under-documented anyway. Use a file shaped like the one in this test:

apiVersion: builtin
kind: HelmChartInflationGenerator
metadata:
  name: myPipeline
name: ocp-pipeline
namespace: mynamespace
version: 0.1.16
repo: https://bcgov.github.io/helm-charts

Create that as a separate file (say helmchart.yaml) in the same path as your kustomization.yaml, then in your kustomization add

generators:
- helmchart.yaml

@brahama

brahama commented Jun 9, 2022

Yes. The K8s documentation is especially hard to follow. I removed this generator in favor of helmCharts:, which I found somewhere in an issue like this one, and which also has very little documentation.

@jberger

jberger commented Jun 9, 2022

I also suspect (but cannot prove, because I don't have a local testing setup) that at least the documented releaseNamespace issue could be fixed by adding a single line right after

c.ReleaseName = old.ReleaseName

namely

c.Namespace = old.ReleaseNamespace

extraArgs could not be fixed as easily, because it isn't supported in the new form; but the namespace is supported in the new form, it is just silently ignored at the moment.
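
For reference, the mapping function with that one-line addition would look roughly as follows. This is only a sketch of the suggested change; it assumes the new-style HelmChart type exposes a Namespace field corresponding to the documented helmCharts namespace parameter.

func makeHelmChartFromHca(old *HelmChartArgs) (c HelmChart) {
	c.Name = old.ChartName
	c.Version = old.ChartVersion
	c.Repo = old.ChartRepoURL
	c.ValuesFile = old.Values
	c.ValuesInline = old.ValuesLocal
	c.ValuesMerge = old.ValuesMerge
	c.ReleaseName = old.ReleaseName
	// Proposed addition: carry the legacy ReleaseNamespace over to the
	// new-style Namespace field so it is no longer silently dropped.
	c.Namespace = old.ReleaseNamespace
	return
}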

@KnVerey
Contributor

KnVerey commented Jul 13, 2022

The suggested fix sounds plausible, and we would appreciate a PR with that plus a test proving it works (a sketch of such a test is given after this comment).

/triage accepted
/good-first-issue
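
A regression test along these lines could accompany that change. This is only a sketch: the test name is hypothetical, it assumes it lives in the same package as the unexported makeHelmChartFromHca (needing only the standard testing import), and the field names are taken from the mapping function quoted above.

func TestMakeHelmChartFromHcaReleaseNamespace(t *testing.T) {
	old := &HelmChartArgs{
		ChartName:        "nats",
		ChartVersion:     "v0.13.1",
		ChartRepoURL:     "https://nats-io.github.io/k8s/helm/charts",
		ReleaseName:      "nats",
		ReleaseNamespace: "XXXXXX",
	}
	c := makeHelmChartFromHca(old)
	// With the proposed fix, the legacy ReleaseNamespace must end up on Namespace.
	if c.Namespace != "XXXXXX" {
		t.Errorf("expected Namespace %q, got %q", "XXXXXX", c.Namespace)
	}
}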

@k8s-ci-robot
Contributor

@KnVerey:
This request has been marked as suitable for new contributors.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-good-first-issue command.

In response to this:

The fix suggested sounds plausible and we would appreciate a PR with that + test proving it works.

/triage accepted
/good-first-issue

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jul 13, 2022
@pxwxnvermx

/assign

@sonali-rajput

Is this still getting worked on?

@winston0410

I think the issue is still there.

@toanju

toanju commented Oct 24, 2022

It looks like this is still being worked on here

However, @brahama pointed out that helmCharts is the better way to go. The docs are not up to date on this, but there is a chart example. In addition, you might want to look at

type HelmChart struct {

for more details on the parameters.
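
For instance, the kustomization from the original report could be expressed with the helmCharts field roughly like this (a sketch only, not verified against this exact chart version; the field names follow the HelmChart struct):

helmCharts:
  - name: nats
    version: v0.13.1
    repo: https://nats-io.github.io/k8s/helm/charts
    releaseName: nats
    namespace: XXXXXX  # intended to become the release namespace seen by the templates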

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 8, 2023
@schlichtanders

To the future reader: the only mechanism that works is the generator-file approach, which is woefully under-documented anyway. Use a file shaped like the one in this test:

apiVersion: builtin
kind: HelmChartInflationGenerator
metadata:
  name: myPipeline
name: ocp-pipeline
namespace: mynamespace
version: 0.1.16
repo: https://bcgov.github.io/helm-charts

Create that as a separate file (say helmchart.yaml) in the same path as your kustomization.yaml, then in your kustomization add

generators:
- helmchart.yaml

This does not seem to work for me. Can someone confirm that it works for them?

@ashutosh887

I would like to work on this

/assign

@trexx

trexx commented Aug 16, 2023

The above methods wouldn't work for me.
For example, the fluxcd-community Helm chart templates are missing the namespace field, and I couldn't get it populated by either helmCharts or HelmChartInflationGenerator. However, for some other Helm charts whose templates do have a templated namespace field (such as Cilium, which uses {{ .Release.Namespace }}), I was able to alter the destination namespace.

./kustomization.yaml

resources:
  - ./flux

./flux/kustomization.yaml

namespace: flux

resources:
  - ./namespace.yaml

helmCharts:
  - name: flux2
    namespace: flux
    valuesFile: ./values.yaml
    releaseName: flux
    version: 2.9.2
    repo: https://fluxcd-community.github.io/helm-charts
    includeCRDs: true

@vaibhav2107

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 27, 2023