
namespace not supported by helmCharts #3815

Closed
james-callahan opened this issue Apr 19, 2021 · 34 comments · Fixed by #3832
Labels
area/helm kind/feature Categorizes issue or PR as related to a new feature. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@james-callahan
Contributor

The new helmCharts feature (added in #3784) has no way to pass in a namespace.
This is problematic with many common charts, where the namespace is used in e.g. arguments or annotations.

For example, try rendering the cert-manager helm chart:

helmCharts:
- name: cert-manager
  releaseName: cert-manager
  version: 1.3.0
  repo: https://charts.jetstack.io
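
A build invocation along these lines reproduces the output below, assuming the snippet above is saved as kustomization.yaml on its own:

kustomize build . --enable-helm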

Actual output

It results in annotations that contain the namespace, e.g.
cert-manager.io/inject-ca-from-secret: default/cert-manager-webhook-ca

It also renders the argument:
- --dynamic-serving-dns-names=cert-manager-webhook,cert-manager-webhook.default,cert-manager-webhook.default.svc

Kustomize version/Platform

{Version:4.1.2 GitCommit:$Format:%H$ BuildDate:2021-04-19T00:04:53Z GoOs:linux GoArch:amd64}


CC @monopole

@james-callahan james-callahan added the kind/bug Categorizes issue or PR as related to a bug. label Apr 19, 2021
@Shell32-Natsu
Contributor

#3816 (comment)

@Shell32-Natsu Shell32-Natsu added kind/feature Categorizes issue or PR as related to a new feature. and removed kind/bug Categorizes issue or PR as related to a bug. labels Apr 19, 2021
@Shell32-Natsu Shell32-Natsu added the triage/accepted Indicates an issue or PR is ready to be actively worked on. label Apr 20, 2021
@monopole
Contributor

monopole commented Apr 23, 2021

The helm program does many things, including (so I'm told) arbitrary execution of shell commands via --extra-args and specially crafted Go templates.

The basic chart inflation kustomize performs is intentionally rudimentary to prove out the concept of performing "last mile" chart customization. Maintaining kustomize long term as a helm runner with full helm flag and argument knowledge as helm evolves would be toilsome and imply kustomize inherits whatever security risks helm presents.

So - anything that looks like what kustomize does should be done by kustomize, rather than delegated to helm, e.g. setting namespace (#3838).

The safest, most performant, and completely unlimited way to use kustomize + helm to generate config is:

  1. Manually inflate the helm chart via helm template --foo --bar ... (with whatever risky options one wants).
  2. Commit the resulting plain old k8s YAML to a repo.
  3. Use kustomize to operate on that repo, treating it as a base, adding some last mile config changes.

From time to time, capture the latest chart, and repeat steps 1 and 2. It's dangerous to set up a production stack that relies on kustomize to freshly download a helm chart with each build.
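
As a minimal sketch of that workflow (the chart, paths, and commit message are illustrative, not prescriptive):

# 1. Inflate the chart outside of kustomize, with whatever options are needed.
helm template cert-manager cert-manager --repo https://charts.jetstack.io \
  --version v1.3.0 --namespace cert-manager > base/cert-manager.yaml

# 2. Commit the resulting plain old k8s YAML to a repo.
git add base/cert-manager.yaml && git commit -m "inflate cert-manager v1.3.0"

# 3. Treat that YAML as a base and apply last mile changes from an overlay.
kustomize build overlays/prod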

@monopole
Contributor

BTW, I'm not asserting that #3838 yields the result you want.

I'm wondering what the full use case is, and whether it can be satisfied in the fashion suggested by #3838.

@cehoffman
Contributor

cehoffman commented Apr 23, 2021

So - anything that looks like what kustomize does should be done by kustomize, rather than delegated to helm, e.g. setting namespace (#3838).

I think the namespace setting in kustomize should flow into the namespace setting of helm; however, I didn't see a way to get that value in the helm inflator plugin. The namespace setting of kustomize currently cannot satisfy what the namespace setting of helm does, e.g. setting the namespace component in the middle of an annotation string on the resource, as cert-manager does (see the sketch below).
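
To make that concrete, here is an illustrative Helm template fragment (not the literal cert-manager source): the namespace is baked into the middle of an annotation value at render time, so a transformer that only rewrites metadata.namespace after rendering never reaches it.

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: cert-manager-webhook
  annotations:
    # Helm renders this to "default/cert-manager-webhook-ca" when no namespace is passed in.
    cert-manager.io/inject-ca-from-secret: "{{ .Release.Namespace }}/cert-manager-webhook-ca"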

From time to time, capture the latest chart, and repeat steps 1 and 2. It's dangerous to set up a production stack that relies on kustomize to freshly download a helm chart with each build.

Totally agree. Ideally you don't do that, but it comes down to scheduling, prioritization, and the resources to implement it. For me, submitting the PR took fewer resources to get kustomize in the door than setting up the rest before I could even start with kustomize and our many other use cases that don't use helm, where kustomize would help me.

@monopole
Contributor

I think the namespace setting in kustomize should flow into the namespace setting of helm.

That's interesting, and it would avoid the easy contradictions one can imagine arising from not doing it that way.

@drivelikebrazil

The helm program does many things, including (so I'm told) arbitrary execution of shell commands via --extra-args and specially crafted Go templates.

This is an issue with Helm's --post-renderer option in particular, and the scary part was that someone running kustomize didn't have to consciously opt in to running Helm. Given that we now have to explicitly elect to run Helm when we run kustomize, the user is consciously choosing to run Helm and to accept the risks associated with it.

I think the namespace setting in kustomize should flow into the namespace setting of helm

This is a partially workable solution to me. The main place I could see it causing issues is with charts like Jenkins that intentionally create resources in other namespaces (in this case, creating a role in the agent namespace to allow Jenkins to spin up agent pods there).

From time to time, capture the latest chart, and repeat steps 1 and 2. It's dangerous to set up a production stack that relies on kustomize to freshly download a helm chart with each build.

Perhaps if you're pulling the Helm chart from an uncontrolled source. We pull specific versions of Helm charts into a private Helm repo and enforce tag immutability from there. We deploy multiple instances of a given application with different configurations, and storing the fully templated code for each deployment adds a ton of noise and room for user error.


@monopole
Contributor

monopole commented May 11, 2021

This is a situation where one could use a transformer to change the configuration of another generator or transformer before that generator/transformer is applied.

https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/#transformed-transformers


@monopole
Contributor

Reopening to track the broader concept of having only one place to specify a namespace change.
Right now, if you use helm charts, you have to set it there too.

@rdxmb

rdxmb commented Jun 21, 2021

I have tried to define the namespace twice. This works with https://github.com/grafana/helm-charts/tree/main/charts/grafana:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: grafana-helm-test
helmCharts:
- name: grafana
  releaseName: customer-metrics-ui-v1
  repo: https://grafana.github.io/helm-charts
  namespace: grafana-helm-test
  version: 6.13.0

$ kustomize version
{Version:kustomize/v4.1.3 GitCommit:0f614e92f72f1b938a9171b964d90b197ca8fb68 BuildDate:2021-05-20T20:52:40Z GoOs:linux GoArch:amd64}

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 19, 2021
@qkflies

qkflies commented Sep 20, 2021

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 20, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 19, 2021
@natasha41575 natasha41575 removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 6, 2022
@rcmorano
Contributor

rcmorano commented Jan 10, 2022

I have tried to define the namespace twice. Works with https://github.com/grafana/helm-charts/tree/main/charts/grafana

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: grafana-helm-test
helmCharts:
- name: grafana
  releaseName: customer-metrics-ui-v1
  repo: https://grafana.github.io/helm-charts
  namespace: grafana-helm-test
  version: 6.13.0

$ kustomize version
{Version:kustomize/v4.1.3 GitCommit:0f614e92f72f1b938a9171b964d90b197ca8fb68 BuildDate:2021-05-20T20:52:40Z GoOs:linux GoArch:amd64}

If I'm not wrong, this wouldn't work with many multi-namespace charts, as it would try to use the main namespace for every resource. It doesn't work for me with the fission kustomization below [0], ending up with the error below [1].

It would be great to make kustomize use the behaviour proposed by @cehoffman; that's what I would expect instead of any fancy transformation :D

[0]

namespace: fission

bases:
- https://github.com/fission/fission/crds/v1?ref=v1.15.1

helmCharts:
- name: fission-all
  valuesFile: helm-values.yaml
  namespace: fission
  releaseName: init0
  version: v1.15.1
  repo: https://fission.github.io/fission-charts/

[1]

E0110 11:50:44.766222 4050399 run.go:120] "command failed" err="namespace transformation produces ID conflict: [{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{\"internal.config.kubernetes.io/previousKinds\":\"Namespace\",\"internal.config.kubernetes.io/previousNames\":\"fission-function\",\"internal.config.kubernetes.io/previousNamespaces\":\"_non_namespaceable_\"},\"labels\":{\"chart\":\"fission-all-v1.15.1\",\"name\":\"fission-function\"},\"name\":\"fission\"}} {\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{\"internal.config.kubernetes.io/previousKinds\":\"Namespace\",\"internal.config.kubernetes.io/previousNames\":\"fission-builder\",\"internal.config.kubernetes.io/previousNamespaces\":\"_non_namespaceable_\"},\"labels\":{\"chart\":\"fission-all-v1.15.1\",\"name\":\"fission-builder\"},\"name\":\"fission\"}}]"
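
The error boils down to kustomize's namespace transformer renaming every Namespace object the chart creates to the kustomization-level namespace. A minimal, hypothetical two-file illustration of the same conflict, independent of the chart:

# kustomization.yaml
namespace: fission
resources:
- namespaces.yaml

# namespaces.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: fission-function
---
apiVersion: v1
kind: Namespace
metadata:
  name: fission-builder

Building this renames both Namespace objects to "fission", and the two now-identical resource IDs collide, which is the conflict reported above.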

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 10, 2022
@rcmorano
Contributor

bump

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 23, 2022
@rcmorano
Contributor

bump

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@wibed

wibed commented Jul 13, 2022

Please reopen.

Just to underline an already-made point again:
it breaks applied charts, since some are created with those capabilities in mind.
This must be taken into consideration before electing "kustomize" as a tool to provision your nodes.

At least I would take it into consideration.

@wibed

wibed commented Jul 13, 2022

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jul 13, 2022
@k8s-ci-robot
Contributor

@wibed: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@wibed

wibed commented Jul 13, 2022

Could you or one of the contributors reopen this issue?
@KnVerey

@KnVerey
Contributor

KnVerey commented Jul 13, 2022

This issue was about the lack of a field that was added in #3832. #4593 implies it may have since been broken, but it's better to track that in that newer issue.

@arielweinberger

Bumping again, as this completely breaks our flow of specifying the namespace in overlays.

@marcjimz

The community position on some of these flaws is baffling.

I cannot inline-edit multi-line string values for some of my YAML objects after flattening a helm chart, so I turn to the helmChart generator on an exception basis.

I cannot use the helmChart generator effectively, because it does not honor the namespace.

What are my remaining options?

@wibed

wibed commented Oct 29, 2023

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: XXXX

helmCharts:
...

This sets the namespace correctly, provided the chart makes use of the {{ .Release.Namespace }} attribute.

@Skaronator

@wibed {{ .Release.Namespace }} doesn't fully work for me.

My kustomize file looks like this:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: the-bug

helmGlobals:
  chartHome: ../../_charts

helmCharts:
- name: namespace
  releaseName: namespace
  valuesFile: values.yaml

My Helm Chart looks like this:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: namespace-quota
  namespace: {{ .Release.Namespace }}
  annotations:
    release-namespace: {{ .Release.Namespace }}
spec: {{ .Values.resourceQuota | toYaml | nindent 2 }}

And building this looks like this:

$ kustomize build . --enable-helm
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: namespace
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: default   #  <- {{ .Release.Namespace }} is still "default"
    helm.sh/chart: namespace-1.0.0
  name: the-bug
---
apiVersion: v1
kind: ResourceQuota
metadata:
  annotations:
    release-namespace: default        # <- {{ .Release.Namespace }} is still "default"
  name: namespace-quota
  namespace: the-bug                  # <- {{ .Release.Namespace }} correctly set
spec:
  hard:
    pods: 100

I suspect that kustomize just replaces the rendered metadata.namespace fields with the correct values after the fact, rather than passing the namespace to Helm.

This is an issue for us since we generate the namespace dynamically and therefore cannot set it within a helmCharts block.

@GenaSG

GenaSG commented Mar 6, 2024

@wibed {{ .Release.Namespace }} doesn't fully work for me.

My kustomize file looks like this:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: the-bug

helmGlobals:
  chartHome: ../../_charts

helmCharts:
- name: namespace
  releaseName: namespace
  valuesFile: values.yaml

My Helm Chart looks like this:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: namespace-quota
  namespace: {{ .Release.Namespace }}
  annotations:
    release-namespace: {{ .Release.Namespace }}
spec: {{ .Values.resourceQuota | toYaml | nindent 2 }}

And building this looks like this:

$ kustomize build . --enable-helm
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: namespace
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: default   #  <- {{ .Release.Namespace }} is still "default"
    helm.sh/chart: namespace-1.0.0
  name: the-bug
---
apiVersion: v1
kind: ResourceQuota
metadata:
  annotations:
    release-namespace: default        # <- {{ .Release.Namespace }} is still "default"
  name: namespace-quota
  namespace: the-bug                  # <- {{ .Release.Namespace }} correctly set
spec:
  hard:
    pods: 100

I suspect that kustomize just replaces the rendered metadata.namespace fields with the correct values after the fact, rather than passing the namespace to Helm.

This is an issue for us since we generate the namespace dynamically and therefore cannot set it within a helmCharts block.

I'm having exactly the same issue. {{ .Release.Namespace }} is used in the openmetadata-dependencies helm chart in multiple places to generate env vars and configs, and the default namespace is used everywhere, even though the resources are deployed to the correct custom namespace.

@Skaronator

I've retested with kustomize v5.3.0 since the included version in kubectl is quite old, but the issue persists.

@wibed

wibed commented Mar 7, 2024

I've retested with kustomize v5.3.0 since the included version in kubectl is quite old, but the issue persists.

I confirm that my workaround is more than flawed.

Setting an overall namespace overrides the internally set namespaces, disregarding the {{ .Release.Namespace }} variable entirely.

@Skaronator

I created a new issue dedicated to this. You are welcome to comment and upvote #5566 :)
