
helm 3: --namespace not respected #5628

Closed
ericvmw opened this issue Apr 22, 2019 · 41 comments
Labels
bug (Categorizes issue or PR as related to a bug), v3.x (Issues and Pull Requests related to the major version v3)
Comments

@ericvmw

ericvmw commented Apr 22, 2019

It seems the latest helm3 build from the dev-v3 branch doesn't take the --namespace parameter. Basically, I try to deploy an example chart into a different namespace, but the flag doesn't seem to take effect. Below are the commands and outputs. How is helm3 planning to support namespaces, and how should namespaces be specified in helm charts for helm3? Thanks for any help / pointers!

Output of helm version:
version.BuildInfo{Version:"v3.0+unreleased", GitCommit:"658c66dc66684a85be787a066c8434ac5212648c", GitTreeState:"clean"}

Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"archive", BuildDate:"2018-08-14T19:36:17Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

/usr/local/helm3/linux-amd64/helm install alpine /usr/local/alpine/ --namespace=us1-dev-web-1 --debug

printer.go:84: [debug] Original chart version: ""
printer.go:84: [debug] CHART PATH: /usr/local/alpine
client.go:94: [debug] building resources from manifest
client.go:99: [debug] creating 1 resource(s)
NAME: alpine
LAST DEPLOYED: 2019-04-19 17:07:30.391894983 +0000 UTC m=+0.063175512
NAMESPACE: default
STATUS: deployed

kubectl get pod -n default

NAME READY STATUS RESTARTS AGE
alpine-alpine 1/1 Running 0 6m

@bacongobbler added the v3.x and question/support labels Apr 22, 2019
@bacongobbler
Member

bacongobbler commented Apr 22, 2019

There was a change in Helm 3 where it now takes the current namespace from your local kube config. If it's not present, the default namespace is used.

To change the current namespace for the alpha, you can use

kubectl config set-context NAME --namespace=foo

Helm will pick that up and use that namespace the next time you invoke it.
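
For example (a hypothetical session; the context name is a placeholder, while the namespace and chart path are the ones from the report above):

# find the name of the context currently in use
kubectl config current-context

# point that context at the namespace Helm should use
kubectl config set-context my-context --namespace=us1-dev-web-1

# subsequent helm invocations pick up that namespace
helm install alpine /usr/local/alpine/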

There are a few design refactors going on within Helm 3 to address prioritization issues with loading settings from CLI flags and the kubeconfig. As I understand it, the --namespace flag was temporarily removed, but the plan is to allow some way to change the namespace on the fly.

@adamreese do you have any other context I may be missing here on why we removed the --namespace flag?

@bacongobbler
Member

For reference:

helm/cmd/helm/helm.go

Lines 108 to 113 in 658c66d

func getNamespace() string {
	if ns, _, err := kubeConfig().ToRawKubeConfigLoader().Namespace(); err == nil {
		return ns
	}
	return "default"
}

@ericvmw
Author

ericvmw commented Apr 22, 2019

Thanks @bacongobbler for the info, it works! So I guess the --namespace flag will be added back later as a way to override the namespace without having to change the kubectl config.

@bacongobbler changed the title from "helm3 dev-v3 namespace" to "helm 3: missing --namespace flag" May 1, 2019
@willholley

This is quite a confusing breaking change, given that the --namespace flag is still listed in the CLI help. It would at least be worth calling out in https://github.com/helm/helm/releases/tag/v3.0.0-alpha.1.

@aliasmee

aliasmee commented May 16, 2019

There was a change in Helm 3 where it now takes the current namespace from your local kube config. If it's not present, the default namespace is used.

To change the current namespace for the alpha, you can use

kubectl set-context NAME --namespace=foo

Helm will pick that up and use that namespace the next time you invoke it.

There are a few design refactors going on within Helm 3 to address prioritization issues with loading settings from CLI flags and the kubeconfig. As I understand it, the --namespace flag was temporarily removed, but the plan is to allow some way to change the namespace on the fly.

@adamreese do you have any other context I may be missing here on why we removed the --namespace flag?

Hi, the command to correctly switch the namespace for a context should be like this:

kubectl config set-context CONTEXT_NAME --namespace NAMESPACE

EXAMPLE:

kubectl config set-context gke_xxx_top_us-east1-b_my-test --namespace qa

thanks 😄

@bacongobbler added the bug label and removed the question/support label May 16, 2019
@bacongobbler changed the title from "helm 3: missing --namespace flag" to "helm 3: --namespace not respected" May 16, 2019
@kevtaylor

It also seems that the namespace is no longer created by helm, even when specifying it through the context. Can someone please confirm whether this is a breaking change?

@bacongobbler
Member

The namespace should still be created as part of helm install. If you can recreate the issue, would you mind opening a separate ticket?

@kevtaylor

Done: #5753

@floretan

Note that the same also applies to helm delete: you need to set the current namespace to that of the release you want to delete, or helm won't find it.

@bacongobbler
Member

bacongobbler commented Jun 14, 2019

this is quite a confusing breaking change, given the --namespace flag is still listed in the cli help.

To clarify, this is not a breaking change, but a bug in the current alpha release. We have had several race condition issues raised in Helm 2 where settings weren't being read properly from the environment (or in some cases, being ignored entirely), so we're taking the time to re-work that piece of the architecture.

Right now, we are working out how to read settings from multiple different locations (environment variables, kubeconfig, feature flags) and making that a consistent API across the board. The workaround listed above works for the time being, so please feel free to use that while we try and fix this prior to the final release. Thanks for your patience.

@bacongobbler
Member

See #4657, #2690 and #2682 for a few past examples of why we're re-working the architecture for this piece.

@torstenwalter
Contributor

@bacongobbler I think this code is actually fine:

helm/cmd/helm/helm.go

Lines 108 to 113 in 658c66d

func getNamespace() string {
	if ns, _, err := kubeConfig().ToRawKubeConfigLoader().Namespace(); err == nil {
		return ns
	}
	return "default"
}

If the kubeconfig here were initialized with the correct values, then it would work:

helm/cmd/helm/helm.go

Lines 101 to 106 in 658c66d

func kubeConfig() genericclioptions.RESTClientGetter {
	configOnce.Do(func() {
		config = kube.GetConfig(settings.KubeConfig, settings.KubeContext, settings.Namespace)
	})
	return config
}

It looks OK, but the settings are not yet parsed at the point in time when the method is called, so settings.KubeConfig, settings.KubeContext and settings.Namespace are all empty. I changed settings.Namespace locally to a hard-coded value and that works.

The code for adding and parsing the flags is here:

helm/cmd/helm/root.go

Lines 52 to 64 in 658c66d

func newRootCmd(actionConfig *action.Configuration, out io.Writer, args []string) *cobra.Command {
	cmd := &cobra.Command{
		Use:          "helm",
		Short:        "The Helm package manager for Kubernetes.",
		Long:         globalUsage,
		SilenceUsage: true,
		Args:         require.NoArgs,
	}
	flags := cmd.PersistentFlags()
	settings.AddFlags(flags)
	flags.Parse(args)

but kubeConfig() is called from

helm/cmd/helm/helm.go

Lines 61 to 62 in 658c66d

func newActionConfig(allNamespaces bool) *action.Configuration {
	kc := kube.New(kubeConfig())
which is called from

helm/cmd/helm/helm.go

Lines 53 to 54 in 658c66d

func main() {
	cmd := newRootCmd(newActionConfig(false), os.Stdout, os.Args[1:])

so kubeConfig() runs before newRootCmd is executed, i.e. before the flags have been parsed...
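
A minimal sketch (not Helm's actual code; just an illustration using cobra, as Helm does) of one way around this ordering problem: defer anything that reads the settings until after the flags have been parsed, e.g. into a PersistentPreRun hook:

package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

// settings stands in for Helm's EnvSettings; only the namespace matters here.
var settings struct{ Namespace string }

// namespace mirrors the fallback behaviour of getNamespace() above, but is
// only called once the flag value has actually been populated.
func namespace() string {
	if settings.Namespace != "" {
		return settings.Namespace
	}
	return "default"
}

func main() {
	cmd := &cobra.Command{
		Use: "helm",
		// PersistentPreRun fires after flag parsing, so reading the namespace
		// here (and constructing the action config from it) sees the value
		// passed on the command line.
		PersistentPreRun: func(cmd *cobra.Command, args []string) {
			fmt.Fprintf(os.Stderr, "using namespace %q\n", namespace())
		},
		Run: func(cmd *cobra.Command, args []string) {},
	}
	cmd.PersistentFlags().StringVarP(&settings.Namespace, "namespace", "n", "", "namespace scope for the request")
	if err := cmd.Execute(); err != nil {
		os.Exit(1)
	}
}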

@airhorns

FWIW I think this is the cause of cert-manager/cert-manager#1744 if anyone else is trying to figure that out. Good case for testing the fix?

@bacongobbler
Member

bacongobbler commented Jun 18, 2019

Different issue, but it is related to namespaces. I'll follow up in that ticket.

@pisymbol

pisymbol commented Jun 19, 2019

this is quite a confusing breaking change, given the --namespace flag is still listed in the cli help.

To clarify, this is not a breaking change, but a bug in the current alpha release. We have had several race condition issues raised in Helm 2 where settings weren't being read properly from the environment (or in some cases, being ignored entirely), so we're taking the time to re-work that piece of the architecture.

Right now, we are working out how to read settings from multiple different locations (environment variables, kubeconfig, feature flags) and making that a consistent API across the board. The workaround listed above works for the time being, so please feel free to use that while we try and fix this prior to the final release. Thanks for your patience.

The workaround above does not, well, work for me. I am still getting the dreaded:

Error: the namespace from the provided object "kube-system" does not match the namespace "foobar-ns". You must pass '--namespace=kube-system' to perform this operation.

I'm trying to install seldon-core using helm3.

@bacongobbler
Member

@pisymbol that's the same issue as reported above by @airhorns. Please see cert-manager/cert-manager#1744 for more info.

@pisymbol

pisymbol commented Jun 19, 2019

@bacongobbler I understand that. However, the workaround you outlined above (setting the context's default namespace) does not work! i.e. helm3 right now is unusable, which makes me sad. EDIT: Is there any way this problem can be addressed in the short term? It's a real showstopper for me, and I'll bet for many others.

@sudermanjr

I have been testing the Helm3 alpha.1 release and setting the default namespace in my Kube context works for me. You can see my testing example here: #5753 (comment)

Note that the chart gets installed in the sudermanjr namespace which is set in my Kube context.

@kri5

kri5 commented Jun 25, 2019

I found out where the issue with namespace enforcement is located.

diff --git a/pkg/kube/client.go b/pkg/kube/client.go
index 8df24bef..2af14278 100644
--- a/pkg/kube/client.go
+++ b/pkg/kube/client.go
@@ -114,7 +114,6 @@ func (c *Client) newBuilder() *resource.Builder {
                ContinueOnError().
                NamespaceParam(c.namespace()).
                DefaultNamespace().
-               RequireNamespace().
                Flatten()
 }

If RequireNamespace is not called, it relaxes the namespace enforcement, which is described as follows in the code:

// RequireNamespace instructs the builder to set the namespace value for any object found
// to NamespaceParam() if empty, and if the value on the resource does not match
// NamespaceParam() an error will be returned.

I am not familiar enough with all of this to say what should be done.
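
For context, a rough sketch (assuming k8s.io/cli-runtime; this is not Helm's actual code) of how such a builder chain is typically assembled. With RequireNamespace() present, any object whose metadata.namespace differs from the value given to NamespaceParam() makes Do() return an error; dropping the call, as in the diff above, lets the mismatch through:

package sketch

import (
	"k8s.io/cli-runtime/pkg/genericclioptions"
	"k8s.io/cli-runtime/pkg/resource"
)

// buildFromManifest resolves the objects in manifestPath against the target namespace.
func buildFromManifest(namespace, manifestPath string) *resource.Result {
	getter := genericclioptions.NewConfigFlags(true) // reads the local kubeconfig
	return resource.NewBuilder(getter).
		Unstructured().
		ContinueOnError().
		NamespaceParam(namespace).
		DefaultNamespace().
		RequireNamespace(). // drop this call to relax the enforcement, as in the diff above
		FilenameParam(false, &resource.FilenameOptions{Filenames: []string{manifestPath}}).
		Flatten().
		Do()
}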

@AndiDog
Contributor

AndiDog commented Jul 19, 2019

The breaking change doesn't make any sense: helm template --namespace xyz should use xyz, but uses default. Why would it access a kubeconfig at all? When rendering a manifest, there's no need to access a cluster. I had thought that was the goal of Helm 3 – getting rid of the server component and providing only a tool that does one thing well: rendering templates and managing releases without talking to a cluster.

@timja

timja commented Jul 19, 2019

It's not a breaking change, it's a bug; if you read up the thread it should be clear.

@AndiDog
Contributor

AndiDog commented Jul 19, 2019

It's not a breaking change, it's a bug; if you read up the thread it should be clear.


In the above comment #5628 (comment), @bacongobbler says:

Right now, we are working out how to read settings from multiple different locations (environment variables, kubeconfig, feature flags)

which means the developers still intend to keep both sources for determining the namespace automatically. Or did I misunderstand? That is the confusing part here. Which source gets precedence must be absolutely clear, and helm template should IMO be a tool that works only with the inputs passed to it – why should it read any kubeconfig? If the kubeconfig feature is really useful for anyone (mostly only for manual work using the helm cli, I presume, maybe someone can explain other rationales), the command line argument must have priority.

@timja

timja commented Jul 19, 2019

The namespace value just isn't being passed through properly, so it uses whatever your active namespace is; the workaround is to change your active namespace before deploying...

@pkobielak

@timja It does not solve the problem if you are deploying resources to many namespaces from a single chart.

@bacongobbler
Member

bacongobbler commented Jul 19, 2019

@pkobielak please follow #5953 for the discussion around multiple namespaces from a single chart. This issue and #5953 are separate discussions. Thanks.

RE:

the command line argument must have priority.

I think you're misunderstanding. Helm will continue to respect general CLI conventions here. We're just adding an additional configuration source where we can read the namespace parameter from. If you don't want to read your namespace parameter from your kubeconfig, that will continue to work.

For posterity, general CLI convention order of precedence is

  1. CLI flags (--namespace)
  2. Environment variables ($HELM_NAMESPACE)
  3. Files (kubeconfig)

Again, the current behaviour in dev-v3 (as in, --namespace is being ignored) is not the intended behaviour, and we're actively working on a fix for that. There are a few code refactors going on within Helm 3 to address issues when loading settings from CLI flags, environment variables and external sources (like the kubeconfig).
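
For illustration, assuming that precedence lands as described, a hypothetical session (namespace names are placeholders) would behave like this:

# kubeconfig context namespace is set to team-a
helm install myapp ./chart                                           # installs into team-a (kubeconfig)
HELM_NAMESPACE=team-b helm install myapp ./chart                     # installs into team-b (env var beats kubeconfig)
HELM_NAMESPACE=team-b helm install myapp ./chart --namespace team-c  # installs into team-c (flag beats both)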

@mward29

mward29 commented Aug 20, 2019

Good work, all. I ran into this while messing around with alpha.2. Looks like it's been fixed for the next alpha release. Just wanted to say thanks.

@RyanSiu1995

Thank you for the work, everyone. In the beta version, the --namespace flag is respected.
In helm 2, if we specified a namespace that didn't exist, helm would try to create it for us.
But in the helm 3 beta, I just realized that the namespace will not be auto-created. Is that intended behaviour?

@hickeyma
Contributor

@RyanSiu1995 This is intended behaviour, to mimic the same behaviour as kubectl create --namespace foo -f deployment.yaml. Namespaces are a global cluster resource and the user installing resources into that namespace may not have the proper administrative rights to install the namespace itself. Therefore, Helm 3 now requires the namespace to be created in advance. It is described in more detail in #5753 (comment).
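
In practice this turns the install into a two-step workflow, for example (release, chart and namespace names are placeholders):

kubectl create namespace foo
helm install myrelease ./mychart --namespace foo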

@ichekrygin

Sorry for being late to the party :). I just stumbled over the "namespace creation" issue.

@hickeyma thank you for the explanation. I have a couple of follow-up questions:

  1. It appears that --namespace flag handling behavior has changed from Helm2 to Helm3. If so, perhaps it warrants "better" or more prominent documentation emphasizing this difference. I was caught by surprise by this change, and it sounds like many others were too. (This is more of a recommendation, really.)

  2. (More important). In response to:

Namespaces are a global cluster resource and the user installing resources into that namespace may not have the proper administrative rights to install the namespace itself.

While I understand the rationale behind "not allowing" Helm3 users to create "cluster scoped" resources like namespace(s), I cannot help but notice that this rationale is not used (or not consistently followed) in other aspects like the creation of CRD resources, which are also cluster scoped. I.e., if my chart includes CRDs, those will be created without any problems, and yet chart installation will fail on:
Error: create: failed to create: namespaces "foo-bar" not found

Thank you in advance.

@bacongobbler
Member

bacongobbler commented Oct 23, 2019

Those resources are defined by the chart. Helm allows global resources to be installed as templates within a chart, but it is the responsibility of the cluster administrator to determine the correct scope allowed for a user, including what resources they may or may not be allowed to install (and therefore, what charts they may be permitted to install, and if they can register CRDs or not).

In other words, Helm itself does not assume the user has global administrative privileges to create a namespace similar to kubectl. The chart maintainer can make that assumption if they so desire.

@ichekrygin

If I understand what @bacongobbler is saying, it appears that Helm3 takes an opinionated approach: it will not create the namespace via the --namespace option. It also appears that Helm3 makes special accommodations for allowing chart authors to create "cluster scoped" resources; moreover, it prioritizes such operations (crds are installed before the chart/template).

Can I, as a Helm3 chart author, allow a chart user to parameterize the namespace artifact?

For example, today with Helm3 I can place a namespace.yaml into the crds directory as a workaround to achieve almost the same behavior as Helm2's --namespace. The biggest caveat: crds resources are not templated, so such a workaround results in a "constant" namespace value, which may not be acceptable.

The parallel with CRDs is even more apparent in terms of installation precedence. Granted, as a chart author I can include various "cluster scoped" components inside my chart templates. However, the "namespace", similar to "crds", has specific precedence constraints where it must be created prior to any chart resource that will target (use) that namespace. As a chart author, I can attempt to work around this by adding a namespace.yaml to my templates, as such:

apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Release.Namespace | quote }}

However, this creates 2 problems.

  1. It seems helm3 will not prioritize namespace.yaml, i.e. results in Error: create: failed to create: namespaces "foo-bar" not found
  2. It will fail to install if said namespace already exists: Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: Namespace, namespace: , name: foo-bar

@invidian

Also, installing a release into namespace A with a chart that creates namespace B is still possible with helm 3.

@marckhouzam
Member

Sorry if this was discussed and answered before.

To reduce the impact of migrating to v3, couldn't helm v3 still try to create a non-existing namespace like before? It will fail if it doesn't have the right permissions, like kubectl, but will be backwards compatible when the permissions are sufficient.

For example, our cicd tool deploys our charts and has admin privileges. If v3 tried to create the namespace, my deployments would continue working as is. Currently though, I will need to go through all my different deployment scripts and add a kubectl create namespace before every helm install.

Starting from scratch I agree the current v3 behavior seems right, but considering the v2 behavior, maybe an in-between solution would be more justified for v3?

@ichekrygin

It is hard for me to find a good justification for this change beyond possibly the following:

Unlike Helm2 "cluster centric", Helm3 is "namespace centric". I.E.,

  • in Helm2: helm list shows all charts across all namespaces irrespective of the current kubeconfig context
  • in Helm3: helm list shows chart only for a specific namespace, and if/when the namespace is not provided then Helm3 defaults to the current kubeconfig context.

If the --namespace functionality is preserved (as in Helm2), I can possibly see how this may lead to a somewhat confusing user experience:

helm install test my-chart --namespace foo-bar
helm list

may not show the installed chart if the current kubeconfig context namespace is not foo-bar

However, my $0.02: I think this is a bit of a "heavy-handed" approach (if this is the main/only rationale), and I would prefer to retain the --namespace functionality and use:

helm install test my-chart --namespace foo-bar
helm list --namespace foo-bar

@bacongobbler
Member

bacongobbler commented Oct 23, 2019

Many of the reasons why this functionality was removed were listed in #5753, particularly in #5753 (comment).

By supporting the auto-creation of the namespace, community members were proposing new features to Helm which would allow them to modify the namespace during creation, including attaching annotations, labels, policies, constraints, quotas, etc. The list goes on. #3503 is one such example. The creation and management of the namespace is clearly out of scope of helm install, whose goal was to fetch, unpack and install a chart into a cluster. Do one thing and do it well.

The same scoping issue and design flaws were apparent in helm init - users wanted to modify the Deployment and Service that Helm 2 would create server-side. Features like --node-selector, --net-host and other ugly hacks were eventually kludged in to accommodate those requests. It turned out to be a massive design flaw and caused more trouble than it was worth, causing issues like #3414 which was easily handled through tooling that was designed for the workflow of creating and manipulating resources, like kubectl.

The gist of the decision came down to scope. The namespace created during helm install overgrew its original design scope. It turned out to be a poor design decision, so we removed it in favour of tooling that is designed around the creation and management of the namespace, e.g. kubectl create and kubectl edit (or a Helm plugin, hint hint).

Starting from scratch I agree the current v3 behavior seems right, but considering the v2 behavior, maybe an in-between solution would be more justified for v3?

I have to disagree with you here, @marckhouzam. I see where you're coming from, but a poor design choice that was made in a previous version of Helm shouldn't mean that every major version of Helm should support it going forward. A major version of a piece of software is intended to make backwards-incompatible changes in order to move the project forward in a sustainable fashion. If we were supposed to retain behaviour from Helm 2, Tiller would still exist today.

That being said, we are open to suggestions and are always happy to discuss alternative solutions. This was a difficult design choice to make, and I discussed with the other core maintainers a few times why it was removed, so it wasn't done without consideration... or because we felt like breaking user's expectations for the fun of it. 😋

Side-note: can we please carry this discussion forward in #5753? This ticket is about the --namespace not being respected in earlier versions of Helm 3, which has since been fixed. This discussion has gone wayyyyyy off track from the OP's bug report. Thanks.

@ichekrygin

@bacongobbler thank you for the background and context.

I think I understand where you're coming from: "getting things wrong in Helm2, and making them right in Helm3".

I am afraid I still cannot fully grok this, especially in light of Helm3's crds support (which on the surface looks very much like Helm2's namespace handling). I think I am in favor of deprecating namespace creation via the --namespace flag, as long as there is an alternative way to create and consume a namespace in the same Helm chart, i.e., without shelling out to kubectl, etc. I think it is a highly desirable trait for Helm to be a "one-stop shop."

On the surface, it does seem like a minor issue: "just run kubectl create ns foo-bar and be done with it". And that may very well be fine for the end-user environment, since it is very likely that the user has kubectl installed locally. However, it has slightly different implications for CICD/Pipeline environments, where one cannot make such an assertion. Instead, now, with Helm3, one needs to make additional accommodations to install kubectl. This issue only gets worse when working with multi-cluster/multi-namespace environments. It is precisely for that reason that having Helm as a "one-stop shop" is so important.

#5753 (comment)

This was made to mimic the same behaviour as kubectl create --namespace foo -f deployment.yaml

While I understand the rationale, I cannot fully agree with it.
One can place both a namespace definition and a deployment definition targeting said namespace into a single file.yaml and run kubectl apply -f file.yaml, which will yield the expected result. From what I gather, there is no way to do so in Helm3.
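
For example, such a single file (a hypothetical file.yaml; the names are made up) could look like this and be applied with kubectl apply -f file.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: foo-bar
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: foo-bar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.17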

@AndiDog
Contributor

AndiDog commented Oct 24, 2019

[...] However, it has slightly different implications for CICD/Pipeline environments, where one cannot make such an assertion. Instead, now, with Helm3, one needs to make additional accommodations to install kubectl. [...]

Not really an issue. Imagine you're using Argo CD to declaratively define what's to be installed in cluster(s). In the applications.argoproj.io:Application object (which defines a Git repo with your Helm chart to install), you can specify the target namespace, and create that requirement right next to that object. Typically, you would define 1 "index application" per cluster which lists all the applications to install. That index application can be deployed manually at cluster creation, through Terraform or even by Argo CD itself (if you're adventurous).

Example of how an index application with two sub-applications could look:

{{- $applications := list "myapp1" "myapp2" }}

{{- range $unused, $applicationAndNsName := $applications -}}

---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: "{{ $applicationAndNsName }}"
spec:
  # [...]

  source:
    repoURL: https://my-git-repo
    path: "k8s/autocommit/{{ $applicationAndNsName }}"
    # [...]

  destination:
    namespace: {{ $applicationAndNsName | quote }}
    # [...]

---

apiVersion: v1
kind: Namespace
metadata:
  name: {{ $applicationAndNsName | quote }}

{{ end }}

This will ensure the namespace is created before the CD tool tries to install stuff into it – no matter if it's via Helm, kubectl, kustomize or other tooling. Therefore, Helm should not be special.

@ichekrygin

@AndiDog good example; however, I think it is slightly off point.

Imagine you're using Argo CD to declaratively define what's to be installed in cluster(s).

No, I am not. But more importantly.

This will ensure the namespace is created before the CD tool tries to install stuff into it – no matter if it's via Helm, kubectl, kustomize or other tooling

The entire point of my "concern" is NOT to use anything other than Helm, i.e. this is what I refer to as a "one-stop shop". Granted, there are a number of workarounds, but those are just that, "workarounds", things I would rather not have, especially in light of the existing behavior of Helm2.

@invidian

@ichekrygin you can create an umbrella chart yourself, which will create all the namespaces you need before you install the other charts. Also, you can store all releases in a single namespace, but deploy them into other namespaces via templates, as sketched below.
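
A rough sketch of that second suggestion (hypothetical template and value names, not from this thread): the release is tracked in whatever namespace Helm targets, while the rendered resource sets its own destination namespace from a value:

# templates/deployment.yaml (sketch) -- install with:
#   helm install myrelease ./mychart --set targetNamespace=team-b
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
  # falls back to the release namespace when no override is given
  namespace: {{ .Values.targetNamespace | default .Release.Namespace }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Name }}-app
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-app
    spec:
      containers:
        - name: app
          image: nginx:1.17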

@bacongobbler
Member

bacongobbler commented Oct 24, 2019

#5628 (comment)

Side-note: can we please carry this discussion forward in #5753? This ticket is about the --namespace not being respected in earlier versions of Helm 3, which has since been fixed. This discussion has gone wayyyyyy off track from the OP's bug report. Thanks.

As noted earlier, this discussion is getting off topic from the OP's original issue, so to be respectful of others' time I'm locking this thread, as the OP's issue has long since been resolved. Please carry the discussion forward in #5753. Thanks.

@helm locked this issue as off-topic and limited conversation to collaborators Oct 24, 2019