helm 3: --namespace not respected #5628
There was a change in Helm 3 where it now takes the current namespace from your local kube config. If it's not present, the default namespace is used. To change the current namespace for the alpha, you can use
Helm will pick that up and use that namespace the next time you invoke it. There are a few design refactors going on within Helm 3 to address prioritization issues with loading settings from CLI flags and the kube config. As I understand it, the --namespace flag was temporarily removed, but it's planned to allow some way to change the namespace on the fly. @adamreese, do you have any other context I may be missing here on why we removed the --namespace flag?
For reference: Lines 108 to 113 in 658c66d
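The exact command was lost from the quoted comment above. Here is a hedged sketch of the usual way to set the current namespace on the active kubeconfig context (the namespace name is illustrative; on older kubectl releases without `--current`, substitute `$(kubectl config current-context)` for the context name):

```shell
# Set the namespace recorded on the currently active context.
kubectl config set-context --current --namespace=my-namespace

# Confirm what the active context now carries.
kubectl config view --minify --output 'jsonpath={..namespace}'
```

Helm will then use that namespace on its next invocation, as described above.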
Thanks @bacongobbler for the info, it works! So I guess the --namespace flag will be added back later to override the namespace configured in the kubeconfig.
This is quite a confusing breaking change, given the
Hi, the command to correctly switch between different contexts should be like this:
EXAMPLE:
thanks 😄
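The example itself was lost in the quoted comment. A hedged sketch of switching between kubeconfig contexts (the context name is illustrative):

```shell
kubectl config get-contexts            # list the contexts available in your kubeconfig
kubectl config use-context my-cluster  # make one of them the current context
kubectl config current-context         # confirm the switch
```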
It also seems that the namespace is no longer created by helm, even when specifying it through the context. Can someone confirm that this is a breaking change?
The namespace should still be created as part of
Done: #5753
Note that the same also applies to
To clarify, this is not a breaking change, but a bug in the current alpha release. We have had several race condition issues raised in Helm 2 where settings weren't being read properly from the environment (or in some cases, were being ignored entirely), so we're taking the time to re-work that piece of the architecture. Right now, we are working out how to read settings from multiple different locations (environment variables, kubeconfig, feature flags) and making that a consistent API across the board. The workaround listed above works for the time being, so please feel free to use it while we try to fix this prior to the final release. Thanks for your patience.
@bacongobbler I think this code is actually fine: Lines 108 to 113 in 658c66d
If the kubeconfig here were initialized with the correct values, then it would work: Lines 101 to 106 in 658c66d
It looks OK, but the settings are not yet parsed at the point in time when the method is called. The code for adding and parsing the flags is here: Lines 52 to 64 in 658c66d
but the flags are only parsed at Lines 61 to 62 in 658c66d, while the kubeconfig is already initialized at Lines 53 to 54 in 658c66d
before newRootCmd is executed...
FWIW I think this is the cause of cert-manager/cert-manager#1744 if anyone else is trying to figure that out. Good case for testing the fix?
Different issue, but it is related to namespaces. I'll follow up in that ticket.
The workaround above does not, well, work for me. I am still getting the dreaded:
I'm trying to install seldon-core using helm3.
@pisymbol that's the same issue as reported above by @airhorns. Please see cert-manager/cert-manager#1744 for more info.
@bacongobbler I understand that. However, the workaround you outlined above (setting the context's default namespace) does not work! I.e., helm3 right now is unusable, which makes me sad. EDIT: Is there any way this problem can be addressed in the short term? It's a real show stopper for me, and I'll bet for many others.
I have been testing the Helm3 alpha.1 release, and setting the default namespace in my Kube context works for me. You can see my testing example here: #5753 (comment) Note that the chart gets installed in the
I found out where the issue with namespace enforcement is located:

```diff
diff --git a/pkg/kube/client.go b/pkg/kube/client.go
index 8df24bef..2af14278 100644
--- a/pkg/kube/client.go
+++ b/pkg/kube/client.go
@@ -114,7 +114,6 @@ func (c *Client) newBuilder() *resource.Builder {
 		ContinueOnError().
 		NamespaceParam(c.namespace()).
 		DefaultNamespace().
-		RequireNamespace().
 		Flatten()
 }
```

If the
I am not familiar with all of this, or with what should be done.
The breaking change doesn't make any sense:
It's not a breaking change, it's a bug; if you read up, it should be clear.
In the above comment #5628 (comment), @bacongobbler says:
which means the developers still intend to keep both sources for determining the namespace automatically. Or did I misunderstand? That is the confusing part here. Which source gets precedence must be absolutely clear, and
The namespace value just isn't being passed through properly, so it uses whatever your active namespace is. The workaround is to change your active namespace before deploying...
@timja It does not solve the problem if you are deploying resources into many namespaces from a single chart.
@pkobielak please follow #5953 for the discussion around multiple namespaces from a single chart. This issue and #5953 are separate discussions. Thanks. RE:
I think you're misunderstanding. Helm will continue to respect general CLI conventions here. We're just adding an additional configuration source where we can read the namespace parameter from. If you don't want to read your namespace parameter from your kubeconfig, that will continue to work. For posterity, general CLI convention order of precedence is
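The precedence list itself was truncated in the comment above, so here is an illustrative sketch (not Helm's actual code) of the conventional order it refers to: an explicit flag wins over an environment variable, which wins over the kubeconfig value, which wins over the built-in default.

```shell
# resolve_namespace is a hypothetical helper, not part of Helm.
resolve_namespace() {
  flag_ns="$1"        # e.g. the value of --namespace
  env_ns="$2"         # e.g. the value of an environment variable
  kubeconfig_ns="$3"  # the namespace from the active kubeconfig context
  if [ -n "$flag_ns" ]; then
    echo "$flag_ns"
  elif [ -n "$env_ns" ]; then
    echo "$env_ns"
  elif [ -n "$kubeconfig_ns" ]; then
    echo "$kubeconfig_ns"
  else
    echo "default"
  fi
}

resolve_namespace "from-flag" "from-env" "from-kubeconfig"  # prints: from-flag
resolve_namespace "" "" "from-kubeconfig"                   # prints: from-kubeconfig
```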
Again, the current behaviour in dev-v3 (as in,
Good work, all. I ran into this messing around with Alpha.2. Looks like it's been fixed for the next alpha release. Just wanted to say thanks.
Thank you for the work, guys. In the beta version, the namespace flag is respected.
@RyanSiu1995 This is intended behaviour, to mimic the same behaviour as
Sorry for being late to the party :). I just stumbled over the "namespace creation" issue. @RyanSiu1995 thank you for the explanation. I have a couple of follow-up questions:
While I understand the rationale behind "not allowing" Helm3 users to create "cluster scoped" resources, like

Thank you in advance.
Those resources are defined by the chart. Helm allows global resources to be installed as templates within a chart, but it is the responsibility of the cluster administrator to determine the correct scope allowed for a user, including what resources they may or may not be allowed to install (and therefore, what charts they may be permitted to install, and whether they can register CRDs or not). In other words, Helm itself does not assume the user has global administrative privileges to create a namespace similar to
If I understand what @bacongobbler is saying, it appears that Helm3 takes an opinionated approach: it will not create a namespace via

Can I, as a Helm3 chart author, allow a chart user to parameterize

For example, today w/ Helm3 I can place a

The parallel with CRDs is even more apparent in terms of installation precedence. Granted, as a chart author I can include various "cluster scoped" components inside my chart template. However, the "namespace", similar to the "crds", has specific precedence constraints where it must be created prior to any chart resource that will target (use) that namespace instance. As a chart author, I can attempt to work around this by adding

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Release.Namespace | quote }}
```

However, this creates 2 problems.
Also, installing a release into namespace A with a chart that creates namespace B is still possible with helm 3.
Sorry if this was discussed and answered before. To reduce the impact of migrating to v3, couldn't helm v3 still try to create a non-existing namespace like before? It would fail if it doesn't have the right permissions, like kubectl, but would be backwards compatible when the permissions are sufficient. For example, our CI/CD tool deploys our charts and has admin privileges. If v3 tried to create the namespace, my deployments would continue working as is. Currently, though, I will need to go through all my different deployment scripts and add a

Starting from scratch, I agree the current v3 behavior seems right, but considering the v2 behavior, maybe an in-between solution would be more justified for v3?
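The extra step described here can be made idempotent so repeated pipeline runs don't fail when the namespace already exists. A hedged sketch (release, chart, and namespace names are illustrative; `--dry-run=client` requires a reasonably recent kubectl, older releases used plain `--dry-run`):

```shell
# Create the namespace if missing, without erroring when it already exists.
kubectl create namespace my-namespace --dry-run=client -o yaml | kubectl apply -f -

# Then install into it explicitly.
helm install my-release ./my-chart --namespace my-namespace
```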
It is hard for me to find a good justification behind this change beyond possibly the following: unlike Helm2's "cluster centric" model, Helm3 is "namespace centric". I.e.,

If the

may not show the installed chart if the current

However, my $0.02: I think this is a bit of a "heavy-handed" approach (if this is the main/only rationale), and I would prefer to retain
Many of the reasons why this functionality was removed were listed in #5753, particularly in #5753 (comment). By supporting the auto-creation of the namespace, community members were proposing new features for Helm which would allow them to modify the namespace during creation, including attaching annotations, labels, policies, constraints, quotas, etc. The list goes on. #3503 is one such example. The creation and management of the namespace is clearly out of scope of

The same scoping issue and design flaws were apparent in

The gist of the decision came down to scope. The namespace created during
I have to disagree with you here, @marckhouzam. I see where you're coming from, but a poor design choice made in a previous version of Helm shouldn't mean that every major version of Helm must support it going forward. A major version of a piece of software is intended to make backwards-incompatible changes in order to move the project forward in a sustainable fashion. If we were supposed to retain behaviour from Helm 2, Tiller would still exist today. That being said, we are open to suggestions and are always happy to discuss alternative solutions. This was a difficult design choice to make, and I discussed with the other core maintainers a few times why it was removed, so it wasn't done without consideration... or because we felt like breaking users' expectations for the fun of it. 😋 Side-note: can we please carry this discussion forward in #5753? This ticket is about the
@bacongobbler thank you for the background and context. I think I understand where you're coming from: "getting things wrong in Helm2, and making them right in Helm3". I am afraid I still cannot fully grok this, especially in light of Helm3

On the surface, it does seem like a minor issue: "just run
While I understand the rationale, I cannot fully agree with it.
Not really an issue. Imagine you're using Argo CD to declaratively define what's to be installed in cluster(s). In the

Example of how an index application with 2 sub-applications could look like:
This will ensure the namespace is created before the CD tool tries to install stuff into it – no matter if it's via Helm, kubectl, kustomize or other tooling. Therefore, Helm should not be special. |
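The index-application example referenced above was lost from the quoted comment. A hedged sketch of what it might look like (repo URL, paths, and names are all illustrative; the `argocd.argoproj.io/sync-wave` annotation orders the namespace-creating application before the application that deploys into it):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: namespaces
  annotations:
    argocd.argoproj.io/sync-wave: "0"   # synced first
spec:
  project: default
  source:
    repoURL: https://example.com/deploy.git  # illustrative
    path: namespaces                         # plain manifests containing Namespace objects
  destination:
    server: https://kubernetes.default.svc
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-helm-app
  annotations:
    argocd.argoproj.io/sync-wave: "1"   # synced after the namespaces application
spec:
  project: default
  source:
    repoURL: https://example.com/deploy.git  # illustrative
    path: charts/my-app                      # a Helm chart
  destination:
    server: https://kubernetes.default.svc
    namespace: my-namespace
```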
@AndiDog good example, however, I think slightly off point
No, I am not. But more importantly:
The entire point of my "concern" is NOT to use anything else other than Helm, i.e., this is what I refer to as a "one-stop-shop". Granted, there are a number of workarounds, but those are just that: "workarounds", things I'd rather not have, especially in light of the existing behavior of
@ichekrygin you can create an umbrella chart yourself, which will create all the namespaces you need before you install the other charts. Also, you can store all releases in a single namespace, but deploy them into other namespaces via templates.
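A hedged sketch of the template approach mentioned here (all names are illustrative): a resource inside the chart can set an explicit `metadata.namespace` instead of inheriting the release namespace.

```yaml
# templates/configmap.yaml (illustrative)
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
  namespace: {{ .Values.targetNamespace | default .Release.Namespace }}
data:
  note: "deployed outside the release namespace when targetNamespace is set"
```

Note that the target namespace must already exist, which is why the umbrella-chart ordering matters.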
As noted earlier, this discussion is getting off topic from the OP's original issue, so to be respectful of others' time I'm locking this thread, as the OP's issue has long since been resolved. Please carry the discussion forward in #5753. Thanks.
It seems the latest helm3 from the dev-v3 branch doesn't take the --namespace parameter. Basically, I try to deploy an example chart into a different namespace, but it doesn't seem to take effect. Below are the commands and outputs. How is helm3 planning to support namespaces, and how should namespaces be specified in helm charts for helm3? Thanks for any help / pointers!
Output of `helm version`:

```
version.BuildInfo{Version:"v3.0+unreleased", GitCommit:"658c66dc66684a85be787a066c8434ac5212648c", GitTreeState:"clean"}
```

Output of `kubectl version`:

```
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"archive", BuildDate:"2018-08-14T19:36:17Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
```
```
$ /usr/local/helm3/linux-amd64/helm install alpine /usr/local/alpine/ --namespace=us1-dev-web-1 --debug
printer.go:84: [debug] Original chart version: ""
printer.go:84: [debug] CHART PATH: /usr/local/alpine

client.go:94: [debug] building resources from manifest
client.go:99: [debug] creating 1 resource(s)
NAME: alpine
LAST DEPLOYED: 2019-04-19 17:07:30.391894983 +0000 UTC m=+0.063175512
NAMESPACE: default
STATUS: deployed

$ kubectl get pod -n default
NAME            READY   STATUS    RESTARTS   AGE
alpine-alpine   1/1     Running   0          6m
```