Namespace needs to be specified to kubectl when already present in pod spec #7789
Comments
Yeah, this regressed from a previous behavior when we removed kubectl/resources.go and replaced it with the resource builder.
@smarterclayton, so should …
Command namespace should override schema namespace on create. Update, Delete, and Get are harder to answer.
I think it clearly violates POLS for an implicit setting (kubeconfig context, esp. the default one) to override an explicit one (namespace in schema). I understand that we want to make it easy to export objects from one namespace and import them into another, but I think we should provide a mechanism to scrub the objects first, for example, removing the explicit namespace (and status, readonly fields, etc.). If no namespace is specified in the schema, then I'd expect the one from kubeconfig to be used. If a namespace is explicitly specified on the command line and that conflicts with the one in the schema, kubectl should raise an error.
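To illustrate the scrub mechanism described just above, here is a minimal sketch under stated assumptions: the types below are simplified, hypothetical stand-ins, not the real API objects or any existing kubectl code.

```go
package scrubsketch

// objectMeta and pod are simplified, hypothetical stand-ins for the real
// API types; they exist only to illustrate the scrub step.
type objectMeta struct {
	Name              string
	Namespace         string
	UID               string
	ResourceVersion   string
	CreationTimestamp string
}

type pod struct {
	Metadata objectMeta
	Status   map[string]string // server-populated, read-only status fields
}

// scrubForExport clears the explicit namespace and the server-populated,
// read-only fields, so the exported object can later be imported into
// whatever namespace the flag or kubeconfig context supplies.
func scrubForExport(p *pod) {
	p.Metadata.Namespace = ""
	p.Metadata.UID = ""
	p.Metadata.ResourceVersion = ""
	p.Metadata.CreationTimestamp = ""
	p.Status = nil
}
```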
Which is what we do today. I'm ok leaving it as is.
Well, it would be nice for the implicit setting to be overridden by the explicit one without an error.
One would expect this to work, but it doesn't: if I specify no namespace, the … Without allowing the schema to take precedence over "unspecified", there is no way to deploy multiple manifests in different namespaces without knowing or inspecting all of the namespaces. This would tie into #5840.
Someone should have to pass a flag. With --all-namespaces set, that could potentially be supported. You still need a default namespace. Create in many namespaces is an optional, non-default mode, because it is dangerous.
I can see the potential for abuse. It seems that #7024 implements a flag of sorts with --namespace="*" instead of --all-namespaces.
Originally, but it was supposed to switch to --all-namespaces.
I was trying to put all infrastructure-related resources into their own namespace. A side effect of this regression is that manifests have to live in the … e.g. given the namespace …
Just because you said that: OpenShift recently switched to creating our system-level service accounts in a configurable namespace called "openshift-infra". We should try to have a common pattern between Kube and OpenShift there. What name were you going to default to, "cluster"?
Yes, after creating too many namespaces (like …). On the concrete problem, I would expect that the kubelet starts its manifests with the …
cc @krousey
So from what I gather, the behavior we want is something like this: …
Is this right? From what I can tell, I might be able to do this by checking the context's LocationOfOrigin. Still haven't found where the namespace flag is used though, so I have to track that down.
If .kube/config=yes and spec=yes and they conflict, do you expect an error?
Flags are bound here: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/kubectl/cmd/util/factory.go#L332, passed here: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/client/clientcmd/client_config.go#L63, and used here: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/client/clientcmd/client_config.go#L266. Basically, whether the namespace came from a flag or from .kube/config is transparent to callers who get client.Configs.
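To make the precedence being discussed concrete, here is a minimal sketch. Every name in it is hypothetical rather than actual kubectl or clientcmd code, and the flagExplicit argument is exactly the "did this come from a flag?" signal that, per the comment above, callers of client.Config cannot currently see.

```go
package main

import "fmt"

// resolveNamespace is a hypothetical illustration of the precedence under
// discussion. flagNS is the value of --namespace, flagExplicit records
// whether the flag was actually passed, specNS is metadata.namespace from
// the manifest, and configNS comes from the current kubeconfig context.
func resolveNamespace(flagNS string, flagExplicit bool, specNS, configNS string) string {
	if flagExplicit {
		// An explicit flag wins. Whether a conflict with an explicit spec
		// namespace should instead be an error is still being debated above.
		return flagNS
	}
	if specNS != "" {
		// The spec takes precedence over the implicit kubeconfig default.
		return specNS
	}
	if configNS != "" {
		return configNS
	}
	return "default"
}

func main() {
	// The spec namespace beats the kubeconfig default when no flag is passed.
	fmt.Println(resolveNamespace("", false, "kube-system", "default")) // kube-system
	// An explicit --namespace beats both.
	fmt.Println(resolveNamespace("infra", true, "kube-system", "default")) // infra
}
```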
cc @ghodss |
I don't think so. On May 5, @bgrant0607 said … I believe he was referring to that case specifically.
As a user I think you want explicit to override, not error. I think.
It looks like, to accomplish this, I would need to expose DirectClientConfig's overrides somehow. https://github.com/GoogleCloudPlatform/kubernetes/blob/5520386b18/pkg%2Fclient%2Fclientcmd%2Fclient_config.go#L54
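A minimal sketch of one way those overrides could be surfaced, assuming a config type roughly shaped like DirectClientConfig; the names below are illustrative, not the clientcmd API of the time. The idea is to return the namespace together with a boolean saying whether it was explicitly overridden, so a caller like kubectl can apply the precedence discussed above.

```go
package clientcmdsketch

// clientConfig is a pared-down, hypothetical stand-in for the clientcmd
// interface, extended so callers can tell whether the namespace was set
// explicitly (via an override such as --namespace) or merely defaulted
// from the kubeconfig context.
type clientConfig interface {
	// Namespace returns the namespace to use, plus true when it came from
	// an explicit override rather than from the kubeconfig context.
	Namespace() (string, bool, error)
}

// directClientConfig loosely mirrors the shape of DirectClientConfig for
// the purposes of this sketch only.
type directClientConfig struct {
	contextNamespace  string // namespace from the selected kubeconfig context
	overrideNamespace string // namespace from command-line overrides, if any
}

var _ clientConfig = (*directClientConfig)(nil)

func (c *directClientConfig) Namespace() (string, bool, error) {
	if c.overrideNamespace != "" {
		// Explicitly overridden, e.g. by --namespace on the command line.
		return c.overrideNamespace, true, nil
	}
	if c.contextNamespace != "" {
		return c.contextNamespace, false, nil
	}
	return "default", false, nil
}
```

For what it's worth, later versions of clientcmd expose something close to this: ClientConfig.Namespace() in client-go returns the namespace along with a boolean indicating whether it was overridden.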
@bgrant0607, comments on this?
I think I'm agreeing with Brian's comment.
@smarterclayton Ah ok. I thought you were commenting on the first row I listed where it's explicit on the command line and in the spec.
I believe this was fixed by PR #10493.
I believe the … Specifically, I have config files that contain explicit and varied namespace statements. They work fine out of cluster, work in-cluster (no .kube/config) with 1.4.x kubectl, but fail in-cluster with 1.5.x. I would like it to work the same in and out of cluster, or I would like a …
Yeah, it should be consistent. Can you spawn an issue for it?
Did the follow-up issue ever get created? I would love to go track it. I've got a manifest that creates RBAC resources in both my-namespace and kube-system. Critically, there's a RoleBinding in kube-system which needs to bind a ClusterRole to a ServiceAccount in my-namespace. I'd like to contain the ServiceAccount, RoleBinding, and ClusterRole in the same manifest, because that's a really sensible way to organize this. My …
If a pod spec has a namespace specified, it must still be specified with a flag when creating it with kubectl. I've already had a discussion about this with @smarterclayton and @jlowdermilk, but I think it was deemed to be "worked as intended". However, @bgrant0607 considers this a bug.