Namespace needs to be specified to kubectl when already present in pod spec #7789

Closed
satnam6502 opened this issue May 5, 2015 · 27 comments
Labels
area/kubectl kind/bug priority/backlog sig/api-machinery

Comments

@satnam6502
Contributor

If a pod spec has a namespace specified, it must still be specified with a flag when creating it with kubectl. I've already had a discussion about this with @smarterclayton and @jlowdermilk, but I think it was deemed to be "working as intended". However, @bgrant0607 considers this a bug.

$ cat music-db-pod1.yaml 
apiVersion: v1beta3
kind: Pod
metadata:
  name: music-db1
  namespace: mytunes
  labels:
    name: music
spec:
  containers:
  - name: es
    image: satnam6502/elasticsearch:1.3
    ports:
    - name: es
      containerPort: 9200
    - name: es-transport
      containerPort: 9300
  - name: kibana
    image: satnam6502/kibana
    ports:
    - name: kibana
      containerPort: 5601
$ kubectl create -f music-db-pod1.yaml 
Error: the namespace from the provided object "mytunes" does not match the namespace "default". You must pass '--namespace=mytunes' to perform this operation.
$ kubectl create -f music-db-pod1.yaml --namespace=mytunes
pods/music-db1
satnam6502 added the priority/backlog and sig/api-machinery labels on May 5, 2015
satnam6502 added this to the v1.0-post milestone on May 5, 2015
@smarterclayton
Contributor

Yeah, this regressed from a previous behavior when we removed kubectl/resources.go and replaced it with the resource builder.

@j3ffml
Contributor

j3ffml commented May 5, 2015

@smarterclayton, so should kubectl create use the namespace provided in the schema only if the default is unset, or should the schema namespace overwrite the command-line namespace?

@smarterclayton
Contributor

Command namespace should overwrite schema namespace on create. Update, Delete, and Get are harder to answer.

@bgrant0607
Member

I think it clearly violates POLS for an implicit setting (kubeconfig context, esp. the default one) to override an explicit one (namespace in schema).

I understand that we want to make it easy to export objects from one namespace and import them into another, but I think we should provide a mechanism to scrub the objects first, for example, removing the explicit namespace (and status, readonly fields, etc.).

If no namespace is specified in the schema, then I'd expect the one from kubeconfig to be used.

If a namespace is explicitly specified on the command line and that conflicts with the one in the schema, kubectl should raise an error.
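
As a rough illustration of the "scrub" mechanism suggested above, the sketch below strips the explicit namespace, status, and a few read-only metadata fields from a decoded object before it is re-created in another namespace. This is a hypothetical Go sketch, not kubectl code; scrubManifest and its behavior are made up for illustration.

```go
package main

import "fmt"

// scrubManifest removes fields that tie an exported object to its original
// namespace or cluster state: metadata.namespace, read-only metadata fields,
// and status. The field names are standard Kubernetes object fields; the
// helper itself is hypothetical.
func scrubManifest(obj map[string]interface{}) {
	delete(obj, "status")
	if md, ok := obj["metadata"].(map[string]interface{}); ok {
		for _, f := range []string{"namespace", "uid", "resourceVersion", "creationTimestamp", "selfLink"} {
			delete(md, f)
		}
	}
}

func main() {
	pod := map[string]interface{}{
		"apiVersion": "v1beta3",
		"kind":       "Pod",
		"metadata": map[string]interface{}{
			"name":      "music-db1",
			"namespace": "mytunes",
		},
		"status": map[string]interface{}{"phase": "Running"},
	}
	scrubManifest(pod)
	fmt.Println(pod) // namespace and status are gone; ready to create elsewhere
}
```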

bgrant0607 added the kind/bug and area/kubectl labels on May 5, 2015
bgrant0607 removed this from the v1.0-post milestone on May 5, 2015
@smarterclayton
Contributor

Which is what we do today. I'm ok leaving it as is.

@bgrant0607
Member

Well, it would be nice for the implicit setting to be overridden by the explicit one without an error.

@hobti01

hobti01 commented May 15, 2015

One would expect this to work, but it doesn't:
kubectl create -f /manifestsInManyNamespaces/

If I specify no namespace, the default namespace colliding with the namespace in the schema is unexpected.

Without allowing the schema to take precedence over "unspecified", there is no way to deploy multiple manifests in different namespaces without knowing or inspecting all of their namespaces. This would tie into #5840

@smarterclayton
Contributor

Someone should have to send a flag. With --all-namespaces being set, that could potentially be supported. You still need a default namespace. Create in many namespaces is an optional, non-default mode, because it is dangerous.

@hobti01

hobti01 commented May 15, 2015

I can see the potential for abuse. It seems that #7024 implements a flag of sorts with --namespace="*" instead of --all-namespaces

@smarterclayton
Contributor

Originally, but it was supposed to switch to --all-namespaces.

@BugRoger
Contributor

I was trying to put all infrastructure related resources into their own namespace. A side effect of this regression is that manifests have to live in the default namespace.

E.g. with the namespace cluster set directly in the manifest, the kubelet will complain with:

kubelet.go:1159] Failed creating a mirror pod "fluentd-to-elasticsearch-10.97.90.138_cluster": the namespace of the provided object does not match the namespace sent on the request

@smarterclayton
Contributor

Just because you said that: OpenShift recently switched to creating our system-level service accounts in a (configurable) namespace called "openshift-infra". We should try to have a common pattern between Kube and OpenShift there. What name were you going to default to, "cluster"?

@BugRoger
Contributor

What name were you going to default to, "cluster"?

Yes, after creating too many namespaces (like monitoring, logging, ...) we figured that there is a trade-off between namespaces and filtering. I chose cluster as a "global" bucket for infrastructure-like services, though I'm in no way married to that term.

On the concrete problem, I would expect that the kubelet starts its manifests with the --all-namespaces flag or similar.

@bgrant0607
Member

cc @krousey

@krousey
Member

krousey commented Jun 25, 2015

So from what I gather the behavior we want is something like this:

| Namespace in .kube/config | Namespace on command line | Namespace in spec | Result |
| --- | --- | --- | --- |
| * | Yes | Yes | Error if command line and spec disagree |
| * | Yes | No | Use command line namespace |
| * | No | Yes | Use spec namespace even if it disagrees with .kube/config |
| Yes | No | No | Use .kube/config namespace |
| No | No | No | Use "default" namespace |

Is this right? From what I can tell, I might be able to do this by checking the context's LocationOfOrigin. Still haven't found where the namespace flag is used though, so I have to track that down.
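
For reference, the table above corresponds roughly to the following resolution logic. This is a hypothetical sketch of the proposed behavior, not the code kubectl ships; resolveNamespace and its arguments are illustrative only.

```go
package main

import (
	"errors"
	"fmt"
)

// resolveNamespace sketches the precedence proposed in the table above.
// flagNS is --namespace from the command line, specNS is metadata.namespace
// from the manifest, and kubeconfigNS is the namespace from the current
// kubeconfig context ("" if none).
func resolveNamespace(flagNS, specNS, kubeconfigNS string) (string, error) {
	switch {
	case flagNS != "" && specNS != "" && flagNS != specNS:
		return "", errors.New("namespace on the command line conflicts with the namespace in the object")
	case flagNS != "":
		return flagNS, nil
	case specNS != "":
		// An explicit namespace in the spec wins over the implicit kubeconfig default.
		return specNS, nil
	case kubeconfigNS != "":
		return kubeconfigNS, nil
	default:
		return "default", nil
	}
}

func main() {
	ns, err := resolveNamespace("", "mytunes", "default")
	fmt.Println(ns, err) // mytunes <nil>
}
```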

@deads2k
Contributor

deads2k commented Jun 25, 2015

Is this right?

If .kube/config=yes and spec=yes and they conflict, do you expect an error?

Still haven't found where the namespace flag is used though, so I have to track that down.

Flags are bound here: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/kubectl/cmd/util/factory.go#L332, passed here: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/client/clientcmd/client_config.go#L63, and used here: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/client/clientcmd/client_config.go#L266. Basically, whether the namespace came from a flag or from .kube/config is transparent to callers who get client.Configs.
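
To illustrate the transparency point: once the flag override and the kubeconfig value are merged into a single string, callers can no longer tell which one supplied the namespace. The types below are illustrative stand-ins, not the actual clientcmd types; the real code is at the links above.

```go
package main

import "fmt"

// Illustrative stand-ins for the real clientcmd types.
type kubeconfigContext struct{ Namespace string }
type overrides struct{ Namespace string } // set from the --namespace flag

// mergedNamespace mimics how the flag and kubeconfig values collapse into a
// single string: after this point the caller cannot tell whether the result
// was explicit or implicit.
func mergedNamespace(ctx kubeconfigContext, o overrides) string {
	if o.Namespace != "" {
		return o.Namespace
	}
	if ctx.Namespace != "" {
		return ctx.Namespace
	}
	return "default"
}

func main() {
	// Both calls return "default", but only the first was explicit.
	fmt.Println(mergedNamespace(kubeconfigContext{}, overrides{Namespace: "default"}))
	fmt.Println(mergedNamespace(kubeconfigContext{}, overrides{}))
}
```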

@krousey
Member

krousey commented Jun 25, 2015

cc @ghodss

@krousey
Member

krousey commented Jun 25, 2015

If .kube/config=yes and spec=yes and they conflict, do you expect an error?

I don't think so. On May 5, @bgrant0607 said

Well, it would be nice for the implicit setting to be overridden by the explicit one without an error.

I believe he was referring to that case specifically.

@smarterclayton
Contributor

As a user I think you want explicit to override, not error. I think.

@krousey
Member

krousey commented Jun 25, 2015

It looks like to accomplish this, I would need to expose DirectClientConfig's overrides somehow. https://github.com/GoogleCloudPlatform/kubernetes/blob/5520386b18/pkg%2Fclient%2Fclientcmd%2Fclient_config.go#L54

@krousey
Member

krousey commented Jun 25, 2015

As a user I think you want explicit to override, not error. I think.

@bgrant0607 comments on this?

@smarterclayton
Contributor

I think I'm agreeing with Brian's comment.

@krousey
Member

krousey commented Jun 25, 2015

@smarterclayton Ah ok. I thought you were commenting on the first row I listed where it's explicit on the command line and in the spec.

@bgrant0607
Member

I believe this was fixed by PR #10493

@anguslees
Member

anguslees commented Feb 17, 2017

I believe the
| * | No | Yes | Use spec namespace even if it disagrees with .kube/config |
case from the above table was not implemented by PR #10493.

Specifically, I have config files that contain explicit and varied namespace statements. This works fine out of cluster, and works in-cluster (no .kube/config) with 1.4.x kubectl, but fails in-cluster with 1.5.x. I would like it to work the same in and out of cluster, or I would like a --do-what-i-say flag :/

@smarterclayton
Contributor

Yeah, it should be consistent. Can you spawn an issue for it?

@geekofalltrades

geekofalltrades commented Dec 5, 2017

Did the follow-up issue ever get created? I would love to go track it.

I've got a manifest that creates RBAC resources in both my-namespace and kube-system. Critically, there's a RoleBinding in kube-system which needs to bind a ClusterRole to a ServiceAccount in my-namespace. I'd like to keep the ServiceAccount, RoleBinding, and ClusterRole in the same manifest, because that's a really sensible way to organize this.

My .kube/config is using a context with namespace: my-namespace. When I try to apply or create the manifest, the resources in kube-system will not create.

error: the namespace from the provided object "kube-system" does not match the namespace "my-namespace". You must pass '--namespace=kube-system' to perform this operation.
