This is similar to #43662, but that report (and fix) focussed on the in-cluster namespace vs the configured `.kube/config` context namespace. I have a similar issue with the in-cluster namespace vs an explicit `metadata.namespace` in the JSON/YAML file I'm passing to `kubectl create -f`.
When run in-cluster, kubectl 1.5+ (including 1.6.0) refuses to let me create a resource in another namespace via an explicit `metadata.namespace` property on the resource.
kubectl 1.4.x honours `.metadata.namespace` when run in-cluster, as expected. All versions honour `metadata.namespace` when run out-of-cluster.
(No `.kube/config`, just using in-cluster defaults)

```console
% cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
foo
% jq .metadata.namespace echo.json
"foo2"
% ./kubectl-1.6.0 create -f echo.json
error: the namespace from the provided object "foo2" does not match the namespace "foo". You must pass '--namespace=foo2' to perform this operation.
% ./kubectl-1.5.6 create -f echo.json
error: the namespace from the provided object "foo2" does not match the namespace "foo". You must pass '--namespace=foo2' to perform this operation.
% ./kubectl-1.4.7 create -f echo.json
deployment "echoheaders" created
```
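For reference, `echo.json` would be a manifest along these lines. Only `metadata.name` and `metadata.namespace` are shown in the report above; the `apiVersion` and the `spec` here are a hypothetical reconstruction of a typical echoserver Deployment from that era, not the actual file:

```json
{
  "apiVersion": "extensions/v1beta1",
  "kind": "Deployment",
  "metadata": {
    "name": "echoheaders",
    "namespace": "foo2"
  },
  "spec": {
    "template": {
      "metadata": {"labels": {"app": "echoheaders"}},
      "spec": {
        "containers": [
          {"name": "echoheaders", "image": "gcr.io/google_containers/echoserver:1.4"}
        ]
      }
    }
  }
}
```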
I don't want to / can't provide an explicit --namespace arg or context declaration as suggested by the error text, because this is an automated deployment pipeline that creates resources in multiple namespaces. I just want the client to do-what-I-say without imposing additional client-side restrictions 😛
Automatic merge from submit-queue
Stop treating in-cluster-config namespace as an override
Fixes #44835
The namespace of an in-cluster config should behave like the namespace specified in a kubeconfig file... it should be used as the default namespace, but be able to be overridden by namespaces specified in yaml files passed to `kubectl create -f`.
```release-note
Restored the ability of kubectl running inside a pod to consume resource files specifying a different namespace than the one the pod is running in.
```
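The intended precedence can be sketched as follows. This is a minimal illustration of the behaviour described above, not the actual kubectl source; the function name and signature are hypothetical:

```python
def resolve_namespace(object_namespace, flag_namespace, default_namespace):
    """Resolve which namespace to use for a create request.

    Hypothetical sketch of the precedence described in the fix:
      1. An explicit --namespace flag wins, and must match any namespace
         declared on the object itself.
      2. Otherwise an explicit metadata.namespace in the file is honoured.
      3. Otherwise fall back to the default (kubeconfig context namespace,
         or the in-cluster service account namespace).
    """
    if flag_namespace:
        if object_namespace and object_namespace != flag_namespace:
            raise ValueError(
                f"the namespace from the provided object {object_namespace!r} "
                f"does not match the namespace {flag_namespace!r}"
            )
        return flag_namespace
    if object_namespace:
        # The bug: the in-cluster namespace was treated like an explicit
        # override here, rejecting objects with a different namespace.
        return object_namespace
    return default_namespace
```

With the fix, `resolve_namespace("foo2", None, "foo")` returns `"foo2"`: the in-cluster namespace `"foo"` acts only as a default, as a kubeconfig context namespace would.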