Kustomize adds namespace and labels to Lists causing errors #688
Comments
@chrisghill First of all, this is not a user error. The problem is in the resource-loading part of Kustomize: it doesn't recognize `List` kinds as collections, so it treats a list like any other single resource. See https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/resource/factory.go#L85
@hyww Do you want to help with this issue?
I think I can take a look. I am not sure whether other list kinds (e.g. `ConfigMapList`, `RoleList`) need the same treatment as `List`.
It seems that the special properties of `List` kinds need dedicated handling. Also, Kustomize might be able to validate list resources as it loads them.
@hyww Sorry I missed your comments.
Yes, we need to handle all `*List` types similarly to `List`: expand each list into its items so that transformations apply to the items rather than to the list's own metadata.
Is there any functional difference between a `List` and the resources it contains? If not, perhaps we could simply expand lists into their items at load time (kustomize/pkg/resource/factory.go, line 97 in 02d7530) and be done with it.
It's entirely possible that this is user error, but Kustomize is adding metadata.namespace and metadata.labels to ConfigMapList, RoleBindingList, RoleList, and List (and perhaps others), which breaks kubectl apply. I saw issue #514, which seems related; it sounds like the PR introducing mandatory namespaces was reverted, but the problem still arises with lists. I've tried versions 1.0.6, 1.0.10, and 1.0.11, and it fails with all three (though slightly differently).
Here is an example:
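A RoleList along these lines reproduces the problem (reconstructed from the object dump in the error output below, not necessarily the exact original file):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleList
items:
- apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: prometheus-k8s
    namespace: default
  rules:
  - apiGroups: [""]
    resources: ["nodes", "services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
- apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: prometheus-k8s
    namespace: kube-system
  rules:
  - apiGroups: [""]
    resources: ["nodes", "services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
# (a third identical Role in the monitoring namespace omitted for brevity)
```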
In versions 1.0.6 and 1.0.10, kustomize build works, but here is an excerpt of the output where it adds the metadata.namespace and metadata.labels:
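The added fields end up on the list's own metadata, roughly like this (illustrative only; the label name and namespace are assumptions, since the kustomization itself isn't shown):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleList
metadata:
  labels:
    app: prometheus      # assumed commonLabel from the kustomization
  namespace: monitoring  # assumed namespace from the kustomization
items:
  # ...Role items as before...
```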
Then when piped to kubectl, it complains:
error: error validating "STDIN": error validating data: [ValidationError(RoleList.metadata): unknown field "labels" in io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta, ValidationError(RoleList.metadata): unknown field "namespace" in io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta]; if you choose to ignore these errors, turn validation off with --validate=false
Kustomize 1.0.11 seems to make matters worse by requiring a metadata.name field, and then kubectl complains about that field as well. kustomize build won't even run without a metadata.name field on the RoleList:
Error: loadResMapFromBasesAndResources: SemiResources: loadResMapFromBasesAndResources: rawResources failed to read Resources: Missing metadata.name in object {map[apiVersion:rbac.authorization.k8s.io/v1 items:[map[apiVersion:rbac.authorization.k8s.io/v1 kind:Role metadata:map[name:prometheus-k8s namespace:default] rules:[map[apiGroups:[] resources:[nodes services endpoints pods] verbs:[get list watch]]]] map[apiVersion:rbac.authorization.k8s.io/v1 kind:Role metadata:map[name:prometheus-k8s namespace:kube-system] rules:[map[apiGroups:[] resources:[nodes services endpoints pods] verbs:[get list watch]]]] map[kind:Role metadata:map[name:prometheus-k8s namespace:monitoring] rules:[map[apiGroups:[] resources:[nodes services endpoints pods] verbs:[get list watch]]] apiVersion:rbac.authorization.k8s.io/v1]] kind:RoleList]}
So, I add the metadata.name field to the RoleList like so:
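Something along these lines (the name itself is arbitrary; any value satisfies Kustomize 1.0.11):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleList
metadata:
  name: prometheus-roles  # hypothetical name added only to appease kustomize build
items:
  # ...Role items as before...
```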
And Kustomize build is happy, but kubectl apply isn't:
error: error validating "STDIN": error validating data: [ValidationError(RoleList.metadata): unknown field "labels" in io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta, ValidationError(RoleList.metadata): unknown field "name" in io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta, ValidationError(RoleList.metadata): unknown field "namespace" in io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta]; if you choose to ignore these errors, turn validation off with --validate=false
kubectl is complaining about the metadata.name field that Kustomize forced me to add, as well as the metadata.namespace and metadata.labels fields that it added automatically.
Perhaps it's a version mismatch between kubectl and Kustomize? As I mentioned, I tried Kustomize 1.0.6, 1.0.10, and 1.0.11. My Kubernetes cluster is 1.11.0 and my kubectl is also 1.11.0. Is there any documentation specifying which Kustomize version to use with a given kubectl/Kubernetes version?