
Kustomize is confusing Kind: Namespace with other Kind: Namespace resources that belong to different API groups. #5072

Closed
buzzaII opened this issue Mar 3, 2023 · 4 comments · Fixed by #5133
Labels
kind/bug Categorizes issue or PR as related to a bug. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@buzzaII

buzzaII commented Mar 3, 2023

What happened?

Using ASO v2 (Azure Service Operator for Kubernetes v2)

This looks like a bug in Kustomize, though it surfaces as a problem with the ServiceBus Namespace CRD.

Kustomize is confusing a ServiceBus namespace with a k8s namespace:

kustomization.yaml:


apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: podinfo
resources:
  - azure-servicebus.yaml

azure-servicebus.yaml:

apiVersion: servicebus.azure.com/v1beta20210101preview
kind: Namespace
metadata:
  name: core-sb-99
  namespace: podinfo
spec:
  location: australiaeast
  owner:
    name: core-sb
  sku:
    name: Standard
  zoneRedundant: false

running:

kubectl kustomize ./

produces this output:


apiVersion: servicebus.azure.com/v1beta20210101preview
kind: Namespace
metadata:
  name: podinfo
  namespace: podinfo
spec:
  location: australiaeast
  owner:
    name: core-sb
  sku:
    name: Standard
  zoneRedundant: false

Notice that the name of the ServiceBus Namespace is now podinfo when it should have been core-sb-99.

Also discussed here: Azure/azure-service-operator#2732

What did you expect to happen?

The expected output would be:

apiVersion: servicebus.azure.com/v1beta20210101preview
kind: Namespace
metadata:
  name: core-sb-99              # <------- this is what it should have produced
  namespace: podinfo
spec:
  location: australiaeast
  owner:
    name: core-sb
  sku:
    name: Standard
  zoneRedundant: false

How can we reproduce it (as minimally and precisely as possible)?

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: podinfo
resources:
  - azure-servicebus.yaml
# azure-servicebus.yaml
apiVersion: servicebus.azure.com/v1beta20210101preview
kind: Namespace
metadata:
  name: core-sb-99
  namespace: podinfo
spec:
  location: australiaeast
  owner:
    name: core-sb
  sku:
    name: Standard
  zoneRedundant: false
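
The output below can then be generated with the same command as above, presumably run from the directory containing both files:

kubectl kustomize ./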

Expected output

apiVersion: servicebus.azure.com/v1beta20210101preview
kind: Namespace
metadata:
  name: core-sb-99
  namespace: podinfo
spec:
  location: australiaeast
  owner:
    name: core-sb
  sku:
    name: Standard
  zoneRedundant: false

Actual output

apiVersion: servicebus.azure.com/v1beta20210101preview
kind: Namespace
metadata:
  name: podinfo
  namespace: podinfo
spec:
  location: australiaeast
  owner:
    name: core-sb
  sku:
    name: Standard
  zoneRedundant: false

Kustomize version

the one included in the latest kubectl

Operating system

None

@buzzaII buzzaII added the kind/bug Categorizes issue or PR as related to a bug. label Mar 3, 2023
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Mar 3, 2023
@Serializator
Contributor

  1. The NamespaceTransformerPlugin is invoked on the servicebus.azure.com/v1beta20210101preview/Namespace resource.

  2. The NamespaceTransformerPlugin invokes namespace.Filter which boils down to (namespace.Filter).run in which fsslice.Filter is invoked.

  3. The fsslice.Filter goes over all field specs and applies them using the fieldspec.Filter.

  4. In (fieldspec.Filter).Filter the GVK of the field spec and target object are matched using fieldspec.isMatchGVK.

  5. fieldspec.isMatchGVK checks whether the group of the field spec (fs.Group) is empty. If it is empty, it skips the group comparison and deems the GVK a match.

Where is the problem, you ask?

The field specs it uses are defined in builtinpluginconsts.namespaceFieldSpecs. In that list there is a field spec that does not specify a group, which is meant to target the core Namespace resource.

An empty group normally means "core" (in this case, the core Namespace resource), but in the logic of fieldspec.isMatchGVK it is instead treated as a "wildcard" of sorts that matches any group.
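
To make this concrete, here is a minimal standalone sketch in Go (simplified stand-ins, not the actual kustomize source) of how an empty group in a field spec ends up matching any group, including servicebus.azure.com:

package main

import "fmt"

// gvk is a simplified stand-in for a group/version/kind triple.
type gvk struct{ Group, Version, Kind string }

// isMatchGVK approximates the matching described above: empty fields in the
// field-spec GVK act as wildcards and match anything.
func isMatchGVK(fieldSpec, target gvk) bool {
	if fieldSpec.Group != "" && fieldSpec.Group != target.Group {
		return false
	}
	if fieldSpec.Version != "" && fieldSpec.Version != target.Version {
		return false
	}
	if fieldSpec.Kind != "" && fieldSpec.Kind != target.Kind {
		return false
	}
	return true
}

func main() {
	// Field spec intended for the builtin core Namespace: group is empty.
	coreNamespaceSpec := gvk{Kind: "Namespace"}
	// The ASO ServiceBus Namespace from this issue.
	serviceBusNamespace := gvk{Group: "servicebus.azure.com", Version: "v1beta20210101preview", Kind: "Namespace"}
	// Prints true: the empty group is treated as "match any group", which is the bug.
	fmt.Println(isMatchGVK(coreNamespaceSpec, serviceBusNamespace))
}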

What now?

It depends.

This kind of logic (where an empty string is considered a "wildcard" that matches anything) is most likely used and expected elsewhere in Kustomize, and even if not, there is a chance that users rely on it when they manually override the defaults specified in builtinpluginconsts.namespaceFieldSpecs.

Thus changing the logic in fieldspec.isMatchGVK is, I think, the least desirable option, at least before a major version bump in which breaking changes are expected.

@KnVerey, what do you think is the best course of action for this? I don't want to make a PR which implements a "solution" / "hack" which maintainers don't deem appropriate and then throw it away 😜

@natasha41575
Contributor

@Serializator Thank you for your detailed investigation and write-up.

I don't think we want to change any logic in fieldspec.isMatchGVK. I think this is a case where kustomize should have detailed knowledge about builtin kubernetes types and use that knowledge to handle them well.

The best course of action IMO would be for namespace.Filter to have hardcoded logic to distinguish the Kubernetes builtin Namespace from other Namespace objects. There is already special handling in namespace.Filter for RoleBinding resources; we can similarly have special handling for Namespaces. That special handling can ensure that for the Kubernetes builtin Namespace, metadata.name is changed, and for other objects of Kind Namespace, only metadata.namespace is updated.
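
A rough sketch of what that special handling could look like (hypothetical types and helper names for illustration, not the eventual implementation):

package main

import "fmt"

// object is a simplified stand-in for a kustomize resource.
type object struct {
	Group, Kind, Name, Namespace string
}

// isBuiltinNamespace reports whether obj is the Kubernetes core Namespace,
// i.e. Kind "Namespace" with an empty API group.
func isBuiltinNamespace(obj *object) bool {
	return obj.Kind == "Namespace" && obj.Group == ""
}

// applyNamespace applies the kustomization's `namespace:` field per the
// proposal above: only the builtin Namespace is renamed; any other
// Kind: Namespace (e.g. servicebus.azure.com) only gets metadata.namespace set.
func applyNamespace(obj *object, ns string) {
	if isBuiltinNamespace(obj) {
		obj.Name = ns
		return
	}
	obj.Namespace = ns
}

func main() {
	sb := &object{Group: "servicebus.azure.com", Kind: "Namespace", Name: "core-sb-99"}
	applyNamespace(sb, "podinfo")
	fmt.Println(sb.Name, sb.Namespace) // core-sb-99 podinfo
}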

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Mar 22, 2023
@timja
Contributor

timja commented Apr 14, 2023

Test that reproduces the issue: 6fd2773

@timja
Contributor

timja commented Apr 14, 2023

I created a PR for this: #5133
