
Build fails when using components, a strategic merge patch, and a null node #5554

Open
matthewhughes-uw opened this issue Feb 26, 2024 · 3 comments
Labels
kind/bug — Categorizes issue or PR as related to a bug.
lifecycle/stale — Denotes an issue or PR has remained open with no activity and has become stale.
needs-triage — Indicates an issue or PR lacks a `triage/foo` label and requires one.


matthewhughes-uw commented Feb 26, 2024

What happened?

Given the following setup:

├── annotations.yaml
├── components
│   └── kustomization.yaml
├── kustomization.yaml
└── manifests.yaml

With contents:

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - manifests.yaml

patches:
  - path: annotations.yaml

components:
  - components
# manifests.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      # NOTE: empty initContainers
      initContainers:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
        env:
          - name: MY_POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
      affinity:
        # imagine some complicated affinity setup
        # we want the component to strip these
        podAffinity: {}
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  namespace: my-namespace
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
# annotations.yaml
# placeholder patch, just need any patch
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  namespace: my-namespace
  annotations:
    example.com/my.tool: blah
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  namespace: my-namespace
  annotations:
    example.com/my.tool: blah
# components/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

patches:
  - target:
      version: v1
      group: apps
    # example patch, just something that reaches inside /spec/template/spec
    patch: |-
      - op: remove
        path: /spec/template/spec/affinity/podAffinity

Then running kustomize build fails with:

Error: updating name reference in 'spec/template/spec/initContainers/env/valueFrom/configMapKeyRef/name' field of 'Deployment.v1.apps/my-nginx.my-namespace': considering field 'spec/template/spec/initContainers/env/valueFrom/configMapKeyRef/name' of object Deployment.v1.apps/my-nginx.my-namespace: expected sequence or mapping node

What did you expect to happen?

kustomize build would succeed

How can we reproduce it (as minimally and precisely as possible)?

Run kustomize build against the setup above (the file tree and contents under "What happened?").
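
For reference, the failing command is just a plain build run from the directory containing the top-level kustomization.yaml (no flags assumed):

$ kustomize build .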

Expected output

apiVersion: v1
kind: Service
metadata:
  annotations:
    example.com/my.tool: blah
  labels:
    run: my-nginx
  name: my-nginx
  namespace: my-namespace
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    example.com/my.tool: blah
  name: my-nginx
  namespace: my-namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      run: my-nginx
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      affinity: {}
      containers:
      - env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        image: nginx
        name: my-nginx
        ports:
        - containerPort: 80

Actual output

The error shown above: the name-reference transformer descends into spec/template/spec/initContainers, which is a null node rather than the sequence or mapping it expects:

Error: updating name reference in 'spec/template/spec/initContainers/env/valueFrom/configMapKeyRef/name' field of 'Deployment.v1.apps/my-nginx.my-namespace': considering field 'spec/template/spec/initContainers/env/valueFrom/configMapKeyRef/name' of object Deployment.v1.apps/my-nginx.my-namespace: expected sequence or mapping node

Note: things work as expected if the strategic merge patch is replaced with inline JSON6902 patches, i.e. updating the patches section of kustomization.yaml to:

patches:
  - target:
      version: v1
      group: apps
      kind: Deployment
    patch: |-
      - op: add
        path: /metadata/annotations/example.com~1my.tool
        value: blah
  - target:
      version: v1
      kind: Service
    patch: |-
      - op: add
        path: /metadata/annotations/example.com~1my.tool
        value: blah

Kustomize version

v5.3.0

Operating system

Linux

@matthewhughes-uw matthewhughes-uw added the kind/bug Categorizes issue or PR as related to a bug. label Feb 26, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Feb 26, 2024
@matthewhughes-uw
Author

This feels related to #5050

Note: in our case we have e.g. initContainers: (i.e. initContainers: null) because the manifests were generated from some Helm charts, and that is what Helm originally emitted.
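
A workaround that follows from that observation (an untested sketch, not verified in this thread) is to remove the null node before anything else has to traverse it: either delete the bare initContainers: line from manifests.yaml, or add an extra JSON6902 entry to the top-level patches list, e.g.:

# hypothetical extra entry for the patches list in kustomization.yaml;
# drops the null initContainers node so later transformers never visit it
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: my-nginx
    patch: |-
      - op: remove
        path: /spec/template/spec/initContainers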

matthewhughes-uw added a commit to utilitywarehouse/kafka-manifests that referenced this issue Feb 27, 2024
* Add component to easily add `oss` tag for services running 3rd party
  software, and used these for the Kafka clusters
* Add required description, tier labels

Note: patches are made as inline JSON6902 patches because of [1]

[1] kubernetes-sigs/kustomize#5554

Ticket: DENA-480
matthewhughes-uw added three further commits to utilitywarehouse/kafka-manifests that referenced this issue Feb 27–28, 2024 (same message)
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 27, 2024