
kustomize complains that "behavior must be merge or replace" despite dynamic name produced by configMapGenerator #4829

Open
tmmorin opened this issue Oct 12, 2022 · 7 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments


tmmorin commented Oct 12, 2022

When using configMapGenerator in an overlay with behavior: create, I run into the following error, even though the name of the generated ConfigMap, once the "-abcde12345" suffix is appended, would not collide with anything:

$ kustomize build overlay-attempt
Error: merging from generator &{0xc001e95110 <nil>}: id resid.ResId{Gvk:resid.Gvk{Group:"", Version:"v1", 
Kind:"ConfigMap", isClusterScoped:false}, Name:"cm", Namespace:""} exists; behavior must be merge or replace

This seems to happen because the ConfigMap base name collides with a resource in the base kustomization. That is unexpected, because the ConfigMap defined in my overlay has a dynamic name (it does not set options.disableNameSuffixHash: true).

It seems to me that, perhaps, "behavior must be merge or replace" should only be enforced when disableNameSuffixHash: true is used.
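For reference, the static-name case where enforcing the check clearly makes sense would look like this (a sketch of my overlay using kustomize's options.disableNameSuffixHash field; with it, the generated ConfigMap keeps the literal name "cm" and genuinely collides with the base's "cm"):

```yaml
# Sketch: here forcing behavior: merge|replace is justified, because
# disableNameSuffixHash makes the final name exactly "cm".
configMapGenerator:
- name: cm
  behavior: create
  literals:
    - bar=43
  options:
    disableNameSuffixHash: true
```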

Files that can reproduce the issue

$ tree
.
├── base
│   ├── kustomization.yaml
│   └── manifest.yaml
├── overlay-attempt
│   └── kustomization.yaml
└── overlay-working
    └── kustomization.yaml

Base:

$ cat base/manifest.yaml      
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm
data:
  foo: 42
---
apiVersion: a.b.c/v1
kind: Pod
metadata:
  name: x
spec:
  volumes:
    - name: a
      configMap:
        name: cm
$ cat base/kustomization.yaml      
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- manifest.yaml

Overlay definition not behaving as hoped:

$ cat overlay-attempt/kustomization.yaml      
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../base

configMapGenerator:
- name: cm
  behavior: create
  literals:
    - bar=43

patches:
- target:
    kind: Pod
    name: x
  patch: |
    - op: add
      path: /spec/volumes/-
      value:
        name: b
        configMap:
          name: cm

An overlay that does nearly the same thing and that is working:

$ cat overlay-working/kustomization.yaml      
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../base

configMapGenerator:
- name: cm-X
  behavior: create
  literals:
    - bar=43

patches:
- target:
    kind: Pod
    name: x
  patch: |
    - op: add
      path: /spec/volumes/-
      value:
        name: b
        configMap:
          name: cm-X

Expected output

I would expect kustomize build overlay-attempt to succeed and give the following:

apiVersion: v1
data:
  foo: 42
kind: ConfigMap
metadata:
  name: cm
---
apiVersion: v1
data:
  bar: "43"
kind: ConfigMap
metadata:
  name: cm-86kfb9ch5m
---
apiVersion: a.b.c/v1
kind: Pod
metadata:
  name: x
spec:
  volumes:
  - configMap:
      name: cm
    name: a
  - configMap:
      name: cm-86kfb9ch5m
    name: b

Note that kustomize build overlay-working works as expected:

apiVersion: v1
data:
  foo: 42
kind: ConfigMap
metadata:
  name: cm
---
apiVersion: v1
data:
  bar: "43"
kind: ConfigMap
metadata:
  name: cm-X-86kfb9ch5m
---
apiVersion: a.b.c/v1
kind: Pod
metadata:
  name: x
spec:
  volumes:
  - configMap:
      name: cm
    name: a
  - configMap:
      name: cm-X-86kfb9ch5m
    name: b

See below: this approach (using "cm-X" instead of "cm" in the overlay) does not work well in my actual use case, where I need to avoid using a different ConfigMap base name in the overlay.

Actual output

$ kustomize build overlay-attempt
Error: merging from generator &{0xc001e95110 <nil>}: id resid.ResId{Gvk:resid.Gvk{Group:"", Version:"v1", Kind:"ConfigMap", isClusterScoped:false}, Name:"cm", Namespace:""} exists; behavior must be merge or replace

Given that "cm" is the base name of my generated ConfigMap, and not its final name, raising this error looks like a bug to me.

Kustomize version

$ kustomize version                  
{Version:kustomize/v4.5.7 GitCommit:56d82a8378dfc8dc3b3b1085e5a6e67b82966bd7 BuildDate:2022-08-02T16:35:54Z GoOs:linux GoArch:amd64}

Platform

Linux amd64

Additional context

This example reflects what I need to do, except that my actual use case does not involve multiple volumes inside a Pod definition. I picked this illustration to keep the example as simple as possible.

I can somehow live with the workaround used in "overlay-working", where we make sure that the ConfigMap base name defined in the overlay is different from the one used in the base.

However, I would like to have multiple levels of inheritance:

  • overlay 1 on top of base
  • overlay 2 on top of overlay 1
  • overlay 3 on top of overlay 2
  • etc.

... and I would like to generate overlay 2 and overlay 3 without having to pick a different ConfigMap name at each layer.

Note that "behavior: merge" would also not work in my actual use case, where I need a distinct ConfigMap at each layer (a common ConfigMap with distinct keys under "data" does not work for me).
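For concreteness, the workaround I can fall back on generalizes to multiple layers like this (a sketch; the base names cm-l2, cm-l3 are hypothetical, chosen to be unique per layer):

```yaml
# overlay2/kustomization.yaml (sketch): each layer picks a distinct
# ConfigMap base name so original IDs never collide across layers.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../overlay1

configMapGenerator:
- name: cm-l2      # hypothetical name, unique to this layer
  behavior: create
  literals:
    - baz=44
```

This is exactly the "cm-X" trick from overlay-working, repeated at every layer, which is what I would like to avoid.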

@tmmorin tmmorin added the kind/bug Categorizes issue or PR as related to a bug. label Oct 12, 2022
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Oct 12, 2022

tmmorin commented Oct 12, 2022

I realize I need to provide more information to explain this better.

First of all, one may wonder why configMapGenerator[0].name is cm, the same as the name of the ConfigMap in the base's manifest.yaml. Indeed it does not have to be, and using a configMapGenerator[0].name that does not match any resource produced by the base solves the issue in this specific example.

However, the same error arises even if the configmap in base is defined with a configMapGenerator rather than being defined as a plain manifest.

Here are the files that can be used to observe this:

$ cat base/manifest.yaml      
apiVersion: a.b.c/v1
kind: Pod
metadata:
  name: x
spec:
  volumes:
    - name: a
      configMap:
        name: cm
$ cat base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- manifest.yaml

configMapGenerator:
- name: cm
  behavior: create
  literals:
    - foo=42
$ kustomize build base
---      
apiVersion: v1
data:
  foo: "42"
kind: ConfigMap
metadata:
  name: cm-7kkbgdfk7d
---
apiVersion: a.b.c/v1
kind: Pod
metadata:
  name: x
spec:
  volumes:
  - configMap:
      name: cm-7kkbgdfk7d
    name: a

(this works as expected; no issue or surprise at this point)

However, the overlay fails (using the same overlay-attempt as the one in the description of this issue):

$ kustomize build overlay-attempt
Error: merging from generator &{0xc001f37a00 <nil>}: id resid.ResId{Gvk:resid.Gvk{Group:"", Version:"v1", Kind:"ConfigMap", isClusterScoped:false}, Name:"cm", Namespace:""} exists; behavior must be merge or replace


yuwenma commented Dec 14, 2022

Thanks for the detailed explanation @tmmorin. I understand the confusion. I think this is actually a consequence of the nice resource-reference feature kustomize has.

How kustomize understands the case

Kustomize refers to each ConfigMap by its original ID; when the ConfigMap is changed, it updates every place that refers to that ConfigMap accordingly (e.g. the Pod spec.volumes[].configMap field in your example). See the two examples below.

Why it fails

Two ConfigMaps have the same original ID "cm", so kustomize cannot distinguish which one is being referred to. What's more, because kustomize has the base/overlay model, even if "cm" is not referred to anywhere (say, no Pod with spec.volumes[].configMap.name=cm), your kustomize directory can be treated as the base of other overlays, so kustomize fails as soon as it discovers a duplicate original ID.
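The check can be modeled with a toy sketch (not kustomize's actual code): generated resources live in a map keyed by their original ID, i.e. the name before any suffix hash is appended, and behavior: create refuses to overwrite an existing entry.

```python
# Toy model of the "behavior must be merge or replace" check.
# Resources are keyed by original ID (group, version, kind, name,
# namespace) -- the pre-hash name -- which is why a hashed overlay
# ConfigMap still collides with a base resource named "cm".

class ResMap:
    def __init__(self):
        self._by_id = {}

    def absorb(self, res_id, data, behavior="create"):
        if res_id in self._by_id:
            if behavior == "merge":
                self._by_id[res_id].update(data)       # merge keys into existing data
            elif behavior == "replace":
                self._by_id[res_id] = dict(data)       # overwrite existing data
            else:
                raise ValueError(
                    f"id {res_id} exists; behavior must be merge or replace")
        else:
            self._by_id[res_id] = dict(data)

rm = ResMap()
rm.absorb(("", "v1", "ConfigMap", "cm", ""), {"foo": "42"})       # from base
try:
    rm.absorb(("", "v1", "ConfigMap", "cm", ""), {"bar": "43"})   # overlay, behavior: create
except ValueError as e:
    print(e)  # same original ID -> rejected
```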

Two examples

Example 1
Update the Pod volumes referencing a ConfigMap
In overlay/kustomization.yaml

resources:
- ../base
namePrefix: new-

In base/manifest.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: cm
data:
  foo: 42
---
apiVersion: v1
kind: Pod
metadata:
  name: x
spec:
  volumes:
    - name: a
      configMap:
        name: cm

output

apiVersion: v1
data:
  foo: 42
kind: ConfigMap
metadata:
  name: new-cm  # new- prefix
---
apiVersion: v1
kind: Pod
metadata:
  name: new-x # new- prefix
spec:
  volumes:
  - configMap:
      name: new-cm # changed because the referred ConfigMap is changed
    name: a

Example 2
Update the Pod volumes referencing a ConfigMapGenerator

In overlay/kustomization.yaml

resources:
- ../base
configMapGenerator:
- name: cm
  behavior: create
  literals:
    - bar=43

In base/manifest.yaml

apiVersion: v1
kind: Pod
metadata:
  name: x
spec:
  volumes:
    - name: a
      configMap:
        name: cm

output

apiVersion: v1
data:
  bar: "43"
kind: ConfigMap
metadata:
  name: cm-86kfb9ch5m
---
apiVersion: v1
kind: Pod
metadata:
  name: x
spec:
  volumes:
  - configMap:
      name: cm-86kfb9ch5m # changed because it refers to the ConfigMapGenerator
    name: a


yuwenma commented Dec 14, 2022

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Dec 14, 2022

yuwenma commented Dec 14, 2022

@tmmorin I'll leave this issue open, let me know if you have any questions.

@k8s-triage-robot

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. and removed triage/accepted Indicates an issue or PR is ready to be actively worked on. labels Jan 19, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 18, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 18, 2024