
Panic: runtime error: invalid memory address or nil pointer dereference #5471

Open
AxelAlvarsson opened this issue Dec 4, 2023 · 4 comments

Comments

@AxelAlvarsson

What happened?

Using:

  • Mac Darwin Kernel Version 23.0.0 arm64
  • Kustomize version v5.2.1

Running
kustomize build --load-restrictor LoadRestrictionsNone --enable-alpha-plugins --enable-exec .

causes the following error:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x40 pc=0x104618b54]

goroutine 1 [running]:
sigs.k8s.io/kustomize/kyaml/yaml.(*RNode).Content(...)
	sigs.k8s.io/kustomize/kyaml/yaml/rnode.go:707
sigs.k8s.io/kustomize/kyaml/yaml.(*RNode).getMapFieldValue(0x14002260b08?, {0x10476bfb1?, 0x7?})
	sigs.k8s.io/kustomize/kyaml/yaml/rnode.go:420 +0x54
sigs.k8s.io/kustomize/kyaml/yaml.(*RNode).GetApiVersion(...)
	sigs.k8s.io/kustomize/kyaml/yaml/rnode.go:402
sigs.k8s.io/kustomize/kyaml/resid.GvkFromNode(0x140017648b8?)
	sigs.k8s.io/kustomize/kyaml/resid/gvk.go:32 +0x40
sigs.k8s.io/kustomize/api/resource.(*Resource).GetGvk(...)
	sigs.k8s.io/kustomize/api/resource/resource.go:57
sigs.k8s.io/kustomize/api/resource.(*Resource).CurId(0x1400044e960)
	sigs.k8s.io/kustomize/api/resource/resource.go:449 +0x48
sigs.k8s.io/kustomize/api/resmap.(*resWrangler).GetMatchingResourcesByAnyId(0x14002260ee8?, 0x14001c81140)
	sigs.k8s.io/kustomize/api/resmap/reswrangler.go:184 +0xac
sigs.k8s.io/kustomize/api/resmap.demandOneMatch(0x14002260ff8, {{{0x140016a08f8, 0x5}, {0x140016a08fe, 0x2}, {0x140016a0920, 0x7}, 0x0}, {0x140021f8ec0, 0x19}, ...}, ...)
	sigs.k8s.io/kustomize/api/resmap/reswrangler.go:227 +0xc8
sigs.k8s.io/kustomize/api/resmap.(*resWrangler).GetById(0x14002220140?, {{{0x140016a08f8, 0x5}, {0x140016a08fe, 0x2}, {0x140016a0920, 0x7}, 0x0}, {0x140021f8ec0, 0x19}, ...})
	sigs.k8s.io/kustomize/api/resmap/reswrangler.go:214 +0x9c
sigs.k8s.io/kustomize/api/internal/builtins.(*PatchTransformerPlugin).transformStrategicMerge(0xf?, {0x104a4e998, 0x1400000f2c0})
	sigs.k8s.io/kustomize/api/internal/builtins/PatchTransformer.go:112 +0x2d0
sigs.k8s.io/kustomize/api/internal/builtins.(*PatchTransformerPlugin).Transform(0x1400000f2c0?, {0x104a4e998?, 0x1400000f2c0?})
	sigs.k8s.io/kustomize/api/internal/builtins/PatchTransformer.go:87 +0x2c
sigs.k8s.io/kustomize/api/internal/target.(*multiTransformer).Transform(0x140021a14a0?, {0x104a4e998, 0x1400000f2c0})
	sigs.k8s.io/kustomize/api/internal/target/multitransformer.go:30 +0x88
sigs.k8s.io/kustomize/api/internal/accumulator.(*ResAccumulator).Transform(...)
	sigs.k8s.io/kustomize/api/internal/accumulator/resaccumulator.go:141
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).runTransformers(0x1400007eeb0, 0x1400007bf80)
	sigs.k8s.io/kustomize/api/internal/target/kusttarget.go:343 +0x1ac
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).accumulateTarget(0x1400007eeb0, 0x140002a2928?)
	sigs.k8s.io/kustomize/api/internal/target/kusttarget.go:237 +0x318
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).AccumulateTarget(0x0?)
	sigs.k8s.io/kustomize/api/internal/target/kusttarget.go:194 +0x10c
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).makeCustomizedResMap(0x1400007eeb0)
	sigs.k8s.io/kustomize/api/internal/target/kusttarget.go:135 +0x68
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).MakeCustomizedResMap(...)
	sigs.k8s.io/kustomize/api/internal/target/kusttarget.go:126
sigs.k8s.io/kustomize/api/krusty.(*Kustomizer).Run(0x14002261c98, {0x104a49758, 0x104fe5840}, {0x16bd6f88a, 0x1})
	sigs.k8s.io/kustomize/api/krusty/kustomizer.go:90 +0x248
sigs.k8s.io/kustomize/kustomize/v5/commands/build.NewCmdBuild.func1(0x140001d6300?, {0x14000048ba0?, 0x4?, 0x104768ff8?})
	sigs.k8s.io/kustomize/kustomize/v5/commands/build/build.go:82 +0x15c
github.com/spf13/cobra.(*Command).execute(0x14000270600, {0x14000048b40, 0x6, 0x6})
	github.com/spf13/cobra@v1.7.0/command.go:940 +0x658
github.com/spf13/cobra.(*Command).ExecuteC(0x14000270000)
	github.com/spf13/cobra@v1.7.0/command.go:1068 +0x320
github.com/spf13/cobra.(*Command).Execute(0x104ef95a8?)
	github.com/spf13/cobra@v1.7.0/command.go:992 +0x1c
main.main()
	sigs.k8s.io/kustomize/kustomize/v5/main.go:14 +0x20

(Some names have been changed in what follows.)

kustomization.yaml file:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: namespace-name

resources:
  - ../../base

patches:
  - path: resource-patch.yaml
  - path: delete-some-worker.yaml

delete-some-worker.yaml file:

$patch: delete
apiVersion: batch/v1
kind: CronJob
metadata:
  name: first-worker
---
$patch: delete
apiVersion: batch/v1
kind: CronJob
metadata:
  name: second-worker
---
$patch: delete
apiVersion: batch/v1
kind: CronJob
metadata:
  name: third-worker

The base some-worker.yaml definitions are fine and work in every other context, so they are not the issue.

What did you expect to happen?

Expecting successful manifest output.

How can we reproduce it (as minimally and precisely as possible)?

Use the same format of deletions as in the delete-some-worker.yaml file above.
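
A minimal layout that should reproduce this, assuming a base that defines three CronJobs (the file and resource names below are placeholders, not taken from the reporter's project):

base/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - first-worker.yaml
  - second-worker.yaml
  - third-worker.yaml

Each worker file defines an ordinary batch/v1 CronJob whose metadata.name matches the corresponding $patch: delete entry in delete-some-worker.yaml above.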

TESTED THIS:

  • If I comment out any two of the three definitions in the delete-some-worker.yaml file, it works.
  • Likewise, if I split them out into their own files, it works (see the sketch below).
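
For illustration, the split-file layout that works could look like this in the overlay's kustomization.yaml (the delete-*-worker.yaml file names are hypothetical):

patches:
  - path: resource-patch.yaml
  - path: delete-first-worker.yaml
  - path: delete-second-worker.yaml
  - path: delete-third-worker.yaml

where each delete-*-worker.yaml holds exactly one document, for example:

$patch: delete
apiVersion: batch/v1
kind: CronJob
metadata:
  name: first-worker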

My current guess is that several $patch: delete definitions differing only in metadata.name are not meant to cause a panic; rather, this is an uncovered use case.

Expected output

No response

Actual output

No response

Kustomize version

v5.2.1

Operating system

macOS

@AxelAlvarsson AxelAlvarsson added the kind/bug Categorizes issue or PR as related to a bug. label Dec 4, 2023
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Dec 4, 2023
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@yogeek

yogeek commented Jan 3, 2024

I have the same issue with multiple $patch: delete patches in the same file, which prevents me from converting patchesStrategicMerge to patches.
Seems related to #5049
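
For reference, the two forms look like this (a sketch reusing the file name from this issue; patchesStrategicMerge is the deprecated field and patches is its replacement):

# deprecated field
patchesStrategicMerge:
  - delete-some-worker.yaml

# replacement field, which hits this panic when the file contains multiple $patch: delete documents
patches:
  - path: delete-some-worker.yaml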

@CyDickey-msr

I ran into this problem too when I had multiple $patch: delete documents in a single patch file.

For example:

---
$patch: delete
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: XXXX
  namespace: XXXX
---
$patch: delete
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: XXXX
  namespace: XXXX

Breaking them out into their own patch .yaml files works though, and so does adding individual inline patches to the kustomization.yaml. For example:

patches:
  - patch: |-
      $patch: delete
      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: XXXX
        namespace: XXXX
  - patch: |-
      $patch: delete
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: XXXX
        namespace: XXXX

Like yogeek said, I think this is intentional based on #5049 (comment).

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 22, 2024