CAAPH panic when a HRP should be deleted #92

Closed
nikParasyr opened this issue Jun 27, 2023 · 3 comments · Fixed by #94
Labels
kind/bug · triage/accepted


@nikParasyr (Contributor)

What steps did you take and what happened:

  • Create a cluster with a label that matches a HelmChartProxy (HCP).
  • The cluster gets deployed, and so does the HelmReleaseProxy (HRP).
  • Remove the label so that the HCP no longer matches the cluster (see the sketch after this list).
  • CAAPH crashes with a panic (logs below).
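
For concreteness, a minimal sketch of the label-removal step using the controller-runtime client. The object names and label key are placeholders, since the actual key is whatever the HCP's clusterSelector matches on; `kubectl label cluster <name> <key>-` is equivalent.

```go
package repro

import (
	"context"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// removeMatchingLabel drops the label the HelmChartProxy selects on, which
// should make CAAPH uninstall the release and delete the HRP.
func removeMatchingLabel(ctx context.Context, c client.Client, namespace, name, labelKey string) error {
	cluster := &clusterv1.Cluster{}
	if err := c.Get(ctx, client.ObjectKey{Namespace: namespace, Name: name}, cluster); err != nil {
		return err
	}
	patch := client.MergeFrom(cluster.DeepCopy())
	delete(cluster.Labels, labelKey) // deleting from a nil map is a no-op
	return c.Patch(ctx, cluster, patch)
}
```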
Logs:
I0627 14:48:42.192680       1 controller.go:228] "Starting workers" controller="helmchartproxy" controllerGroup="addons.cluster.x-k8s.io" controllerKind="HelmChartProxy" worker count=10
I0627 14:48:42.192770       1 controller.go:228] "Starting workers" controller="helmreleaseproxy" controllerGroup="addons.cluster.x-k8s.io" controllerKind="HelmReleaseProxy" worker count=10
I0627 14:48:42.450134       1 controller.go:118] "Observed a panic in reconciler: runtime error: invalid memory address or nil pointer dereference" controller="helmreleaseproxy" controllerGroup="addons.cluster.x-k8s.io" controllerKind="HelmReleaseProxy" HelmReleaseProxy="daphne/openstack-cinder-csi-satellite-dev-tw4lh" namespace="daphne" name="openstack-cinder-csi-satellite-dev-tw4lh" reconcileID=80fff3d9-3315-4473-97ba-9bd6658543e8
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
  panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1f069ba]

goroutine 270 [running]:
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
  sigs.k8s.io/controller-runtime@v0.14.5/pkg/internal/controller/controller.go:119 +0x1fa
panic({0x21bba80, 0x3cd1c80})
  runtime/panic.go:884 +0x212
sigs.k8s.io/cluster-api-addon-provider-helm/internal.generateHelmUninstallConfig(...)
  sigs.k8s.io/cluster-api-addon-provider-helm/internal/helm_operations.go:461
sigs.k8s.io/cluster-api-addon-provider-helm/internal.UninstallHelmRelease({0x290d608?, 0xc0007994d0?}, {0xc001028800?, _}, {{{0xc000680720, 0x7}, {0xc000680728, 0x6}, {0xc000680730, 0xd}, ...}, ...})
  sigs.k8s.io/cluster-api-addon-provider-helm/internal/helm_operations.go:482 +0x7a
sigs.k8s.io/cluster-api-addon-provider-helm/controllers/helmreleaseproxy.(*HelmReleaseProxyReconciler).reconcileDelete(0x29362f8?, {0x290d608, 0xc0007994d0}, 0xc000699440, {0xc001028800, 0x15da})
  sigs.k8s.io/cluster-api-addon-provider-helm/controllers/helmreleaseproxy/helmreleaseproxy_controller.go:259 +0x6c5
sigs.k8s.io/cluster-api-addon-provider-helm/controllers/helmreleaseproxy.(*HelmReleaseProxyReconciler).Reconcile(0xc000997020, {0x290d608, 0xc0007994d0}, {{{0xc000680700, 0x6}, {0xc0008338f0, 0x28}}})
  sigs.k8s.io/cluster-api-addon-provider-helm/controllers/helmreleaseproxy/helmreleaseproxy_controller.go:141 +0xaba
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x290d608?, {0x290d608?, 0xc0007994d0?}, {{{0xc000680700?, 0x20a6360?}, {0xc0008338f0?, 0x0?}}})
  sigs.k8s.io/controller-runtime@v0.14.5/pkg/internal/controller/controller.go:122 +0xc8
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0004b5a40, {0x290d560, 0xc000a17bc0}, {0x228af80?, 0xc00029e5e0?})
  sigs.k8s.io/controller-runtime@v0.14.5/pkg/internal/controller/controller.go:323 +0x38f
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0004b5a40, {0x290d560, 0xc000a17bc0})
  sigs.k8s.io/controller-runtime@v0.14.5/pkg/internal/controller/controller.go:274 +0x1d9
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
  sigs.k8s.io/controller-runtime@v0.14.5/pkg/internal/controller/controller.go:235 +0x85
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
  sigs.k8s.io/controller-runtime@v0.14.5/pkg/internal/controller/controller.go:231 +0x333

What did you expect to happen:
CAAPH should uninstall the Helm release and delete the HRP that corresponds to it.

Anything else you would like to add:

  • Manually running a helm uninstall on the workload cluster fixes the issue (see the sketch after this list).
  • I cannot get more detailed logs, as there is no way to increase verbosity.
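
For reference, the manual workaround expressed via the Helm Go SDK, which is roughly what `helm uninstall <release> -n <namespace>` does under the hood; the kubeconfig path, namespace, and release name are placeholders.

```go
package repro

import (
	"log"

	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/kube"
)

// uninstallRelease is the programmatic equivalent of running
// `helm uninstall <release> -n <namespace>` against the workload cluster.
func uninstallRelease(kubeconfig, namespace, releaseName string) error {
	cfg := new(action.Configuration)
	// "secret" is Helm's default release-storage driver.
	if err := cfg.Init(kube.GetConfig(kubeconfig, "", namespace), namespace, "secret", log.Printf); err != nil {
		return err
	}
	uninstall := action.NewUninstall(cfg)
	_, err := uninstall.Run(releaseName)
	return err
}
```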

Environment:

  • Cluster API version: 1.4.2
  • Cluster API Add-on Provider for Helm version: v0.1.0-alpha.7
  • minikube/kind version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):

/kind bug

@k8s-ci-robot added the kind/bug label Jun 27, 2023
@Jont828 (Contributor) commented Jun 28, 2023

/triage accepted

@k8s-ci-robot added the triage/accepted label Jun 28, 2023
@Jont828 (Contributor) commented Jun 28, 2023

Thanks for opening the issue; I'll take a look and try to repro. Your stack trace shows the panic comes from internal.generateHelmUninstallConfig, which was added in #82, so I wonder if we missed something there.

@nikParasyr (Contributor, Author)

In case it helps: I haven't set any of the options.
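
If the unset options are what trips the nil dereference, the fix would presumably be a guard along these lines. This is a sketch only: the `HelmOptions` type and `Timeout` field below are illustrative stand-ins, not the actual CAAPH types.

```go
package internal

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const defaultUninstallTimeout = 10 * time.Minute // assumed default

// HelmOptions stands in for the spec's optional options struct.
type HelmOptions struct {
	Timeout *metav1.Duration // optional in the spec, so possibly nil
}

// uninstallTimeout shows the guard that would avoid the panic: when no
// options are set in the spec, the pointer fields are nil and must not
// be dereferenced.
func uninstallTimeout(opts *HelmOptions) time.Duration {
	if opts == nil || opts.Timeout == nil {
		return defaultUninstallTimeout
	}
	return opts.Timeout.Duration
}
```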
