rook operator in panic state #14246

Open · parth-gr (Member) opened this issue May 21, 2024 · 0 comments
Labels: bug

Is this a bug report or feature request?

  • Bug Report

Deviation from expected behavior:
Applied the filesystem subvolume group (SVG) CRs, and the operator went into a panic state.

2024-05-21 12:13:32.884085 I | ceph-cosi-controller: successfully started
2024-05-21 12:13:32.884167 I | operator: starting the controller-runtime manager
2024-05-21 12:13:33.003013 I | ceph-spec: parsing mon endpoints: a=10.104.6.197:6789
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x1e9eb85]

goroutine 622 [running]:
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
	/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.2/pkg/internal/controller/controller.go:116 +0x1e5
panic({0x21d5d40?, 0x3f20610?})
	/opt/hostedtoolcache/go/1.21.10/x64/src/runtime/panic.go:914 +0x21f
github.com/rook/rook/pkg/operator/ceph/file/subvolumegroup.(*ReconcileCephFilesystemSubVolumeGroup).reconcile(0xc000467b00, {{{0xc0001b4a60?, 0x1873cc4?}, {0xc0001b4a50?, 0x0?}}})
	/home/runner/work/rook/rook/pkg/operator/ceph/file/subvolumegroup/controller.go:240 +0x7a5
github.com/rook/rook/pkg/operator/ceph/file/subvolumegroup.(*ReconcileCephFilesystemSubVolumeGroup).Reconcile(0xc000806280?, {0x0?, 0x0?}, {{{0xc0001b4a60, 0x9}, {0xc0001b4a50, 0x9}}})
	/home/runner/work/rook/rook/pkg/operator/ceph/file/subvolumegroup/controller.go:118 +0x48
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x2bfc7c0?, {0x2bf7c60?, 0xc0011339b0?}, {{{0xc0001b4a60?, 0xb?}, {0xc0001b4a50?, 0x0?}}})
	/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.2/pkg/internal/controller/controller.go:119 +0xb7
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000929680, {0x2bf7c98, 0xc0007908c0}, {0x22cfe20?, 0xc0010fcd80?})
	/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.2/pkg/internal/controller/controller.go:316 +0x3cc
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000929680, {0x2bf7c98, 0xc0007908c0})
	/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.2/pkg/internal/controller/controller.go:266 +0x1af
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
	/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.2/pkg/internal/controller/controller.go:227 +0x79
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2 in goroutine 106
	/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.2/pkg/internal/controller/controller.go:223 +0x565
[rider@localhost examples]$ kubectl get pods -nrook-ceph
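For context on the trace above: the panic is a nil pointer dereference raised at pkg/operator/ceph/file/subvolumegroup/controller.go:240 inside (*ReconcileCephFilesystemSubVolumeGroup).reconcile(), i.e. the subvolume group controller dereferences something that is still nil at that point in the operator's startup. Which field that is depends on the exact Rook build, so the sketch below only illustrates the failure class and a defensive fix; every type and function name in it is hypothetical, not the actual Rook code.

```go
// Minimal sketch (not the actual Rook code) of the failure class seen in the
// trace: a reconcile path dereferences a pointer that can legitimately still
// be nil early on, e.g. before the filesystem/cluster context is populated.
package main

import (
	"errors"
	"fmt"
)

// FilesystemInfo stands in for whatever data the controller expects to be
// populated before it can reconcile a subvolume group. Hypothetical name.
type FilesystemInfo struct {
	Name string
}

type clusterContext struct {
	// Filesystem may be nil while the operator is still starting up or the
	// CephFilesystem has not been reconciled yet.
	Filesystem *FilesystemInfo
}

// reconcileUnsafe mirrors the panicking pattern: it assumes Filesystem is
// always set and dereferences it directly.
func (c *clusterContext) reconcileUnsafe() string {
	// Panics with "invalid memory address or nil pointer dereference"
	// whenever Filesystem == nil.
	return c.Filesystem.Name
}

// reconcileSafe shows the defensive pattern: check the pointer and return an
// error so controller-runtime requeues the request instead of crashing.
func (c *clusterContext) reconcileSafe() (string, error) {
	if c.Filesystem == nil {
		return "", errors.New("filesystem info not yet available, requeueing")
	}
	return c.Filesystem.Name, nil
}

func main() {
	ctx := &clusterContext{} // Filesystem deliberately left nil

	if name, err := ctx.reconcileSafe(); err != nil {
		fmt.Println("reconcile skipped:", err)
	} else {
		fmt.Println("reconciled filesystem:", name)
	}

	// ctx.reconcileUnsafe() would reproduce the class of panic shown in the
	// operator log above.
}
```

With a guard like reconcileSafe, the reconcile returns an error and controller-runtime requeues the request, instead of the whole operator pod crashing on one unready resource.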

Expected behavior:

How to reproduce it (minimal and precise):

File(s) to submit:

  • Cluster CR (custom resource), typically called cluster.yaml, if necessary

Logs to submit:

  • Operator's logs, if necessary

  • Crashing pod(s) logs, if necessary

    To get logs, use kubectl -n <namespace> logs <pod name>
    When pasting logs, always surround them with backticks or use the insert code button from the GitHub UI.
    Read GitHub documentation if you need help.

Cluster Status to submit:

  • Output of kubectl commands, if necessary

    To get the health of the cluster, use kubectl rook-ceph health
    To get the status of the cluster, use kubectl rook-ceph ceph status
    For more details, see the Rook kubectl Plugin

Environment:

  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Cloud provider or hardware configuration:
  • Rook version (use rook version inside of a Rook Pod):
  • Storage backend version (e.g. for ceph do ceph -v):
  • Kubernetes version (use kubectl version):
  • Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
  • Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox):