rook-ceph-operator crash with panic: runtime error: invalid memory address or nil pointer dereference #8597

Closed
yanovich opened this issue Aug 26, 2021 · 2 comments

@yanovich

Is this a bug report or feature request?

  • Bug Report

Deviation from expected behavior:
rook-ceph-operator pod crashes

Expected behavior:
rook-ceph-operator pod runs

How to reproduce it (minimal and precise):

git clone --single-branch --branch v1.7.1 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster.yaml

File(s) to submit:

  • Cluster CR (custom resource), typically called cluster.yaml, if necessary
    cluster.yaml
  • Operator's logs, if necessary
2021-08-26 11:42:14.759284 I | rookcmd: starting Rook v1.7.1 with arguments '/usr/local/bin/rook ceph operator'
2021-08-26 11:42:14.759451 I | rookcmd: flag values: --add_dir_header=false, --alsologtostderr=false, --csi-cephfs-plugin-template-path=/etc/ceph-csi/cephfs/csi-cephfsplugin.yaml, --csi-cephfs-provisioner-dep-template-path=/etc/ceph-csi/cephfs/csi-cephfsplugin-provisioner-dep.yaml, --csi-rbd-plugin-template-path=/etc/ceph-csi/rbd/csi-rbdplugin.yaml, --csi-rbd-provisioner-dep-template-path=/etc/ceph-csi/rbd/csi-rbdplugin-provisioner-dep.yaml, --enable-machine-disruption-budget=false, --help=false, --kubeconfig=, --log-flush-frequency=5s, --log-level=INFO, --log_backtrace_at=:0, --log_dir=, --log_file=, --log_file_max_size=1800, --logtostderr=true, --one_output=false, --operator-image=, --service-account=, --skip_headers=false, --skip_log_headers=false, --stderrthreshold=2, --v=0, --vmodule=
2021-08-26 11:42:14.759461 I | cephcmd: starting Rook-Ceph operator
2021-08-26 11:42:15.040205 I | cephcmd: base ceph version inside the rook operator image is "ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)"
2021-08-26 11:42:15.071646 I | op-k8sutil: ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS="15" (configmap)
2021-08-26 11:42:15.078495 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (configmap)
2021-08-26 11:42:15.112663 I | operator: looking for secret "rook-ceph-admission-controller"
2021-08-26 11:42:15.120295 I | operator: secret "rook-ceph-admission-controller" not found. proceeding without the admission controller
2021-08-26 11:42:15.129442 I | op-k8sutil: ROOK_ENABLE_FLEX_DRIVER="false" (configmap)
2021-08-26 11:42:15.129476 I | operator: watching all namespaces for ceph cluster CRs
2021-08-26 11:42:15.129608 I | operator: setting up the controller-runtime manager
2021-08-26 11:42:15.133066 I | ceph-cluster-controller: ConfigMap "rook-ceph-operator-config" changes detected. Updating configurations
2021-08-26 11:42:15.143186 I | op-k8sutil: ROOK_LOG_LEVEL="INFO" (configmap)
2021-08-26 11:42:15.996000 I | ceph-cluster-controller: successfully started
2021-08-26 11:42:15.996143 I | ceph-cluster-controller: enabling hotplug orchestration
2021-08-26 11:42:15.996187 I | ceph-crashcollector-controller: successfully started
2021-08-26 11:42:15.996279 I | ceph-block-pool-controller: successfully started
2021-08-26 11:42:15.996374 I | ceph-object-store-user-controller: successfully started
2021-08-26 11:42:15.996481 I | ceph-object-realm-controller: successfully started
2021-08-26 11:42:15.996557 I | ceph-object-zonegroup-controller: successfully started
2021-08-26 11:42:15.996685 I | ceph-object-zone-controller: successfully started
2021-08-26 11:42:15.996893 I | ceph-object-controller: successfully started
2021-08-26 11:42:15.997031 I | ceph-file-controller: successfully started
2021-08-26 11:42:15.997167 I | ceph-nfs-controller: successfully started
2021-08-26 11:42:15.997285 I | ceph-rbd-mirror-controller: successfully started
2021-08-26 11:42:15.997439 I | ceph-client-controller: successfully started
2021-08-26 11:42:15.997554 I | ceph-filesystem-mirror-controller: successfully started
2021-08-26 11:42:15.999157 I | operator: starting the controller-runtime manager
2021-08-26 11:42:16.101203 I | clusterdisruption-controller: create event from ceph cluster CR
2021-08-26 11:42:16.109229 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2021-08-26 11:42:16.123773 I | op-k8sutil: ROOK_ENABLE_FLEX_DRIVER="false" (configmap)
2021-08-26 11:42:16.150676 I | clusterdisruption-controller: deleted all legacy blocking PDBs for osds
2021-08-26 11:42:16.159969 I | clusterdisruption-controller: deleted all legacy node drain canary pods
2021-08-26 11:42:16.177482 I | ceph-csi: successfully created csi config map "rook-ceph-csi-config"
2021-08-26 11:42:16.185152 I | op-k8sutil: ROOK_CSI_ENABLE_RBD="true" (configmap)
2021-08-26 11:42:16.188188 I | op-mon: parsing mon endpoints: a=10.152.183.94:6789
2021-08-26 11:42:16.188223 I | op-mon: updating obsolete maxMonID -1 to actual value 0
2021-08-26 11:42:16.190519 I | op-k8sutil: ROOK_CSI_ENABLE_CEPHFS="true" (configmap)
2021-08-26 11:42:16.204062 I | op-k8sutil: ROOK_CSI_ALLOW_UNSUPPORTED_VERSION="false" (configmap)
2021-08-26 11:42:16.446051 I | ceph-cluster-controller: detecting the ceph image version for image quay.io/ceph/ceph:v16.2.5...
2021-08-26 11:42:16.581258 I | op-k8sutil: ROOK_CSI_ENABLE_GRPC_METRICS="false" (configmap)
2021-08-26 11:42:16.973365 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (default)
2021-08-26 11:42:17.376429 I | op-k8sutil: ROOK_CSI_CEPH_IMAGE="quay.io/cephcsi/cephcsi:v3.4.0" (default)
2021-08-26 11:42:17.569142 I | op-k8sutil: ROOK_CSI_REGISTRAR_IMAGE="k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0" (default)
2021-08-26 11:42:17.801092 I | op-k8sutil: ROOK_CSI_PROVISIONER_IMAGE="k8s.gcr.io/sig-storage/csi-provisioner:v2.2.2" (default)
2021-08-26 11:42:17.993292 I | op-k8sutil: ROOK_CSI_ATTACHER_IMAGE="k8s.gcr.io/sig-storage/csi-attacher:v3.2.1" (default)
2021-08-26 11:42:18.195291 I | op-k8sutil: ROOK_CSI_SNAPSHOTTER_IMAGE="k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1" (default)
2021-08-26 11:42:18.369562 I | op-k8sutil: ROOK_CSI_KUBELET_DIR_PATH="/var/snap/microk8s/common/var/lib/kubelet" (configmap)
2021-08-26 11:42:18.572881 I | op-k8sutil: CSI_VOLUME_REPLICATION_IMAGE="quay.io/csiaddons/volumereplication-operator:v0.1.0" (default)
2021-08-26 11:42:18.772975 I | op-k8sutil: ROOK_CSI_CEPHFS_POD_LABELS="" (default)
2021-08-26 11:42:18.976269 I | op-k8sutil: ROOK_CSI_RBD_POD_LABELS="" (default)
2021-08-26 11:42:18.976306 I | ceph-csi: detecting the ceph csi image version for image "quay.io/cephcsi/cephcsi:v3.4.0"
2021-08-26 11:42:19.173836 I | op-k8sutil: CSI_PROVISIONER_TOLERATIONS="" (default)
2021-08-26 11:42:20.258615 I | ceph-cluster-controller: detected ceph image version: "16.2.5-0 pacific"
2021-08-26 11:42:20.258644 I | ceph-cluster-controller: validating ceph version from provided image
2021-08-26 11:42:20.374654 I | op-mon: parsing mon endpoints: a=10.152.183.94:6789
2021-08-26 11:42:20.374691 I | op-mon: updating obsolete maxMonID -1 to actual value 0
2021-08-26 11:42:20.576850 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2021-08-26 11:42:20.577132 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2021-08-26 11:42:21.024601 I | ceph-cluster-controller: cluster "rook-ceph": version "16.2.5-0 pacific" detected for image "quay.io/ceph/ceph:v16.2.5"
2021-08-26 11:42:21.153658 I | op-mon: start running mons
2021-08-26 11:42:21.171658 I | op-mon: parsing mon endpoints: a=10.152.183.94:6789
2021-08-26 11:42:21.171697 I | op-mon: updating obsolete maxMonID -1 to actual value 0
2021-08-26 11:42:21.780332 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.152.183.94:6789"]}] data:a=10.152.183.94:6789 mapping:{"node":{"a":{"Name":"host22","Hostname":"host22","Address":"192.168.1.22"},"b":{"Name":"host23","Hostname":"host23","Address":"192.168.1.23"},"c":{"Name":"host24","Hostname":"host24","Address":"192.168.1.24"}}} maxMonId:-1]
2021-08-26 11:42:22.372446 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2021-08-26 11:42:22.372710 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2021-08-26 11:42:23.383109 I | ceph-csi: Detected ceph CSI image version: "v3.4.0"
2021-08-26 11:42:23.774084 I | op-k8sutil: CSI_FORCE_CEPHFS_KERNEL_CLIENT="true" (configmap)
2021-08-26 11:42:24.173731 I | op-k8sutil: CSI_CEPHFS_GRPC_METRICS_PORT="9091" (default)
2021-08-26 11:42:24.375308 I | op-mon: targeting the mon count 3
2021-08-26 11:42:24.392279 I | op-config: setting "global"="mon allow pool delete"="true" option to the mon configuration database
2021-08-26 11:42:24.573145 I | op-k8sutil: CSI_CEPHFS_LIVENESS_METRICS_PORT="9081" (default)
2021-08-26 11:42:24.774209 I | op-k8sutil: CSI_RBD_GRPC_METRICS_PORT="9090" (default)
2021-08-26 11:42:24.873683 I | op-config: successfully set "global"="mon allow pool delete"="true" option to the mon configuration database
2021-08-26 11:42:24.873721 I | op-config: setting "global"="mon cluster log file"="" option to the mon configuration database
2021-08-26 11:42:24.973574 I | op-k8sutil: CSI_RBD_LIVENESS_METRICS_PORT="9080" (default)
2021-08-26 11:42:25.173643 I | op-k8sutil: CSI_PLUGIN_PRIORITY_CLASSNAME="" (default)
2021-08-26 11:42:25.330811 I | op-config: successfully set "global"="mon cluster log file"="" option to the mon configuration database
2021-08-26 11:42:25.330849 I | op-config: setting "global"="mon allow pool size one"="true" option to the mon configuration database
2021-08-26 11:42:25.416091 I | op-k8sutil: CSI_PROVISIONER_PRIORITY_CLASSNAME="" (default)
2021-08-26 11:42:25.575370 I | op-k8sutil: CSI_ENABLE_OMAP_GENERATOR="false" (default)
2021-08-26 11:42:25.774907 I | op-k8sutil: CSI_ENABLE_RBD_SNAPSHOTTER="true" (configmap)
2021-08-26 11:42:25.804611 I | op-config: successfully set "global"="mon allow pool size one"="true" option to the mon configuration database
2021-08-26 11:42:25.804680 I | op-config: setting "global"="osd scrub auto repair"="true" option to the mon configuration database
2021-08-26 11:42:25.974115 I | op-k8sutil: CSI_ENABLE_CEPHFS_SNAPSHOTTER="true" (configmap)
2021-08-26 11:42:26.175757 I | op-k8sutil: CSI_ENABLE_VOLUME_REPLICATION="false" (configmap)
2021-08-26 11:42:26.263878 I | op-config: successfully set "global"="osd scrub auto repair"="true" option to the mon configuration database
2021-08-26 11:42:26.263916 I | op-config: setting "global"="log to file"="false" option to the mon configuration database
2021-08-26 11:42:26.370777 I | op-k8sutil: CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY="RollingUpdate" (default)
2021-08-26 11:42:26.598185 I | op-k8sutil: CSI_RBD_PLUGIN_UPDATE_STRATEGY="RollingUpdate" (default)
2021-08-26 11:42:26.598221 I | ceph-csi: Kubernetes version is 1.21+
2021-08-26 11:42:26.716154 I | op-config: successfully set "global"="log to file"="false" option to the mon configuration database
2021-08-26 11:42:26.716193 I | op-config: setting "global"="rbd_default_features"="3" option to the mon configuration database
2021-08-26 11:42:26.773953 I | op-k8sutil: ROOK_CSI_RESIZER_IMAGE="k8s.gcr.io/sig-storage/csi-resizer:v1.2.0" (default)
2021-08-26 11:42:26.978301 I | op-k8sutil: CSI_LOG_LEVEL="" (default)
2021-08-26 11:42:27.179832 I | op-config: successfully set "global"="rbd_default_features"="3" option to the mon configuration database
2021-08-26 11:42:27.179866 I | op-config: deleting "log file" option from the mon configuration database
2021-08-26 11:42:27.184464 I | ceph-csi: successfully started CSI Ceph RBD
2021-08-26 11:42:27.189106 I | ceph-csi: successfully started CSI CephFS driver
2021-08-26 11:42:27.370311 I | op-k8sutil: CSI_PROVISIONER_TOLERATIONS="" (default)
2021-08-26 11:42:27.573555 I | op-k8sutil: CSI_PROVISIONER_NODE_AFFINITY="" (default)
2021-08-26 11:42:27.628666 I | op-config: successfully deleted "log file" option from the mon configuration database
2021-08-26 11:42:27.628706 I | op-mon: creating mon b
2021-08-26 11:42:27.777839 I | op-k8sutil: CSI_PLUGIN_TOLERATIONS="" (default)
2021-08-26 11:42:28.176486 I | op-k8sutil: CSI_PLUGIN_NODE_AFFINITY="" (default)
2021-08-26 11:42:28.576427 I | op-k8sutil: CSI_RBD_PLUGIN_TOLERATIONS="" (default)
2021-08-26 11:42:28.984734 I | op-k8sutil: CSI_RBD_PLUGIN_NODE_AFFINITY="" (default)
2021-08-26 11:42:29.187410 I | op-mon: mon "a" endpoint is [v2:10.152.183.94:3300,v1:10.152.183.94:6789]
2021-08-26 11:42:29.378658 I | op-k8sutil: CSI_RBD_PLUGIN_RESOURCE="" (default)
2021-08-26 11:42:29.611301 I | op-mon: mon "b" endpoint is [v2:10.152.183.70:3300,v1:10.152.183.70:6789]
2021-08-26 11:42:29.782035 I | op-k8sutil: CSI_RBD_PROVISIONER_TOLERATIONS="" (default)
2021-08-26 11:42:30.176649 I | op-k8sutil: CSI_RBD_PROVISIONER_NODE_AFFINITY="" (default)
2021-08-26 11:42:30.577510 I | op-k8sutil: CSI_RBD_PROVISIONER_RESOURCE="" (default)
2021-08-26 11:42:30.778651 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.152.183.70:6789","10.152.183.94:6789"]}] data:a=10.152.183.94:6789,b=10.152.183.70:6789 mapping:{"node":{"a":{"Name":"host22","Hostname":"host22","Address":"192.168.1.22"},"b":{"Name":"host23","Hostname":"host23","Address":"192.168.1.23"},"c":{"Name":"host24","Hostname":"host24","Address":"192.168.1.24"}}} maxMonId:-1]
2021-08-26 11:42:31.974615 I | op-k8sutil: CSI_CEPHFS_PLUGIN_TOLERATIONS="" (default)
2021-08-26 11:42:32.168784 I | op-k8sutil: CSI_CEPHFS_PLUGIN_NODE_AFFINITY="" (default)
2021-08-26 11:42:32.369363 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2021-08-26 11:42:32.369662 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2021-08-26 11:42:32.574870 I | op-k8sutil: CSI_CEPHFS_PLUGIN_RESOURCE="" (default)
2021-08-26 11:42:32.979311 I | op-k8sutil: CSI_CEPHFS_PROVISIONER_TOLERATIONS="" (default)
2021-08-26 11:42:33.207036 I | op-mon: 1 of 2 expected mon deployments exist. creating new deployment(s).
2021-08-26 11:42:33.224091 I | op-mon: deployment for mon rook-ceph-mon-a already exists. updating if needed
2021-08-26 11:42:33.255021 I | op-k8sutil: deployment "rook-ceph-mon-a" did not change, nothing to update
2021-08-26 11:42:33.372623 I | op-k8sutil: CSI_CEPHFS_PROVISIONER_NODE_AFFINITY="" (default)
2021-08-26 11:42:33.577366 I | op-mon: updating maxMonID from -1 to 1 after committing mon "b"
2021-08-26 11:42:33.782469 I | op-k8sutil: CSI_CEPHFS_PROVISIONER_RESOURCE="" (default)
2021-08-26 11:42:35.190129 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.152.183.94:6789","10.152.183.70:6789"]}] data:a=10.152.183.94:6789,b=10.152.183.70:6789 mapping:{"node":{"a":{"Name":"host22","Hostname":"host22","Address":"192.168.1.22"},"b":{"Name":"host23","Hostname":"host23","Address":"192.168.1.23"},"c":{"Name":"host24","Hostname":"host24","Address":"192.168.1.24"}}} maxMonId:1]
2021-08-26 11:42:35.190564 I | op-mon: waiting for mon quorum with [a b]
2021-08-26 11:42:35.378222 I | op-k8sutil: CSI_RBD_FSGROUPPOLICY="ReadWriteOnceWithFSType" (configmap)
2021-08-26 11:42:35.404314 I | ceph-csi: CSIDriver object updated for driver "rook-ceph.rbd.csi.ceph.com"
2021-08-26 11:42:35.777398 I | op-k8sutil: CSI_CEPHFS_FSGROUPPOLICY="None" (configmap)
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x1bd0b77]

goroutine 793 [running]:
github.com/rook/rook/pkg/operator/ceph/csi.v1CsiDriver.reCreateCSIDriverInfo(0x0, 0x0, 0x0, 0x290aee8, 0xc000110008, 0x0, 0x0)
	/home/rook/go/src/github.com/rook/rook/pkg/operator/ceph/csi/csidriver.go:93 +0x37
github.com/rook/rook/pkg/operator/ceph/csi.v1CsiDriver.createCSIDriverInfo(0x0, 0x0, 0x0, 0x290aee8, 0xc000110008, 0x29488d8, 0xc00012a000, 0xc0010b4700, 0x1d, 0xc000e89c80, ...)
	/home/rook/go/src/github.com/rook/rook/pkg/operator/ceph/csi/csidriver.go:75 +0x385
github.com/rook/rook/pkg/operator/ceph/csi.startDrivers(0x29488d8, 0xc00012a000, 0x2913d78, 0xc00051ed50, 0xc000054056, 0x9, 0xc000e9ca20, 0xc00082b1d0, 0xc0010b0300, 0xc0010b0300, ...)
	/home/rook/go/src/github.com/rook/rook/pkg/operator/ceph/csi/spec.go:617 +0x13a8
github.com/rook/rook/pkg/operator/ceph/csi.ValidateAndConfigureDrivers(0xc000529680, 0xc000054056, 0x9, 0xc000543c60, 0x10, 0xc000a19150, 0x10, 0xc000e9ca20, 0xc00082b1d0)
	/home/rook/go/src/github.com/rook/rook/pkg/operator/ceph/csi/csi.go:60 +0x30b
created by github.com/rook/rook/pkg/operator/ceph.(*Operator).updateDrivers
	/home/rook/go/src/github.com/rook/rook/pkg/operator/ceph/operator.go:269 +0x3a5
  • Crashing pod(s) logs, if necessary
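
For context, the panic above is the standard Go failure mode of dereferencing a nil pointer inside a method: the zeroed receiver arguments (0x0, 0x0, 0x0) in the reCreateCSIDriverInfo frame suggest the CSI driver helper was invoked with an uninitialised value. Below is a minimal, hypothetical sketch of that pattern (illustrative only, not Rook's actual code; the struct and field names are invented):

package main

import "fmt"

// clientset stands in for whatever Kubernetes client the CSI driver helper needs.
type clientset struct {
	name string
}

// csiDriver mirrors the shape of a helper struct that carries a client pointer.
type csiDriver struct {
	client *clientset // remains nil if the caller never initialises it
}

// reCreate dereferences d.client without a nil check, analogous to what the
// frame at csidriver.go:93 appears to do according to the stack trace.
func (d csiDriver) reCreate() {
	fmt.Println("recreating CSIDriver via", d.client.name) // panics when client is nil
}

func main() {
	var d csiDriver // zero value: client is nil, matching the 0x0 receiver in the trace
	d.reCreate()    // panic: runtime error: invalid memory address or nil pointer dereference
}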

Environment:

  • OS (e.g. from /etc/os-release):
    Ubuntu 20.04.3 LTS
  • Kernel (e.g. uname -a):
    Linux host24 5.4.0-81-generic #91-Ubuntu SMP Thu Jul 15 19:09:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
  • Cloud provider or hardware configuration:
    snap install microk8s --classic --channel=1.21/stable
  • Rook version (use rook version inside of a Rook Pod):
rook: v1.7.1
go: go1.16.3
  • Storage backend version (e.g. for ceph do ceph -v):
    ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.3-3+90fd5f3d2aea0a", GitCommit:"90fd5f3d2aea0a5788b15a6f0a05e70381af7787", GitTreeState:"clean", BuildDate:"2021-07-16T22:06:06Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.3-3+90fd5f3d2aea0a", GitCommit:"90fd5f3d2aea0a5788b15a6f0a05e70381af7787", GitTreeState:"clean", BuildDate:"2021-07-16T22:01:05Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
    microk8s
  • Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox):
HEALTH_WARN mons are allowing insecure global_id reclaim
yanovich added the bug label Aug 26, 2021
@yanovich (Author)

Diff between the Rook example manifests and the actual config used:

diff --git a/rook/cluster.yaml b/rook/cluster.yaml
index cb2ac1e..68c0eed 100644
--- a/rook/cluster.yaml
+++ b/rook/cluster.yaml
@@ -65,9 +65,9 @@ spec:
     # serve the dashboard under a subpath (useful when you are accessing the dashboard via a reverse proxy)
     # urlPrefix: /ceph-dashboard
     # serve the dashboard at the given port.
-    # port: 8443
+    port: 8443
     # serve the dashboard using SSL
-    ssl: true
+    ssl: false
   # enable prometheus alerting for cluster
   monitoring:
     # requires Prometheus to be pre-installed
@@ -207,8 +207,9 @@ spec:
 #    mgr: rook-ceph-mgr-priority-class
   storage: # cluster level storage configuration and selection
     useAllNodes: true
-    useAllDevices: true
+    useAllDevices: false
     #deviceFilter:
+    devicePathFilter: ^/dev/disk/by-partlabel/ceph[0-9]*
     config:
       # crushRoot: "custom-root" # specify a non-default root label for the CRUSH map
       # metadataDevice: "md0" # specify a non-rotational storage so ceph-volume will use it as block db device of bluestore.
diff --git a/rook/operator.yaml b/rook/operator.yaml
index b0601d9..ba5ecb1 100644
--- a/rook/operator.yaml
+++ b/rook/operator.yaml
@@ -93,7 +93,7 @@ data:
   # CSI_RBD_PLUGIN_UPDATE_STRATEGY: "OnDelete"
 
   # kubelet directory path, if kubelet configured to use other than /var/lib/kubelet path.
-  # ROOK_CSI_KUBELET_DIR_PATH: "/var/lib/kubelet"
+  ROOK_CSI_KUBELET_DIR_PATH: "/var/snap/microk8s/common/var/lib/kubelet"
 
   # Labels to add to the CSI CephFS Deployments and DaemonSets Pods.
   # ROOK_CSI_CEPHFS_POD_LABELS: "key1=value1,key2=value2"

@Madhu-1 (Member) commented Aug 26, 2021

Fixed by #8582; the fix will be available in the next Rook release. Closing.

Madhu-1 closed this as completed Aug 26, 2021