2024-05-20 12:32:46.008574 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.009222 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:32:46.009232 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:32:46.009247 I | operator: setting up schemes
2024-05-20 12:32:46.009735 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:46.009754 D | ceph-crashcollector-controller: reconciling node: "worker02c"
2024-05-20 12:32:46.010526 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.010870 I | operator: setting up the controller-runtime manager
2024-05-20 12:32:46.011291 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.011349 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.011383 D | ceph-crashcollector-controller: reconciling node: "ceph02e"
2024-05-20 12:32:46.012251 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.013479 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:46.013550 D | ceph-crashcollector-controller: reconciling node: "ceph02o"
2024-05-20 12:32:46.014523 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.015728 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:46.015800 D | ceph-crashcollector-controller: reconciling node: "worker02l"
2024-05-20 12:32:46.016658 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.017524 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.017564 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.017578 D | ceph-crashcollector-controller: reconciling node: "master02b"
2024-05-20 12:32:46.018378 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.019156 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.019196 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.019218 D | ceph-crashcollector-controller: reconciling node: "ceph02n"
2024-05-20 12:32:46.020009 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.021049 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:46.021074 D | ceph-crashcollector-controller: reconciling node: "worker02b"
2024-05-20 12:32:46.021842 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.022681 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.022708 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.022724 D | ceph-crashcollector-controller: reconciling node: "worker02o"
2024-05-20 12:32:46.023452 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.024216 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.024251 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.024274 D | ceph-crashcollector-controller: reconciling node: "worker02e"
2024-05-20 12:32:46.025027 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.025810 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.025836 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.025849 D | ceph-crashcollector-controller: reconciling node: "worker02k"
2024-05-20 12:32:46.026589 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.027299 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.027334 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.027354 D | ceph-crashcollector-controller: reconciling node: "worker02h"
2024-05-20 12:32:46.028213 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.029036 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.029064 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.029077 D | ceph-crashcollector-controller: reconciling node: "ceph02d"
2024-05-20 12:32:46.029810 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.030899 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:46.030919 D | ceph-crashcollector-controller: reconciling node: "ceph02m"
2024-05-20 12:32:46.031635 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.032671 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:46.032691 D | ceph-crashcollector-controller: reconciling node: "master02c"
2024-05-20 12:32:46.033415 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.034250 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.034285 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.034307 D | ceph-crashcollector-controller: reconciling node: "worker02p"
2024-05-20 12:32:46.035052 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.036004 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.036030 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.036043 D | ceph-crashcollector-controller: reconciling node: "master02a"
2024-05-20 12:32:46.036760 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.037594 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.037627 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.037649 D | ceph-crashcollector-controller: reconciling node: "worker02q"
2024-05-20 12:32:46.038381 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.039165 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.039192 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.039205 D | ceph-crashcollector-controller: reconciling node: "worker02n"
2024-05-20 12:32:46.039960 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.040808 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.040841 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.040862 D | ceph-crashcollector-controller: reconciling node: "worker02g"
2024-05-20 12:32:46.041622 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.042390 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.042416 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.042429 D | ceph-crashcollector-controller: reconciling node: "worker02f"
2024-05-20 12:32:46.043160 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.045965 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.046000 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.046022 D | ceph-crashcollector-controller: reconciling node: "worker02m"
2024-05-20 12:32:46.046766 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.047536 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.047563 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.047576 D | ceph-crashcollector-controller: reconciling node: "ceph02b"
2024-05-20 12:32:46.048313 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.049336 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:46.049354 D | ceph-crashcollector-controller: reconciling node: "worker02r"
2024-05-20 12:32:46.050074 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.050945 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.050981 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.051003 D | ceph-crashcollector-controller: reconciling node: "worker02d"
2024-05-20 12:32:46.051768 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.052420 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.052446 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.052459 D | ceph-crashcollector-controller: reconciling node: "ceph02c"
2024-05-20 12:32:46.053183 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.054211 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:46.054369 I | operator: successfully started the controller-runtime manager
2024-05-20 12:32:46.607747 I | op-k8sutil: Retrying 19 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2024-05-20 12:32:46.613523 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:32:46.620237 I | ceph-cluster-controller: successfully started
2024-05-20 12:32:46.620269 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:32:46.620278 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:32:46.620281 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:32:46.620296 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:32:46.620302 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:32:46.620310 I | ceph-block-pool-controller: successfully started
2024-05-20 12:32:46.620321 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:32:46.620332 I | ceph-object-realm-controller: successfully started
2024-05-20 12:32:46.620339 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:32:46.620346 I | ceph-object-zone-controller: successfully started
2024-05-20 12:32:46.620412 I | ceph-object-controller: successfully started
2024-05-20 12:32:46.620451 I | ceph-file-controller: successfully started
2024-05-20 12:32:46.620469 I | ceph-nfs-controller: successfully started
2024-05-20 12:32:46.620483 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:32:46.620495 I | ceph-client-controller: successfully started
2024-05-20 12:32:46.620503 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:32:46.620524 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:32:46.620533 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:32:46.620541 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:32:46.620548 I | ceph-bucket-topic: successfully started
2024-05-20 12:32:46.620553 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:46.620559 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:46.620564 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:32:46.620570 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:32:46.621446 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:32:46.621465 I | operator: starting the controller-runtime manager
2024-05-20 12:32:46.722755 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:46.722804 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:46.723496 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:46.723540 D | ceph-cluster-controller: node watcher: node "ceph02m" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:46.723576 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:46.723611 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:46.723643 D | ceph-cluster-controller: node watcher: node "ceph02o" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:46.723679 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:46.723713 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:46.723760 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:46.723793 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:46.723824 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:46.723861 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:46.723892 D | ceph-cluster-controller: node watcher: node "ceph02e" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:46.723928 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:46.724029 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:32:46.724136 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:46.724146 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:46.724170 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:46.724189 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:46.724195 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:46.724205 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:32:46.724250 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:32:46.724329 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:32:46.724358 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:32:46.724372 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:32:46.724396 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:32:46.724410 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:32:46.724418 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:32:46.724430 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:32:46.724444 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:32:46.724462 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:46.724470 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:46.724475 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:32:46.724489 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:32:46.724499 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:32:46.724510 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:46.724564 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:32:46.724665 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:32:46.724778 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:32:46.724821 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:32:46.724837 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:32:46.725036 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:32:46.725085 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:32:46.725105 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:32:46.725236 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:32:46.725290 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:32:46.725305 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:32:46.725444 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:32:46.725500 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:32:46.725509 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:32:46.725609 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:32:46.725633 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:32:46.725646 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:32:46.725651 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:32:46.926948 D | ceph-crashcollector-controller: reconciling node: "worker02o"
2024-05-20 12:32:46.928761 D | ceph-spec: "ceph-file-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:32:46.928773 D | ceph-spec: "ceph-file-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:32:46.928935 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph-external"
2024-05-20 12:32:46.928956 D | ceph-spec: CephCluster "rook-ceph-external" status: "Connecting". "Attempting to connect to an external Ceph cluster"
2024-05-20 12:32:46.929172 D | clusterdisruption-controller: reconciling "rook-ceph/"
2024-05-20 12:32:46.929344 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:32:46.929355 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:32:46.930686 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:32:46.930743 D | operator: reconciling rook-ceph/rook-ceph-operator-config
2024-05-20 12:32:46.930756 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.930764 I | op-k8sutil: ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS="15" (configmap)
2024-05-20 12:32:46.930780 I | op-k8sutil: ROOK_LOG_LEVEL="DEBUG" (configmap)
2024-05-20 12:32:46.930789 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:32:46.930802 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (configmap)
2024-05-20 12:32:46.930843 E | ceph-file-controller: failed to reconcile failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:32:46.930852 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/replicapool". failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:32:46.930864 E | ceph-file-controller: failed to reconcile CephFilesystem "rook-ceph/myfs". failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:32:46.930884 E | clusterdisruption-controller: failed to delete all the legacy drain-canary pods with label "rook-ceph-drain-canary": Delete "https://10.96.0.1:443/apis/apps/v1/namespaces/rook-ceph/deployments?labelSelector=app%3Drook-ceph-drain-canary": context canceled
2024-05-20 12:32:46.930925 D | clusterdisruption-controller: reconciling "rook-ceph-external/rook-ceph-external"
2024-05-20 12:32:46.930959 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph-external/secrets/rook-ceph-mon": context canceled
2024-05-20 12:32:46.931094 E | operator: failed to reconcile failed to stop device discovery daemonset: context canceled
2024-05-20 12:32:46.931116 D | clusterdisruption-controller: reconciling "rook-ceph/rook-ceph"
2024-05-20 12:32:46.931126 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:32:46.931253 E | clusterdisruption-controller: failed to delete all the legacy drain-canary pods with label "rook-ceph-drain-canary": context canceled
2024-05-20 12:32:46.932891 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (default)
2024-05-20 12:32:46.932901 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:32:46.932906 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:32:46.932958 W | ceph-csi: could not find deployment owner reference to assign to csi drivers. could not find pod "rook-ceph-operator-6bc54d9b6f-thxtc" in namespace "rook-ceph" to find deployment owner reference: context canceled
2024-05-20 12:32:46.933007 E | ceph-csi: failed to reconcile failed creating csi config map: failed to create initial csi config map "rook-ceph-csi-config" (in "rook-ceph"): context canceled
2024-05-20 12:32:46.933425 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.933518 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.933599 D | ceph-crashcollector-controller: reconciling node: "ceph02m"
2024-05-20 12:32:46.934747 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.935991 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:46.936014 D | ceph-crashcollector-controller: reconciling node: "worker02q"
2024-05-20 12:32:46.937003 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.939578 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.939617 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.939634 D | ceph-crashcollector-controller: reconciling node: "ceph02e"
2024-05-20 12:32:46.940484 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:32:46.940495 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:32:46.940502 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.940516 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:46.940522 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:46.940605 I | operator: setting up schemes
2024-05-20 12:32:46.941762 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:46.941771 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:46.941801 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:46.941824 D | ceph-crashcollector-controller: reconciling node: "worker02d"
2024-05-20 12:32:46.942333 I | operator: setting up the controller-runtime manager
2024-05-20 12:32:46.943048 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.943830 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.943862 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.943877 D | ceph-crashcollector-controller: reconciling node: "ceph02d"
2024-05-20 12:32:46.944861 D | ceph-spec: found existing monitor secrets for cluster rook-ceph-external
2024-05-20 12:32:46.945362 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.946842 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:46.946866 D | ceph-crashcollector-controller: reconciling node: "worker02k"
2024-05-20 12:32:46.948190 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.949064 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.949096 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.949147 D | ceph-crashcollector-controller: reconciling node: "worker02f"
2024-05-20 12:32:46.950166 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.950864 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.950895 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.950917 D | ceph-crashcollector-controller: reconciling node: "worker02e"
2024-05-20 12:32:46.952146 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.952860 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.952892 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.952913 D | ceph-crashcollector-controller: reconciling node: "ceph02b"
2024-05-20 12:32:46.954009 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.955223 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:46.955245 D | ceph-crashcollector-controller: reconciling node: "ceph02c"
2024-05-20 12:32:46.956244 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.957471 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:46.957493 D | ceph-crashcollector-controller: reconciling node: "worker02b"
2024-05-20 12:32:46.958390 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.959087 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.959115 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.959131 D | ceph-crashcollector-controller: reconciling node: "worker02r"
2024-05-20 12:32:46.959967 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.960648 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.960677 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.960694 D | ceph-crashcollector-controller: reconciling node: "worker02c"
2024-05-20 12:32:46.961545 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.962251 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.962289 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.962310 D | ceph-crashcollector-controller: reconciling node: "master02c"
2024-05-20 12:32:46.963112 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.964075 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.964106 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.964120 D | ceph-crashcollector-controller: reconciling node: "ceph02n"
2024-05-20 12:32:46.964915 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.966025 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:46.966046 D | ceph-crashcollector-controller: reconciling node: "master02b"
2024-05-20 12:32:46.966821 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.967596 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.967630 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.967644 D | ceph-crashcollector-controller: reconciling node: "ceph02o"
2024-05-20 12:32:46.968463 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.969676 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:46.969699 D | ceph-crashcollector-controller: reconciling node: "worker02a"
2024-05-20 12:32:46.970514 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.971335 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.971364 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.971379 D | ceph-crashcollector-controller: reconciling node: "worker02m"
2024-05-20 12:32:46.972162 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.972893 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.972926 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.972942 D | ceph-crashcollector-controller: reconciling node: "ceph02a"
2024-05-20 12:32:46.973799 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.974979 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:46.975000 D | ceph-crashcollector-controller: reconciling node: "worker02h"
2024-05-20 12:32:46.975768 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.976572 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.976618 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.976636 D | ceph-crashcollector-controller: reconciling node: "worker02g"
2024-05-20 12:32:46.977421 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.978183 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.978212 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.978229 D | ceph-crashcollector-controller: reconciling node: "worker02p"
2024-05-20 12:32:46.979037 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.979866 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.979901 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.979917 D | ceph-crashcollector-controller: reconciling node: "worker02l"
2024-05-20 12:32:46.980765 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.981606 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.981641 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.981657 D | ceph-crashcollector-controller: reconciling node: "master02a"
2024-05-20 12:32:46.982516 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.983245 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.983281 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:46.983297 D | ceph-crashcollector-controller: reconciling node: "worker02n"
2024-05-20 12:32:46.984175 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:46.986803 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:46.986848 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:47.079109 I | ceph-spec: parsing mon endpoints: prceph-mon02=10.11.10.30:6789,prceph-mon03=10.11.10.93:6789,prceph-mon01=10.11.10.190:6789
2024-05-20 12:32:47.079138 D | ceph-spec: loaded: maxMonID=2, mons=map[prceph-mon01:0xc009fa8580 prceph-mon02:0xc009fa8500 prceph-mon03:0xc009fa8540], assignment=&{Schedule:map[]}
2024-05-20 12:32:47.079144 I | ceph-spec: found the cluster info to connect to the external cluster. will use "client.admin" to check health and monitor status. mons=map[prceph-mon01:0xc009fa8580 prceph-mon02:0xc009fa8500 prceph-mon03:0xc009fa8540]
2024-05-20 12:32:47.278965 I | cephclient: writing config file /var/lib/rook/rook-ceph-external/rook-ceph-external.config
2024-05-20 12:32:47.279159 I | cephclient: generated admin config in /var/lib/rook/rook-ceph-external
2024-05-20 12:32:47.279219 I | ceph-cluster-controller: external cluster identity established
2024-05-20 12:32:47.279250 I | cephclient: getting or creating ceph auth key "client.csi-rbd-provisioner"
2024-05-20 12:32:47.279357 D | exec: Running command: ceph auth get-or-create-key client.csi-rbd-provisioner mon profile rbd mgr allow rw osd profile rbd --connect-timeout=15 --cluster=rook-ceph-external --conf=/var/lib/rook/rook-ceph-external/rook-ceph-external.config --name=client.admin --keyring=/var/lib/rook/rook-ceph-external/client.admin.keyring --format json
2024-05-20 12:32:47.545236 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:32:47.553217 I | ceph-cluster-controller: successfully started
2024-05-20 12:32:47.553295 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:32:47.553385 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:32:47.553422 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:32:47.553460 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:32:47.553483 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:32:47.553509 I | ceph-block-pool-controller: successfully started
2024-05-20 12:32:47.553554 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:32:47.553716 I | ceph-object-realm-controller: successfully started
2024-05-20 12:32:47.553804 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:32:47.553837 I | ceph-object-zone-controller: successfully started
2024-05-20 12:32:47.553953 I | ceph-object-controller: successfully started
2024-05-20 12:32:47.554057 I | ceph-file-controller: successfully started
2024-05-20 12:32:47.554097 I | ceph-nfs-controller: successfully started
2024-05-20 12:32:47.554243 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:32:47.554287 I | ceph-client-controller: successfully started
2024-05-20 12:32:47.554328 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:32:47.554361 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:32:47.554395 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:32:47.554427 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:32:47.554453 I | ceph-bucket-topic: successfully started
2024-05-20 12:32:47.554477 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:47.554567 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:47.554605 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:32:47.554645 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:32:47.555580 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:32:47.555631 I | operator: starting the controller-runtime manager
2024-05-20 12:32:47.662688 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:32:47.662774 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:32:47.662829 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:47.662862 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:47.662875 D | ceph-cluster-controller: node watcher: node "ceph02o" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:47.662914 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:47.662948 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:47.662958 D | ceph-cluster-controller: node watcher: node "ceph02m" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:47.662966 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:32:47.662984 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:47.662988 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:32:47.662997 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:32:47.663009 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:32:47.663015 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:32:47.663019 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:47.663029 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:32:47.663045 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:47.663049 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:32:47.663059 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:32:47.663067 D | ceph-cluster-controller: node watcher: node "ceph02e" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:47.663071 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:32:47.663083 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:32:47.663096 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:32:47.663115 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:32:47.663157 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:47.663189 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:32:47.664012 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:47.664028 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:47.664036 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:32:47.664046 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:47.664050 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:47.664084 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:47.664115 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:47.664123 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:47.664128 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:47.664148 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:47.664152 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:47.664155 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:47.664160 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:32:47.664169 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:32:47.664180 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:47.664204 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:32:47.664236 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:32:47.664254 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:32:47.664266 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:32:47.664303 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:32:47.664333 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:32:47.664344 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:32:47.664354 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:32:47.664358 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:32:47.664373 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:32:47.664380 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:32:47.664408 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:32:47.664422 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:32:47.664439 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:32:47.826315 I | cephclient: getting or creating ceph auth key "client.csi-rbd-node"
2024-05-20 12:32:47.826407 D | exec: Running command: ceph auth get-or-create-key client.csi-rbd-node mon profile rbd mgr allow rw osd profile rbd --connect-timeout=15 --cluster=rook-ceph-external --conf=/var/lib/rook/rook-ceph-external/rook-ceph-external.config --name=client.admin --keyring=/var/lib/rook/rook-ceph-external/client.admin.keyring --format json
2024-05-20 12:32:47.861521 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:32:47.861598 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:32:47.861916 I | operator: successfully started the controller-runtime manager
2024-05-20 12:32:47.866393 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:32:47.866404 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:32:47.866417 I | operator: setting up schemes
2024-05-20 12:32:47.868042 I | operator: setting up the controller-runtime manager
2024-05-20 12:32:48.412597 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-provisioner"
2024-05-20 12:32:48.412753 I | op-cfg-keyring: Error getting or creating key for "client.csi-cephfs-provisioner". Attempting to update capabilities in case the user already exists. failed get-or-create-key client.csi-cephfs-provisioner: context canceled
2024-05-20 12:32:48.412824 I | cephclient: updating ceph auth caps "client.csi-cephfs-provisioner" to [mon allow r mgr allow rw osd allow rw tag cephfs metadata=*]
2024-05-20 12:32:48.412960 D | ceph-spec: CephCluster "rook-ceph-external" status: "Progressing". "failed to create csi kubernetes secrets: failed to create csi cephfs provisioner ceph keyring: failed to get, create, or update auth key for client.csi-cephfs-provisioner: failed get-or-create-key client.csi-cephfs-provisioner: context canceled"
2024-05-20 12:32:48.421555 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:32:48.421614 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph-external/rook-ceph-external"
2024-05-20 12:32:48.421740 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2024-05-20 12:32:48.423194 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:48.423206 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:48.423312 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:48.423323 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:48.424647 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2024-05-20 12:32:48.427141 I | ceph-spec: parsing mon endpoints: e=10.102.64.224:6789,h=10.109.166.21:6789,i=10.101.141.73:6789
2024-05-20 12:32:48.427245 D | ceph-spec: loaded: maxMonID=8, mons=map[e:0xc01e940700 h:0xc01e940740 i:0xc01e940780], assignment=&{Schedule:map[e:0xc0168079c0 h:0xc016807a00 i:0xc016807a40]}
2024-05-20 12:32:48.434335 I | ceph-cluster-controller: enabling ceph mon monitoring goroutine for cluster "rook-ceph"
2024-05-20 12:32:48.434351 I | op-osd: ceph osd status in namespace "rook-ceph" check interval "1m0s"
2024-05-20 12:32:48.434354 I | ceph-cluster-controller: enabling ceph osd monitoring goroutine for cluster "rook-ceph"
2024-05-20 12:32:48.434358 I | ceph-cluster-controller: ceph status check interval is 1m0s
2024-05-20 12:32:48.434361 I | ceph-cluster-controller: enabling ceph status monitoring goroutine for cluster "rook-ceph"
2024-05-20 12:32:48.434438 D | op-mon: ceph mon status in namespace "rook-ceph" check interval "45s"
2024-05-20 12:32:48.434445 D | ceph-cluster-controller: checking health of cluster
2024-05-20 12:32:48.434556 D | exec: Running command: ceph status --format json --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2024-05-20 12:32:48.434587 I | op-mon: stopping monitoring of mons in namespace "rook-ceph"
2024-05-20 12:32:48.464016 D | ceph-cluster-controller: cluster spec successfully validated
2024-05-20 12:32:48.464061 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "Detecting Ceph version"
2024-05-20 12:32:48.470607 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:32:48.478632 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:48.478644 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:48.478655 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:48.478663 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:48.478861 I | ceph-spec: detecting the ceph image version for image quay.io/ceph/ceph:v15.2.15...
2024-05-20 12:32:48.483936 I | ceph-cluster-controller: successfully started
2024-05-20 12:32:48.483976 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:32:48.483986 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:32:48.483989 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:32:48.483995 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:32:48.484001 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:32:48.484012 I | ceph-block-pool-controller: successfully started
2024-05-20 12:32:48.484024 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:32:48.484038 I | ceph-object-realm-controller: successfully started
2024-05-20 12:32:48.486685 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:32:48.486719 I | ceph-object-zone-controller: successfully started
2024-05-20 12:32:48.487034 I | ceph-object-controller: successfully started
2024-05-20 12:32:48.487062 I | ceph-file-controller: successfully started
2024-05-20 12:32:48.487082 I | ceph-nfs-controller: successfully started
2024-05-20 12:32:48.487114 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:32:48.487132 I | ceph-client-controller: successfully started
2024-05-20 12:32:48.487164 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:32:48.487184 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:32:48.487194 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:32:48.487205 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:32:48.487238 I | ceph-bucket-topic: successfully started
2024-05-20 12:32:48.487245 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:48.487252 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:48.487260 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:32:48.487267 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:32:48.487501 D | op-k8sutil: ConfigMap rook-ceph-detect-version is already deleted
2024-05-20 12:32:48.491008 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:32:48.491033 I | operator: starting the controller-runtime manager
2024-05-20 12:32:48.491585 I | op-k8sutil: Removing previous job rook-ceph-detect-version to start a new one
2024-05-20 12:32:48.533766 I | op-k8sutil: batch job rook-ceph-detect-version still exists
2024-05-20 12:32:48.592316 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2024-05-20 12:32:48.592360 D | ceph-cluster-controller: node watcher: node "ceph02o" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:48.592395 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:48.592428 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:48.592478 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:48.592538 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:48.592572 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:48.592603 D | ceph-cluster-controller: node watcher: node "ceph02m" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:48.592637 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:48.592668 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:48.592697 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:48.592732 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:48.592762 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:48.593249 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:32:48.594575 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:48.594613 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:48.595324 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:48.595337 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:48.595995 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:32:48.596167 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:32:48.596235 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:32:48.596286 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:32:48.596351 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:32:48.596364 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:32:48.596383 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:32:48.596390 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:48.596430 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:32:48.596444 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:48.596457 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:48.596463 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:48.596491 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:32:48.596520 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:48.596542 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:48.596556 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:32:48.596763 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:32:48.596808 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:32:48.596843 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:32:48.596899 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:32:48.596926 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:32:48.596965 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:32:48.596985 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:32:48.597035 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:32:48.597056 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:32:48.597086 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:32:48.597102 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:32:48.597109 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:32:48.597119 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:32:48.597168 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:32:48.597184 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:32:48.597192 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:32:48.597204 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:32:48.597213 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:32:48.597224 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:32:48.597232 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:32:48.597237 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:32:48.608008 I | op-k8sutil: Retrying 18 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2024-05-20 12:32:48.796394 D | ceph-spec: "ceph-file-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:32:48.796409 D | ceph-spec: "ceph-file-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:32:48.797516 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:32:48.797529 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:32:48.797532 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph-external"
2024-05-20 12:32:48.797561 D | ceph-spec: CephCluster "rook-ceph-external" status: "Connecting". "Attempting to connect to an external Ceph cluster"
2024-05-20 12:32:48.797618 D | clusterdisruption-controller: reconciling "rook-ceph/"
2024-05-20 12:32:48.797931 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:32:48.798114 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:32:48.798169 I | op-osd: stopping monitoring of OSDs in namespace "rook-ceph"
2024-05-20 12:32:48.798216 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph-external/secrets/rook-ceph-mon": context canceled
2024-05-20 12:32:48.798311 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:32:48.798322 D | operator: reconciling rook-ceph/rook-ceph-operator-config
2024-05-20 12:32:48.798338 I | op-k8sutil: ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS="15" (configmap)
2024-05-20 12:32:48.798355 I | op-k8sutil: ROOK_LOG_LEVEL="DEBUG" (configmap)
2024-05-20 12:32:48.798361 E | clusterdisruption-controller: failed to delete all the legacy drain-canary pods with label "rook-ceph-drain-canary": context canceled
2024-05-20 12:32:48.798417 D | clusterdisruption-controller: reconciling "rook-ceph-external/rook-ceph-external"
2024-05-20 12:32:48.798469 D | clusterdisruption-controller: reconciling "rook-ceph/rook-ceph"
2024-05-20 12:32:48.798522 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/replicapool". failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:32:48.798550 E | clusterdisruption-controller: failed to delete all the legacy drain-canary pods with label "rook-ceph-drain-canary": context canceled
2024-05-20 12:32:48.798574 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (configmap)
2024-05-20 12:32:48.798674 E | ceph-file-controller: failed to reconcile failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:32:48.798707 E | ceph-file-controller: failed to reconcile CephFilesystem "rook-ceph/myfs". failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:32:48.798835 E | operator: failed to reconcile failed to stop device discovery daemonset: context canceled
2024-05-20 12:32:48.807242 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:48.807250 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:48.807595 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:48.807603 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:48.808177 D | ceph-spec: CephCluster "rook-ceph-external" status: "Progressing". "failed to populate external cluster info: context canceled"
2024-05-20 12:32:48.808541 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:48.808604 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:48.812499 E | ceph-spec: failed to update cluster condition to {Type:Progressing Status:False Reason:ClusterProgressing Message:failed to populate external cluster info: context canceled LastHeartbeatTime:2024-05-20 12:32:48.808173432 +0000 UTC m=+10.143338573 LastTransitionTime:2024-05-20 12:32:48.808173366 +0000 UTC m=+10.143338517}. failed to update object "rook-ceph-external/rook-ceph-external" status: Operation cannot be fulfilled on cephclusters.ceph.rook.io "rook-ceph-external": the object has been modified; please apply your changes to the latest version and try again
2024-05-20 12:32:48.812541 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:32:48.812574 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph-external/rook-ceph-external"
2024-05-20 12:32:48.812636 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2024-05-20 12:32:48.812682 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:32:48.812733 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph/rook-ceph"
2024-05-20 12:32:48.812832 I | operator: successfully started the controller-runtime manager
2024-05-20 12:32:49.011459 D | ceph-cluster-controller: cluster status: {Health:{Status:HEALTH_WARN Checks:map[MDS_CACHE_OVERSIZED:{Severity:HEALTH_WARN Summary:{Message:1 MDSs report oversized cache}}]} FSID:a72c4707-301f-4acd-8007-41af0a11a860 ElectionEpoch:1280 Quorum:[0 1 2] QuorumNames:[e h i] MonMap:{Epoch:15 FSID: CreatedTime: ModifiedTime: Mons:[]} OsdMap:{OsdMap:{Epoch:0 NumOsd:0 NumUpOsd:0 NumInOsd:0 Full:false NearFull:false NumRemappedPgs:0}} PgMap:{PgsByState:[{StateName:active+clean Count:177}] Version:0 NumPgs:177 DataBytes:877840684882 UsedBytes:3048952655872 AvailableBytes:2104974544896 TotalBytes:5153927200768 ReadBps:2726449 WriteBps:7390709 ReadOps:74 WriteOps:310 RecoveryBps:0 RecoveryObjectsPerSec:0 RecoveryKeysPerSec:0 CacheFlushBps:0 CacheEvictBps:0 CachePromoteBps:0} MgrMap:{Epoch:0 ActiveGID:0 ActiveName: ActiveAddr: Available:true Standbys:[]} Fsmap:{Epoch:307807 ID:1 Up:1 In:1 Max:1 ByRank:[{FilesystemID:1 Rank:0 Name:myfs-a Status:up:active Gid:51498894} {FilesystemID:1 Rank:0 Name:myfs-b Status:up:standby-replay Gid:51918032}] UpStandby:0}}
2024-05-20 12:32:49.011532 E | ceph-cluster-controller: failed to retrieve ceph cluster "rook-ceph" in namespace "rook-ceph" to update status to &{Health:{Status:HEALTH_WARN Checks:map[MDS_CACHE_OVERSIZED:{Severity:HEALTH_WARN Summary:{Message:1 MDSs report oversized cache}}]} FSID:a72c4707-301f-4acd-8007-41af0a11a860 ElectionEpoch:1280 Quorum:[0 1 2] QuorumNames:[e h i] MonMap:{Epoch:15 FSID: CreatedTime: ModifiedTime: Mons:[]} OsdMap:{OsdMap:{Epoch:0 NumOsd:0 NumUpOsd:0 NumInOsd:0 Full:false NearFull:false NumRemappedPgs:0}} PgMap:{PgsByState:[{StateName:active+clean Count:177}] Version:0 NumPgs:177 DataBytes:877840684882 UsedBytes:3048952655872 AvailableBytes:2104974544896 TotalBytes:5153927200768 ReadBps:2726449 WriteBps:7390709 ReadOps:74 WriteOps:310 RecoveryBps:0 RecoveryObjectsPerSec:0 RecoveryKeysPerSec:0 CacheFlushBps:0 CacheEvictBps:0 CachePromoteBps:0} MgrMap:{Epoch:0 ActiveGID:0 ActiveName: ActiveAddr: Available:true Standbys:[]} Fsmap:{Epoch:307807 ID:1 Up:1 In:1 Max:1 ByRank:[{FilesystemID:1 Rank:0 Name:myfs-a Status:up:active Gid:51498894} {FilesystemID:1 Rank:0 Name:myfs-b Status:up:standby-replay Gid:51918032}] UpStandby:0}}
2024-05-20 12:32:49.011540 D | ceph-cluster-controller: checking for stuck pods on not ready nodes
2024-05-20 12:32:49.011599 E | ceph-cluster-controller: failed to delete pod on not ready nodes. failed to get NotReady nodes: failed to list kubernetes nodes. context canceled
2024-05-20 12:32:49.011609 D | ceph-cluster-controller: Health: "HEALTH_WARN", code: "MDS_CACHE_OVERSIZED", message: "1 MDSs report oversized cache"
2024-05-20 12:32:49.011615 I | ceph-cluster-controller: stopping monitoring of ceph status
2024-05-20 12:32:49.079612 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:32:49.079676 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:32:49.079709 I | operator: setting up schemes
2024-05-20 12:32:49.088871 I | operator: setting up the controller-runtime manager
2024-05-20 12:32:49.695340 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:32:49.702498 I | ceph-cluster-controller: successfully started
2024-05-20 12:32:49.702610 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:32:49.702624 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:32:49.702627 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:32:49.702631 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:32:49.702635 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:32:49.702642 I | ceph-block-pool-controller: successfully started
2024-05-20 12:32:49.702651 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:32:49.702660 I | ceph-object-realm-controller: successfully started
2024-05-20 12:32:49.702680 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:32:49.702689 I | ceph-object-zone-controller: successfully started
2024-05-20 12:32:49.702757 I | ceph-object-controller: successfully started
2024-05-20 12:32:49.702775 I | ceph-file-controller: successfully started
2024-05-20 12:32:49.702792 I | ceph-nfs-controller: successfully started
2024-05-20 12:32:49.702803 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:32:49.702814 I | ceph-client-controller: successfully started
2024-05-20 12:32:49.702834 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:32:49.702848 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:32:49.702858 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:32:49.702865 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:32:49.702873 I | ceph-bucket-topic: successfully started
2024-05-20 12:32:49.702878 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:49.702884 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:49.702940 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:32:49.702946 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:32:49.703860 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:32:49.703876 I | operator: starting the controller-runtime manager
2024-05-20 12:32:49.804717 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:32:49.805011 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:49.805020 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:49.805030 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:32:49.805062 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:49.805066 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:49.805207 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:32:49.805224 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:32:49.805263 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:32:49.805289 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:32:49.805352 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:32:49.805393 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:32:49.805410 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:32:49.805419 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:32:49.805432 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:32:49.805458 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:32:49.805483 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:32:49.805510 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:32:49.805524 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:32:49.805533 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:32:49.805555 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:32:49.805570 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:32:49.805588 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:32:49.805599 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:32:49.805603 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:49.805621 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:32:49.805631 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:49.805640 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:32:49.805651 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:32:49.805667 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:32:49.805704 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:32:49.805736 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:32:49.805747 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:32:49.805770 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:32:49.805804 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:32:49.805829 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:32:49.805909 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:32:49.805958 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:32:49.805982 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:49.806042 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:49.806090 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:49.806135 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:49.806185 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:49.806225 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:49.806253 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:49.806300 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2024-05-20 12:32:49.806350 D | ceph-cluster-controller: node watcher: node "ceph02e" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:49.806373 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:49.806421 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2024-05-20 12:32:49.806454 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:49.806487 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:49.806512 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:49.806553 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:49.806602 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2024-05-20 12:32:49.806979 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:50.005369 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:32:50.005447 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:32:50.006199 I | operator: successfully started the controller-runtime manager
2024-05-20 12:32:50.009411 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:32:50.009421 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:32:50.009433 I | operator: setting up schemes
2024-05-20 12:32:50.011138 I | operator: setting up the controller-runtime manager
2024-05-20 12:32:50.608505 I | op-k8sutil: Retrying 17 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2024-05-20 12:32:50.613737 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:32:50.621335 I | ceph-cluster-controller: successfully started
2024-05-20 12:32:50.621372 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:32:50.621397 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:32:50.621403 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:32:50.621408 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:32:50.621412 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:32:50.621420 I | ceph-block-pool-controller: successfully started
2024-05-20 12:32:50.621431 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:32:50.621440 I | ceph-object-realm-controller: successfully started
2024-05-20 12:32:50.621445 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:32:50.621451 I | ceph-object-zone-controller: successfully started
2024-05-20 12:32:50.621526 I | ceph-object-controller: successfully started
2024-05-20 12:32:50.621564 I | ceph-file-controller: successfully started
2024-05-20 12:32:50.621580 I | ceph-nfs-controller: successfully started
2024-05-20 12:32:50.621592 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:32:50.621605 I | ceph-client-controller: successfully started
2024-05-20 12:32:50.621627 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:32:50.621643 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:32:50.621652 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:32:50.621660 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:32:50.621668 I | ceph-bucket-topic: successfully started
2024-05-20 12:32:50.621673 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:50.621679 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:50.621685 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:32:50.621703 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:32:50.622588 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:32:50.622607 I | operator: starting the controller-runtime manager
2024-05-20 12:32:50.723444 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:50.723496 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:50.724068 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:50.724081 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:50.724114 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:50.724118 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:50.724333 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:32:50.724483 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:32:50.724675 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:32:50.724847 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:32:50.724943 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:32:50.724982 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:50.725025 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:32:50.725045 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:32:50.725205 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:50.725218 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:50.725230 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:50.725248 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:32:50.725269 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:50.725287 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:32:50.725313 D | ceph-cluster-controller: node watcher: node "ceph02m" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:50.725351 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:50.725389 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:50.725404 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:32:50.725425 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:50.725462 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:50.725488 D | ceph-cluster-controller: node watcher: node "ceph02e" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:50.725503 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:32:50.725511 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:50.725518 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:32:50.725524 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:32:50.725543 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:50.725547 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:32:50.725568 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:50.725591 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2024-05-20 12:32:50.725604 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:32:50.725621 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:50.725635 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:32:50.725654 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:32:50.725667 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:32:50.725670 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:50.725707 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:32:50.725720 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:32:50.725733 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:32:50.725753 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:32:50.725792 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:32:50.725821 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:32:50.725834 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:32:50.725958 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:32:50.726001 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:32:50.726044 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:32:50.726060 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:32:50.726163 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:32:50.726194 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:32:50.726235 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:32:50.926345 D | operator: reconciling rook-ceph/rook-ceph-operator-config
2024-05-20 12:32:50.926381 I | op-k8sutil: ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS="15" (configmap)
2024-05-20 12:32:50.926396 I | op-k8sutil: ROOK_LOG_LEVEL="DEBUG" (configmap)
2024-05-20 12:32:50.926410 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (configmap)
2024-05-20 12:32:50.926568 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:32:50.926661 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:32:50.927151 E | operator: failed to reconcile failed to stop device discovery daemonset: Delete "https://10.96.0.1:443/apis/apps/v1/namespaces/rook-ceph/daemonsets/rook-discover": context canceled
2024-05-20 12:32:50.927342 I | operator: successfully started the controller-runtime manager
2024-05-20 12:32:50.929795 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:32:50.929804 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:32:50.929818 I | operator: setting up schemes
2024-05-20 12:32:50.931763 I | operator: setting up the controller-runtime manager
2024-05-20 12:32:51.534315 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:32:51.534412 I | op-k8sutil: batch job rook-ceph-detect-version still exists
2024-05-20 12:32:51.541707 I | ceph-cluster-controller: successfully started
2024-05-20 12:32:51.541746 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:32:51.541755 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:32:51.541758 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:32:51.541761 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:32:51.541765 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:32:51.541772 I | ceph-block-pool-controller: successfully started
2024-05-20 12:32:51.541783 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:32:51.541796 I | ceph-object-realm-controller: successfully started
2024-05-20 12:32:51.541802 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:32:51.541808 I | ceph-object-zone-controller: successfully started
2024-05-20 12:32:51.541875 I | ceph-object-controller: successfully started
2024-05-20 12:32:51.541899 I | ceph-file-controller: successfully started
2024-05-20 12:32:51.541916 I | ceph-nfs-controller: successfully started
2024-05-20 12:32:51.541934 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:32:51.541947 I | ceph-client-controller: successfully started
2024-05-20 12:32:51.541954 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:32:51.541984 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:32:51.541999 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:32:51.542010 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:32:51.542017 I | ceph-bucket-topic: successfully started
2024-05-20 12:32:51.542023 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:51.542032 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:51.542039 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:32:51.542049 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:32:51.542859 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:32:51.542876 I | operator: starting the controller-runtime manager
2024-05-20 12:32:51.643522 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:51.643547 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:51.644881 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:32:51.645082 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:51.645239 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:51.645301 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:51.645316 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:51.645355 D | ceph-cluster-controller: node watcher: node "ceph02e" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:51.645394 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:32:51.645409 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:51.645478 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:51.645490 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:51.645506 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2024-05-20 12:32:51.645545 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:51.645615 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:51.645625 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:51.645646 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:51.645684 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:51.645734 D | ceph-cluster-controller: node watcher: node "ceph02m" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:51.645761 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:51.645810 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:51.645835 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:51.645894 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:51.645919 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:51.645961 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:51.745582 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:32:51.745627 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:32:51.745689 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:32:51.745717 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:32:51.745804 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:32:51.745831 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:32:51.745842 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:32:51.745852 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:32:51.745874 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:32:51.745882 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:32:51.745890 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:32:51.745901 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:32:51.745954 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:32:51.745976 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:32:51.745986 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:32:51.745993 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:32:51.745998 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:32:51.746006 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:32:51.746013 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:32:51.746034 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:32:51.746051 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:32:51.746082 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:32:51.746104 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:32:51.746118 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:32:51.746130 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:32:51.746232 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:32:51.746278 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:32:51.746288 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:32:51.746325 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:32:51.746342 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:32:51.847950 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:32:51.847965 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:32:51.848020 D | ceph-spec: "ceph-file-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:32:51.848035 D | ceph-spec: "ceph-file-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:32:51.848205 D | clusterdisruption-controller: reconciling "rook-ceph-external/rook-ceph-external"
2024-05-20 12:32:51.848261 D | clusterdisruption-controller: reconciling "rook-ceph/rook-ceph"
2024-05-20 12:32:51.848296 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph-external"
2024-05-20 12:32:51.848327 D | ceph-spec: CephCluster "rook-ceph-external" status: "Connecting". "Attempting to connect to an external Ceph cluster"
2024-05-20 12:32:51.848389 D | ceph-crashcollector-controller: reconciling node: "worker02k"
2024-05-20 12:32:51.849550 D | operator: reconciling rook-ceph/rook-ceph-operator-config
2024-05-20 12:32:51.849588 I | op-k8sutil: ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS="15" (configmap)
2024-05-20 12:32:51.849596 I | op-k8sutil: ROOK_LOG_LEVEL="DEBUG" (configmap)
2024-05-20 12:32:51.849603 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (configmap)
2024-05-20 12:32:51.849887 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:32:51.849952 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.850029 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:32:51.850202 E | ceph-file-controller: failed to reconcile failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:32:51.850235 E | ceph-file-controller: failed to reconcile CephFilesystem "rook-ceph/myfs". failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:32:51.850271 E | operator: failed to reconcile failed to stop device discovery daemonset: Delete "https://10.96.0.1:443/apis/apps/v1/namespaces/rook-ceph/daemonsets/rook-discover": context canceled
2024-05-20 12:32:51.850280 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/replicapool". failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:32:51.850306 E | clusterdisruption-controller: failed to delete all the legacy drain-canary pods with label "rook-ceph-drain-canary": Delete "https://10.96.0.1:443/apis/apps/v1/namespaces/rook-ceph/deployments?labelSelector=app%3Drook-ceph-drain-canary": context canceled
2024-05-20 12:32:51.850341 D | clusterdisruption-controller: reconciling "rook-ceph/"
2024-05-20 12:32:51.850421 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:32:51.850444 E | clusterdisruption-controller: failed to delete all the legacy drain-canary pods with label "rook-ceph-drain-canary": context canceled
2024-05-20 12:32:51.850508 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:32:51.852037 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (default)
2024-05-20 12:32:51.852050 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:32:51.852056 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:32:51.852096 W | ceph-csi: could not find deployment owner reference to assign to csi drivers. could not find pod "rook-ceph-operator-6bc54d9b6f-thxtc" in namespace "rook-ceph" to find deployment owner reference: context canceled
2024-05-20 12:32:51.852154 E | ceph-csi: failed to reconcile failed creating csi config map: failed to create initial csi config map "rook-ceph-csi-config" (in "rook-ceph"): context canceled
2024-05-20 12:32:51.855834 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:51.855910 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:51.855936 D | ceph-crashcollector-controller: reconciling node: "ceph02e"
2024-05-20 12:32:51.856781 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.857977 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:32:51.857993 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:32:51.858001 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:51.858011 I | operator: setting up schemes
2024-05-20 12:32:51.858016 D | ceph-crashcollector-controller: reconciling node: "ceph02o"
2024-05-20 12:32:51.858839 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.859691 I | operator: setting up the controller-runtime manager
2024-05-20 12:32:51.860044 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:51.860064 D | ceph-crashcollector-controller: reconciling node: "ceph02n"
2024-05-20 12:32:51.860853 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.861929 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:51.861949 D | ceph-crashcollector-controller: reconciling node: "ceph02c"
2024-05-20 12:32:51.862725 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.863464 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:51.863475 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:51.863683 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:51.863697 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:51.863822 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:51.863845 D | ceph-crashcollector-controller: reconciling node: "worker02b"
2024-05-20 12:32:51.864636 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.864739 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:51.864751 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:51.868251 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:51.868290 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:51.868308 D | ceph-crashcollector-controller: reconciling node: "worker02m"
2024-05-20 12:32:51.869359 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.869807 D | ceph-spec: found existing monitor secrets for cluster rook-ceph-external
2024-05-20 12:32:51.870199 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:51.870230 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:51.870245 D | ceph-crashcollector-controller: reconciling node: "worker02n"
2024-05-20 12:32:51.871046 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.871865 I | ceph-spec: parsing mon endpoints: prceph-mon02=10.11.10.30:6789,prceph-mon03=10.11.10.93:6789,prceph-mon01=10.11.10.190:6789
2024-05-20 12:32:51.871888 D | ceph-spec: loaded: maxMonID=2, mons=map[prceph-mon01:0xc011f99a80 prceph-mon02:0xc011f99a00 prceph-mon03:0xc011f99a40], assignment=&{Schedule:map[]}
2024-05-20 12:32:51.871897 I | ceph-spec: found the cluster info to connect to the external cluster. will use "client.admin" to check health and monitor status. mons=map[prceph-mon01:0xc011f99a80 prceph-mon02:0xc011f99a00 prceph-mon03:0xc011f99a40]
2024-05-20 12:32:51.871935 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:51.871965 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:51.871980 D | ceph-crashcollector-controller: reconciling node: "worker02c"
2024-05-20 12:32:51.872800 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.873558 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:51.873591 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:51.873609 D | ceph-crashcollector-controller: reconciling node: "ceph02b"
2024-05-20 12:32:51.873920 I | cephclient: writing config file /var/lib/rook/rook-ceph-external/rook-ceph-external.config
2024-05-20 12:32:51.874116 I | cephclient: generated admin config in /var/lib/rook/rook-ceph-external
2024-05-20 12:32:51.874125 I | ceph-cluster-controller: external cluster identity established
2024-05-20 12:32:51.874130 I | cephclient: getting or creating ceph auth key "client.csi-rbd-provisioner"
2024-05-20 12:32:51.874248 D | exec: Running command: ceph auth get-or-create-key client.csi-rbd-provisioner mon profile rbd mgr allow rw osd profile rbd --connect-timeout=15 --cluster=rook-ceph-external --conf=/var/lib/rook/rook-ceph-external/rook-ceph-external.config --name=client.admin --keyring=/var/lib/rook/rook-ceph-external/client.admin.keyring --format json
2024-05-20 12:32:51.874572 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.875904 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:51.875929 D | ceph-crashcollector-controller: reconciling node: "worker02a"
2024-05-20 12:32:51.876868 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.877711 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:51.877741 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:51.877755 D | ceph-crashcollector-controller: reconciling node: "worker02p"
2024-05-20 12:32:51.878545 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.879266 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:51.879296 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:51.879313 D | ceph-crashcollector-controller: reconciling node: "worker02f"
2024-05-20 12:32:51.882269 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.883254 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:51.883291 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:51.883311 D | ceph-crashcollector-controller: reconciling node: "worker02e"
2024-05-20 12:32:51.884192 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.892918 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:51.892982 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:51.893019 D | ceph-crashcollector-controller: reconciling node: "ceph02a"
2024-05-20 12:32:51.894430 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.895718 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:51.895766 D | ceph-crashcollector-controller: reconciling node: "worker02h"
2024-05-20 12:32:51.896887 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.898043 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:51.898078 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:51.898150 D | ceph-crashcollector-controller: reconciling node: "worker02d"
2024-05-20 12:32:51.899441 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.900407 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:51.900444 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:51.900466 D | ceph-crashcollector-controller: reconciling node: "ceph02m"
2024-05-20 12:32:51.901286 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.902423 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:51.902444 D | ceph-crashcollector-controller: reconciling node: "master02b"
2024-05-20 12:32:51.903260 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.904066 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:51.904101 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:51.904119 D | ceph-crashcollector-controller: reconciling node: "ceph02d"
2024-05-20 12:32:51.904940 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.906075 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:51.906096 D | ceph-crashcollector-controller: reconciling node: "worker02o"
2024-05-20 12:32:51.906900 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.907682 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:51.907716 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:51.907789 D | ceph-crashcollector-controller: reconciling node: "master02c"
2024-05-20 12:32:51.909109 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.910022 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:51.910060 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:51.910079 D | ceph-crashcollector-controller: reconciling node: "master02a"
2024-05-20 12:32:51.910895 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.911606 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:51.911639 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:51.911657 D | ceph-crashcollector-controller: reconciling node: "worker02r"
2024-05-20 12:32:51.912478 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.913205 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:51.913300 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:51.913319 D | ceph-crashcollector-controller: reconciling node: "worker02q"
2024-05-20 12:32:51.914538 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.915495 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:51.915523 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:51.915538 D | ceph-crashcollector-controller: reconciling node: "worker02g"
2024-05-20 12:32:51.916357 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.917108 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:51.917139 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:51.917255 D | ceph-crashcollector-controller: reconciling node: "worker02l"
2024-05-20 12:32:51.918526 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:51.919432 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:51.919459 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:52.433899 I | cephclient: getting or creating ceph auth key "client.csi-rbd-node"
2024-05-20 12:32:52.433927 D | exec: Running command: ceph auth get-or-create-key client.csi-rbd-node mon profile rbd mgr allow rw osd profile rbd --connect-timeout=15 --cluster=rook-ceph-external --conf=/var/lib/rook/rook-ceph-external/rook-ceph-external.config --name=client.admin --keyring=/var/lib/rook/rook-ceph-external/client.admin.keyring --format json
2024-05-20 12:32:52.463615 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:32:52.471928 I | ceph-cluster-controller: successfully started
2024-05-20 12:32:52.471962 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:32:52.471970 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:32:52.471973 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:32:52.471977 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:32:52.471981 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:32:52.471989 I | ceph-block-pool-controller: successfully started
2024-05-20 12:32:52.472001 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:32:52.472010 I | ceph-object-realm-controller: successfully started
2024-05-20 12:32:52.472016 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:32:52.472024 I | ceph-object-zone-controller: successfully started
2024-05-20 12:32:52.472121 I | ceph-object-controller: successfully started
2024-05-20 12:32:52.472147 I | ceph-file-controller: successfully started
2024-05-20 12:32:52.472170 I | ceph-nfs-controller: successfully started
2024-05-20 12:32:52.472192 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:32:52.472215 I | ceph-client-controller: successfully started
2024-05-20 12:32:52.472231 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:32:52.472249 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:32:52.472262 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:32:52.472273 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:32:52.472280 I | ceph-bucket-topic: successfully started
2024-05-20 12:32:52.472287 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:52.472292 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:52.472298 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:32:52.472303 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:32:52.473006 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:32:52.473024 I | operator: starting the controller-runtime manager
2024-05-20 12:32:52.573624 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:52.573667 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:52.573688 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:52.573715 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:52.573744 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:52.573765 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:52.573794 D | ceph-cluster-controller: node watcher: node "ceph02o" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:52.573818 D | ceph-cluster-controller: node watcher: node "ceph02e" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:52.573841 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:52.573866 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:52.573890 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:52.573915 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:52.573937 D | ceph-cluster-controller: node watcher: node "ceph02m" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:52.574122 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:52.574179 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:52.574242 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:32:52.574282 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:52.574294 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:52.574332 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:52.574340 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:52.574446 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:32:52.574500 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:32:52.574543 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:32:52.574568 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:32:52.574650 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:32:52.574682 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:32:52.574699 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:32:52.574713 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:32:52.574732 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:32:52.574850 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:32:52.574871 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:32:52.574923 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:32:52.574942 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:32:52.574968 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:32:52.574988 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:32:52.575010 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:32:52.575029 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:32:52.575057 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:32:52.575089 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:32:52.575119 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:32:52.575142 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:52.575153 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:32:52.575161 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:52.575194 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:52.575208 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:32:52.575271 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:32:52.575292 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:32:52.575307 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:32:52.575333 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:32:52.575356 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:32:52.575368 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:32:52.575376 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:32:52.575388 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:32:52.575401 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:32:52.575542 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:52.609162 I | op-k8sutil: Retrying 16 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2024-05-20 12:32:52.777399 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:32:52.777413 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:32:52.777735 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (default)
2024-05-20 12:32:52.777748 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:32:52.777753 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:32:52.778226 D | ceph-spec: "ceph-file-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:32:52.778239 D | ceph-spec: "ceph-file-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:32:52.778373 D | ceph-crashcollector-controller: reconciling node: "ceph02e"
2024-05-20 12:32:52.779450 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.780175 D | clusterdisruption-controller: reconciling "rook-ceph/"
2024-05-20 12:32:52.780493 D | operator: reconciling rook-ceph/rook-ceph-operator-config
2024-05-20 12:32:52.780516 I | op-k8sutil: ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS="15" (configmap)
2024-05-20 12:32:52.780536 I | op-k8sutil: ROOK_LOG_LEVEL="DEBUG" (configmap)
2024-05-20 12:32:52.780544 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (configmap)
2024-05-20 12:32:52.780713 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph-external"
2024-05-20 12:32:52.780736 D | ceph-spec: CephCluster "rook-ceph-external" status: "Connecting". "Attempting to connect to an external Ceph cluster"
2024-05-20 12:32:52.781091 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2024-05-20 12:32:52.781606 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:32:52.781690 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:32:52.781856 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Put "https://10.96.0.1:443/apis/apps/v1/namespaces/rook-ceph/deployments/rook-ceph-crashcollector-ceph02e": context canceled
2024-05-20 12:32:52.781883 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph-external/secrets/rook-ceph-mon": context canceled
2024-05-20 12:32:52.781891 D | ceph-crashcollector-controller: reconciling node: "worker02d"
2024-05-20 12:32:52.781964 E | operator: failed to reconcile failed to stop device discovery daemonset: Delete "https://10.96.0.1:443/apis/apps/v1/namespaces/rook-ceph/daemonsets/rook-discover": context canceled
2024-05-20 12:32:52.781995 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:32:52.782034 E | clusterdisruption-controller: failed to delete all the legacy drain-canary pods with label "rook-ceph-drain-canary": Delete "https://10.96.0.1:443/apis/apps/v1/namespaces/rook-ceph/deployments?labelSelector=app%3Drook-ceph-drain-canary": context canceled
2024-05-20 12:32:52.782050 E | ceph-file-controller: failed to reconcile failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:32:52.782068 D | clusterdisruption-controller: reconciling "rook-ceph-external/rook-ceph-external"
2024-05-20 12:32:52.782076 E | ceph-file-controller: failed to reconcile CephFilesystem "rook-ceph/myfs". failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:32:52.782093 W | ceph-csi: could not find deployment owner reference to assign to csi drivers. could not find pod "rook-ceph-operator-6bc54d9b6f-thxtc" in namespace "rook-ceph" to find deployment owner reference: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/pods/rook-ceph-operator-6bc54d9b6f-thxtc": context canceled
2024-05-20 12:32:52.782118 D | clusterdisruption-controller: reconciling "rook-ceph/rook-ceph"
2024-05-20 12:32:52.782157 E | ceph-csi: failed to reconcile failed creating csi config map: failed to create initial csi config map "rook-ceph-csi-config" (in "rook-ceph"): context canceled
2024-05-20 12:32:52.782210 E | clusterdisruption-controller: failed to delete all the legacy drain-canary pods with label "rook-ceph-drain-canary": context canceled
2024-05-20 12:32:52.782879 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.786624 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:52.786728 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:52.786756 D | ceph-crashcollector-controller: reconciling node: "ceph02c"
2024-05-20 12:32:52.788499 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.789598 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:52.789628 D | ceph-crashcollector-controller: reconciling node: "worker02e"
2024-05-20 12:32:52.790512 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.795924 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:52.795984 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:52.796025 D | ceph-crashcollector-controller: reconciling node: "worker02c"
2024-05-20 12:32:52.796177 I | ceph-spec: parsing mon endpoints: e=10.102.64.224:6789,h=10.109.166.21:6789,i=10.101.141.73:6789
2024-05-20 12:32:52.796249 D | ceph-spec: loaded: maxMonID=8, mons=map[e:0xc0278de180 h:0xc0278de1c0 i:0xc0278de200], assignment=&{Schedule:map[e:0xc027803fc0 h:0xc0278fe000 i:0xc0278fe040]}
2024-05-20 12:32:52.796325 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/replicapool". invalid pool CR "replicapool" spec: failed to get crush map: failed to get crush map. : context canceled
2024-05-20 12:32:52.796517 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:32:52.796529 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:32:52.796548 I | operator: setting up schemes
2024-05-20 12:32:52.797161 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.797430 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:52.797520 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:52.797644 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:52.797656 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:52.797666 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:52.797674 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:52.797872 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:52.797906 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:52.798204 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:52.798247 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:52.798271 D | ceph-crashcollector-controller: reconciling node: "master02a"
2024-05-20 12:32:52.799504 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.799525 I | operator: setting up the controller-runtime manager
2024-05-20 12:32:52.800310 D | ceph-spec: found existing monitor secrets for cluster rook-ceph-external
2024-05-20 12:32:52.800735 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:52.800775 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:52.800800 D | ceph-crashcollector-controller: reconciling node: "worker02l"
2024-05-20 12:32:52.802075 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.803116 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:52.803168 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:52.803192 D | ceph-crashcollector-controller: reconciling node: "ceph02m"
2024-05-20 12:32:52.804322 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.805559 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:52.805584 D | ceph-crashcollector-controller: reconciling node: "worker02a"
2024-05-20 12:32:52.806527 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.807878 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:52.807918 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:52.807939 D | ceph-crashcollector-controller: reconciling node: "worker02k"
2024-05-20 12:32:52.808889 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.809668 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:52.809729 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:52.809769 D | ceph-crashcollector-controller: reconciling node: "worker02n"
2024-05-20 12:32:52.810781 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.811876 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:52.812032 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:52.812190 D | ceph-crashcollector-controller: reconciling node: "worker02h"
2024-05-20 12:32:52.813182 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.814036 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:52.814073 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:52.814093 D | ceph-crashcollector-controller: reconciling node: "worker02g"
2024-05-20 12:32:52.815140 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.815878 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:52.815945 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:52.815982 D | ceph-crashcollector-controller: reconciling node: "ceph02a"
2024-05-20 12:32:52.816781 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.817927 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:52.817980 D | ceph-crashcollector-controller: reconciling node: "worker02o"
2024-05-20 12:32:52.818974 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.819964 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:52.819996 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:52.820013 D | ceph-crashcollector-controller: reconciling node: "ceph02d"
2024-05-20 12:32:52.821022 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.822540 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:52.822564 D | ceph-crashcollector-controller: reconciling node: "worker02r"
2024-05-20 12:32:52.823630 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.824572 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:52.824711 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:52.824778 D | ceph-crashcollector-controller: reconciling node: "worker02f"
2024-05-20 12:32:52.825628 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.826370 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:52.826473 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:52.826561 D | ceph-crashcollector-controller: reconciling node: "ceph02n"
2024-05-20 12:32:52.827625 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.828846 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:52.828898 D | ceph-crashcollector-controller: reconciling node: "worker02m"
2024-05-20 12:32:52.829925 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.830920 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:52.830965 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:52.830988 D | ceph-crashcollector-controller: reconciling node: "worker02q"
2024-05-20 12:32:52.832116 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.832911 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:52.832939 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:52.832953 D | ceph-crashcollector-controller: reconciling node: "ceph02b"
2024-05-20 12:32:52.833937 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.835219 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:52.835239 D | ceph-crashcollector-controller: reconciling node: "master02c"
2024-05-20 12:32:52.836107 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.836897 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:52.836932 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:52.836950 D | ceph-crashcollector-controller: reconciling node: "worker02p"
2024-05-20 12:32:52.837910 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.838787 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:52.839748 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:52.839769 D | ceph-crashcollector-controller: reconciling node: "master02b"
2024-05-20 12:32:52.840656 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.841454 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:52.841528 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:52.841604 D | ceph-crashcollector-controller: reconciling node: "ceph02o"
2024-05-20 12:32:52.842557 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.843798 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:52.843849 D | ceph-crashcollector-controller: reconciling node: "worker02b"
2024-05-20 12:32:52.844816 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:52.845657 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:52.845713 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:52.849393 I | ceph-spec: parsing mon endpoints: prceph-mon02=10.11.10.30:6789,prceph-mon03=10.11.10.93:6789,prceph-mon01=10.11.10.190:6789
2024-05-20 12:32:52.849416 D | ceph-spec: loaded: maxMonID=2, mons=map[prceph-mon01:0xc029f5f620 prceph-mon02:0xc029f5f5a0 prceph-mon03:0xc029f5f5e0], assignment=&{Schedule:map[]}
2024-05-20 12:32:52.849422 I | ceph-spec: found the cluster info to connect to the external cluster. will use "client.admin" to check health and monitor status. mons=map[prceph-mon01:0xc029f5f620 prceph-mon02:0xc029f5f5a0 prceph-mon03:0xc029f5f5e0]
2024-05-20 12:32:53.007152 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-provisioner"
2024-05-20 12:32:53.007184 I | op-cfg-keyring: Error getting or creating key for "client.csi-cephfs-provisioner". Attempting to update capabilities in case the user already exists. failed get-or-create-key client.csi-cephfs-provisioner: context canceled
2024-05-20 12:32:53.007194 I | cephclient: updating ceph auth caps "client.csi-cephfs-provisioner" to [mon allow r mgr allow rw osd allow rw tag cephfs metadata=*]
2024-05-20 12:32:53.007238 D | ceph-spec: CephCluster "rook-ceph-external" status: "Progressing". "failed to create csi kubernetes secrets: failed to create csi cephfs provisioner ceph keyring: failed to get, create, or update auth key for client.csi-cephfs-provisioner: failed get-or-create-key client.csi-cephfs-provisioner: context canceled"
2024-05-20 12:32:53.017775 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:32:53.017798 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph-external/rook-ceph-external"
2024-05-20 12:32:53.017911 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2024-05-20 12:32:53.018787 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:53.018799 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:53.018819 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:53.018827 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:53.019263 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:53.019350 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:53.019409 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:53.019422 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:53.049825 I | cephclient: writing config file /var/lib/rook/rook-ceph-external/rook-ceph-external.config
2024-05-20 12:32:53.049935 I | cephclient: generated admin config in /var/lib/rook/rook-ceph-external
2024-05-20 12:32:53.049943 I | ceph-cluster-controller: external cluster identity established
2024-05-20 12:32:53.049948 I | cephclient: getting or creating ceph auth key "client.csi-rbd-provisioner"
2024-05-20 12:32:53.049956 D | exec: Running command: ceph auth get-or-create-key client.csi-rbd-provisioner mon profile rbd mgr allow rw osd profile rbd --connect-timeout=15 --cluster=rook-ceph-external --conf=/var/lib/rook/rook-ceph-external/rook-ceph-external.config --name=client.admin --keyring=/var/lib/rook/rook-ceph-external/client.admin.keyring --format json
2024-05-20 12:32:53.085582 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:53.085890 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:53.086083 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:53.086101 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:53.249540 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2024-05-20 12:32:53.402971 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:32:53.410355 I | ceph-cluster-controller: successfully started
2024-05-20 12:32:53.410386 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:32:53.410395 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:32:53.410398 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:32:53.410401 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:32:53.410412 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:32:53.410418 I | ceph-block-pool-controller: successfully started
2024-05-20 12:32:53.410429 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:32:53.410439 I | ceph-object-realm-controller: successfully started
2024-05-20 12:32:53.410444 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:32:53.410450 I | ceph-object-zone-controller: successfully started
2024-05-20 12:32:53.410551 I | ceph-object-controller: successfully started
2024-05-20 12:32:53.410599 I | ceph-file-controller: successfully started
2024-05-20 12:32:53.410613 I | ceph-nfs-controller: successfully started
2024-05-20 12:32:53.410625 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:32:53.410660 I | ceph-client-controller: successfully started
2024-05-20 12:32:53.410671 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:32:53.410684 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:32:53.410691 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:32:53.410728 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:32:53.410754 I | ceph-bucket-topic: successfully started
2024-05-20 12:32:53.410761 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:53.410767 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:53.410772 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:32:53.410777 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:32:53.411709 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:32:53.411743 I | operator: starting the controller-runtime manager
2024-05-20 12:32:53.459152 I | ceph-spec: parsing mon endpoints: e=10.102.64.224:6789,h=10.109.166.21:6789,i=10.101.141.73:6789
2024-05-20 12:32:53.459191 D | ceph-spec: loaded: maxMonID=8, mons=map[e:0xc02c252700 h:0xc02c252740 i:0xc02c252780], assignment=&{Schedule:map[e:0xc02c23b8c0 h:0xc02c23b900 i:0xc02c23b940]}
2024-05-20 12:32:53.468798 I | ceph-cluster-controller: enabling ceph mon monitoring goroutine for cluster "rook-ceph"
2024-05-20 12:32:53.468836 I | op-osd: ceph osd status in namespace "rook-ceph" check interval "1m0s"
2024-05-20 12:32:53.468840 I | ceph-cluster-controller: enabling ceph osd monitoring goroutine for cluster "rook-ceph"
2024-05-20 12:32:53.468846 I | ceph-cluster-controller: ceph status check interval is 1m0s
2024-05-20 12:32:53.468848 I | ceph-cluster-controller: enabling ceph status monitoring goroutine for cluster "rook-ceph"
2024-05-20 12:32:53.468878 D | op-mon: ceph mon status in namespace "rook-ceph" check interval "45s"
2024-05-20 12:32:53.468888 I | op-mon: stopping monitoring of mons in namespace "rook-ceph"
2024-05-20 12:32:53.468903 D | ceph-cluster-controller: checking health of cluster
2024-05-20 12:32:53.468960 D | exec: Running command: ceph status --format json --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2024-05-20 12:32:53.521641 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:32:53.522167 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:32:53.522196 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:53.522230 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:53.522376 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:32:53.522400 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:32:53.522439 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:53.522470 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:53.522511 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:32:53.522529 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:32:53.522834 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:32:53.522850 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:32:53.522889 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:32:53.522918 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:32:53.523005 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:32:53.523041 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:32:53.523058 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:32:53.523072 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:32:53.523083 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:32:53.523097 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:32:53.523116 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:32:53.523135 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:32:53.523159 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:32:53.523172 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:32:53.523185 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:32:53.523202 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:32:53.523205 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:53.523215 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:32:53.523264 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:32:53.523279 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:32:53.523497 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:32:53.523714 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:32:53.523829 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:32:53.523880 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:32:53.523901 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:32:53.524077 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:32:53.524354 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:53.524492 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:53.524511 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:53.524558 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:53.524589 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:32:53.525566 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:53.525700 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:53.526107 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:53.526529 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:53.527087 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:53.527353 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:53.527859 D | ceph-cluster-controller: node watcher: node "ceph02o" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:53.528241 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:53.528707 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:53.529126 D | ceph-cluster-controller: node watcher: node "ceph02e" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:53.529594 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:53.530142 D | ceph-cluster-controller: node watcher: node "ceph02m" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:53.530415 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:53.617999 I | cephclient: getting or creating ceph auth key "client.csi-rbd-node"
2024-05-20 12:32:53.618025 D | exec: Running command: ceph auth get-or-create-key client.csi-rbd-node mon profile rbd mgr allow rw osd profile rbd --connect-timeout=15 --cluster=rook-ceph-external --conf=/var/lib/rook/rook-ceph-external/rook-ceph-external.config --name=client.admin --keyring=/var/lib/rook/rook-ceph-external/client.admin.keyring --format json
2024-05-20 12:32:53.623695 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:53.678088 D | ceph-cluster-controller: cluster spec successfully validated
2024-05-20 12:32:53.678142 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "Detecting Ceph version"
2024-05-20 12:32:53.693629 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:53.693647 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:53.693657 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:53.693664 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:53.693893 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:53.693906 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:53.694219 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:53.694232 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:53.694552 I | ceph-spec: detecting the ceph image version for image quay.io/ceph/ceph:v15.2.15...
2024-05-20 12:32:53.695526 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:53.695538 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:53.712968 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:32:53.713035 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:32:53.713082 I | op-osd: stopping monitoring of OSDs in namespace "rook-ceph"
2024-05-20 12:32:53.713257 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "failed the ceph version check: failed to complete ceph version job: failed to run CmdReporter rook-ceph-detect-version successfully. failed to delete existing results ConfigMap rook-ceph-detect-version. failed to delete ConfigMap rook-ceph-detect-version. context canceled"
2024-05-20 12:32:53.713413 I | operator: successfully started the controller-runtime manager
2024-05-20 12:32:53.728985 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:53.728998 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:53.729269 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:53.729280 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:53.730044 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:32:53.730071 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph/rook-ceph"
2024-05-20 12:32:53.730117 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:53.730129 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:53.730259 I | operator: successfully started the controller-runtime manager
2024-05-20 12:32:53.850872 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:32:53.850884 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:32:53.850896 I | operator: setting up schemes
2024-05-20 12:32:53.852894 I | operator: setting up the controller-runtime manager
2024-05-20 12:32:54.044627 D | ceph-cluster-controller: cluster status: {Health:{Status:HEALTH_WARN Checks:map[MDS_CACHE_OVERSIZED:{Severity:HEALTH_WARN Summary:{Message:1 MDSs report oversized cache}}]} FSID:a72c4707-301f-4acd-8007-41af0a11a860 ElectionEpoch:1280 Quorum:[0 1 2] QuorumNames:[e h i] MonMap:{Epoch:15 FSID: CreatedTime: ModifiedTime: Mons:[]} OsdMap:{OsdMap:{Epoch:0 NumOsd:0 NumUpOsd:0 NumInOsd:0 Full:false NearFull:false NumRemappedPgs:0}} PgMap:{PgsByState:[{StateName:active+clean Count:177}] Version:0 NumPgs:177 DataBytes:877844517729 UsedBytes:3048965398528 AvailableBytes:2104961802240 TotalBytes:5153927200768 ReadBps:2448069 WriteBps:3603724 ReadOps:53 WriteOps:136 RecoveryBps:0 RecoveryObjectsPerSec:0 RecoveryKeysPerSec:0 CacheFlushBps:0 CacheEvictBps:0 CachePromoteBps:0} MgrMap:{Epoch:0 ActiveGID:0 ActiveName: ActiveAddr: Available:true Standbys:[]} Fsmap:{Epoch:307807 ID:1 Up:1 In:1 Max:1 ByRank:[{FilesystemID:1 Rank:0 Name:myfs-a Status:up:active Gid:51498894} {FilesystemID:1 Rank:0 Name:myfs-b Status:up:standby-replay Gid:51918032}] UpStandby:0}}
2024-05-20 12:32:54.044723 E | ceph-cluster-controller: failed to retrieve ceph cluster "rook-ceph" in namespace "rook-ceph" to update status to &{Health:{Status:HEALTH_WARN Checks:map[MDS_CACHE_OVERSIZED:{Severity:HEALTH_WARN Summary:{Message:1 MDSs report oversized cache}}]} FSID:a72c4707-301f-4acd-8007-41af0a11a860 ElectionEpoch:1280 Quorum:[0 1 2] QuorumNames:[e h i] MonMap:{Epoch:15 FSID: CreatedTime: ModifiedTime: Mons:[]} OsdMap:{OsdMap:{Epoch:0 NumOsd:0 NumUpOsd:0 NumInOsd:0 Full:false NearFull:false NumRemappedPgs:0}} PgMap:{PgsByState:[{StateName:active+clean Count:177}] Version:0 NumPgs:177 DataBytes:877844517729 UsedBytes:3048965398528 AvailableBytes:2104961802240 TotalBytes:5153927200768 ReadBps:2448069 WriteBps:3603724 ReadOps:53 WriteOps:136 RecoveryBps:0 RecoveryObjectsPerSec:0 RecoveryKeysPerSec:0 CacheFlushBps:0 CacheEvictBps:0 CachePromoteBps:0} MgrMap:{Epoch:0 ActiveGID:0 ActiveName: ActiveAddr: Available:true Standbys:[]} Fsmap:{Epoch:307807 ID:1 Up:1 In:1 Max:1 ByRank:[{FilesystemID:1 Rank:0 Name:myfs-a Status:up:active Gid:51498894} {FilesystemID:1 Rank:0 Name:myfs-b Status:up:standby-replay Gid:51918032}] UpStandby:0}}
2024-05-20 12:32:54.044727 D | ceph-cluster-controller: checking for stuck pods on not ready nodes
2024-05-20 12:32:54.044770 E | ceph-cluster-controller: failed to delete pod on not ready nodes. failed to get NotReady nodes: failed to list kubernetes nodes. context canceled
2024-05-20 12:32:54.044778 D | ceph-cluster-controller: Health: "HEALTH_WARN", code: "MDS_CACHE_OVERSIZED", message: "1 MDSs report oversized cache"
2024-05-20 12:32:54.044783 I | ceph-cluster-controller: stopping monitoring of ceph status
2024-05-20 12:32:54.153921 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-provisioner"
2024-05-20 12:32:54.153954 I | op-cfg-keyring: Error getting or creating key for "client.csi-cephfs-provisioner". Attempting to update capabilities in case the user already exists. failed get-or-create-key client.csi-cephfs-provisioner: context canceled
2024-05-20 12:32:54.153963 I | cephclient: updating ceph auth caps "client.csi-cephfs-provisioner" to [mon allow r mgr allow rw osd allow rw tag cephfs metadata=*]
2024-05-20 12:32:54.154031 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "failed to create csi kubernetes secrets: failed to create csi cephfs provisioner ceph keyring: failed to get, create, or update auth key for client.csi-cephfs-provisioner: failed get-or-create-key client.csi-cephfs-provisioner: context canceled"
2024-05-20 12:32:54.163490 E | ceph-spec: failed to update cluster condition to {Type:Progressing Status:False Reason:ClusterProgressing Message:failed to create csi kubernetes secrets: failed to create csi cephfs provisioner ceph keyring: failed to get, create, or update auth key for client.csi-cephfs-provisioner: failed get-or-create-key client.csi-cephfs-provisioner: context canceled LastHeartbeatTime:2024-05-20 12:32:54.154024241 +0000 UTC m=+15.489189382 LastTransitionTime:2024-05-20 12:32:54.154024186 +0000 UTC m=+15.489189327}. failed to update object "rook-ceph/rook-ceph" status: Operation cannot be fulfilled on cephclusters.ceph.rook.io "rook-ceph": the object has been modified; please apply your changes to the latest version and try again
2024-05-20 12:32:54.163503 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:32:54.163521 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph-external/rook-ceph-external"
2024-05-20 12:32:54.163565 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2024-05-20 12:32:54.166618 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2024-05-20 12:32:54.249836 I | ceph-spec: parsing mon endpoints: e=10.102.64.224:6789,h=10.109.166.21:6789,i=10.101.141.73:6789
2024-05-20 12:32:54.249875 D | ceph-spec: loaded: maxMonID=8, mons=map[e:0xc02c0d8c80 h:0xc02c0d8cc0 i:0xc02c0d8d00], assignment=&{Schedule:map[e:0xc01f370d40 h:0xc01f370d80 i:0xc01f370dc0]}
2024-05-20 12:32:54.257658 I | ceph-cluster-controller: enabling ceph mon monitoring goroutine for cluster "rook-ceph"
2024-05-20 12:32:54.257675 I | op-osd: ceph osd status in namespace "rook-ceph" check interval "1m0s"
2024-05-20 12:32:54.257678 I | ceph-cluster-controller: enabling ceph osd monitoring goroutine for cluster "rook-ceph"
2024-05-20 12:32:54.257683 I | ceph-cluster-controller: ceph status check interval is 1m0s
2024-05-20 12:32:54.257686 I | ceph-cluster-controller: enabling ceph status monitoring goroutine for cluster "rook-ceph"
2024-05-20 12:32:54.257723 D | op-mon: ceph mon status in namespace "rook-ceph" check interval "45s"
2024-05-20 12:32:54.257731 I | op-mon: stopping monitoring of mons in namespace "rook-ceph"
2024-05-20 12:32:54.257744 D | ceph-cluster-controller: checking health of cluster
2024-05-20 12:32:54.257762 D | exec: Running command: ceph status --format json --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2024-05-20 12:32:54.457101 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:32:54.463436 I | ceph-cluster-controller: successfully started
2024-05-20 12:32:54.463478 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:32:54.463490 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:32:54.463493 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:32:54.463499 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:32:54.463504 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:32:54.463515 I | ceph-block-pool-controller: successfully started
2024-05-20 12:32:54.463528 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:32:54.463544 I | ceph-object-realm-controller: successfully started
2024-05-20 12:32:54.463552 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:32:54.463559 I | ceph-object-zone-controller: successfully started
2024-05-20 12:32:54.463653 I | ceph-object-controller: successfully started
2024-05-20 12:32:54.463681 I | ceph-file-controller: successfully started
2024-05-20 12:32:54.463704 I | ceph-nfs-controller: successfully started
2024-05-20 12:32:54.463739 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:32:54.463759 I | ceph-client-controller: successfully started
2024-05-20 12:32:54.463770 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:32:54.463792 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:32:54.463807 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:32:54.463819 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:32:54.463829 I | ceph-bucket-topic: successfully started
2024-05-20 12:32:54.463836 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:54.463844 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:54.463851 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:32:54.463858 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:32:54.464835 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:32:54.464856 I | operator: starting the controller-runtime manager
2024-05-20 12:32:54.478744 D | ceph-cluster-controller: cluster spec successfully validated
2024-05-20 12:32:54.478787 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "Detecting Ceph version"
2024-05-20 12:32:54.501670 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:54.501683 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:54.502368 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:54.502378 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:54.502382 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:54.502389 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:54.504049 I | ceph-spec: detecting the ceph image version for image quay.io/ceph/ceph:v15.2.15...
2024-05-20 12:32:54.534865 I | op-k8sutil: batch job rook-ceph-detect-version still exists
2024-05-20 12:32:54.565364 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:54.565376 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:54.565409 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:54.565413 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:54.565424 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:32:54.566055 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:54.566094 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:54.566102 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:54.566107 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:54.566117 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:54.567264 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:54.567698 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:32:54.567769 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:54.567818 D | ceph-cluster-controller: node watcher: node "ceph02e" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:54.567862 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:54.567900 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:54.567935 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:54.567970 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:54.568035 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:54.568076 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:54.568109 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:54.568145 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:54.568178 D | ceph-cluster-controller: node watcher: node "ceph02m" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:54.568209 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:54.568269 D | ceph-cluster-controller: node watcher: node "ceph02o" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:54.572479 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:32:54.572576 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:32:54.572635 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:32:54.572682 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:32:54.572726 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:32:54.572755 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:32:54.572774 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:32:54.572817 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:32:54.572833 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:32:54.572841 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:32:54.572853 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:32:54.572869 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:32:54.572886 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:32:54.572919 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:32:54.572934 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:32:54.572941 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:32:54.572972 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:32:54.572994 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:32:54.573022 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:32:54.573044 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:32:54.573068 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:32:54.573075 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:32:54.573096 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:32:54.573116 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:32:54.573129 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:32:54.573150 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:32:54.573162 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:32:54.573171 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:32:54.573184 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:32:54.573191 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:32:54.610245 I | op-k8sutil: Retrying 15 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2024-05-20 12:32:54.650042 D | op-k8sutil: ConfigMap rook-ceph-detect-version is already deleted
2024-05-20 12:32:54.767491 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:32:54.767507 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:32:54.770116 D | ceph-spec: "ceph-file-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:32:54.770204 D | ceph-spec: "ceph-file-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:32:54.770997 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (default)
2024-05-20 12:32:54.771011 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:32:54.771017 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:32:54.772013 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph-external"
2024-05-20 12:32:54.772074 D | ceph-spec: CephCluster "rook-ceph-external" status: "Connecting". "Attempting to connect to an external Ceph cluster"
2024-05-20 12:32:54.772700 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:32:54.772767 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:32:54.772809 I | op-osd: stopping monitoring of OSDs in namespace "rook-ceph"
2024-05-20 12:32:54.772848 W | ceph-csi: could not find deployment owner reference to assign to csi drivers. could not find pod "rook-ceph-operator-6bc54d9b6f-thxtc" in namespace "rook-ceph" to find deployment owner reference: context canceled
2024-05-20 12:32:54.772884 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:32:54.772908 E | ceph-file-controller: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:32:54.772933 E | ceph-file-controller: failed to reconcile CephFilesystem "rook-ceph/myfs". failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:32:54.772963 D | ceph-spec: CephCluster "rook-ceph-external" status: "Progressing". "failed the ceph version check: failed to complete ceph version job: failed to run CmdReporter rook-ceph-detect-version successfully. failed waiting for results ConfigMap rook-ceph-detect-version. failed to start watcher for the results ConfigMap. failed to list the current ConfigMaps in order to start ConfigMap watcher. context canceled"
2024-05-20 12:32:54.772975 E | ceph-csi: failed to reconcile failed creating csi config map: failed to create initial csi config map "rook-ceph-csi-config" (in "rook-ceph"): context canceled
2024-05-20 12:32:54.772993 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:32:54.773060 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/replicapool". failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:32:54.782875 D | ceph-spec: CephCluster "rook-ceph-external" status: "Progressing". "failed to populate external cluster info: context canceled"
2024-05-20 12:32:54.788287 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:54.788301 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:54.788359 E | ceph-spec: failed to update cluster condition to {Type:Progressing Status:False Reason:ClusterProgressing Message:failed the ceph version check: failed to complete ceph version job: failed to run CmdReporter rook-ceph-detect-version successfully. failed waiting for results ConfigMap rook-ceph-detect-version. failed to start watcher for the results ConfigMap. failed to list the current ConfigMaps in order to start ConfigMap watcher. context canceled LastHeartbeatTime:2024-05-20 12:32:54.772954664 +0000 UTC m=+16.108119806 LastTransitionTime:2024-05-20 12:32:54.772954574 +0000 UTC m=+16.108119727}. failed to update object "rook-ceph-external/rook-ceph-external" status: Operation cannot be fulfilled on cephclusters.ceph.rook.io "rook-ceph-external": the object has been modified; please apply your changes to the latest version and try again
2024-05-20 12:32:54.788374 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:32:54.788404 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph/rook-ceph"
2024-05-20 12:32:54.788453 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:54.788464 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:54.788564 I | operator: successfully started the controller-runtime manager
2024-05-20 12:32:54.789110 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:54.789121 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:54.790265 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:54.790274 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:54.795941 E | ceph-spec: failed to update cluster condition to {Type:Progressing Status:False Reason:ClusterProgressing Message:failed to populate external cluster info: context canceled LastHeartbeatTime:2024-05-20 12:32:54.782869678 +0000 UTC m=+16.118034826 LastTransitionTime:2024-05-20 12:32:54.782869608 +0000 UTC m=+16.118034763}. failed to update object "rook-ceph-external/rook-ceph-external" status: Operation cannot be fulfilled on cephclusters.ceph.rook.io "rook-ceph-external": the object has been modified; please apply your changes to the latest version and try again
2024-05-20 12:32:54.795955 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:32:54.795974 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph-external/rook-ceph-external"
2024-05-20 12:32:54.796028 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2024-05-20 12:32:54.796063 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:32:54.796084 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph/rook-ceph"
2024-05-20 12:32:54.796211 I | operator: successfully started the controller-runtime manager
2024-05-20 12:32:54.810327 D | ceph-cluster-controller: cluster status: {Health:{Status:HEALTH_WARN Checks:map[MDS_CACHE_OVERSIZED:{Severity:HEALTH_WARN Summary:{Message:1 MDSs report oversized cache}}]} FSID:a72c4707-301f-4acd-8007-41af0a11a860 ElectionEpoch:1280 Quorum:[0 1 2] QuorumNames:[e h i] MonMap:{Epoch:15 FSID: CreatedTime: ModifiedTime: Mons:[]} OsdMap:{OsdMap:{Epoch:0 NumOsd:0 NumUpOsd:0 NumInOsd:0 Full:false NearFull:false NumRemappedPgs:0}} PgMap:{PgsByState:[{StateName:active+clean Count:177}] Version:0 NumPgs:177 DataBytes:877844517729 UsedBytes:3048965398528 AvailableBytes:2104961802240 TotalBytes:5153927200768 ReadBps:2448069 WriteBps:3603724 ReadOps:53 WriteOps:136 RecoveryBps:0 RecoveryObjectsPerSec:0 RecoveryKeysPerSec:0 CacheFlushBps:0 CacheEvictBps:0 CachePromoteBps:0} MgrMap:{Epoch:0 ActiveGID:0 ActiveName: ActiveAddr: Available:true Standbys:[]} Fsmap:{Epoch:307807 ID:1 Up:1 In:1 Max:1 ByRank:[{FilesystemID:1 Rank:0 Name:myfs-a Status:up:active Gid:51498894} {FilesystemID:1 Rank:0 Name:myfs-b Status:up:standby-replay Gid:51918032}] UpStandby:0}}
2024-05-20 12:32:54.810391 E | ceph-cluster-controller: failed to retrieve ceph cluster "rook-ceph" in namespace "rook-ceph" to update status to &{Health:{Status:HEALTH_WARN Checks:map[MDS_CACHE_OVERSIZED:{Severity:HEALTH_WARN Summary:{Message:1 MDSs report oversized cache}}]} FSID:a72c4707-301f-4acd-8007-41af0a11a860 ElectionEpoch:1280 Quorum:[0 1 2] QuorumNames:[e h i] MonMap:{Epoch:15 FSID: CreatedTime: ModifiedTime: Mons:[]} OsdMap:{OsdMap:{Epoch:0 NumOsd:0 NumUpOsd:0 NumInOsd:0 Full:false NearFull:false NumRemappedPgs:0}} PgMap:{PgsByState:[{StateName:active+clean Count:177}] Version:0 NumPgs:177 DataBytes:877844517729 UsedBytes:3048965398528 AvailableBytes:2104961802240 TotalBytes:5153927200768 ReadBps:2448069 WriteBps:3603724 ReadOps:53 WriteOps:136 RecoveryBps:0 RecoveryObjectsPerSec:0 RecoveryKeysPerSec:0 CacheFlushBps:0 CacheEvictBps:0 CachePromoteBps:0} MgrMap:{Epoch:0 ActiveGID:0 ActiveName: ActiveAddr: Available:true Standbys:[]} Fsmap:{Epoch:307807 ID:1 Up:1 In:1 Max:1 ByRank:[{FilesystemID:1 Rank:0 Name:myfs-a Status:up:active Gid:51498894} {FilesystemID:1 Rank:0 Name:myfs-b Status:up:standby-replay Gid:51918032}] UpStandby:0}}
2024-05-20 12:32:54.810397 D | ceph-cluster-controller: checking for stuck pods on not ready nodes
2024-05-20 12:32:54.810438 E | ceph-cluster-controller: failed to delete pod on not ready nodes. failed to get NotReady nodes: failed to list kubernetes nodes. context canceled
2024-05-20 12:32:54.810445 D | ceph-cluster-controller: Health: "HEALTH_WARN", code: "MDS_CACHE_OVERSIZED", message: "1 MDSs report oversized cache"
2024-05-20 12:32:54.810449 I | ceph-cluster-controller: stopping monitoring of ceph status
I0520 12:32:55.847026 1 request.go:665] Waited for 1.074198043s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/api/v1/namespaces/rook-ceph/configmaps/rook-ceph-operator-config
2024-05-20 12:32:55.850442 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:32:55.850454 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:32:55.850467 I | operator: setting up schemes
2024-05-20 12:32:55.852256 I | operator: setting up the controller-runtime manager
2024-05-20 12:32:56.454476 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:32:56.462063 I | ceph-cluster-controller: successfully started
2024-05-20 12:32:56.462174 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:32:56.462295 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:32:56.462303 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:32:56.462308 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:32:56.462314 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:32:56.462322 I | ceph-block-pool-controller: successfully started
2024-05-20 12:32:56.462333 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:32:56.462342 I | ceph-object-realm-controller: successfully started
2024-05-20 12:32:56.462361 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:32:56.462383 I | ceph-object-zone-controller: successfully started
2024-05-20 12:32:56.462465 I | ceph-object-controller: successfully started
2024-05-20 12:32:56.462491 I | ceph-file-controller: successfully started
2024-05-20 12:32:56.462596 I | ceph-nfs-controller: successfully started
2024-05-20 12:32:56.462622 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:32:56.462648 I | ceph-client-controller: successfully started
2024-05-20 12:32:56.462665 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:32:56.462858 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:32:56.462873 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:32:56.462881 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:32:56.462891 I | ceph-bucket-topic: successfully started
2024-05-20 12:32:56.462899 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:56.462921 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:56.462928 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:32:56.462944 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:32:56.463982 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:32:56.464004 I | operator: starting the controller-runtime manager
2024-05-20 12:32:56.564558 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:32:56.565286 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:56.565325 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:56.565699 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:56.565718 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:56.565809 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:56.565831 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:56.565972 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:32:56.566027 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:32:56.566170 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:32:56.566307 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:56.566322 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:56.566456 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:56.566511 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:32:56.566569 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:56.566585 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:32:56.566720 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:32:56.566790 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:32:56.566808 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:32:56.566822 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:32:56.566898 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:32:56.567085 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:32:56.567136 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:32:56.567164 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:32:56.567209 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:32:56.567239 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:56.567270 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:32:56.567293 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:32:56.567321 D | ceph-cluster-controller: node watcher: node "ceph02o" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:56.567389 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:32:56.567407 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:32:56.567429 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:32:56.567440 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:32:56.567448 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:32:56.567464 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:32:56.567474 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:32:56.567481 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:32:56.567514 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:32:56.567537 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:32:56.567549 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:32:56.567574 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:56.567592 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:32:56.567606 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:32:56.567631 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:32:56.567646 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:32:56.567669 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:56.567718 D | ceph-cluster-controller: node watcher: node "ceph02m" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:56.567786 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:56.567862 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:56.567928 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:56.568066 D | ceph-cluster-controller: node watcher: node "ceph02e" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:56.568147 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:56.568206 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:56.568307 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:56.568371 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:56.611175 I | op-k8sutil: Retrying 14 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2024-05-20 12:32:56.767489 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:32:56.767505 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:32:56.769145 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (default)
2024-05-20 12:32:56.769159 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:32:56.769163 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:32:56.770483 D | operator: reconciling rook-ceph/rook-ceph-operator-config
2024-05-20 12:32:56.770505 I | op-k8sutil: ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS="15" (configmap)
2024-05-20 12:32:56.770522 I | op-k8sutil: ROOK_LOG_LEVEL="DEBUG" (configmap)
2024-05-20 12:32:56.770528 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (configmap)
2024-05-20 12:32:56.770697 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:32:56.770711 D | ceph-spec: "ceph-file-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:32:56.770718 D | ceph-spec: "ceph-file-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:32:56.770821 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:32:56.770863 D | ceph-crashcollector-controller: reconciling node: "ceph02n"
2024-05-20 12:32:56.770902 E | operator: failed to reconcile failed to stop device discovery daemonset: Delete "https://10.96.0.1:443/apis/apps/v1/namespaces/rook-ceph/daemonsets/rook-discover": context canceled
2024-05-20 12:32:56.770915 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/replicapool". failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:32:56.770928 E | ceph-file-controller: failed to reconcile failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:32:56.770944 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph-external/secrets/rook-ceph-mon": context canceled
2024-05-20 12:32:56.770986 W | ceph-csi: could not find deployment owner reference to assign to csi drivers. could not find pod "rook-ceph-operator-6bc54d9b6f-thxtc" in namespace "rook-ceph" to find deployment owner reference: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/pods/rook-ceph-operator-6bc54d9b6f-thxtc": context canceled
2024-05-20 12:32:56.771042 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:32:56.771054 E | ceph-csi: failed to reconcile failed creating csi config map: failed to create initial csi config map "rook-ceph-csi-config" (in "rook-ceph"): context canceled
2024-05-20 12:32:56.771062 E | ceph-file-controller: failed to reconcile CephFilesystem "rook-ceph/myfs". failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:32:56.771072 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph-external"
2024-05-20 12:32:56.771099 D | ceph-spec: CephCluster "rook-ceph-external" status: "Connecting". "Attempting to connect to an external Ceph cluster"
2024-05-20 12:32:56.772089 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:56.773307 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:56.773329 D | ceph-crashcollector-controller: reconciling node: "worker02k"
2024-05-20 12:32:56.774259 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:56.775012 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:56.775101 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:56.775121 D | ceph-crashcollector-controller: reconciling node: "ceph02m"
2024-05-20 12:32:56.775944 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:56.777072 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:56.777091 D | ceph-crashcollector-controller: reconciling node: "ceph02d"
2024-05-20 12:32:56.777992 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:56.779172 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:56.779193 D | ceph-crashcollector-controller: reconciling node: "master02a"
2024-05-20 12:32:56.780023 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:56.788590 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:56.788648 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:56.788676 D | ceph-crashcollector-controller: reconciling node: "ceph02c"
2024-05-20 12:32:56.789813 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:56.789824 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:56.790191 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:56.790203 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:56.790353 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:56.790772 D | ceph-spec: CephCluster "rook-ceph-external" status: "Progressing". "failed to populate external cluster info: context canceled"
2024-05-20 12:32:56.791153 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:56.791163 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:56.792201 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:56.792233 D | ceph-crashcollector-controller: reconciling node: "worker02g"
2024-05-20 12:32:56.793358 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:56.794421 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:56.794459 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:56.794478 D | ceph-crashcollector-controller: reconciling node: "worker02b"
2024-05-20 12:32:56.795674 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:56.796604 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:56.796636 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:56.796653 D | ceph-crashcollector-controller: reconciling node: "worker02e"
2024-05-20 12:32:56.797710 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:56.799656 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:56.799691 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:56.799709 D | ceph-crashcollector-controller: reconciling node: "worker02m"
2024-05-20 12:32:56.800409 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:32:56.800437 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph-external/rook-ceph-external"
2024-05-20 12:32:56.800500 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2024-05-20 12:32:56.800549 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:32:56.800577 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph/rook-ceph"
2024-05-20 12:32:56.800694 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:56.800704 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:56.800741 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:56.800800 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:56.800828 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:56.801496 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:56.801507 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:32:56.801658 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:56.801711 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:56.801737 D | ceph-crashcollector-controller: reconciling node: "ceph02e"
2024-05-20 12:32:56.802838 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:56.804153 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:56.804176 D | ceph-crashcollector-controller: reconciling node: "ceph02b"
2024-05-20 12:32:56.805362 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:56.806641 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:32:56.806665 D | ceph-crashcollector-controller: reconciling node: "worker02c"
2024-05-20 12:32:56.807907 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:56.808793 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:56.808826 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:56.808841 D | ceph-crashcollector-controller: reconciling node: "worker02a"
2024-05-20 12:32:56.809968 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:56.811848 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:56.811881 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:56.812017 D | ceph-crashcollector-controller: reconciling node: "worker02o"
2024-05-20 12:32:56.813008 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:56.813886 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:56.813916 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:56.813930 D | ceph-crashcollector-controller: reconciling node: "worker02q"
2024-05-20 12:32:56.815007 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:32:56.815848 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:32:56.815880 E | ceph-crashcollector-controller: context canceled 2024-05-20 12:32:56.815998 D | ceph-crashcollector-controller: reconciling node: "master02c" 2024-05-20 12:32:56.817061 D | ceph-spec: ceph version found "15.2.15-0" 2024-05-20 12:32:56.819839 D | ceph-crashcollector-controller: deleting cronjob if it exists... 2024-05-20 12:32:56.819874 E | ceph-crashcollector-controller: context canceled 2024-05-20 12:32:56.819889 D | ceph-crashcollector-controller: reconciling node: "worker02r" 2024-05-20 12:32:56.820814 D | ceph-spec: ceph version found "15.2.15-0" 2024-05-20 12:32:56.823231 D | ceph-crashcollector-controller: deleting cronjob if it exists... 2024-05-20 12:32:56.823264 E | ceph-crashcollector-controller: context canceled 2024-05-20 12:32:56.823280 D | ceph-crashcollector-controller: reconciling node: "worker02f" 2024-05-20 12:32:56.824591 D | ceph-spec: ceph version found "15.2.15-0" 2024-05-20 12:32:56.825477 D | ceph-crashcollector-controller: deleting cronjob if it exists... 2024-05-20 12:32:56.825519 E | ceph-crashcollector-controller: context canceled 2024-05-20 12:32:56.825551 D | ceph-crashcollector-controller: reconciling node: "worker02n" 2024-05-20 12:32:56.827871 D | ceph-spec: ceph version found "15.2.15-0" 2024-05-20 12:32:56.829414 D | ceph-crashcollector-controller: deleting cronjob if it exists... 2024-05-20 12:32:56.829462 E | ceph-crashcollector-controller: context canceled 2024-05-20 12:32:56.829490 D | ceph-crashcollector-controller: reconciling node: "master02b" 2024-05-20 12:32:56.830700 D | ceph-spec: ceph version found "15.2.15-0" 2024-05-20 12:32:56.831695 D | ceph-crashcollector-controller: deleting cronjob if it exists... 
2024-05-20 12:32:56.831878 E | ceph-crashcollector-controller: context canceled 2024-05-20 12:32:56.832127 D | ceph-crashcollector-controller: reconciling node: "worker02d" 2024-05-20 12:32:56.833472 D | ceph-spec: ceph version found "15.2.15-0" 2024-05-20 12:32:56.834664 D | ceph-crashcollector-controller: deleting cronjob if it exists... 2024-05-20 12:32:56.834701 E | ceph-crashcollector-controller: context canceled 2024-05-20 12:32:56.834725 D | ceph-crashcollector-controller: reconciling node: "worker02p" 2024-05-20 12:32:56.835827 D | ceph-spec: ceph version found "15.2.15-0" 2024-05-20 12:32:56.836725 D | ceph-crashcollector-controller: deleting cronjob if it exists... 2024-05-20 12:32:56.836886 E | ceph-crashcollector-controller: context canceled 2024-05-20 12:32:56.836915 D | ceph-crashcollector-controller: reconciling node: "ceph02o" 2024-05-20 12:32:56.838127 D | ceph-spec: ceph version found "15.2.15-0" 2024-05-20 12:32:56.839399 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled 2024-05-20 12:32:56.839425 D | ceph-crashcollector-controller: reconciling node: "worker02l" 2024-05-20 12:32:56.840374 D | ceph-spec: ceph version found "15.2.15-0" 2024-05-20 12:32:56.841228 D | ceph-crashcollector-controller: deleting cronjob if it exists... 2024-05-20 12:32:56.841258 E | ceph-crashcollector-controller: context canceled 2024-05-20 12:32:56.841272 D | ceph-crashcollector-controller: reconciling node: "ceph02a" 2024-05-20 12:32:56.842150 D | ceph-spec: ceph version found "15.2.15-0" 2024-05-20 12:32:56.843231 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled 2024-05-20 12:32:56.843252 D | ceph-crashcollector-controller: reconciling node: "worker02h" 2024-05-20 12:32:56.844121 D | ceph-spec: ceph version found "15.2.15-0" 2024-05-20 12:32:56.844863 D | ceph-crashcollector-controller: deleting cronjob if it exists... 
2024-05-20 12:32:56.844891 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:32:56.845068 I | operator: successfully started the controller-runtime manager
2024-05-20 12:32:56.849589 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:32:56.849599 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:32:56.849616 I | operator: setting up schemes
2024-05-20 12:32:56.851541 I | operator: setting up the controller-runtime manager
2024-05-20 12:32:57.454611 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:32:57.464110 I | ceph-cluster-controller: successfully started
2024-05-20 12:32:57.464142 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:32:57.464151 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:32:57.464154 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:32:57.464157 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:32:57.464161 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:32:57.464168 I | ceph-block-pool-controller: successfully started
2024-05-20 12:32:57.464178 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:32:57.464188 I | ceph-object-realm-controller: successfully started
2024-05-20 12:32:57.464193 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:32:57.464199 I | ceph-object-zone-controller: successfully started
2024-05-20 12:32:57.464258 I | ceph-object-controller: successfully started
2024-05-20 12:32:57.464284 I | ceph-file-controller: successfully started
2024-05-20 12:32:57.464319 I | ceph-nfs-controller: successfully started
2024-05-20 12:32:57.464341 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:32:57.464374 I | ceph-client-controller: successfully started
2024-05-20 12:32:57.464390 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:32:57.464404 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:32:57.464414 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:32:57.464426 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:32:57.464435 I | ceph-bucket-topic: successfully started
2024-05-20 12:32:57.464443 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:57.464450 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:57.464457 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:32:57.464465 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:32:57.465510 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:32:57.465527 I | operator: starting the controller-runtime manager
2024-05-20 12:32:57.534952 I | op-k8sutil: batch job rook-ceph-detect-version still exists
2024-05-20 12:32:57.566124 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:57.566152 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:57.566338 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:32:57.566431 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:57.566476 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:57.566508 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:57.566693 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:57.566701 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:57.566761 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:57.566767 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:57.566812 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:57.566847 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:57.566872 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:57.566896 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:57.566921 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:57.566943 D | ceph-cluster-controller: node watcher: node "ceph02o" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:57.566969 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:57.566998 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:57.567020 D | ceph-cluster-controller: node watcher: node "ceph02m" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:57.567078 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:57.567115 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:32:57.567127 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:57.567163 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:57.567194 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2024-05-20 12:32:57.567347 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:32:57.567408 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:32:57.567438 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:32:57.567455 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:32:57.567470 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:32:57.567495 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:32:57.567583 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:32:57.567673 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:32:57.567715 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:32:57.567739 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:57.567750 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:32:57.567795 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:32:57.567816 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:32:57.567835 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:32:57.567844 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:32:57.567858 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:32:57.567870 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:32:57.567895 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:32:57.567912 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:32:57.567925 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:32:57.567931 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:32:57.567937 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:32:57.567943 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:32:57.567957 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:32:57.567983 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:32:57.567994 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:32:57.568083 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:32:57.568112 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:32:57.568130 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:32:57.568136 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:32:57.568221 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:32:57.768466 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:32:57.768562 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:32:57.769056 I | operator: successfully started the controller-runtime manager
2024-05-20 12:32:57.772334 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:32:57.772343 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:32:57.772352 I | operator: setting up schemes
2024-05-20 12:32:57.773937 I | operator: setting up the controller-runtime manager
2024-05-20 12:32:58.378031 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:32:58.395863 I | ceph-cluster-controller: successfully started
2024-05-20 12:32:58.395898 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:32:58.395908 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:32:58.395911 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:32:58.395914 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:32:58.395918 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:32:58.395925 I | ceph-block-pool-controller: successfully started
2024-05-20 12:32:58.395936 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:32:58.395949 I | ceph-object-realm-controller: successfully started
2024-05-20 12:32:58.395955 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:32:58.395962 I | ceph-object-zone-controller: successfully started
2024-05-20 12:32:58.396064 I | ceph-object-controller: successfully started
2024-05-20 12:32:58.396083 I | ceph-file-controller: successfully started
2024-05-20 12:32:58.396098 I | ceph-nfs-controller: successfully started
2024-05-20 12:32:58.396110 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:32:58.396123 I | ceph-client-controller: successfully started
2024-05-20 12:32:58.396131 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:32:58.396143 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:32:58.396151 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:32:58.396164 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:32:58.396171 I | ceph-bucket-topic: successfully started
2024-05-20 12:32:58.396177 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:58.396184 I | ceph-bucket-notification: successfully started
2024-05-20 12:32:58.396190 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:32:58.396199 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:32:58.397401 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:32:58.397427 I | operator: starting the controller-runtime manager
2024-05-20 12:32:58.499122 D | ceph-cluster-controller: node watcher: node "ceph02m" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:58.499747 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:58.499875 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:58.499896 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:32:58.499908 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:58.499950 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:58.499964 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:58.499972 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:58.499975 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:58.499987 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:58.500008 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:58.500018 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:32:58.500056 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:58.500065 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:32:58.500069 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2024-05-20 12:32:58.500117 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:32:58.500176 D | ceph-cluster-controller: node watcher: node "ceph02e" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:58.500214 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:32:58.500480 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:32:58.500538 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:58.500591 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:58.501028 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:58.501078 D | ceph-cluster-controller: node watcher: node "ceph02o" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:58.501112 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:58.599373 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:32:58.599412 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:32:58.599418 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:32:58.599427 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:32:58.599450 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:32:58.599716 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:32:58.599748 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:32:58.599757 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:32:58.599788 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:32:58.599799 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:32:58.599868 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:32:58.599892 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:32:58.599927 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:32:58.600033 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:32:58.600057 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:32:58.600075 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:32:58.600096 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:32:58.600135 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:32:58.600150 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:32:58.600160 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:32:58.600178 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:32:58.600191 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:32:58.600212 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:32:58.600233 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:32:58.600239 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:32:58.600284 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:32:58.600306 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:32:58.600349 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:32:58.600367 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:32:58.600399 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:32:58.601173 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:58.611357 I | op-k8sutil: Retrying 13 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2024-05-20 12:32:58.700749 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:32:58.700812 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:32:58.701398 I | operator: successfully started the controller-runtime manager 2024-05-20 12:32:58.706007 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var) 2024-05-20 12:32:58.706015 I | operator: watching all namespaces for Ceph CRs 2024-05-20 12:32:58.706026 I | operator: setting up schemes 2024-05-20 12:32:58.707541 I | operator: setting up the controller-runtime manager 2024-05-20 12:32:59.310209 I | operator: delete Issuer and Certificate since secret is not found 2024-05-20 12:32:59.318177 I | ceph-cluster-controller: successfully started 2024-05-20 12:32:59.318207 I | ceph-cluster-controller: enabling hotplug orchestration 2024-05-20 12:32:59.318215 I | ceph-crashcollector-controller: successfully started 2024-05-20 12:32:59.318218 D | ceph-crashcollector-controller: watch for changes to the nodes 2024-05-20 12:32:59.318224 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments 2024-05-20 12:32:59.318227 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes 2024-05-20 12:32:59.318237 I | ceph-block-pool-controller: successfully started 2024-05-20 12:32:59.318249 I | ceph-object-store-user-controller: successfully started 2024-05-20 12:32:59.318262 I | ceph-object-realm-controller: successfully started 2024-05-20 12:32:59.318274 I | ceph-object-zonegroup-controller: successfully started 2024-05-20 12:32:59.318284 I | ceph-object-zone-controller: successfully started 2024-05-20 12:32:59.318493 I | ceph-object-controller: successfully started 2024-05-20 12:32:59.318516 I | ceph-file-controller: successfully started 2024-05-20 12:32:59.318537 I | ceph-nfs-controller: successfully started 2024-05-20 12:32:59.318556 I | ceph-rbd-mirror-controller: successfully started 2024-05-20 12:32:59.318574 I | ceph-client-controller: successfully started 2024-05-20 12:32:59.318582 I | ceph-filesystem-mirror-controller: successfully started 2024-05-20 12:32:59.318594 I | 
operator: rook-ceph-operator-config-controller successfully started 2024-05-20 12:32:59.318606 I | ceph-csi: rook-ceph-operator-csi-controller successfully started 2024-05-20 12:32:59.318619 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started 2024-05-20 12:32:59.318628 I | ceph-bucket-topic: successfully started 2024-05-20 12:32:59.318637 I | ceph-bucket-notification: successfully started 2024-05-20 12:32:59.318646 I | ceph-bucket-notification: successfully started 2024-05-20 12:32:59.318654 I | ceph-fs-subvolumegroup-controller: successfully started 2024-05-20 12:32:59.318662 I | blockpool-rados-namespace-controller: successfully started 2024-05-20 12:32:59.319674 D | op-k8sutil: kubernetes version fetched 1.26.9 2024-05-20 12:32:59.319690 I | operator: starting the controller-runtime manager 2024-05-20 12:32:59.420428 D | clusterdisruption-controller: create event from ceph cluster CR 2024-05-20 12:32:59.420456 D | clusterdisruption-controller: create event from ceph cluster CR 2024-05-20 12:32:59.421057 D | ceph-spec: create event from a CR: "myfs" 2024-05-20 12:32:59.421183 D | ceph-spec: create event from a CR: "replicapool" 2024-05-20 12:32:59.421229 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod! 2024-05-20 12:32:59.421309 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external" 2024-05-20 12:32:59.421322 D | ceph-cluster-controller: create event from a CR 2024-05-20 12:32:59.421366 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph" 2024-05-20 12:32:59.421376 D | ceph-cluster-controller: create event from a CR 2024-05-20 12:32:59.421489 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external" 2024-05-20 12:32:59.421558 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod! 
2024-05-20 12:32:59.421632 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping 2024-05-20 12:32:59.421729 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external" 2024-05-20 12:32:59.421761 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod! 2024-05-20 12:32:59.421772 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph" 2024-05-20 12:32:59.421809 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod! 2024-05-20 12:32:59.421822 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod! 2024-05-20 12:32:59.421841 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod! 2024-05-20 12:32:59.421874 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping 2024-05-20 12:32:59.422037 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod! 2024-05-20 12:32:59.422071 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod! 2024-05-20 12:32:59.422088 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod! 2024-05-20 12:32:59.422141 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod! 2024-05-20 12:32:59.422160 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod! 2024-05-20 12:32:59.422247 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod! 2024-05-20 12:32:59.422269 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod! 
2024-05-20 12:32:59.422282 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph", skipping 2024-05-20 12:32:59.422306 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod! 2024-05-20 12:32:59.422328 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping 2024-05-20 12:32:59.422351 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod! 2024-05-20 12:32:59.422383 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod! 2024-05-20 12:32:59.422394 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod! 2024-05-20 12:32:59.422406 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph", skipping 2024-05-20 12:32:59.422422 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod! 2024-05-20 12:32:59.422432 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod! 2024-05-20 12:32:59.422462 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod! 2024-05-20 12:32:59.422483 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod! 2024-05-20 12:32:59.422495 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod! 2024-05-20 12:32:59.422508 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod! 2024-05-20 12:32:59.422539 D | ceph-cluster-controller: node watcher: node "ceph02o" is not tolerable for cluster "rook-ceph-external", skipping 2024-05-20 12:32:59.422550 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod! 
2024-05-20 12:32:59.422557 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:32:59.422577 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:59.422586 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:32:59.422631 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:32:59.422643 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:32:59.422679 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:59.422693 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:32:59.422723 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:59.422751 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:32:59.422814 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:59.422885 D | ceph-cluster-controller: node watcher: node "ceph02e" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:59.422948 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2024-05-20 12:32:59.422983 D | ceph-cluster-controller: node watcher: node "ceph02m" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:32:59.521903 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:32:59.623323 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:32:59.623338 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:32:59.623978 D | operator: reconciling rook-ceph/rook-ceph-operator-config
2024-05-20 12:32:59.624012 I | op-k8sutil: ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS="15" (configmap)
2024-05-20 12:32:59.624027 I | op-k8sutil: ROOK_LOG_LEVEL="DEBUG" (configmap)
2024-05-20 12:32:59.624035 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (configmap)
2024-05-20 12:32:59.624233 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:32:59.624326 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:32:59.624430 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph-external/secrets/rook-ceph-mon": context canceled
2024-05-20 12:32:59.624517 E | operator: failed to reconcile failed to stop device discovery daemonset: Delete "https://10.96.0.1:443/apis/apps/v1/namespaces/rook-ceph/daemonsets/rook-discover": context canceled
2024-05-20 12:32:59.624530 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:32:59.624543 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/replicapool". failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:32:59.624700 I | operator: successfully started the controller-runtime manager
2024-05-20 12:32:59.631347 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:32:59.631357 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:32:59.631370 I | operator: setting up schemes
2024-05-20 12:32:59.632958 I | operator: setting up the controller-runtime manager
2024-05-20 12:33:00.235785 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:33:00.242954 I | ceph-cluster-controller: successfully started
2024-05-20 12:33:00.242987 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:33:00.242996 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:33:00.242999 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:33:00.243003 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:33:00.243007 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:33:00.243015 I | ceph-block-pool-controller: successfully started
2024-05-20 12:33:00.243026 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:33:00.243037 I | ceph-object-realm-controller: successfully started
2024-05-20 12:33:00.243048 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:33:00.243057 I | ceph-object-zone-controller: successfully started
2024-05-20 12:33:00.243142 I | ceph-object-controller: successfully started
2024-05-20 12:33:00.243167 I | ceph-file-controller: successfully started
2024-05-20 12:33:00.243189 I | ceph-nfs-controller: successfully started
2024-05-20 12:33:00.243213 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:33:00.243236 I | ceph-client-controller: successfully started
2024-05-20 12:33:00.243253 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:33:00.243275 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:33:00.243292 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:33:00.243304 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:33:00.243315 I | ceph-bucket-topic: successfully started
2024-05-20 12:33:00.243322 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:00.243328 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:00.243334 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:33:00.243340 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:33:00.244095 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:00.244114 I | operator: starting the controller-runtime manager
2024-05-20 12:33:00.344631 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:33:00.345155 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:00.345207 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:00.345837 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:33:00.345860 D | ceph-cluster-controller: node watcher: node "ceph02m" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:00.345866 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:33:00.345907 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:00.345931 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:00.345957 D | ceph-cluster-controller: node watcher: node "ceph02o" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:00.346025 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:00.346035 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:00.346044 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:00.346062 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:00.346080 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:00.346099 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:33:00.346147 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:00.346202 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:00.346252 D | ceph-cluster-controller: node watcher: node "ceph02e" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:00.346315 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:00.346387 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:00.346454 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:00.346526 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:00.346566 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:00.346635 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:00.346710 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:00.347256 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:00.347279 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:00.347511 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:33:00.347528 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:33:00.347652 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:33:00.347708 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:33:00.347771 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:33:00.347813 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:33:00.347873 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:33:00.347912 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:33:00.347940 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:33:00.347970 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:33:00.348024 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:33:00.348073 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:33:00.348120 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:33:00.348150 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:33:00.348188 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:33:00.348224 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:33:00.348266 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:33:00.348338 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:33:00.348364 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:33:00.348388 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:33:00.348434 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:33:00.348475 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:33:00.348523 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:33:00.348566 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:33:00.348648 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:33:00.348685 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:33:00.348767 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:33:00.348795 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:33:00.535689 I | op-k8sutil: batch job rook-ceph-detect-version still exists
2024-05-20 12:33:00.547991 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:33:00.548043 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:33:00.548367 I | operator: successfully started the controller-runtime manager
2024-05-20 12:33:00.551272 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:33:00.551281 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:33:00.551295 I | operator: setting up schemes
2024-05-20 12:33:00.553112 I | operator: setting up the controller-runtime manager
2024-05-20 12:33:00.611771 I | op-k8sutil: Retrying 12 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2024-05-20 12:33:01.157221 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:33:01.165048 I | ceph-cluster-controller: successfully started
2024-05-20 12:33:01.165080 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:33:01.165217 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:33:01.165226 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:33:01.165233 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:33:01.165238 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:33:01.165249 I | ceph-block-pool-controller: successfully started
2024-05-20 12:33:01.165284 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:33:01.165303 I | ceph-object-realm-controller: successfully started
2024-05-20 12:33:01.165311 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:33:01.165318 I | ceph-object-zone-controller: successfully started
2024-05-20 12:33:01.165622 I | ceph-object-controller: successfully started
2024-05-20 12:33:01.165649 I | ceph-file-controller: successfully started
2024-05-20 12:33:01.165695 I | ceph-nfs-controller: successfully started
2024-05-20 12:33:01.165719 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:33:01.165734 I | ceph-client-controller: successfully started
2024-05-20 12:33:01.165746 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:33:01.165779 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:33:01.165796 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:33:01.165959 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:33:01.165970 I | ceph-bucket-topic: successfully started
2024-05-20 12:33:01.165978 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:01.165988 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:01.165997 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:33:01.166018 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:33:01.166832 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:01.166851 I | operator: starting the controller-runtime manager
2024-05-20 12:33:01.267648 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:01.267928 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:01.268487 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:01.268529 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:01.268584 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:33:01.268623 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:01.268660 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:01.268684 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:01.268713 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2024-05-20 12:33:01.268788 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:01.268855 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:01.268951 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:01.268992 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:01.269050 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:01.269087 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:01.269101 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:01.269110 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:01.269127 D | ceph-cluster-controller: node watcher: node "ceph02e" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:01.269142 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:01.269156 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:01.269180 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:01.269199 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:33:01.269234 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:01.269280 D | ceph-cluster-controller: node watcher: node "ceph02m" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:01.269290 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:33:01.269336 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:01.269347 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:33:01.269392 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:33:01.269440 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:33:01.269486 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:33:01.269518 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:33:01.269550 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:33:01.269578 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:33:01.269700 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:33:01.269720 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:33:01.269874 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:33:01.269915 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:33:01.269953 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:33:01.269967 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:33:01.270031 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:33:01.270061 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:33:01.270128 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:33:01.270150 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:33:01.270332 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:33:01.270354 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:33:01.270362 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:33:01.270368 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:33:01.270408 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:33:01.270722 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:33:01.270769 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:33:01.270817 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:33:01.270841 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:33:01.270879 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:33:01.270889 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:33:01.270898 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:33:01.468579 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:33:01.468637 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:33:01.468949 I | operator: successfully started the controller-runtime manager
2024-05-20 12:33:01.471710 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:33:01.471719 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:33:01.471743 I | operator: setting up schemes
2024-05-20 12:33:01.473747 I | operator: setting up the controller-runtime manager
2024-05-20 12:33:02.077001 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:33:02.095912 I | ceph-cluster-controller: successfully started
2024-05-20 12:33:02.095952 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:33:02.095962 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:33:02.095981 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:33:02.095985 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:33:02.095989 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:33:02.095996 I | ceph-block-pool-controller: successfully started
2024-05-20 12:33:02.096006 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:33:02.096014 I | ceph-object-realm-controller: successfully started
2024-05-20 12:33:02.096020 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:33:02.096026 I | ceph-object-zone-controller: successfully started
2024-05-20 12:33:02.096108 I | ceph-object-controller: successfully started
2024-05-20 12:33:02.096126 I | ceph-file-controller: successfully started
2024-05-20 12:33:02.096140 I | ceph-nfs-controller: successfully started
2024-05-20 12:33:02.096161 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:33:02.096176 I | ceph-client-controller: successfully started
2024-05-20 12:33:02.096185 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:33:02.096200 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:33:02.096210 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:33:02.096218 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:33:02.096233 I | ceph-bucket-topic: successfully started
2024-05-20 12:33:02.096244 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:02.096250 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:02.096255 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:33:02.096261 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:33:02.097380 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:02.097397 I | operator: starting the controller-runtime manager
2024-05-20 12:33:02.198274 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:02.198305 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:02.198875 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:02.198888 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:02.198944 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:02.198955 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:02.199035 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:02.199196 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:02.199232 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:02.199293 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:02.199315 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:33:02.199445 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:02.199462 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:02.199515 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:02.199589 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:02.199637 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:02.199673 D | ceph-cluster-controller: node watcher: node "ceph02m" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:02.199746 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:02.199804 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:33:02.199866 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:33:02.199894 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:02.199913 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:33:02.199921 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:33:02.199931 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:33:02.199981 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:33:02.200004 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:33:02.200027 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:33:02.200043 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:33:02.200060 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:33:02.200218 D | ceph-cluster-controller: node watcher: node "ceph02e" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:02.200301 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:02.200396 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:02.200601 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:33:02.200620 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:33:02.200630 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:02.200680 D | ceph-cluster-controller: node watcher: node "ceph02o" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:02.200690 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:33:02.200699 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:33:02.200736 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:33:02.200788 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:33:02.200809 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:33:02.200827 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:33:02.200917 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:33:02.200935 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:33:02.201070 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:33:02.201126 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:33:02.201140 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:33:02.201158 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:33:02.201184 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:33:02.201219 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:33:02.201233 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:33:02.201250 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:33:02.201259 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:33:02.201315 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:33:02.201321 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:33:02.403523 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:33:02.403585 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:33:02.404045 I | operator: successfully started the controller-runtime manager
2024-05-20 12:33:02.407696 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:33:02.407706 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:33:02.407717 I | operator: setting up schemes
2024-05-20 12:33:02.409409 I | operator: setting up the controller-runtime manager
2024-05-20 12:33:02.612650 I | op-k8sutil: Retrying 11 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2024-05-20 12:33:02.763997 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:02.764070 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:03.012225 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:33:03.019254 I | ceph-cluster-controller: successfully started
2024-05-20 12:33:03.019288 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:33:03.019296 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:33:03.019299 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:33:03.019303 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:33:03.019307 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:33:03.019331 I | ceph-block-pool-controller: successfully started
2024-05-20 12:33:03.019344 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:33:03.019354 I | ceph-object-realm-controller: successfully started
2024-05-20 12:33:03.019361 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:33:03.019366 I | ceph-object-zone-controller: successfully started
2024-05-20 12:33:03.019421 I | ceph-object-controller: successfully started
2024-05-20 12:33:03.019441 I | ceph-file-controller: successfully started
2024-05-20 12:33:03.019457 I | ceph-nfs-controller: successfully started
2024-05-20 12:33:03.019469 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:33:03.019481 I | ceph-client-controller: successfully started
2024-05-20 12:33:03.019489 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:33:03.019500 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:33:03.019511 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:33:03.019519 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:33:03.019526 I | ceph-bucket-topic: successfully started
2024-05-20 12:33:03.019532 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:03.019537 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:03.019543 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:33:03.019548 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:33:03.020318 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:03.020336 I | operator: starting the controller-runtime manager
2024-05-20 12:33:03.120741 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:33:03.120876 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:33:03.120935 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:33:03.120974 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:33:03.121035 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:33:03.121068 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:33:03.121089 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:33:03.121115 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:33:03.121163 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:33:03.121181 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:33:03.121213 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:33:03.121232 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:33:03.121238 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:33:03.121283 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:33:03.121294 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:33:03.121314 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:33:03.121365 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:33:03.121370 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:03.121386 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:03.121393 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:33:03.121403 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:33:03.121434 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:33:03.121479 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:33:03.121500 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:33:03.121512 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:33:03.121530 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:33:03.121553 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:33:03.121565 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:33:03.121594 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:33:03.121620 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:33:03.121637 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:03.121644 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:03.121652 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:33:03.121674 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:03.121704 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:33:03.121744 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:03.121783 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:33:03.121875 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:33:03.122002 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:03.122043 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2024-05-20 12:33:03.122079 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:03.122117 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:03.122129 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:03.122146 D | ceph-cluster-controller: node watcher: node "ceph02o" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:03.122177 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:03.122296 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:03.122340 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2024-05-20 12:33:03.122375 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:03.122393 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:03.122418 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:03.122435 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:03.122489 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:03.122547 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:03.122594 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:03.123163 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:03.325122 D | clusterdisruption-controller: reconciling "rook-ceph-external/rook-ceph-external"
2024-05-20 12:33:03.325209 D | clusterdisruption-controller: reconciling "rook-ceph/rook-ceph"
2024-05-20 12:33:03.326425 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph-external"
2024-05-20 12:33:03.326448 D | ceph-spec: CephCluster "rook-ceph-external" status: "Connecting". "Attempting to connect to an external Ceph cluster"
2024-05-20 12:33:03.326503 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:33:03.326512 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:33:03.326949 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (default)
2024-05-20 12:33:03.326960 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:33:03.326967 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:33:03.327068 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:33:03.327263 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:33:03.327438 E | clusterdisruption-controller: failed to delete all the legacy drain-canary pods with label "rook-ceph-drain-canary": Delete "https://10.96.0.1:443/apis/apps/v1/namespaces/rook-ceph/deployments?labelSelector=app%3Drook-ceph-drain-canary": context canceled
2024-05-20 12:33:03.327496 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph-external/secrets/rook-ceph-mon": context canceled
2024-05-20 12:33:03.327567 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:33:03.327584 D | clusterdisruption-controller: reconciling "rook-ceph/"
2024-05-20 12:33:03.327589 W | ceph-csi: could not find deployment owner reference to assign to csi drivers. could not find pod "rook-ceph-operator-6bc54d9b6f-thxtc" in namespace "rook-ceph" to find deployment owner reference: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/pods/rook-ceph-operator-6bc54d9b6f-thxtc": context canceled
2024-05-20 12:33:03.327654 E | clusterdisruption-controller: failed to delete all the legacy drain-canary pods with label "rook-ceph-drain-canary": context canceled
2024-05-20 12:33:03.327679 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/replicapool". failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:33:03.328029 E | ceph-csi: failed to reconcile failed creating csi config map: failed to create initial csi config map "rook-ceph-csi-config" (in "rook-ceph"): context canceled
2024-05-20 12:33:03.333908 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:33:03.333917 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:33:03.333934 I | operator: setting up schemes
2024-05-20 12:33:03.335837 I | operator: setting up the controller-runtime manager
2024-05-20 12:33:03.337205 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:03.337216 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:03.337487 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:03.337494 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:03.338002 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:03.338012 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:03.339017 D | ceph-spec: found existing monitor secrets for cluster rook-ceph-external
2024-05-20 12:33:03.343519 I | ceph-spec: parsing mon endpoints: prceph-mon02=10.11.10.30:6789,prceph-mon03=10.11.10.93:6789,prceph-mon01=10.11.10.190:6789
2024-05-20 12:33:03.343542 D | ceph-spec: loaded: maxMonID=2, mons=map[prceph-mon01:0xc03a2528c0 prceph-mon02:0xc03a252840 prceph-mon03:0xc03a252880], assignment=&{Schedule:map[]}
2024-05-20 12:33:03.343549 I | ceph-spec: found the cluster info to connect to the external cluster. will use "client.admin" to check health and monitor status. mons=map[prceph-mon01:0xc03a2528c0 prceph-mon02:0xc03a252840 prceph-mon03:0xc03a252880]
2024-05-20 12:33:03.345648 I | cephclient: writing config file /var/lib/rook/rook-ceph-external/rook-ceph-external.config
2024-05-20 12:33:03.345728 I | cephclient: generated admin config in /var/lib/rook/rook-ceph-external
2024-05-20 12:33:03.345736 I | ceph-cluster-controller: external cluster identity established
2024-05-20 12:33:03.345746 I | cephclient: getting or creating ceph auth key "client.csi-rbd-provisioner"
2024-05-20 12:33:03.345754 D | exec: Running command: ceph auth get-or-create-key client.csi-rbd-provisioner mon profile rbd mgr allow rw osd profile rbd --connect-timeout=15 --cluster=rook-ceph-external --conf=/var/lib/rook/rook-ceph-external/rook-ceph-external.config --name=client.admin --keyring=/var/lib/rook/rook-ceph-external/client.admin.keyring --format json
2024-05-20 12:33:03.536734 I | op-k8sutil: batch job rook-ceph-detect-version still exists
2024-05-20 12:33:03.889488 I | cephclient: getting or creating ceph auth key "client.csi-rbd-node"
2024-05-20 12:33:03.889514 D | exec: Running command: ceph auth get-or-create-key client.csi-rbd-node mon profile rbd mgr allow rw osd profile rbd --connect-timeout=15 --cluster=rook-ceph-external --conf=/var/lib/rook/rook-ceph-external/rook-ceph-external.config --name=client.admin --keyring=/var/lib/rook/rook-ceph-external/client.admin.keyring --format json
2024-05-20 12:33:03.938224 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:33:03.946503 I | ceph-cluster-controller: successfully started
2024-05-20 12:33:03.946537 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:33:03.946545 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:33:03.946548 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:33:03.946551 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:33:03.946556 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:33:03.946564 I | ceph-block-pool-controller: successfully started
2024-05-20 12:33:03.946573 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:33:03.946582 I | ceph-object-realm-controller: successfully started
2024-05-20 12:33:03.946587 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:33:03.946595 I | ceph-object-zone-controller: successfully started
2024-05-20 12:33:03.946664 I | ceph-object-controller: successfully started
2024-05-20 12:33:03.946704 I | ceph-file-controller: successfully started
2024-05-20 12:33:03.946724 I | ceph-nfs-controller: successfully started
2024-05-20 12:33:03.946742 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:33:03.946757 I | ceph-client-controller: successfully started
2024-05-20 12:33:03.946773 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:33:03.946796 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:33:03.946809 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:33:03.946851 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:33:03.946865 I | ceph-bucket-topic: successfully started
2024-05-20 12:33:03.946874 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:03.946882 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:03.946891 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:33:03.946900 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:33:03.947711 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:03.947743 I | operator: starting the controller-runtime manager
2024-05-20 12:33:04.048145 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:04.048159 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:04.048206 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:04.048211 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:04.048441 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:04.048490 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:04.048532 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:04.048580 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:04.048623 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:04.048667 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:04.048687 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:33:04.048707 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2024-05-20 12:33:04.048742 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:04.048787 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:33:04.048797 D | ceph-cluster-controller: node watcher: node "ceph02m" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:04.048805 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:33:04.048844 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:04.048884 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:04.048925 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:04.048969 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:33:04.048986 D | ceph-cluster-controller: node watcher: node "ceph02o" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:04.049021 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:33:04.049038 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:33:04.049048 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:33:04.049057 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:33:04.049076 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:33:04.049093 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:33:04.049104 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:33:04.049120 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:33:04.049131 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:33:04.049151 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:33:04.049173 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:33:04.049187 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:33:04.049227 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:33:04.049250 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:33:04.049264 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:33:04.049286 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:33:04.049344 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:33:04.049392 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:33:04.049459 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:33:04.049503 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:33:04.049548 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:33:04.049594 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:33:04.049662 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:33:04.049737 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:33:04.049753 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:33:04.049767 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:33:04.049787 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:33:04.050618 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:33:04.051164 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:04.051326 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:04.051960 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:04.051992 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:04.052926 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:04.052942 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:04.250440 D | ceph-crashcollector-controller: reconciling node: "ceph02a"
2024-05-20 12:33:04.250478 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph-external"
2024-05-20 12:33:04.250498 D | ceph-spec: CephCluster "rook-ceph-external" status: "Connecting". "Attempting to connect to an external Ceph cluster"
2024-05-20 12:33:04.250723 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:33:04.250735 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:33:04.253067 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.253250 D | ceph-spec: "ceph-file-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:33:04.253262 D | ceph-spec: "ceph-file-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:33:04.255385 D | operator: reconciling rook-ceph/rook-ceph-operator-config
2024-05-20 12:33:04.255409 I | op-k8sutil: ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS="15" (configmap)
2024-05-20 12:33:04.255423 I | op-k8sutil: ROOK_LOG_LEVEL="DEBUG" (configmap)
2024-05-20 12:33:04.255429 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (configmap)
2024-05-20 12:33:04.255439 D | clusterdisruption-controller: reconciling "rook-ceph-external/rook-ceph-external"
2024-05-20 12:33:04.255494 D | clusterdisruption-controller: reconciling "rook-ceph/rook-ceph"
2024-05-20 12:33:04.255794 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (default)
2024-05-20 12:33:04.255807 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:33:04.255823 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:33:04.255916 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:33:04.255996 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:33:04.256051 E | operator: failed to reconcile failed to stop device discovery daemonset: Delete "https://10.96.0.1:443/apis/apps/v1/namespaces/rook-ceph/daemonsets/rook-discover": context canceled
2024-05-20 12:33:04.256065 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Put "https://10.96.0.1:443/apis/apps/v1/namespaces/rook-ceph/deployments/rook-ceph-crashcollector-ceph02a": context canceled
2024-05-20 12:33:04.256094 D | ceph-crashcollector-controller: reconciling node: "ceph02d"
2024-05-20 12:33:04.256103 E | ceph-file-controller: failed to reconcile failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:33:04.256119 W | ceph-csi: could not find deployment owner reference to assign to csi drivers. could not find pod "rook-ceph-operator-6bc54d9b6f-thxtc" in namespace "rook-ceph" to find deployment owner reference: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/pods/rook-ceph-operator-6bc54d9b6f-thxtc": context canceled
2024-05-20 12:33:04.256126 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:33:04.256176 E | ceph-csi: failed to reconcile failed creating csi config map: failed to create initial csi config map "rook-ceph-csi-config" (in "rook-ceph"): context canceled
2024-05-20 12:33:04.256187 E | ceph-file-controller: failed to reconcile CephFilesystem "rook-ceph/myfs". failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:33:04.256206 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/replicapool". failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:33:04.256221 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:33:04.256258 E | clusterdisruption-controller: failed to delete all the legacy drain-canary pods with label "rook-ceph-drain-canary": Delete "https://10.96.0.1:443/apis/apps/v1/namespaces/rook-ceph/deployments?labelSelector=app%3Drook-ceph-drain-canary": context canceled
2024-05-20 12:33:04.256294 D | clusterdisruption-controller: reconciling "rook-ceph/"
2024-05-20 12:33:04.256391 E | clusterdisruption-controller: failed to delete all the legacy drain-canary pods with label "rook-ceph-drain-canary": context canceled
2024-05-20 12:33:04.257491 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.258723 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:04.258758 D | ceph-crashcollector-controller: reconciling node: "ceph02b"
2024-05-20 12:33:04.259640 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.260800 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:04.260825 D | ceph-crashcollector-controller: reconciling node: "ceph02c"
2024-05-20 12:33:04.261852 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.263106 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:04.263126 D | ceph-crashcollector-controller: reconciling node: "ceph02o"
2024-05-20 12:33:04.264002 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.265428 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:04.265441 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:04.265580 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:04.265602 D | ceph-crashcollector-controller: reconciling node: "ceph02e"
2024-05-20 12:33:04.265761 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:04.265771 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:04.265881 D | ceph-spec: CephCluster "rook-ceph-external" status: "Progressing". "failed to populate external cluster info: context canceled"
2024-05-20 12:33:04.266003 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:04.266010 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:04.266562 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:33:04.266571 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:33:04.266584 I | operator: setting up schemes
2024-05-20 12:33:04.266733 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.267420 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:04.267431 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:04.268358 I | operator: setting up the controller-runtime manager
2024-05-20 12:33:04.268496 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:04.268520 D | ceph-crashcollector-controller: reconciling node: "ceph02m"
2024-05-20 12:33:04.269553 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.270818 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:04.270847 E | ceph-spec: failed to update cluster condition to {Type:Progressing Status:False Reason:ClusterProgressing Message:failed to populate external cluster info: context canceled LastHeartbeatTime:2024-05-20 12:33:04.265876858 +0000 UTC m=+25.601042003 LastTransitionTime:2024-05-20 12:33:04.265876775 +0000 UTC m=+25.601041927}. failed to update object "rook-ceph-external/rook-ceph-external" status: Operation cannot be fulfilled on cephclusters.ceph.rook.io "rook-ceph-external": the object has been modified; please apply your changes to the latest version and try again
2024-05-20 12:33:04.270855 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:33:04.270884 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph-external/rook-ceph-external"
2024-05-20 12:33:04.270943 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2024-05-20 12:33:04.271004 D | ceph-crashcollector-controller: reconciling node: "worker02g"
2024-05-20 12:33:04.272435 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.273653 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:04.273753 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:04.273775 D | ceph-crashcollector-controller: reconciling node: "worker02a"
2024-05-20 12:33:04.274697 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.275893 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:04.275929 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:04.275948 D | ceph-crashcollector-controller: reconciling node: "worker02c"
2024-05-20 12:33:04.277782 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.278625 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:04.278659 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:04.278674 D | ceph-crashcollector-controller: reconciling node: "master02a"
2024-05-20 12:33:04.279492 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2024-05-20 12:33:04.279606 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.283265 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:04.283307 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:04.283326 D | ceph-crashcollector-controller: reconciling node: "worker02b"
2024-05-20 12:33:04.283376 I | ceph-spec: parsing mon endpoints: e=10.102.64.224:6789,h=10.109.166.21:6789,i=10.101.141.73:6789
2024-05-20 12:33:04.283411 D | ceph-spec: loaded: maxMonID=8, mons=map[e:0xc04168a060 h:0xc04168a0a0 i:0xc04168a0e0], assignment=&{Schedule:map[e:0xc045c3cc40 h:0xc045c3cc80 i:0xc045c3cd00]}
2024-05-20 12:33:04.289210 I | ceph-cluster-controller: enabling ceph mon monitoring goroutine for cluster "rook-ceph"
2024-05-20 12:33:04.289229 I | op-osd: ceph osd status in namespace "rook-ceph" check interval "1m0s"
2024-05-20 12:33:04.289232 I | ceph-cluster-controller: enabling ceph osd monitoring goroutine for cluster "rook-ceph"
2024-05-20 12:33:04.289237 I | ceph-cluster-controller: ceph status check interval is 1m0s
2024-05-20 12:33:04.289240 I | ceph-cluster-controller: enabling ceph status monitoring goroutine for cluster "rook-ceph"
2024-05-20 12:33:04.289269 D | ceph-cluster-controller: checking health of cluster
2024-05-20 12:33:04.289286 D | exec: Running command: ceph status --format json --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring
2024-05-20 12:33:04.289474 D | op-mon: ceph mon status in namespace "rook-ceph" check interval "45s"
2024-05-20 12:33:04.289487 I | op-mon: stopping monitoring of mons in namespace "rook-ceph"
2024-05-20 12:33:04.290762 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.291945 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:04.291983 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:04.292007 D | ceph-crashcollector-controller: reconciling node: "worker02h"
2024-05-20 12:33:04.293016 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.293896 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:04.293934 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:04.293956 D | ceph-crashcollector-controller: reconciling node: "worker02d"
2024-05-20 12:33:04.294853 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.295697 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:04.295742 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:04.295769 D | ceph-crashcollector-controller: reconciling node: "worker02o"
2024-05-20 12:33:04.296685 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.299844 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:04.299881 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:04.299902 D | ceph-crashcollector-controller: reconciling node: "worker02n"
2024-05-20 12:33:04.300768 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.301597 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:04.301630 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:04.301650 D | ceph-crashcollector-controller: reconciling node: "worker02f"
2024-05-20 12:33:04.302583 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.303364 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:04.303401 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:04.303420 D | ceph-crashcollector-controller: reconciling node: "ceph02n"
2024-05-20 12:33:04.304341 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.305489 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:04.305532 D | ceph-crashcollector-controller: reconciling node: "worker02k"
2024-05-20 12:33:04.306394 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.307240 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:04.307272 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:04.307289 D | ceph-crashcollector-controller: reconciling node: "worker02p"
2024-05-20 12:33:04.308203 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.309444 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:04.309478 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:04.309497 D | ceph-crashcollector-controller: reconciling node: "worker02l"
2024-05-20 12:33:04.310414 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.311466 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:04.311509 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:04.311536 D | ceph-crashcollector-controller: reconciling node: "worker02r"
2024-05-20 12:33:04.312531 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.313634 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:04.313672 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:04.313696 D | ceph-crashcollector-controller: reconciling node: "master02c"
2024-05-20 12:33:04.314566 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.315675 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:04.315710 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:04.315749 D | ceph-crashcollector-controller: reconciling node: "master02b"
2024-05-20 12:33:04.316617 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.317605 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:04.317641 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:04.317662 D | ceph-crashcollector-controller: reconciling node: "worker02m"
2024-05-20 12:33:04.318535 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.319687 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:04.319722 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:04.319757 D | ceph-crashcollector-controller: reconciling node: "worker02e"
2024-05-20 12:33:04.320734 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.321611 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:04.321641 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:04.321659 D | ceph-crashcollector-controller: reconciling node: "worker02q"
2024-05-20 12:33:04.322542 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:04.323353 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:04.323393 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:04.356046 D | ceph-cluster-controller: cluster spec successfully validated
2024-05-20 12:33:04.356099 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "Detecting Ceph version"
2024-05-20 12:33:04.368765 I | ceph-spec: detecting the ceph image version for image quay.io/ceph/ceph:v15.2.15...
2024-05-20 12:33:04.368899 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:04.368912 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:04.368975 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:04.368985 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:04.369216 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:04.369226 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:04.369424 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:04.369434 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:04.472716 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-provisioner"
2024-05-20 12:33:04.472749 I | op-cfg-keyring: Error getting or creating key for "client.csi-cephfs-provisioner". Attempting to update capabilities in case the user already exists. failed get-or-create-key client.csi-cephfs-provisioner: context canceled
2024-05-20 12:33:04.472758 I | cephclient: updating ceph auth caps "client.csi-cephfs-provisioner" to [mon allow r mgr allow rw osd allow rw tag cephfs metadata=*]
2024-05-20 12:33:04.472834 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "failed to create csi kubernetes secrets: failed to create csi cephfs provisioner ceph keyring: failed to get, create, or update auth key for client.csi-cephfs-provisioner: failed get-or-create-key client.csi-cephfs-provisioner: context canceled"
2024-05-20 12:33:04.484804 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:33:04.485419 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph-external/rook-ceph-external"
2024-05-20 12:33:04.486196 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2024-05-20 12:33:04.488022 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:04.495765 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:04.495968 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:04.495984 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:04.496054 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:04.496067 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:04.496140 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:04.496149 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:04.556545 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:04.557140 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:04.557600 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:04.557990 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:04.558376 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:04.558798 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:04.559164 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:04.559541 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:04.559969 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:04.560388 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:04.560771 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:04.561338 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:04.561752 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:04.562111 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:04.562533 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:04.562944 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:04.612959 I | op-k8sutil: Retrying 10 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2024-05-20 12:33:04.728687 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2024-05-20 12:33:04.808676 D | ceph-cluster-controller: cluster status: {Health:{Status:HEALTH_WARN Checks:map[MDS_CACHE_OVERSIZED:{Severity:HEALTH_WARN Summary:{Message:1 MDSs report oversized cache}}]} FSID:a72c4707-301f-4acd-8007-41af0a11a860 ElectionEpoch:1280 Quorum:[0 1 2] QuorumNames:[e h i] MonMap:{Epoch:15 FSID: CreatedTime: ModifiedTime: Mons:[]} OsdMap:{OsdMap:{Epoch:0 NumOsd:0 NumUpOsd:0 NumInOsd:0 Full:false NearFull:false NumRemappedPgs:0}} PgMap:{PgsByState:[{StateName:active+clean Count:177}] Version:0 NumPgs:177 DataBytes:877856129506 UsedBytes:3049003331584 AvailableBytes:2104923869184 TotalBytes:5153927200768 ReadBps:1507996 WriteBps:4440558 ReadOps:46 WriteOps:169 RecoveryBps:0 RecoveryObjectsPerSec:0 RecoveryKeysPerSec:0 CacheFlushBps:0 CacheEvictBps:0 CachePromoteBps:0} MgrMap:{Epoch:0 ActiveGID:0 ActiveName: ActiveAddr: Available:true Standbys:[]} Fsmap:{Epoch:307807 ID:1 Up:1 In:1 Max:1 ByRank:[{FilesystemID:1 Rank:0 Name:myfs-a Status:up:active Gid:51498894} {FilesystemID:1 Rank:0 Name:myfs-b Status:up:standby-replay Gid:51918032}] UpStandby:0}}
2024-05-20 12:33:04.814854 D | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2024-05-20 12:33:04.871712 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:33:04.879429 I | ceph-cluster-controller: successfully started
2024-05-20 12:33:04.879465 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:33:04.879476 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:33:04.879481 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:33:04.879486 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:33:04.879492 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:33:04.879502 I | ceph-block-pool-controller: successfully started
2024-05-20 12:33:04.879517 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:33:04.879531 I | ceph-object-realm-controller: successfully started
2024-05-20 12:33:04.879544 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:33:04.879555 I | ceph-object-zone-controller: successfully started
2024-05-20 12:33:04.879632 I | ceph-object-controller: successfully started
2024-05-20 12:33:04.879657 I | ceph-file-controller: successfully started
2024-05-20 12:33:04.879680 I | ceph-nfs-controller: successfully started
2024-05-20 12:33:04.879700 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:33:04.879716 I | ceph-client-controller: successfully started
2024-05-20 12:33:04.879740 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:33:04.879761 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:33:04.879775 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:33:04.879805 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:33:04.879821 I | ceph-bucket-topic: successfully started
2024-05-20 12:33:04.879830 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:04.879839 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:04.879850 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:33:04.879861 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:33:04.880927 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:04.880957 I | operator: starting the controller-runtime manager
2024-05-20 12:33:04.930129 D | op-k8sutil: ConfigMap rook-ceph-detect-version was deleted after 0 retries every 2ns seconds
2024-05-20 12:33:04.937464 I | op-k8sutil: Removing previous job rook-ceph-detect-version to start a new one
2024-05-20 12:33:04.960888 I | op-k8sutil: batch job rook-ceph-detect-version still exists
2024-05-20 12:33:04.989799 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:04.989815 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:04.989847 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:04.989851 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:04.990102 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:33:04.990167 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:33:04.990202 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:33:04.990208 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:04.990229 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:33:04.990244 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:33:04.990250 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:33:04.990267 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:33:04.990314 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:33:04.990333 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:33:04.990340 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:33:04.990354 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:33:04.990366 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:33:04.990392 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:33:04.990411 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:33:04.990440 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:33:04.990487 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:33:04.990519 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:33:04.990533 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:33:04.990579 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:33:04.990587 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:33:04.990621 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:33:04.990632 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:33:04.990641 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:33:04.990643 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:04.990672 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:04.990741 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:04.990788 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:04.990828 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:04.990873 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:04.990915 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:04.990942 D | ceph-cluster-controller: node watcher: node "ceph02m" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:04.991027 D | ceph-cluster-controller: node watcher: node "ceph02o" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:04.991061 D | ceph-cluster-controller: node watcher: node "ceph02e" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:04.991087 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:33:04.991123 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:04.991151 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:04.991159 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:33:04.991207 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:33:04.991246 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:33:04.991281 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:33:04.991290 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:33:04.991306 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:33:04.991323 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:33:04.991348 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:33:04.991927 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:04.991969 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:04.992314 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:04.992349 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:04.992822 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:04.992842 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:05.128006 I | ceph-spec: parsing mon endpoints: e=10.102.64.224:6789,h=10.109.166.21:6789,i=10.101.141.73:6789
2024-05-20 12:33:05.128043 D | ceph-spec: loaded: maxMonID=8, mons=map[e:0xc04416a520 h:0xc04416a560 i:0xc04416a5a0], assignment=&{Schedule:map[e:0xc04415c9c0 h:0xc04415ca00 i:0xc04415ca40]}
2024-05-20 12:33:05.133828 I | ceph-cluster-controller: enabling ceph mon monitoring goroutine for cluster "rook-ceph"
2024-05-20 12:33:05.133848 D | ceph-cluster-controller: monitoring routine for "osd" is already running
2024-05-20 12:33:05.133861 D | ceph-cluster-controller: monitoring routine for "status" is already running
2024-05-20 12:33:05.133902 D | op-mon: ceph mon status in namespace "rook-ceph" check interval "45s"
2024-05-20 12:33:05.133908 I | op-mon: stopping monitoring of mons in namespace "rook-ceph"
2024-05-20 12:33:05.195991 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:33:05.196008 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:33:05.196071 D | ceph-crashcollector-controller: reconciling node: "ceph02e"
2024-05-20 12:33:05.198459 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:33:05.198525 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:33:05.198583 I | op-osd: stopping monitoring of OSDs in namespace "rook-ceph"
2024-05-20 12:33:05.198593 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.198656 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:33:05.198714 D | clusterdisruption-controller: reconciling "rook-ceph-external/rook-ceph-external"
2024-05-20 12:33:05.198764 D | clusterdisruption-controller: reconciling "rook-ceph/rook-ceph"
2024-05-20 12:33:05.198853 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:33:05.198899 E | clusterdisruption-controller: failed to delete all the legacy drain-canary pods with label "rook-ceph-drain-canary": context canceled
2024-05-20 12:33:05.198927 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/replicapool". failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:33:05.198932 D | clusterdisruption-controller: reconciling "rook-ceph/"
2024-05-20 12:33:05.198998 E | clusterdisruption-controller: failed to delete all the legacy drain-canary pods with label "rook-ceph-drain-canary": context canceled
2024-05-20 12:33:05.199044 D | ceph-cluster-controller: cluster spec successfully validated
2024-05-20 12:33:05.199078 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "Detecting Ceph version"
2024-05-20 12:33:05.200374 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:05.200398 D | ceph-crashcollector-controller: reconciling node: "ceph02b"
2024-05-20 12:33:05.201400 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.202632 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:05.202654 D | ceph-crashcollector-controller: reconciling node: "ceph02m"
2024-05-20 12:33:05.203582 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.204841 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:05.204861 D | ceph-crashcollector-controller: reconciling node: "ceph02n"
2024-05-20 12:33:05.205901 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.207072 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:05.207093 D | ceph-crashcollector-controller: reconciling node: "ceph02o"
2024-05-20 12:33:05.207934 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.209087 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:05.209106 D | ceph-crashcollector-controller: reconciling node: "ceph02a"
2024-05-20 12:33:05.209864 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.210953 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:05.210972 D | ceph-crashcollector-controller: reconciling node: "ceph02c"
2024-05-20 12:33:05.212009 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.213075 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:05.213094 D | ceph-crashcollector-controller: reconciling node: "ceph02d"
2024-05-20 12:33:05.213860 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.214936 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:05.214956 D | ceph-crashcollector-controller: reconciling node: "master02a"
2024-05-20 12:33:05.215789 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.216766 I | ceph-spec: detecting the ceph image version for image quay.io/ceph/ceph:v15.2.15...
2024-05-20 12:33:05.216906 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "failed the ceph version check: failed to complete ceph version job: failed to run CmdReporter rook-ceph-detect-version successfully. failed to delete existing results ConfigMap rook-ceph-detect-version. failed to delete ConfigMap rook-ceph-detect-version. context canceled"
2024-05-20 12:33:05.217150 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:05.217284 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:05.217294 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:05.217326 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:05.217375 D | ceph-crashcollector-controller: reconciling node: "worker02r"
2024-05-20 12:33:05.217744 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:05.217755 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:05.217817 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:05.217864 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:05.217952 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:05.217961 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:05.218273 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:05.218282 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:05.219343 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.221667 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:05.221700 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:05.221716 D | ceph-crashcollector-controller: reconciling node: "worker02h"
2024-05-20 12:33:05.222878 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.223669 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:05.223705 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:05.223754 D | ceph-crashcollector-controller: reconciling node: "master02b"
2024-05-20 12:33:05.224542 E | ceph-spec: failed to update cluster condition to {Type:Progressing Status:False Reason:ClusterProgressing Message:failed the ceph version check: failed to complete ceph version job: failed to run CmdReporter rook-ceph-detect-version successfully. failed to delete existing results ConfigMap rook-ceph-detect-version. failed to delete ConfigMap rook-ceph-detect-version. context canceled LastHeartbeatTime:2024-05-20 12:33:05.216897687 +0000 UTC m=+26.552062837 LastTransitionTime:2024-05-20 12:33:05.216897598 +0000 UTC m=+26.552062749}. failed to update object "rook-ceph/rook-ceph" status: Operation cannot be fulfilled on cephclusters.ceph.rook.io "rook-ceph": the object has been modified; please apply your changes to the latest version and try again
2024-05-20 12:33:05.224554 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:33:05.224576 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph/rook-ceph"
2024-05-20 12:33:05.224733 I | operator: successfully started the controller-runtime manager
2024-05-20 12:33:05.225356 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.227150 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:05.227182 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:05.227219 D | ceph-crashcollector-controller: reconciling node: "worker02c"
2024-05-20 12:33:05.228592 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.229403 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:05.229431 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:05.229445 D | ceph-crashcollector-controller: reconciling node: "worker02f"
2024-05-20 12:33:05.230562 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.231318 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:05.231347 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:05.231403 D | ceph-crashcollector-controller: reconciling node: "worker02g"
2024-05-20 12:33:05.232617 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.233497 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:05.233524 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:05.233545 D | ceph-crashcollector-controller: reconciling node: "worker02q"
2024-05-20 12:33:05.234542 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.235439 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:05.235467 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:05.235482 D | ceph-crashcollector-controller: reconciling node: "worker02n"
2024-05-20 12:33:05.237432 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.238190 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:05.238218 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:05.238232 D | ceph-crashcollector-controller: reconciling node: "worker02l"
2024-05-20 12:33:05.239300 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.240117 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:05.240155 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:05.240169 D | ceph-crashcollector-controller: reconciling node: "worker02d"
2024-05-20 12:33:05.241208 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.241930 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:05.241960 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:05.241973 D | ceph-crashcollector-controller: reconciling node: "worker02m"
2024-05-20 12:33:05.243081 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.244016 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:05.244043 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:05.244215 D | ceph-crashcollector-controller: reconciling node: "worker02k"
2024-05-20 12:33:05.245609 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.246377 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:05.246406 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:05.246422 D | ceph-crashcollector-controller: reconciling node: "worker02e"
2024-05-20 12:33:05.247494 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.248532 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:05.248559 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:05.248572 D | ceph-crashcollector-controller: reconciling node: "worker02o"
2024-05-20 12:33:05.249686 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.250593 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:05.250623 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:05.250636 D | ceph-crashcollector-controller: reconciling node: "worker02a"
2024-05-20 12:33:05.252099 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.253069 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:05.253099 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:05.253117 D | ceph-crashcollector-controller: reconciling node: "master02c"
2024-05-20 12:33:05.254247 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.255122 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:05.255157 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:05.255271 D | ceph-crashcollector-controller: reconciling node: "worker02p"
2024-05-20 12:33:05.256439 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.257346 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:05.257374 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:05.257388 D | ceph-crashcollector-controller: reconciling node: "worker02b"
2024-05-20 12:33:05.258459 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:05.259274 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:05.259309 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:05.259614 I | operator: successfully started the controller-runtime manager
2024-05-20 12:33:05.374764 D | cephclient: {"mon":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":3},"mgr":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":1},"osd":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":8},"mds":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":1},"overall":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":13}}
2024-05-20 12:33:05.374840 D | cephclient: {"mon":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":3},"mgr":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":1},"osd":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":8},"mds":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":1},"overall":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":13}}
2024-05-20 12:33:05.374931 D | ceph-cluster-controller: updating ceph cluster "rook-ceph" status and condition to &{Health:{Status:HEALTH_WARN Checks:map[MDS_CACHE_OVERSIZED:{Severity:HEALTH_WARN Summary:{Message:1 MDSs report oversized cache}}]} FSID:a72c4707-301f-4acd-8007-41af0a11a860 ElectionEpoch:1280 Quorum:[0 1 2] QuorumNames:[e h i] MonMap:{Epoch:15 FSID: CreatedTime: ModifiedTime: Mons:[]} OsdMap:{OsdMap:{Epoch:0 NumOsd:0 NumUpOsd:0 NumInOsd:0 Full:false NearFull:false NumRemappedPgs:0}} PgMap:{PgsByState:[{StateName:active+clean Count:177}] Version:0 NumPgs:177 DataBytes:877856129506 UsedBytes:3049003331584 AvailableBytes:2104923869184 TotalBytes:5153927200768 ReadBps:1507996 WriteBps:4440558 ReadOps:46 WriteOps:169 RecoveryBps:0 RecoveryObjectsPerSec:0 RecoveryKeysPerSec:0 CacheFlushBps:0 CacheEvictBps:0 CachePromoteBps:0} MgrMap:{Epoch:0 ActiveGID:0 ActiveName: ActiveAddr: Available:true Standbys:[]} Fsmap:{Epoch:307807 ID:1 Up:1 In:1 Max:1 ByRank:[{FilesystemID:1 Rank:0 Name:myfs-a Status:up:active Gid:51498894} {FilesystemID:1 Rank:0 Name:myfs-b Status:up:standby-replay Gid:51918032}] UpStandby:0}}, True, ClusterCreated, Cluster created successfully
2024-05-20 12:33:05.374966 D | ceph-spec: CephCluster "rook-ceph" status: "Ready". "Cluster created successfully"
2024-05-20 12:33:05.387984 E | ceph-spec: failed to update cluster condition to {Type:Ready Status:True Reason:ClusterCreated Message:Cluster created successfully LastHeartbeatTime:2024-05-20 12:33:05.374963287 +0000 UTC m=+26.710128426 LastTransitionTime:2024-05-09 11:32:32 +0000 UTC}. failed to update object "rook-ceph/rook-ceph" status: Operation cannot be fulfilled on cephclusters.ceph.rook.io "rook-ceph": the object has been modified; please apply your changes to the latest version and try again
2024-05-20 12:33:05.387998 D | ceph-cluster-controller: checking for stuck pods on not ready nodes
2024-05-20 12:33:05.388038 E | ceph-cluster-controller: failed to delete pod on not ready nodes. failed to get NotReady nodes: failed to list kubernetes nodes. context canceled
2024-05-20 12:33:05.388045 D | ceph-cluster-controller: Health: "HEALTH_WARN", code: "MDS_CACHE_OVERSIZED", message: "1 MDSs report oversized cache"
2024-05-20 12:33:05.388054 I | ceph-cluster-controller: stopping monitoring of ceph status
2024-05-20 12:33:05.729631 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:33:05.729643 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:33:05.729658 I | operator: setting up schemes
2024-05-20 12:33:05.731569 I | operator: setting up the controller-runtime manager
2024-05-20 12:33:06.335237 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:33:06.343974 I | ceph-cluster-controller: successfully started
2024-05-20 12:33:06.344407 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:33:06.344861 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:33:06.345180 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:33:06.345513 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:33:06.345830 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:33:06.346140 I | ceph-block-pool-controller: successfully started
2024-05-20 12:33:06.346638 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:33:06.346956 I | ceph-object-realm-controller: successfully started
2024-05-20 12:33:06.347264 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:33:06.347567 I | ceph-object-zone-controller: successfully started
2024-05-20 12:33:06.347941 I | ceph-object-controller: successfully started
2024-05-20 12:33:06.348290 I | ceph-file-controller: successfully started
2024-05-20 12:33:06.348623 I | ceph-nfs-controller: successfully started
2024-05-20 12:33:06.348960 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:33:06.349290 I | ceph-client-controller: successfully started
2024-05-20 12:33:06.349620 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:33:06.349946 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:33:06.350269 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:33:06.350590 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:33:06.350912 I | ceph-bucket-topic: successfully started
2024-05-20 12:33:06.351230 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:06.351548 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:06.351928 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:33:06.352259 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:33:06.353315 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:06.353332 I | operator: starting the controller-runtime manager
2024-05-20 12:33:06.453772 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:06.453792 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:06.453901 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:06.453959 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:06.453969 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:06.454017 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:06.454030 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:06.454044 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:06.454054 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:06.454169 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:06.454208 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:06.454437 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:06.454492 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:06.454536 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:06.454572 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:06.454601 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:06.454631 D | ceph-cluster-controller: node watcher: node "ceph02e" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:06.454749 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:33:06.454847 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:06.455133 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:06.455218 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:33:06.455266 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:33:06.455295 D | ceph-cluster-controller: node watcher: node "ceph02o" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:06.455380 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:33:06.455411 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:06.455419 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:06.455425 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:33:06.455501 D | ceph-cluster-controller: node watcher: node "ceph02m" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:06.455543 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:33:06.455596 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:33:06.455613 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:33:06.455620 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:33:06.455631 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:33:06.455674 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:33:06.455701 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:33:06.455723 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:33:06.455768 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:33:06.455796 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:33:06.455821 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:33:06.455857 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:33:06.455882 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:33:06.455889 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:33:06.455905 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:33:06.455922 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:33:06.455944 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:33:06.455959 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:33:06.455993 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:33:06.456033 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:33:06.456118 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:33:06.456135 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:33:06.456154 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:33:06.456199 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:33:06.456217 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:33:06.456274 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:33:06.456493 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:33:06.537110 I | op-k8sutil: batch job rook-ceph-detect-version still exists
2024-05-20 12:33:06.613887 I | op-k8sutil: Retrying 9 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2024-05-20 12:33:06.657188 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:33:06.657203 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:33:06.658670 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (default)
2024-05-20 12:33:06.658683 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:33:06.658689 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:33:06.659333 D | clusterdisruption-controller: reconciling "rook-ceph-external/rook-ceph-external"
2024-05-20 12:33:06.659354 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph-external"
2024-05-20 12:33:06.659373 D | ceph-spec: CephCluster "rook-ceph-external" status: "Connecting". "Attempting to connect to an external Ceph cluster"
2024-05-20 12:33:06.659385 D | clusterdisruption-controller: reconciling "rook-ceph/"
2024-05-20 12:33:06.659422 D | ceph-crashcollector-controller: reconciling node: "ceph02n"
2024-05-20 12:33:06.660698 D | ceph-spec: "ceph-file-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:33:06.660710 D | ceph-spec: "ceph-file-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:33:06.661064 D | ceph-spec: found existing monitor secrets for cluster rook-ceph-external
2024-05-20 12:33:06.661608 D | operator: reconciling rook-ceph/rook-ceph-operator-config
2024-05-20 12:33:06.661632 I | op-k8sutil: ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS="15" (configmap)
2024-05-20 12:33:06.661652 I | op-k8sutil: ROOK_LOG_LEVEL="DEBUG" (configmap)
2024-05-20 12:33:06.661661 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (configmap)
2024-05-20 12:33:06.661697 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:33:06.661758 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:33:06.661836 E | operator: failed to reconcile failed to stop device discovery daemonset: Delete "https://10.96.0.1:443/apis/apps/v1/namespaces/rook-ceph/daemonsets/rook-discover": context canceled
2024-05-20 12:33:06.661909 E | ceph-file-controller: failed to reconcile failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:33:06.661930 E | ceph-file-controller: failed to reconcile CephFilesystem "rook-ceph/myfs". failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:33:06.661952 W | ceph-csi: could not find deployment owner reference to assign to csi drivers. could not find pod "rook-ceph-operator-6bc54d9b6f-thxtc" in namespace "rook-ceph" to find deployment owner reference: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/pods/rook-ceph-operator-6bc54d9b6f-thxtc": context canceled
2024-05-20 12:33:06.662023 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/replicapool". failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:33:06.662041 E | ceph-csi: failed to reconcile failed creating csi config map: failed to create initial csi config map "rook-ceph-csi-config" (in "rook-ceph"): context canceled
2024-05-20 12:33:06.662088 E | clusterdisruption-controller: failed to delete all the legacy drain-canary pods with label "rook-ceph-drain-canary": Delete "https://10.96.0.1:443/apis/apps/v1/namespaces/rook-ceph/deployments?labelSelector=app%3Drook-ceph-drain-canary": context canceled
2024-05-20 12:33:06.662131 D | clusterdisruption-controller: reconciling "rook-ceph/rook-ceph"
2024-05-20 12:33:06.662212 E | clusterdisruption-controller: failed to delete all the legacy drain-canary pods with label "rook-ceph-drain-canary": context canceled
2024-05-20 12:33:06.662823 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.664005 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:06.664031 D | ceph-crashcollector-controller: reconciling node: "worker02c"
2024-05-20 12:33:06.665111 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.666105 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:06.666192 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:06.666211 D | ceph-crashcollector-controller: reconciling node: "worker02q"
2024-05-20 12:33:06.667257 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.668423 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:06.668454 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:06.668468 D | ceph-crashcollector-controller: reconciling node: "ceph02d"
2024-05-20 12:33:06.669552 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.670754 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:06.670774 D | ceph-crashcollector-controller: reconciling node: "worker02a"
2024-05-20 12:33:06.671791 D | ceph-spec: CephCluster "rook-ceph-external" status: "Progressing". "failed to populate external cluster info: context canceled"
2024-05-20 12:33:06.671890 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.673166 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:06.673178 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:06.673270 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:06.673286 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:06.673334 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:06.673343 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:06.673370 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:06.673390 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:06.673396 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:06.673404 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:06.673533 D | ceph-crashcollector-controller: reconciling node: "worker02p"
2024-05-20 12:33:06.674642 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.675440 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:06.675470 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:06.675484 D | ceph-crashcollector-controller: reconciling node: "master02b"
2024-05-20 12:33:06.676597 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.677359 E | ceph-spec: failed to update cluster condition to {Type:Progressing Status:False Reason:ClusterProgressing Message:failed to populate external cluster info: context canceled LastHeartbeatTime:2024-05-20 12:33:06.671787015 +0000 UTC m=+28.006952156 LastTransitionTime:2024-05-20 12:33:06.67178696 +0000 UTC m=+28.006952103}. failed to update object "rook-ceph-external/rook-ceph-external" status: Operation cannot be fulfilled on cephclusters.ceph.rook.io "rook-ceph-external": the object has been modified; please apply your changes to the latest version and try again
2024-05-20 12:33:06.677369 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:33:06.677383 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph-external/rook-ceph-external"
2024-05-20 12:33:06.677402 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:06.677424 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2024-05-20 12:33:06.677436 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:06.677455 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:33:06.677504 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph/rook-ceph"
2024-05-20 12:33:06.677521 D | ceph-crashcollector-controller: reconciling node: "worker02f"
2024-05-20 12:33:06.678639 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.680204 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:06.680240 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:06.680256 D | ceph-crashcollector-controller: reconciling node: "worker02b"
2024-05-20 12:33:06.681141 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.687860 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:06.687961 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:06.688020 D | ceph-crashcollector-controller: reconciling node: "ceph02e"
2024-05-20 12:33:06.689207 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.690621 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:06.690688 D | ceph-crashcollector-controller: reconciling node: "ceph02c"
2024-05-20 12:33:06.691814 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.693029 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:06.693080 D | ceph-crashcollector-controller: reconciling node: "worker02e"
2024-05-20 12:33:06.694094 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.695172 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:06.695203 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:06.695220 D | ceph-crashcollector-controller: reconciling node: "worker02l"
2024-05-20 12:33:06.696095 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.697060 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:06.697093 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:06.697110 D | ceph-crashcollector-controller: reconciling node: "worker02r"
2024-05-20 12:33:06.697965 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.698875 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:06.698935 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:06.698977 D | ceph-crashcollector-controller: reconciling node: "worker02d"
2024-05-20 12:33:06.700026 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.700953 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:06.701012 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:06.701050 D | ceph-crashcollector-controller: reconciling node: "ceph02a"
2024-05-20 12:33:06.702150 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.703340 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:06.703390 D | ceph-crashcollector-controller: reconciling node: "worker02n"
2024-05-20 12:33:06.704379 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.705177 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:06.705207 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:06.705221 D | ceph-crashcollector-controller: reconciling node: "master02c"
2024-05-20 12:33:06.706088 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.706862 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:06.706921 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:06.706956 D | ceph-crashcollector-controller: reconciling node: "ceph02b"
2024-05-20 12:33:06.707718 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.708831 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:06.708851 D | ceph-crashcollector-controller: reconciling node: "ceph02o"
2024-05-20 12:33:06.709638 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.710730 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:06.710748 D | ceph-crashcollector-controller: reconciling node: "ceph02m"
2024-05-20 12:33:06.711492 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.712526 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:06.712545 D | ceph-crashcollector-controller: reconciling node: "worker02m"
2024-05-20 12:33:06.713289 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.714130 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:06.714165 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:06.714209 D | ceph-crashcollector-controller: reconciling node: "worker02k"
2024-05-20 12:33:06.715498 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.716660 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:06.716690 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:06.716705 D | ceph-crashcollector-controller: reconciling node: "worker02h"
2024-05-20 12:33:06.717557 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.718838 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:06.718870 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:06.718901 D | ceph-crashcollector-controller: reconciling node: "master02a"
2024-05-20 12:33:06.720247 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.721474 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:06.721502 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:06.721518 D | ceph-crashcollector-controller: reconciling node: "worker02g"
2024-05-20 12:33:06.722479 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.723341 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:06.723370 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:06.723389 D | ceph-crashcollector-controller: reconciling node: "worker02o"
2024-05-20 12:33:06.724406 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:06.725289 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:06.725319 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:06.729764 I | ceph-spec: parsing mon endpoints: prceph-mon02=10.11.10.30:6789,prceph-mon03=10.11.10.93:6789,prceph-mon01=10.11.10.190:6789
2024-05-20 12:33:06.729786 D | ceph-spec: loaded: maxMonID=2, mons=map[prceph-mon01:0xc03a1a69a0 prceph-mon02:0xc03a1a6920 prceph-mon03:0xc03a1a6960], assignment=&{Schedule:map[]}
2024-05-20 12:33:06.729792 I | op-k8sutil: ROOK_OBC_WATCH_OPERATOR_NAMESPACE="true" (configmap)
2024-05-20 12:33:06.729796 I | op-bucket-prov: ceph bucket provisioner launched watching for provisioner "rook-ceph-external.ceph.rook.io/bucket"
2024-05-20 12:33:06.730039 I | op-bucket-prov: successfully reconciled bucket provisioner
2024-05-20 12:33:06.730112 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
I0520 12:33:06.730164 1 manager.go:135] objectbucket.io/provisioner-manager "msg"="starting provisioner" "name"="rook-ceph-external.ceph.rook.io/bucket"
I0520 12:33:06.730227 1 manager.go:148] objectbucket.io/provisioner-manager "msg"="stopping provisioner" "name"="rook-ceph-external.ceph.rook.io/bucket" "reason"="context canceled"
2024-05-20 12:33:06.730313 I | operator: successfully started the controller-runtime manager
2024-05-20 12:33:06.928466 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:33:06.928478 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:33:06.928492 I | operator: setting up schemes
2024-05-20 12:33:06.930111 I | operator: setting up the controller-runtime manager
2024-05-20 12:33:07.532855 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:33:07.539925 I | ceph-cluster-controller: successfully started
2024-05-20 12:33:07.539959 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:33:07.539967 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:33:07.539970 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:33:07.539974 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:33:07.539978 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:33:07.539986 I | ceph-block-pool-controller: successfully started
2024-05-20 12:33:07.539996 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:33:07.540006 I | ceph-object-realm-controller: successfully started
2024-05-20 12:33:07.540012 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:33:07.540017 I | ceph-object-zone-controller: successfully started
2024-05-20 12:33:07.540087 I | ceph-object-controller: successfully started
2024-05-20 12:33:07.540112 I | ceph-file-controller: successfully started
2024-05-20 12:33:07.540131 I | ceph-nfs-controller: successfully started
2024-05-20 12:33:07.540148 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:33:07.540164 I | ceph-client-controller: successfully started
2024-05-20 12:33:07.540176 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:33:07.540191 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:33:07.540202 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:33:07.540212 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:33:07.540221 I | ceph-bucket-topic: successfully started
2024-05-20 12:33:07.540227 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:07.540232 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:07.540238 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:33:07.540243 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:33:07.541052 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:07.541070 I | operator: starting the controller-runtime manager
2024-05-20 12:33:07.642202 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:07.642463 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:07.642515 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:07.642561 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:07.642583 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:33:07.642595 D | ceph-cluster-controller: node watcher: node "ceph02m" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:07.642600 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:07.642605 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:07.642638 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:07.642648 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:07.642655 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:07.642684 D | ceph-cluster-controller: node watcher: node "ceph02o" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:07.642719 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:07.642758 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:07.642770 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:33:07.642805 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:07.642847 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:07.642887 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:07.642909 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:07.642952 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:07.642983 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2024-05-20 12:33:07.643053 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:33:07.643079 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:33:07.643262 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:33:07.643361 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:33:07.643395 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:33:07.643404 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:33:07.643422 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:33:07.643430 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:33:07.643457 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:33:07.643477 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:33:07.643483 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:33:07.643490 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:33:07.643503 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:33:07.643524 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:33:07.643531 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:33:07.643539 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:33:07.643552 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:33:07.643566 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:33:07.643577 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:33:07.643585 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:33:07.643595 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:33:07.643628 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:33:07.643644 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:33:07.643669 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:33:07.643704 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:07.643740 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:07.643749 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:33:07.643771 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:33:07.643781 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:07.643788 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:07.643791 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:33:07.643798 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:33:07.643807 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:33:07.643828 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:33:07.744877 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:33:07.744892 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:33:07.746082 D | clusterdisruption-controller: reconciling "rook-ceph/"
2024-05-20 12:33:07.746103 D | ceph-crashcollector-controller: reconciling node: "ceph02b"
2024-05-20 12:33:07.746203 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (default)
2024-05-20 12:33:07.746216 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:33:07.746223 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:33:07.747847 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2024-05-20 12:33:07.748656 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:07.750586 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2024-05-20 12:33:07.751818 I | ceph-spec: parsing mon endpoints: e=10.102.64.224:6789,h=10.109.166.21:6789,i=10.101.141.73:6789
2024-05-20 12:33:07.751849 D | ceph-spec: loaded: maxMonID=8, mons=map[e:0xc039e08b20 h:0xc039e08b60 i:0xc039e08ba0], assignment=&{Schedule:map[e:0xc02bc49440 h:0xc02bc49480 i:0xc02bc494c0]}
2024-05-20 12:33:07.751860 D | exec: Running command: ceph osd crush dump --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2024-05-20 12:33:07.752019 I | clusterdisruption-controller: deleted all legacy node drain canary pods
2024-05-20 12:33:07.752768 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:07.753879 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:07.754104 D | exec: Running command: ceph status --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2024-05-20 12:33:07.756125 D | ceph-crashcollector-controller: deployment successfully reconciled for node "ceph02b". operation: "updated"
2024-05-20 12:33:07.756806 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:07.758838 D | ceph-crashcollector-controller: cronJob resource not found. Ignoring since object must be deleted.
2024-05-20 12:33:07.758865 D | ceph-crashcollector-controller: reconciling node: "worker02g"
2024-05-20 12:33:07.759830 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:07.807971 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:07.810505 D | ceph-crashcollector-controller: cronJob resource not found. Ignoring since object must be deleted.
2024-05-20 12:33:07.810543 D | ceph-crashcollector-controller: reconciling node: "worker02o"
2024-05-20 12:33:07.812017 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:07.842291 D | ceph-spec: "ceph-file-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:33:07.842312 D | ceph-spec: "ceph-file-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:33:07.842995 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph-external"
2024-05-20 12:33:07.843018 D | ceph-spec: CephCluster "rook-ceph-external" status: "Connecting". "Attempting to connect to an external Ceph cluster"
2024-05-20 12:33:07.843919 D | operator: reconciling rook-ceph/rook-ceph-operator-config
2024-05-20 12:33:07.843948 I | op-k8sutil: ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS="15" (configmap)
2024-05-20 12:33:07.843969 I | op-k8sutil: ROOK_LOG_LEVEL="DEBUG" (configmap)
2024-05-20 12:33:07.843976 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (configmap)
2024-05-20 12:33:07.844021 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:33:07.844107 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:33:07.844160 E | ceph-csi: failed to reconcile failed creating csi config map: failed to create initial csi config map "rook-ceph-csi-config" (in "rook-ceph"): context canceled
2024-05-20 12:33:07.844192 E | operator: failed to reconcile failed to stop device discovery daemonset: Delete "https://10.96.0.1:443/apis/apps/v1/namespaces/rook-ceph/daemonsets/rook-discover": context canceled
2024-05-20 12:33:07.844204 E | ceph-file-controller: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:33:07.844235 E | ceph-file-controller: failed to reconcile CephFilesystem "rook-ceph/myfs". failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:33:07.853213 D | ceph-spec: CephCluster "rook-ceph-external" status: "Progressing". "failed to populate external cluster info: context canceled"
2024-05-20 12:33:07.853236 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:07.853243 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:07.853303 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:07.853352 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:07.853467 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:07.853479 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:07.853482 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:07.853488 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:07.858013 E | ceph-spec: failed to update cluster condition to {Type:Progressing Status:False Reason:ClusterProgressing Message:failed to populate external cluster info: context canceled LastHeartbeatTime:2024-05-20 12:33:07.853208739 +0000 UTC m=+29.188373886 LastTransitionTime:2024-05-20 12:33:07.853208669 +0000 UTC m=+29.188373810}. failed to update object "rook-ceph-external/rook-ceph-external" status: Operation cannot be fulfilled on cephclusters.ceph.rook.io "rook-ceph-external": the object has been modified; please apply your changes to the latest version and try again
2024-05-20 12:33:07.858025 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:33:07.858044 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph-external/rook-ceph-external"
2024-05-20 12:33:07.858095 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2024-05-20 12:33:07.858127 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:33:07.858141 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph/rook-ceph"
2024-05-20 12:33:07.928750 I | ceph-spec: parsing mon endpoints: e=10.102.64.224:6789,h=10.109.166.21:6789,i=10.101.141.73:6789
2024-05-20 12:33:07.928789 D | ceph-spec: loaded: maxMonID=8, mons=map[e:0xc03b0980a0 h:0xc03b0980e0 i:0xc03b098120], assignment=&{Schedule:map[e:0xc03ff3fb00 h:0xc03ff3fec0 i:0xc03ff3ff00]}
2024-05-20 12:33:07.928797 I | op-k8sutil: ROOK_OBC_WATCH_OPERATOR_NAMESPACE="true" (configmap)
2024-05-20 12:33:07.928801 I | op-bucket-prov: ceph bucket provisioner launched watching for provisioner "rook-ceph.ceph.rook.io/bucket"
2024-05-20 12:33:07.929075 I | op-bucket-prov: successfully reconciled bucket provisioner
2024-05-20 12:33:07.929170 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
I0520 12:33:07.929205 1 manager.go:135] objectbucket.io/provisioner-manager "msg"="starting provisioner" "name"="rook-ceph.ceph.rook.io/bucket"
I0520 12:33:07.929227 1 manager.go:148] objectbucket.io/provisioner-manager "msg"="stopping provisioner" "name"="rook-ceph.ceph.rook.io/bucket" "reason"="context canceled"
2024-05-20 12:33:07.961784 I | op-k8sutil: batch job rook-ceph-detect-version still exists
2024-05-20 12:33:08.009021 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:08.009074 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:08.009100 D | ceph-crashcollector-controller: reconciling node: "worker02k"
2024-05-20 12:33:08.010224 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:08.208970 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:08.209036 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:08.209071 D | ceph-crashcollector-controller: reconciling node: "worker02c"
2024-05-20 12:33:08.210659 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:08.217976 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:08.217997 I | ceph-block-pool-controller: creating pool "replicapool" in namespace "rook-ceph"
2024-05-20 12:33:08.223002 D | ceph-block-pool-controller: pool "rook-ceph/replicapool" status updated to "Failure"
2024-05-20 12:33:08.223026 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/replicapool". failed to create pool "replicapool".: failed to create pool "replicapool".: failed to create pool "replicapool": failed to create replicated crush rule "replicapool": failed to create crush rule replicapool: context canceled
2024-05-20 12:33:08.314393 D | cephclient: all placement groups have reached a clean state: [{StateName:active+clean Count:177}]
2024-05-20 12:33:08.314410 D | clusterdisruption-controller: no OSD is down in the "host" failure domains: [ceph02a ceph02b ceph02c ceph02d ceph02e ceph02m ceph02n ceph02o]. pg health: "all PGs in cluster are clean"
2024-05-20 12:33:08.328993 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:33:08.329003 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:33:08.329014 I | operator: setting up schemes
2024-05-20 12:33:08.330591 I | operator: setting up the controller-runtime manager
2024-05-20 12:33:08.408787 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:08.408831 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:08.408855 D | ceph-crashcollector-controller: reconciling node: "master02c"
2024-05-20 12:33:08.409893 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:08.608047 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:08.614176 I | op-k8sutil: Retrying 8 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2024-05-20 12:33:08.808233 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:08.808282 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:08.808305 D | ceph-crashcollector-controller: reconciling node: "ceph02n"
2024-05-20 12:33:08.809479 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:08.810741 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:08.810767 D | ceph-crashcollector-controller: reconciling node: "worker02a"
2024-05-20 12:33:08.811663 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:08.933916 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:33:08.941577 I | ceph-cluster-controller: successfully started
2024-05-20 12:33:08.941709 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:33:08.941723 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:33:08.941727 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:33:08.941732 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:33:08.941736 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:33:08.941743 I | ceph-block-pool-controller: successfully started
2024-05-20 12:33:08.941766 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:33:08.941789 I | ceph-object-realm-controller: successfully started
2024-05-20 12:33:08.941795 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:33:08.941859 I | ceph-object-zone-controller: successfully started
2024-05-20 12:33:08.941964 I | ceph-object-controller: successfully started
2024-05-20 12:33:08.941985 I | ceph-file-controller: successfully started
2024-05-20 12:33:08.941998 I | ceph-nfs-controller: successfully started
2024-05-20 12:33:08.942009 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:33:08.942024 I | ceph-client-controller: successfully started
2024-05-20 12:33:08.942073 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:33:08.942097 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:33:08.942104 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:33:08.942111 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:33:08.942129 I | ceph-bucket-topic: successfully started
2024-05-20 12:33:08.942144 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:08.942150 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:08.942155 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:33:08.942160 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:33:09.008770 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:09.008813 D | clusterdisruption-controller: deleted temporary blocking pdb for "host" failure domain "ceph02a".
2024-05-20 12:33:09.208256 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:09.208312 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:09.208337 D | ceph-crashcollector-controller: reconciling node: "worker02f"
2024-05-20 12:33:09.209784 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:09.408006 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:09.408034 I | operator: starting the controller-runtime manager
2024-05-20 12:33:09.509002 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:09.509066 D | ceph-cluster-controller: node watcher: node "ceph02e" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:09.509150 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:09.509212 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:09.509261 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:09.509310 D | ceph-cluster-controller: node watcher: node "ceph02o" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:09.509374 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:09.509430 D | ceph-cluster-controller: node watcher: node "ceph02m" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:09.509448 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:33:09.509485 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:33:09.509506 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:33:09.509525 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:33:09.509589 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:33:09.509607 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:33:09.509619 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:33:09.509629 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:09.509645 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:33:09.509668 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:33:09.509676 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:33:09.509700 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:09.509760 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:09.509797 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:09.509832 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:09.509860 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:33:09.510048 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:33:09.510134 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:33:09.510166 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:33:09.510193 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:33:09.510219 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:33:09.510227 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:33:09.510240 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:33:09.510246 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:33:09.510253 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:33:09.510263 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:33:09.510271 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:33:09.510289 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:33:09.510319 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:33:09.510381 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:33:09.510404 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:33:09.510424 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:33:09.510435 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:33:09.510447 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:33:09.510522 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:33:09.510553 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:09.510563 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:33:09.510748 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:33:09.510982 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:09.510995 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:09.511555 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:09.511571 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:09.511591 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:09.511614 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:09.511622 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:09.511704 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:09.512095 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:09.537205 I | op-k8sutil: batch job rook-ceph-detect-version still exists
2024-05-20 12:33:09.608967 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:09.609003 D | clusterdisruption-controller: deleted temporary blocking pdb for "host" failure domain "ceph02b".
2024-05-20 12:33:09.712872 D | ceph-crashcollector-controller: reconciling node: "ceph02e"
2024-05-20 12:33:09.713432 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:33:09.713448 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:33:09.713465 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph-external"
2024-05-20 12:33:09.713483 D | ceph-spec: CephCluster "rook-ceph-external" status: "Connecting". "Attempting to connect to an external Ceph cluster"
2024-05-20 12:33:09.713548 D | clusterdisruption-controller: reconciling "rook-ceph/"
2024-05-20 12:33:09.714550 D | operator: reconciling rook-ceph/rook-ceph-operator-config
2024-05-20 12:33:09.714577 I | op-k8sutil: ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS="15" (configmap)
2024-05-20 12:33:09.714674 I | op-k8sutil: ROOK_LOG_LEVEL="DEBUG" (configmap)
2024-05-20 12:33:09.714686 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (configmap)
2024-05-20 12:33:09.714878 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:33:09.714961 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:33:09.715084 E | operator: failed to reconcile failed to stop device discovery daemonset: Delete "https://10.96.0.1:443/apis/apps/v1/namespaces/rook-ceph/daemonsets/rook-discover": context canceled
2024-05-20 12:33:09.715129 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph-external/secrets/rook-ceph-mon": context canceled
2024-05-20 12:33:09.715165 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/replicapool". failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:33:09.715216 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:33:09.715884 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:09.717235 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:09.717257 D | ceph-crashcollector-controller: reconciling node: "ceph02c"
2024-05-20 12:33:09.718143 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:09.719313 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:09.719333 D | ceph-crashcollector-controller: reconciling node: "worker02k"
2024-05-20 12:33:09.720282 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:09.725247 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:09.725257 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:09.725845 D | ceph-spec: CephCluster "rook-ceph-external" status: "Progressing". "failed to populate external cluster info: context canceled"
2024-05-20 12:33:09.725875 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:09.725882 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:09.726303 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:09.726314 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:09.726346 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:33:09.726359 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:33:09.726368 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:09.726372 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:09.726387 I | operator: setting up schemes
2024-05-20 12:33:09.727249 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:09.727260 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:09.727972 I | operator: setting up the controller-runtime manager
2024-05-20 12:33:09.731710 E | ceph-spec: failed to update cluster condition to {Type:Progressing Status:False Reason:ClusterProgressing Message:failed to populate external cluster info: context canceled LastHeartbeatTime:2024-05-20 12:33:09.725841907 +0000 UTC m=+31.061007048 LastTransitionTime:2024-05-20 12:33:09.725841844 +0000 UTC m=+31.061006994}. failed to update object "rook-ceph-external/rook-ceph-external" status: Operation cannot be fulfilled on cephclusters.ceph.rook.io "rook-ceph-external": the object has been modified; please apply your changes to the latest version and try again
2024-05-20 12:33:09.731720 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:33:09.731744 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph-external/rook-ceph-external"
2024-05-20 12:33:09.731800 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2024-05-20 12:33:09.736889 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2024-05-20 12:33:09.738987 I | ceph-spec: parsing mon endpoints: e=10.102.64.224:6789,h=10.109.166.21:6789,i=10.101.141.73:6789
2024-05-20 12:33:09.739018 D | ceph-spec: loaded: maxMonID=8, mons=map[e:0xc041855c20 h:0xc041855c60 i:0xc041855ca0], assignment=&{Schedule:map[e:0xc0477b3580 h:0xc0477b35c0 i:0xc0477b3600]}
2024-05-20 12:33:09.745230 I | ceph-cluster-controller: enabling ceph mon monitoring goroutine for cluster "rook-ceph"
2024-05-20 12:33:09.745248 I | op-osd: ceph osd status in namespace "rook-ceph" check interval "1m0s"
2024-05-20 12:33:09.745254 I | ceph-cluster-controller: enabling ceph osd monitoring goroutine for cluster "rook-ceph"
2024-05-20 12:33:09.745261 I | ceph-cluster-controller: ceph status check interval is 1m0s
2024-05-20 12:33:09.745264 I | ceph-cluster-controller: enabling ceph status monitoring goroutine for cluster "rook-ceph"
2024-05-20 12:33:09.745329 D | op-mon: ceph mon status in namespace "rook-ceph" check interval "45s"
2024-05-20 12:33:09.745338 I | op-mon: stopping monitoring of mons in namespace "rook-ceph"
2024-05-20 12:33:09.745458 D | ceph-cluster-controller: checking health of cluster
2024-05-20 12:33:09.745519 D | exec: Running command: ceph status --format json --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config
--name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring 2024-05-20 12:33:09.775159 D | ceph-cluster-controller: cluster spec successfully validated 2024-05-20 12:33:09.775210 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "Detecting Ceph version" 2024-05-20 12:33:09.787887 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph" 2024-05-20 12:33:09.787903 D | ceph-cluster-controller: update event on CephCluster CR 2024-05-20 12:33:09.788776 I | ceph-spec: detecting the ceph image version for image quay.io/ceph/ceph:v15.2.15... 2024-05-20 12:33:09.788970 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph" 2024-05-20 12:33:09.788982 D | ceph-cluster-controller: update event on CephCluster CR 2024-05-20 12:33:09.791891 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph" 2024-05-20 12:33:09.791908 D | ceph-cluster-controller: update event on CephCluster CR 2024-05-20 12:33:09.792461 D | op-k8sutil: ConfigMap rook-ceph-detect-version is already deleted 2024-05-20 12:33:09.792552 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph" 2024-05-20 12:33:09.792581 D | ceph-cluster-controller: update event on CephCluster CR 2024-05-20 12:33:09.793533 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph" 2024-05-20 12:33:09.793553 D | ceph-cluster-controller: update event on CephCluster CR 2024-05-20 12:33:09.808676 D | ceph-crashcollector-controller: deleting cronjob if it exists... 
2024-05-20 12:33:09.808714 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:09.808735 D | ceph-crashcollector-controller: reconciling node: "ceph02c"
2024-05-20 12:33:09.809888 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:09.811820 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:09.811846 D | ceph-crashcollector-controller: reconciling node: "worker02l"
2024-05-20 12:33:09.812858 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:10.008264 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:10.008294 D | clusterdisruption-controller: deleted temporary blocking pdb for "host" failure domain "ceph02c".
2024-05-20 12:33:10.208576 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (default)
2024-05-20 12:33:10.208592 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:33:10.208596 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:33:10.208634 W | ceph-csi: could not find deployment owner reference to assign to csi drivers. could not find pod "rook-ceph-operator-6bc54d9b6f-thxtc" in namespace "rook-ceph" to find deployment owner reference: context canceled
2024-05-20 12:33:10.208673 E | ceph-csi: failed to reconcile failed creating csi config map: failed to create initial csi config map "rook-ceph-csi-config" (in "rook-ceph"): context canceled
2024-05-20 12:33:10.280976 D | ceph-cluster-controller: cluster status: {Health:{Status:HEALTH_WARN Checks:map[MDS_CACHE_OVERSIZED:{Severity:HEALTH_WARN Summary:{Message:1 MDSs report oversized cache}}]} FSID:a72c4707-301f-4acd-8007-41af0a11a860 ElectionEpoch:1280 Quorum:[0 1 2] QuorumNames:[e h i] MonMap:{Epoch:15 FSID: CreatedTime: ModifiedTime: Mons:[]} OsdMap:{OsdMap:{Epoch:0 NumOsd:0 NumUpOsd:0 NumInOsd:0 Full:false NearFull:false NumRemappedPgs:0}} PgMap:{PgsByState:[{StateName:active+clean Count:177}] Version:0 NumPgs:177 DataBytes:877859103007 UsedBytes:3049013092352 AvailableBytes:2104914108416 TotalBytes:5153927200768 ReadBps:1319345 WriteBps:3807206 ReadOps:47 WriteOps:172 RecoveryBps:0 RecoveryObjectsPerSec:0 RecoveryKeysPerSec:0 CacheFlushBps:0 CacheEvictBps:0 CachePromoteBps:0} MgrMap:{Epoch:0 ActiveGID:0 ActiveName: ActiveAddr: Available:true Standbys:[]} Fsmap:{Epoch:307807 ID:1 Up:1 In:1 Max:1 ByRank:[{FilesystemID:1 Rank:0 Name:myfs-a Status:up:active Gid:51498894} {FilesystemID:1 Rank:0 Name:myfs-b Status:up:standby-replay Gid:51918032}] UpStandby:0}}
2024-05-20 12:33:10.287592 D | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2024-05-20 12:33:10.331625 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:33:10.340340 I | ceph-cluster-controller: successfully started
2024-05-20 12:33:10.340374 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:33:10.340382 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:33:10.340385 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:33:10.340388 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:33:10.340393 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:33:10.340399 I | ceph-block-pool-controller: successfully started
2024-05-20 12:33:10.340407 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:33:10.340415 I | ceph-object-realm-controller: successfully started
2024-05-20 12:33:10.340420 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:33:10.340426 I | ceph-object-zone-controller: successfully started
2024-05-20 12:33:10.340489 I | ceph-object-controller: successfully started
2024-05-20 12:33:10.340510 I | ceph-file-controller: successfully started
2024-05-20 12:33:10.340527 I | ceph-nfs-controller: successfully started
2024-05-20 12:33:10.340545 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:33:10.340561 I | ceph-client-controller: successfully started
2024-05-20 12:33:10.340569 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:33:10.340583 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:33:10.340592 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:33:10.340603 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:33:10.340613 I | ceph-bucket-topic: successfully started
2024-05-20 12:33:10.340622 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:10.340627 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:10.340632 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:33:10.340639 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:33:10.408504 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:10.608865 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:10.609109 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:10.609138 D | ceph-crashcollector-controller: reconciling node: "worker02g"
2024-05-20 12:33:10.610331 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:10.614965 I | op-k8sutil: Retrying 7 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2024-05-20 12:33:10.808251 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:10.808298 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:10.808327 D | ceph-crashcollector-controller: reconciling node: "ceph02e"
2024-05-20 12:33:10.809657 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:10.811109 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:10.811139 D | ceph-crashcollector-controller: reconciling node: "worker02b"
2024-05-20 12:33:10.812330 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:10.832188 D | cephclient: {"mon":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":3},"mgr":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":1},"osd":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":8},"mds":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":1},"overall":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":13}}
2024-05-20 12:33:10.832201 D | cephclient: {"mon":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":3},"mgr":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":1},"osd":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":8},"mds":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":1},"overall":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":13}}
2024-05-20 12:33:10.832281 D | ceph-cluster-controller: updating ceph cluster "rook-ceph" status and condition to &{Health:{Status:HEALTH_WARN Checks:map[MDS_CACHE_OVERSIZED:{Severity:HEALTH_WARN Summary:{Message:1 MDSs report oversized cache}}]} FSID:a72c4707-301f-4acd-8007-41af0a11a860 ElectionEpoch:1280 Quorum:[0 1 2] QuorumNames:[e h i] MonMap:{Epoch:15 FSID: CreatedTime: ModifiedTime: Mons:[]} OsdMap:{OsdMap:{Epoch:0 NumOsd:0 NumUpOsd:0 NumInOsd:0 Full:false NearFull:false NumRemappedPgs:0}} PgMap:{PgsByState:[{StateName:active+clean Count:177}] Version:0 NumPgs:177 DataBytes:877859103007 UsedBytes:3049013092352 AvailableBytes:2104914108416 TotalBytes:5153927200768 ReadBps:1319345 WriteBps:3807206 ReadOps:47 WriteOps:172 RecoveryBps:0 RecoveryObjectsPerSec:0 RecoveryKeysPerSec:0 CacheFlushBps:0 CacheEvictBps:0 CachePromoteBps:0} MgrMap:{Epoch:0 ActiveGID:0 ActiveName: ActiveAddr: Available:true Standbys:[]} Fsmap:{Epoch:307807 ID:1 Up:1 In:1 Max:1 ByRank:[{FilesystemID:1 Rank:0 Name:myfs-a Status:up:active Gid:51498894} {FilesystemID:1 Rank:0 Name:myfs-b Status:up:standby-replay Gid:51918032}] UpStandby:0}}, True, ClusterCreated, Cluster created successfully
2024-05-20 12:33:10.832292 D | ceph-spec: CephCluster "rook-ceph" status: "Ready". "Cluster created successfully"
2024-05-20 12:33:10.851857 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:10.851868 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:10.857170 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:10.857181 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:10.857188 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:10.857212 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:10.857233 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:10.857241 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:10.857305 D | ceph-cluster-controller: checking for stuck pods on not ready nodes
2024-05-20 12:33:10.857716 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:10.857727 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:10.886150 D | ceph-cluster-controller: Health: "HEALTH_WARN", code: "MDS_CACHE_OVERSIZED", message: "1 MDSs report oversized cache"
2024-05-20 12:33:10.962201 I | op-k8sutil: batch job rook-ceph-detect-version still exists
2024-05-20 12:33:11.008350 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:11.008379 D | clusterdisruption-controller: deleted temporary blocking pdb for "host" failure domain "ceph02d".
2024-05-20 12:33:11.106046 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:11.106502 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:11.106584 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:11.106622 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:11.107250 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:11.208009 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:11.208049 I | operator: starting the controller-runtime manager
2024-05-20 12:33:11.308827 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:33:11.309012 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:11.309102 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:11.309709 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:11.309722 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:11.309755 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:11.309763 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:11.309848 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:33:11.309953 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:11.310026 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:33:11.310112 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:11.310143 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:11.310297 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:11.310339 D | ceph-cluster-controller: node watcher: node "ceph02o" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:11.310384 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:11.310451 D | ceph-cluster-controller: node watcher: node "ceph02e" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:11.310493 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:11.310528 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:11.310563 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:11.310598 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:11.310608 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:11.310647 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:11.310680 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:11.310695 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:33:11.310796 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:11.310848 D | ceph-cluster-controller: node "ceph02m" is ready, checking if it can run OSDs
2024-05-20 12:33:11.310878 D | exec: Running command: ceph osd crush ls ceph02m --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2024-05-20 12:33:11.310887 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:33:11.311364 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:33:11.311389 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:33:11.311887 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:33:11.311968 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:33:11.311981 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:33:11.312036 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:33:11.312052 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:33:11.312064 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:33:11.312189 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:33:11.312206 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:33:11.312375 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:33:11.312412 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:33:11.312433 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:33:11.312482 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:33:11.312503 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:33:11.312514 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:33:11.312523 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:33:11.312533 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:33:11.312557 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:33:11.312600 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:33:11.312624 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:33:11.312638 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:33:11.312683 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:33:11.312704 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:33:11.312717 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:33:11.312734 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:33:11.312748 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:33:11.408452 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:11.408767 D | clusterdisruption-controller: ceph "rook-ceph" cluster failed to check cluster health. failed to get status. : context canceled
2024-05-20 12:33:11.408801 D | clusterdisruption-controller: reconciling "rook-ceph-external/rook-ceph-external"
2024-05-20 12:33:11.408861 D | clusterdisruption-controller: reconciling "rook-ceph/rook-ceph"
2024-05-20 12:33:11.416417 D | CmdReporter: job rook-ceph-detect-version has returned results
2024-05-20 12:33:11.430602 I | ceph-spec: detected ceph image version: "15.2.15-0 octopus"
2024-05-20 12:33:11.430613 I | ceph-cluster-controller: validating ceph version from provided image
2024-05-20 12:33:11.436777 D | ceph-spec: found existing monitor secrets for cluster rook-ceph
2024-05-20 12:33:11.442195 I | ceph-spec: parsing mon endpoints: e=10.102.64.224:6789,h=10.109.166.21:6789,i=10.101.141.73:6789
2024-05-20 12:33:11.442234 D | ceph-spec: loaded: maxMonID=8, mons=map[e:0xc040baa1a0 h:0xc040baa1e0 i:0xc040baa240], assignment=&{Schedule:map[e:0xc04bd26fc0 h:0xc04bd27000 i:0xc04bd27040]}
2024-05-20 12:33:11.446399 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2024-05-20 12:33:11.446504 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2024-05-20 12:33:11.446537 D | exec: Running command: ceph versions --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json
2024-05-20 12:33:11.447615 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.447627 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.447631 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.447635 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.447641 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.447645 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.447711 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.447715 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.447721 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.447724 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.447737 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.447740 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.447810 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.447822 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.447833 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.447852 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.447974 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.447985 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.447989 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.447997 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.448100 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.448108 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.448114 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.448117 D | ceph-spec: object "rook-ceph-detect-version" did not match on delete
2024-05-20 12:33:11.511857 D | ceph-crashcollector-controller: reconciling node: "ceph02a"
2024-05-20 12:33:11.511907 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:33:11.511917 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:33:11.511937 D | ceph-spec: "ceph-file-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:33:11.511949 D | ceph-spec: "ceph-file-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:33:11.513035 D | ceph-spec: found existing monitor secrets for cluster rook-ceph-external
2024-05-20 12:33:11.513778 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph-external"
2024-05-20 12:33:11.513823 D | ceph-spec: CephCluster "rook-ceph-external" status: "Connecting". "Attempting to connect to an external Ceph cluster"
2024-05-20 12:33:11.514761 D | clusterdisruption-controller: reconciling "rook-ceph/"
2024-05-20 12:33:11.515334 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:33:11.515393 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:33:11.515422 I | ceph-cluster-controller: stopping monitoring of ceph status
2024-05-20 12:33:11.515443 I | op-osd: stopping monitoring of OSDs in namespace "rook-ceph"
2024-05-20 12:33:11.515459 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/replicapool". failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:33:11.515516 E | ceph-file-controller: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:33:11.515596 E | ceph-file-controller: failed to reconcile CephFilesystem "rook-ceph/myfs". failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:33:11.516344 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:11.517723 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:11.517748 D | ceph-crashcollector-controller: reconciling node: "master02c"
2024-05-20 12:33:11.518949 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:11.522598 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:11.522614 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:11.522619 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:11.522627 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:11.522690 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:11.522705 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:11.522719 D | ceph-spec: CephCluster "rook-ceph-external" status: "Progressing". "failed to populate external cluster info: context canceled"
2024-05-20 12:33:11.522815 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:11.522829 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:11.523309 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:11.523323 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:11.523764 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:11.523776 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:11.529460 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:33:11.529506 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph-external/rook-ceph-external"
2024-05-20 12:33:11.529565 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2024-05-20 12:33:11.529619 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:33:11.529648 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph/rook-ceph"
2024-05-20 12:33:11.529667 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:11.529676 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:11.529696 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:11.529707 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:11.529941 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:11.529952 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:11.530093 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:11.530102 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:11.530202 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:11.530390 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:11.530730 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:11.530744 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:11.608244 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:11.608302 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:11.608330 D | ceph-crashcollector-controller: reconciling node: "ceph02m"
2024-05-20 12:33:11.609736 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:11.611070 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:11.611101 D | ceph-crashcollector-controller: reconciling node: "worker02a"
2024-05-20 12:33:11.612215 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:11.766513 D | ceph-cluster-controller: node watcher: node "ceph02m" is already an OSD node with "[\"osd.5\"]"
2024-05-20 12:33:11.766609 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:11.808657 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:11.808705 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:11.808729 D | ceph-crashcollector-controller: reconciling node: "worker02m"
2024-05-20 12:33:11.809982 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:11.929177 I | ceph-spec: parsing mon endpoints: prceph-mon02=10.11.10.30:6789,prceph-mon03=10.11.10.93:6789,prceph-mon01=10.11.10.190:6789
2024-05-20 12:33:11.929216 D | ceph-spec: loaded: maxMonID=2, mons=map[prceph-mon01:0xc04784a5c0 prceph-mon02:0xc04784a2e0 prceph-mon03:0xc04784a4a0], assignment=&{Schedule:map[]}
2024-05-20 12:33:11.929224 I | op-k8sutil: ROOK_OBC_WATCH_OPERATOR_NAMESPACE="true" (configmap)
2024-05-20 12:33:11.929229 I | op-bucket-prov: ceph bucket provisioner launched watching for provisioner "rook-ceph-external.ceph.rook.io/bucket"
2024-05-20 12:33:11.929600 I | op-bucket-prov: successfully reconciled bucket provisioner
I0520 12:33:11.929718 1 manager.go:135] objectbucket.io/provisioner-manager "msg"="starting provisioner" "name"="rook-ceph-external.ceph.rook.io/bucket"
2024-05-20 12:33:11.929734 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
I0520 12:33:11.929752 1 manager.go:148] objectbucket.io/provisioner-manager "msg"="stopping provisioner" "name"="rook-ceph-external.ceph.rook.io/bucket" "reason"="context canceled"
2024-05-20 12:33:12.008047 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:12.008082 D | clusterdisruption-controller: deleted temporary blocking pdb for "host" failure domain "ceph02e".
2024-05-20 12:33:12.012140 D | cephclient: {"mon":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":3},"mgr":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":1},"osd":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":8},"mds":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":1},"overall":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":13}}
2024-05-20 12:33:12.012151 D | cephclient: {"mon":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":3},"mgr":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":1},"osd":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":8},"mds":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":1},"overall":{"ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)":13}}
2024-05-20 12:33:12.012204 D | ceph-cluster-controller: both cluster and image spec versions are identical, doing nothing 15.2.15-0 octopus
2024-05-20 12:33:12.012213 I | ceph-cluster-controller: cluster "rook-ceph": version "15.2.15-0 octopus" detected for image "quay.io/ceph/ceph:v15.2.15"
2024-05-20 12:33:12.012280 E | ceph-cluster-controller: failed to retrieve ceph cluster "rook-ceph" to update ceph version to {Major:15 Minor:2 Extra:15 Build:0 CommitID:2dfb18841cfecc2f7eb7eb2afd65986ca4d95985}. context canceled
2024-05-20 12:33:12.012345 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "Configuring the Ceph cluster"
2024-05-20 12:33:12.027677 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:12.027687 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:12.029296 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:12.029308 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:12.029973 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:12.029985 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:12.030061 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:12.030074 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:12.030751 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:12.030762 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:12.030976 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:12.030985 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:12.129157 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:33:12.129170 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:33:12.129184 I | operator: setting up schemes
2024-05-20 12:33:12.130920 I | operator: setting up the controller-runtime manager
2024-05-20 12:33:12.209099 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:12.332065 D | ceph-cluster-controller: monitors are about to reconcile, executing pre actions
2024-05-20 12:33:12.332161 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "Configuring Ceph Mons"
2024-05-20 12:33:12.344450 D | op-mon: Acquiring lock for mon orchestration
2024-05-20 12:33:12.344462 D | op-mon: Acquired lock for mon orchestration
2024-05-20 12:33:12.344465 I | op-mon: start running mons
2024-05-20 12:33:12.344468 D | op-mon: establishing ceph cluster info
2024-05-20 12:33:12.344515 D | op-mon: Released lock for mon orchestration
2024-05-20 12:33:12.344537 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:12.344550 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:12.344554 D | ceph-spec: CephCluster "rook-ceph" status: "Progressing". "failed to create cluster: failed to start ceph monitors: failed to initialize ceph cluster info: failed to get cluster info: failed to get mon secrets: context canceled"
2024-05-20 12:33:12.344740 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:12.344752 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:12.344990 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:12.345002 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:12.345056 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:12.345069 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:12.345134 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:12.345146 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:12.346493 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:12.346503 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:12.350208 E | ceph-spec: failed to update cluster condition to {Type:Progressing Status:False Reason:ClusterProgressing Message:failed to create cluster: failed to start ceph monitors: failed to initialize ceph cluster info: failed to get cluster info: failed to get mon secrets: context canceled LastHeartbeatTime:2024-05-20 12:33:12.344546953 +0000 UTC m=+33.679712094 LastTransitionTime:2024-05-20 12:33:12.344546893 +0000 UTC m=+33.679712043}. failed to update object "rook-ceph/rook-ceph" status: Operation cannot be fulfilled on cephclusters.ceph.rook.io "rook-ceph": the object has been modified; please apply your changes to the latest version and try again
2024-05-20 12:33:12.350219 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:33:12.350241 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph/rook-ceph"
2024-05-20 12:33:12.408756 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (default)
2024-05-20 12:33:12.408772 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:33:12.408776 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:33:12.408821 W | ceph-csi: could not find deployment owner reference to assign to csi drivers. could not find pod "rook-ceph-operator-6bc54d9b6f-thxtc" in namespace "rook-ceph" to find deployment owner reference: context canceled
2024-05-20 12:33:12.408880 E | ceph-csi: failed to reconcile failed creating csi config map: failed to create initial csi config map "rook-ceph-csi-config" (in "rook-ceph"): context canceled
2024-05-20 12:33:12.538134 I | op-k8sutil: batch job rook-ceph-detect-version still exists
I0520 12:33:12.607492 1 request.go:665] Waited for 1.092585616s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/version
2024-05-20 12:33:12.608702 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:12.615788 I | op-k8sutil: Retrying 6 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2024-05-20 12:33:12.737425 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:33:12.745189 I | ceph-cluster-controller: successfully started
2024-05-20 12:33:12.745222 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:33:12.745231 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:33:12.745233 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:33:12.745237 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:33:12.745242 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:33:12.745248 I | ceph-block-pool-controller: successfully started
2024-05-20 12:33:12.745257 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:33:12.745266 I | ceph-object-realm-controller: successfully started
2024-05-20 12:33:12.745271 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:33:12.745277 I | ceph-object-zone-controller: successfully started
2024-05-20 12:33:12.745334 I | ceph-object-controller: successfully started
2024-05-20 12:33:12.745351 I | ceph-file-controller: successfully started
2024-05-20 12:33:12.745365 I | ceph-nfs-controller: successfully started
2024-05-20 12:33:12.745378 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:33:12.745389 I | ceph-client-controller: successfully started
2024-05-20 12:33:12.745398 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:33:12.745418 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:33:12.745433 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:33:12.745444 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:33:12.745451 I | ceph-bucket-topic: successfully started
2024-05-20 12:33:12.745457 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:12.745463 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:12.745469 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:33:12.745475 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:33:12.808308 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:12.808431 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:12.808478 D | ceph-crashcollector-controller: reconciling node: "ceph02b"
2024-05-20 12:33:12.809805 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:12.811089 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:12.811114 D | ceph-crashcollector-controller: reconciling node: "worker02h"
2024-05-20 12:33:12.812241 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:13.008658 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:13.008706 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:13.008733 D | ceph-crashcollector-controller: reconciling node: "worker02e"
2024-05-20 12:33:13.009890 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:13.208990 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:13.209041 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:13.209068 D | ceph-crashcollector-controller: reconciling node: "worker02h"
2024-05-20 12:33:13.210502 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:13.408248 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:13.408278 D | clusterdisruption-controller: deleted temporary blocking pdb for "host" failure domain "ceph02m".
2024-05-20 12:33:13.608501 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:13.608842 D | clusterdisruption-controller: ceph "rook-ceph" cluster failed to check cluster health. failed to get status. : context canceled
2024-05-20 12:33:13.808647 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:13.809020 D | clusterdisruption-controller: ceph "rook-ceph" cluster failed to check cluster health. failed to get status. : context canceled
2024-05-20 12:33:13.809081 D | clusterdisruption-controller: reconciling "rook-ceph-external/rook-ceph-external"
2024-05-20 12:33:13.809150 D | clusterdisruption-controller: reconciling "rook-ceph/rook-ceph"
2024-05-20 12:33:13.962910 I | op-k8sutil: batch job rook-ceph-detect-version still exists
2024-05-20 12:33:14.007998 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:14.008027 I | operator: starting the controller-runtime manager
2024-05-20 12:33:14.108785 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:14.110305 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:14.110317 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:33:14.110323 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:14.110326 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:14.110341 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:14.110359 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:14.110363 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:14.110371 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:33:14.110401 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:14.110420 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:33:14.110426 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:33:14.110442 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:33:14.110448 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:14.110456 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:14.110686 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:14.110742 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:14.110798 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:33:14.110830 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:33:14.110859 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:33:14.110908 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:33:14.110940 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:14.110988 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:33:14.111013 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:33:14.111032 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:14.111046 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:14.111073 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:33:14.111095 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:14.111133 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:33:14.111159 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:33:14.111193 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:33:14.111218 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:33:14.111248 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:14.111287 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:33:14.111313 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:14.111355 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:14.111391 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:33:14.111424 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2024-05-20 12:33:14.111447 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:33:14.111504 D | ceph-cluster-controller: node watcher: node "ceph02e" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:14.111540 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:33:14.111580 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:14.111602 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:33:14.111633 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:33:14.111667 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:33:14.111690 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:33:14.111752 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:33:14.111783 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:33:14.111830 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:33:14.111862 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:33:14.111902 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:33:14.111948 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:33:14.111983 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:33:14.112015 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:33:14.208152 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:14.208214 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:14.208238 D | ceph-crashcollector-controller: reconciling node: "worker02l"
2024-05-20 12:33:14.209703 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:14.311788 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:33:14.311822 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:33:14.313584 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:33:14.313683 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:33:14.313802 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph-external/secrets/rook-ceph-mon": context canceled
2024-05-20 12:33:14.313958 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:33:14.314029 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/replicapool". failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:33:14.314829 I | operator: successfully started the controller-runtime manager
2024-05-20 12:33:14.317552 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:33:14.317561 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:33:14.317574 I | operator: setting up schemes
2024-05-20 12:33:14.319253 I | operator: setting up the controller-runtime manager
2024-05-20 12:33:14.408685 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:14.408735 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:14.408758 D | ceph-crashcollector-controller: reconciling node: "worker02p"
2024-05-20 12:33:14.410060 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:14.608526 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:14.608576 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:14.608600 D | ceph-crashcollector-controller: reconciling node: "worker02p"
2024-05-20 12:33:14.609739 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:14.616652 I | op-k8sutil: Retrying 5 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2024-05-20 12:33:14.809051 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:14.809081 D | clusterdisruption-controller: deleted temporary blocking pdb for "host" failure domain "ceph02n".
2024-05-20 12:33:14.921786 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:33:14.929493 I | ceph-cluster-controller: successfully started
2024-05-20 12:33:14.929525 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:33:14.929533 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:33:14.929542 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:33:14.929550 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:33:14.929554 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:33:14.929561 I | ceph-block-pool-controller: successfully started
2024-05-20 12:33:14.929572 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:33:14.929580 I | ceph-object-realm-controller: successfully started
2024-05-20 12:33:14.929591 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:33:14.929606 I | ceph-object-zone-controller: successfully started
2024-05-20 12:33:14.929669 I | ceph-object-controller: successfully started
2024-05-20 12:33:14.929692 I | ceph-file-controller: successfully started
2024-05-20 12:33:14.929709 I | ceph-nfs-controller: successfully started
2024-05-20 12:33:14.929726 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:33:14.929761 I | ceph-client-controller: successfully started
2024-05-20 12:33:14.929772 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:33:14.929789 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:33:14.929797 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:33:14.929810 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:33:14.929821 I | ceph-bucket-topic: successfully started
2024-05-20 12:33:14.929827 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:14.929836 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:14.929844 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:33:14.929849 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:33:15.008702 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:15.208964 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:15.209074 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:15.209125 D | ceph-crashcollector-controller: reconciling node: "ceph02m"
2024-05-20 12:33:15.210365 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:15.211566 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:15.211618 D | ceph-crashcollector-controller: reconciling node: "worker02c"
2024-05-20 12:33:15.212566 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:15.408028 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:15.408139 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:15.408194 D | ceph-crashcollector-controller: reconciling node: "ceph02o"
2024-05-20 12:33:15.409430 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:15.410620 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:15.410674 D | ceph-crashcollector-controller: reconciling node: "worker02h"
2024-05-20 12:33:15.411590 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:15.539254 I | op-k8sutil: batch job rook-ceph-detect-version still exists
2024-05-20 12:33:15.608329 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:15.608439 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:15.608499 D | ceph-crashcollector-controller: reconciling node: "ceph02d"
2024-05-20 12:33:15.609649 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:15.610819 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:15.610870 D | ceph-crashcollector-controller: reconciling node: "master02b"
2024-05-20 12:33:15.611750 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:15.808502 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:15.808533 D | clusterdisruption-controller: deleted temporary blocking pdb for "host" failure domain "ceph02o".
2024-05-20 12:33:15.808784 D | clusterdisruption-controller: reconciling "rook-ceph/rook-ceph"
2024-05-20 12:33:16.008625 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:16.008718 I | operator: starting the controller-runtime manager
2024-05-20 12:33:16.109638 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:16.109677 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:16.110280 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:33:16.110388 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:33:16.110431 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:16.110774 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:33:16.110787 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:16.110792 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:16.110842 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:33:16.110884 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:33:16.110924 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:33:16.110964 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:16.110982 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:16.110992 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:16.110998 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:16.111002 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:16.111007 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:33:16.111037 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:33:16.111079 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:33:16.111143 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:16.111175 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:33:16.111214 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:33:16.111236 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:33:16.111250 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:33:16.111268 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:33:16.111296 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:16.111313 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:33:16.111332 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:33:16.111353 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:33:16.111373 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:33:16.111389 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:33:16.111408 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:33:16.111436 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:16.111493 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:33:16.111539 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2024-05-20 12:33:16.111648 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:33:16.111667 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:16.111702 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:33:16.111723 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:33:16.111748 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:33:16.111777 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:33:16.111789 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:16.111826 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:33:16.111849 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:16.111886 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2024-05-20 12:33:16.111926 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:16.111962 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2024-05-20 12:33:16.111993 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:16.112139 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:33:16.112173 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:33:16.112193 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:16.112208 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:33:16.112249 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:33:16.112378 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:33:16.208066 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:16.208504 D | clusterdisruption-controller: ceph "rook-ceph" cluster failed to check cluster health. failed to get status. : context canceled
2024-05-20 12:33:16.313048 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:33:16.313110 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:33:16.313712 I | operator: successfully started the controller-runtime manager
2024-05-20 12:33:16.317792 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:33:16.317801 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:33:16.317821 I | operator: setting up schemes
2024-05-20 12:33:16.319463 I | operator: setting up the controller-runtime manager
2024-05-20 12:33:16.408441 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:16.408491 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:16.408533 D | ceph-crashcollector-controller: reconciling node: "worker02f"
2024-05-20 12:33:16.409709 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:16.608514 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:16.608560 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:16.608583 D | ceph-crashcollector-controller: reconciling node: "worker02b"
2024-05-20 12:33:16.609676 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:16.617636 I | op-k8sutil: Retrying 4 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2024-05-20 12:33:16.808216 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:16.808263 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:16.808286 D | ceph-crashcollector-controller: reconciling node: "worker02e"
2024-05-20 12:33:16.809521 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:16.923839 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:33:16.933686 I | ceph-cluster-controller: successfully started
2024-05-20 12:33:16.933716 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:33:16.933724 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:33:16.933737 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:33:16.933741 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:33:16.933745 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:33:16.933752 I | ceph-block-pool-controller: successfully started
2024-05-20 12:33:16.933761 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:33:16.933789 I | ceph-object-realm-controller: successfully started
2024-05-20 12:33:16.933794 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:33:16.933799 I | ceph-object-zone-controller: successfully started
2024-05-20 12:33:16.933874 I | ceph-object-controller: successfully started
2024-05-20 12:33:16.933894 I | ceph-file-controller: successfully started
2024-05-20 12:33:16.933923 I | ceph-nfs-controller: successfully started
2024-05-20 12:33:16.933935 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:33:16.933950 I | ceph-client-controller: successfully started
2024-05-20 12:33:16.933960 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:33:16.933981 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:33:16.933992 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:33:16.934001 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:33:16.934008 I | ceph-bucket-topic: successfully started
2024-05-20 12:33:16.934014 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:16.934019 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:16.934024 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:33:16.934029 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:33:16.963786 I | op-k8sutil: batch job rook-ceph-detect-version still exists
2024-05-20 12:33:17.008993 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:17.208236 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:17.208301 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:17.208324 D | ceph-crashcollector-controller: reconciling node: "worker02m"
2024-05-20 12:33:17.209811 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:17.408607 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:17.408667 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:17.408691 D | ceph-crashcollector-controller: reconciling node: "ceph02a"
2024-05-20 12:33:17.410211 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:17.411545 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:17.411580 D | ceph-crashcollector-controller: reconciling node: "master02b"
2024-05-20 12:33:17.412617 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:17.608849 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:17.608907 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:17.608944 D | ceph-crashcollector-controller: reconciling node: "ceph02a"
2024-05-20 12:33:17.610570 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:17.614265 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:17.614305 D | ceph-crashcollector-controller: reconciling node: "worker02n"
2024-05-20 12:33:17.615531 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:17.808358 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:17.808385 I | operator: starting the controller-runtime manager
2024-05-20 12:33:17.909618 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:33:17.909647 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:33:17.909663 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:33:17.909672 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:33:17.909725 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:33:17.909830 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:17.909839 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:17.909888 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:17.909895 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:17.909984 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:17.910035 D | ceph-cluster-controller: node watcher: node "ceph02o" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:17.910081 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:17.910121 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:17.910161 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:17.910196 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:17.910230 D | ceph-cluster-controller: node watcher: node "ceph02e" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:17.910251 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:33:17.910261 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:17.910292 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:17.910332 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:17.910378 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:17.910414 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:17.910461 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:17.910496 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:17.910503 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:17.910995 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:17.911010 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:17.911425 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:17.911562 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:33:17.911635 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:33:17.911721 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:33:17.911826 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:33:17.911910 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:33:17.912171 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:33:17.912341 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:33:17.912502 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:33:17.912582 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:33:17.912731 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:33:17.912918 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:33:17.912965 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:33:17.913023 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:33:17.913139 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:33:17.913323 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:33:17.913431 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:33:17.913548 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:33:17.913689 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:33:17.913743 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:33:17.913805 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:33:17.913862 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:33:17.913960 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:33:17.914027 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:33:17.914138 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:33:17.914169 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:33:17.914193 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:33:18.008409 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:18.008688 D | clusterdisruption-controller: ceph "rook-ceph" cluster failed to check cluster health. failed to get status. : context canceled
2024-05-20 12:33:18.008718 D | clusterdisruption-controller: reconciling "rook-ceph-external/rook-ceph-external"
2024-05-20 12:33:18.112136 D | clusterdisruption-controller: reconciling "rook-ceph/"
2024-05-20 12:33:18.112639 D | operator: reconciling rook-ceph/rook-ceph-operator-config
2024-05-20 12:33:18.112668 I | op-k8sutil: ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS="15" (configmap)
2024-05-20 12:33:18.112683 I | op-k8sutil: ROOK_LOG_LEVEL="DEBUG" (configmap)
2024-05-20 12:33:18.112689 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (configmap)
2024-05-20 12:33:18.112829 D | ceph-spec: "ceph-file-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:33:18.112844 D | ceph-spec: "ceph-file-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:33:18.112874 D | ceph-crashcollector-controller: reconciling node: "ceph02d"
2024-05-20 12:33:18.112919 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:33:18.113003 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:33:18.113015 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:33:18.113093 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:33:18.113161 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph-external/secrets/rook-ceph-mon": context canceled
2024-05-20 12:33:18.113208 E | operator: failed to reconcile failed to stop device discovery daemonset: Delete "https://10.96.0.1:443/apis/apps/v1/namespaces/rook-ceph/daemonsets/rook-discover": context canceled
2024-05-20 12:33:18.113251 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:33:18.113312 E | ceph-file-controller: failed to reconcile failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:33:18.113339 E | ceph-file-controller: failed to reconcile CephFilesystem "rook-ceph/myfs". failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:33:18.113368 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/replicapool". failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:33:18.115353 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:18.116633 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:18.116656 D | ceph-crashcollector-controller: reconciling node: "ceph02m"
2024-05-20 12:33:18.117567 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:18.117895 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:33:18.117907 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:33:18.117923 I | operator: setting up schemes
2024-05-20 12:33:18.118723 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:18.118745 D | ceph-crashcollector-controller: reconciling node: "ceph02a"
2024-05-20 12:33:18.119580 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:18.119766 I | operator: setting up the controller-runtime manager
2024-05-20 12:33:18.120714 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:18.120736 D | ceph-crashcollector-controller: reconciling node: "ceph02e"
2024-05-20 12:33:18.121615 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:18.122813 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:18.122840 D | ceph-crashcollector-controller: reconciling node: "ceph02n"
2024-05-20 12:33:18.123762 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:18.125003 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:18.125027 D | ceph-crashcollector-controller: reconciling node: "ceph02b"
2024-05-20 12:33:18.126008 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:18.127113 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:18.127138 D | ceph-crashcollector-controller: reconciling node: "ceph02o"
2024-05-20 12:33:18.128023 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:18.129196 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:18.129216 D | ceph-crashcollector-controller: reconciling node: "ceph02c"
2024-05-20 12:33:18.129988 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:18.131081 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:18.131101 D | ceph-crashcollector-controller: reconciling node: "worker02g"
2024-05-20 12:33:18.131945 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:18.208322 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:18.208459 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:18.208540 D | ceph-crashcollector-controller: reconciling node: "worker02p"
2024-05-20 12:33:18.210011 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:18.409182 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:18.409244 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:18.409271 D | ceph-crashcollector-controller: reconciling node: "ceph02b"
2024-05-20 12:33:18.410585 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:18.411943 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:18.411976 D | ceph-crashcollector-controller: reconciling node: "worker02n"
2024-05-20 12:33:18.412911 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:18.539587 I | op-k8sutil: batch job rook-ceph-detect-version still exists
2024-05-20 12:33:18.608269 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:18.608418 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:18.608498 D | ceph-crashcollector-controller: reconciling node: "worker02q"
2024-05-20 12:33:18.609847 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:18.618427 I | op-k8sutil: Retrying 3 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2024-05-20 12:33:18.722839 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:33:18.735602 I | ceph-cluster-controller: successfully started
2024-05-20 12:33:18.735640 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:33:18.735648 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:33:18.735651 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:33:18.735654 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:33:18.735658 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:33:18.735664 I | ceph-block-pool-controller: successfully started
2024-05-20 12:33:18.735672 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:33:18.735680 I | ceph-object-realm-controller: successfully started
2024-05-20 12:33:18.735686 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:33:18.735692 I | ceph-object-zone-controller: successfully started
2024-05-20 12:33:18.735796 I | ceph-object-controller: successfully started
2024-05-20 12:33:18.735821 I | ceph-file-controller: successfully started
2024-05-20 12:33:18.735834 I | ceph-nfs-controller: successfully started
2024-05-20 12:33:18.735847 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:33:18.735862 I | ceph-client-controller: successfully started
2024-05-20 12:33:18.735876 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:33:18.735895 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:33:18.735906 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:33:18.735915 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:33:18.735921 I | ceph-bucket-topic: successfully started
2024-05-20 12:33:18.735931 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:18.735936 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:18.735941 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:33:18.735946 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:33:18.808932 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:19.008269 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (default)
2024-05-20 12:33:19.008356 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:33:19.008394 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:33:19.008481 W | ceph-csi: could not find deployment owner reference to assign to csi drivers. could not find pod "rook-ceph-operator-6bc54d9b6f-thxtc" in namespace "rook-ceph" to find deployment owner reference: context canceled
2024-05-20 12:33:19.008569 E | ceph-csi: failed to reconcile failed creating csi config map: failed to create initial csi config map "rook-ceph-csi-config" (in "rook-ceph"): context canceled
2024-05-20 12:33:19.208746 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:19.208882 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:19.208935 D | ceph-crashcollector-controller: reconciling node: "worker02b"
2024-05-20 12:33:19.210218 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:19.409390 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:19.409447 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:19.409478 D | ceph-crashcollector-controller: reconciling node: "ceph02d"
2024-05-20 12:33:19.410671 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:19.411869 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:19.411899 D | ceph-crashcollector-controller: reconciling node: "ceph02o"
2024-05-20 12:33:19.412710 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:19.413823 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:19.413845 D | ceph-crashcollector-controller: reconciling node: "worker02d"
2024-05-20 12:33:19.414625 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:19.608665 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:19.608729 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:19.608763 D | ceph-crashcollector-controller: reconciling node: "worker02q"
2024-05-20 12:33:19.609803 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:19.808942 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:19.809002 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:19.809032 D | ceph-crashcollector-controller: reconciling node: "ceph02o"
2024-05-20 12:33:19.810495 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:19.811750 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:19.811789 D | ceph-crashcollector-controller: reconciling node: "ceph02m"
2024-05-20 12:33:19.813153 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:19.814269 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:19.814294 D | ceph-crashcollector-controller: reconciling node: "worker02r"
2024-05-20 12:33:19.815486 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:19.854501 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:19.854609 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:19.854771 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:19.854955 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:19.855106 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:19.964846 I | op-k8sutil: batch job rook-ceph-detect-version still exists
2024-05-20 12:33:20.008952 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:20.008982 I | operator: starting the controller-runtime manager
2024-05-20 12:33:20.210707 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:20.210762 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:20.210793 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:20.210820 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:20.211048 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:20.211058 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:20.211092 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:20.211100 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:20.211107 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:20.211136 D | ceph-cluster-controller: node watcher: node "ceph02e" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:20.211170 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:20.211192 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:20.211221 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:20.211247 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:20.211269 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2024-05-20 12:33:20.211292 D | ceph-cluster-controller: node watcher: node "ceph02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:20.212491 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:20.212535 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:33:20.212550 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:33:20.213275 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:20.213291 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:20.213334 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:20.213378 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:20.300869 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:20.301213 D | clusterdisruption-controller: ceph "rook-ceph" cluster failed to check cluster health. failed to get status. : context canceled
2024-05-20 12:33:20.301250 D | clusterdisruption-controller: reconciling "rook-ceph-external/rook-ceph-external"
2024-05-20 12:33:20.301288 D | clusterdisruption-controller: reconciling "rook-ceph/rook-ceph"
2024-05-20 12:33:20.313203 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:20.410997 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:33:20.411057 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:33:20.411087 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:33:20.411106 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:33:20.411119 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:33:20.411217 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:33:20.411492 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:33:20.411538 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:33:20.411591 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:33:20.411636 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:33:20.411932 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:33:20.411949 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:33:20.411964 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:33:20.412022 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:33:20.412042 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:33:20.412054 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:33:20.412095 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:33:20.412131 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:33:20.412150 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:33:20.412160 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:33:20.412574 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:33:20.412604 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:33:20.412627 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:33:20.412672 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:33:20.412706 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:33:20.412722 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:33:20.412755 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:33:20.412793 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:33:20.412801 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:33:20.412809 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:33:20.425257 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:20.425409 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:20.425468 D | ceph-crashcollector-controller: reconciling node: "worker02h"
2024-05-20 12:33:20.427254 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:20.608423 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:20.608483 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:20.608508 D | ceph-crashcollector-controller: reconciling node: "worker02o"
2024-05-20 12:33:20.609879 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:20.615097 D | ceph-crashcollector-controller: reconciling node: "ceph02e"
2024-05-20 12:33:20.615568 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:33:20.615935 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:33:20.616344 D | clusterdisruption-controller: reconciling "rook-ceph/"
2024-05-20 12:33:20.616420 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph-external"
2024-05-20 12:33:20.616470 D | ceph-spec: CephCluster "rook-ceph-external" status: "Connecting". "Attempting to connect to an external Ceph cluster"
2024-05-20 12:33:20.616503 D | ceph-spec: "ceph-file-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:33:20.616520 D | ceph-spec: "ceph-file-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:33:20.616801 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:33:20.616851 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:33:20.616971 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/replicapool". failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:33:20.617012 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph-external/secrets/rook-ceph-mon": context canceled
2024-05-20 12:33:20.617101 E | op-bucket-prov: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:33:20.617117 E | ceph-file-controller: failed to reconcile failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:33:20.617135 E | ceph-file-controller: failed to reconcile CephFilesystem "rook-ceph/myfs". failed to populate cluster info: failed to get mon secrets: Get "https://10.96.0.1:443/api/v1/namespaces/rook-ceph/secrets/rook-ceph-mon": context canceled
2024-05-20 12:33:20.618289 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:20.619161 I | op-k8sutil: Retrying 2 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
2024-05-20 12:33:20.619788 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:20.619810 D | ceph-crashcollector-controller: reconciling node: "ceph02a"
2024-05-20 12:33:20.620895 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:20.622156 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:20.622177 D | ceph-crashcollector-controller: reconciling node: "ceph02n"
2024-05-20 12:33:20.623129 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:20.624304 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2024-05-20 12:33:20.624313 I | operator: watching all namespaces for Ceph CRs
2024-05-20 12:33:20.624327 I | operator: setting up schemes
2024-05-20 12:33:20.624598 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:20.624644 D | ceph-crashcollector-controller: reconciling node: "ceph02m"
2024-05-20 12:33:20.625862 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:20.626500 I | operator: setting up the controller-runtime manager
2024-05-20 12:33:20.626655 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:20.626668 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:20.626800 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:20.626810 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:20.627335 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:20.627364 D | ceph-crashcollector-controller: reconciling node: "ceph02b"
2024-05-20 12:33:20.627656 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:20.627667 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:20.627845 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:20.627856 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:20.628004 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:20.628013 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:20.628095 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:20.628106 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:20.628955 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:20.630258 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:20.630284 D | ceph-crashcollector-controller: reconciling node: "ceph02o"
2024-05-20 12:33:20.631179 D | ceph-spec: found existing monitor secrets for cluster rook-ceph-external
2024-05-20 12:33:20.631566 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:20.632852 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:20.632875 D | ceph-crashcollector-controller: reconciling node: "ceph02c"
2024-05-20 12:33:20.634018 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:20.635372 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:20.635394 D | ceph-crashcollector-controller: reconciling node: "ceph02d"
2024-05-20 12:33:20.635893 I | ceph-spec: parsing mon endpoints: prceph-mon02=10.11.10.30:6789,prceph-mon03=10.11.10.93:6789,prceph-mon01=10.11.10.190:6789
2024-05-20 12:33:20.635953 D | ceph-spec: loaded: maxMonID=2, mons=map[prceph-mon01:0xc05d9c63c0 prceph-mon02:0xc05d9c6340 prceph-mon03:0xc05d9c6380], assignment=&{Schedule:map[]}
2024-05-20 12:33:20.635985 I | ceph-spec: found the cluster info to connect to the external cluster. will use "client.admin" to check health and monitor status. mons=map[prceph-mon01:0xc05d9c63c0 prceph-mon02:0xc05d9c6340 prceph-mon03:0xc05d9c6380]
2024-05-20 12:33:20.636578 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:20.637896 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:20.637919 D | ceph-crashcollector-controller: reconciling node: "worker02n"
2024-05-20 12:33:20.638066 I | cephclient: writing config file /var/lib/rook/rook-ceph-external/rook-ceph-external.config
2024-05-20 12:33:20.638167 I | cephclient: generated admin config in /var/lib/rook/rook-ceph-external
2024-05-20 12:33:20.638199 I | ceph-cluster-controller: external cluster identity established
2024-05-20 12:33:20.638221 I | cephclient: getting or creating ceph auth key "client.csi-rbd-provisioner"
2024-05-20 12:33:20.638246 D | exec: Running command: ceph auth get-or-create-key client.csi-rbd-provisioner mon profile rbd mgr allow rw osd profile rbd --connect-timeout=15 --cluster=rook-ceph-external --conf=/var/lib/rook/rook-ceph-external/rook-ceph-external.config --name=client.admin --keyring=/var/lib/rook/rook-ceph-external/client.admin.keyring --format json
2024-05-20 12:33:20.639397 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:20.808918 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:20.808968 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:20.808991 D | ceph-crashcollector-controller: reconciling node: "ceph02d"
2024-05-20 12:33:20.810206 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:20.811515 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:20.811551 D | ceph-crashcollector-controller: reconciling node: "worker02l"
2024-05-20 12:33:20.812597 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:21.008121 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:21.008174 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:21.008197 D | ceph-crashcollector-controller: reconciling node: "master02a"
2024-05-20 12:33:21.009234 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:21.186081 I | cephclient: getting or creating ceph auth key "client.csi-rbd-node"
2024-05-20 12:33:21.186817 D | exec: Running command: ceph auth get-or-create-key client.csi-rbd-node mon profile rbd mgr allow rw osd profile rbd --connect-timeout=15 --cluster=rook-ceph-external --conf=/var/lib/rook/rook-ceph-external/rook-ceph-external.config --name=client.admin --keyring=/var/lib/rook/rook-ceph-external/client.admin.keyring --format json
2024-05-20 12:33:21.208614 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:21.233054 I | operator: delete Issuer and Certificate since secret is not found
2024-05-20 12:33:21.241115 I | ceph-cluster-controller: successfully started
2024-05-20 12:33:21.241153 I | ceph-cluster-controller: enabling hotplug orchestration
2024-05-20 12:33:21.241161 I | ceph-crashcollector-controller: successfully started
2024-05-20 12:33:21.241164 D | ceph-crashcollector-controller: watch for changes to the nodes
2024-05-20 12:33:21.241167 D | ceph-crashcollector-controller: watch for changes to the ceph-crash deployments
2024-05-20 12:33:21.241172 D | ceph-crashcollector-controller: watch for changes to the ceph pod nodename and enqueue their nodes
2024-05-20 12:33:21.241180 I | ceph-block-pool-controller: successfully started
2024-05-20 12:33:21.241190 I | ceph-object-store-user-controller: successfully started
2024-05-20 12:33:21.241201 I | ceph-object-realm-controller: successfully started
2024-05-20 12:33:21.241207 I | ceph-object-zonegroup-controller: successfully started
2024-05-20 12:33:21.241213 I | ceph-object-zone-controller: successfully started
2024-05-20 12:33:21.241296 I | ceph-object-controller: successfully started
2024-05-20 12:33:21.241317 I | ceph-file-controller: successfully started
2024-05-20 12:33:21.241333 I | ceph-nfs-controller: successfully started
2024-05-20 12:33:21.241346 I | ceph-rbd-mirror-controller: successfully started
2024-05-20 12:33:21.241360 I | ceph-client-controller: successfully started
2024-05-20 12:33:21.241369 I | ceph-filesystem-mirror-controller: successfully started
2024-05-20 12:33:21.241384 I | operator: rook-ceph-operator-config-controller successfully started
2024-05-20 12:33:21.241395 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2024-05-20 12:33:21.241404 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2024-05-20 12:33:21.241411 I | ceph-bucket-topic: successfully started
2024-05-20 12:33:21.241416 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:21.241421 I | ceph-bucket-notification: successfully started
2024-05-20 12:33:21.241427 I | ceph-fs-subvolumegroup-controller: successfully started
2024-05-20 12:33:21.241434 I | blockpool-rados-namespace-controller: successfully started
2024-05-20 12:33:21.410261 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:21.410319 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:21.410348 D | ceph-crashcollector-controller: reconciling node: "worker02d"
2024-05-20 12:33:21.412039 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:21.540585 I | op-k8sutil: batch job rook-ceph-detect-version still exists
2024-05-20 12:33:21.608971 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:21.609022 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:21.609054 D | ceph-crashcollector-controller: reconciling node: "worker02e"
2024-05-20 12:33:21.610178 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:21.725423 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-provisioner"
2024-05-20 12:33:21.725506 D | exec: Running command: ceph auth get-or-create-key client.csi-cephfs-provisioner mon allow r mgr allow rw osd allow rw tag cephfs metadata=* --connect-timeout=15 --cluster=rook-ceph-external --conf=/var/lib/rook/rook-ceph-external/rook-ceph-external.config --name=client.admin --keyring=/var/lib/rook/rook-ceph-external/client.admin.keyring --format json
2024-05-20 12:33:21.809014 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (default)
2024-05-20 12:33:21.809030 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:33:21.809034 D | ceph-csi: not a multus cluster "rook-ceph/rook-ceph-operator-config" or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder
2024-05-20 12:33:21.809079 W | ceph-csi: could not find deployment owner reference to assign to csi drivers. could not find pod "rook-ceph-operator-6bc54d9b6f-thxtc" in namespace "rook-ceph" to find deployment owner reference: context canceled
2024-05-20 12:33:21.809147 E | ceph-csi: failed to reconcile failed creating csi config map: failed to create initial csi config map "rook-ceph-csi-config" (in "rook-ceph"): context canceled
2024-05-20 12:33:22.008637 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:22.208972 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:22.209122 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:22.209171 D | ceph-crashcollector-controller: reconciling node: "worker02p"
2024-05-20 12:33:22.210415 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:22.270506 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-node"
2024-05-20 12:33:22.270573 D | exec: Running command: ceph auth get-or-create-key client.csi-cephfs-node mon allow r mgr allow rw osd allow rw tag cephfs *=* mds allow rw --connect-timeout=15 --cluster=rook-ceph-external --conf=/var/lib/rook/rook-ceph-external/rook-ceph-external.config --name=client.admin --keyring=/var/lib/rook/rook-ceph-external/client.admin.keyring --format json
2024-05-20 12:33:22.408983 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:22.409039 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:22.409069 D | ceph-crashcollector-controller: reconciling node: "worker02r"
2024-05-20 12:33:22.410613 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:22.608583 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:22.608628 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:22.608659 D | ceph-crashcollector-controller: reconciling node: "worker02d"
2024-05-20 12:33:22.609793 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:22.619699 I | op-k8sutil: Retrying 1 more times every 2 seconds for ConfigMap rook-ceph-detect-version to be deleted
I0520 12:33:22.807717       1 request.go:665] Waited for 1.599062062s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/version
2024-05-20 12:33:22.809565 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:22.810503 D | clusterdisruption-controller: ceph "rook-ceph" cluster failed to check cluster health. failed to get status. : context canceled
2024-05-20 12:33:22.821063 D | op-cfg-keyring: updating secret for rook-csi-rbd-provisioner
2024-05-20 12:33:22.826465 D | op-cfg-keyring: updating secret for rook-csi-rbd-node
2024-05-20 12:33:22.831911 D | op-cfg-keyring: updating secret for rook-csi-cephfs-provisioner
2024-05-20 12:33:22.836390 D | op-cfg-keyring: updating secret for rook-csi-cephfs-node
2024-05-20 12:33:22.838896 I | ceph-csi: created kubernetes csi secrets for cluster "rook-ceph-external"
2024-05-20 12:33:22.845166 I | ceph-csi: successfully created csi config map "rook-ceph-csi-config"
2024-05-20 12:33:22.845500 D | ceph-csi: using "rook-ceph" for csi configmap namespace
2024-05-20 12:33:22.965372 I | op-k8sutil: batch job rook-ceph-detect-version still exists
2024-05-20 12:33:23.008387 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:23.008468 I | operator: starting the controller-runtime manager
2024-05-20 12:33:23.047183 I | ceph-cluster-controller: successfully updated csi config map
2024-05-20 12:33:23.047211 D | exec: Running command: ceph version --connect-timeout=15 --cluster=rook-ceph-external --conf=/var/lib/rook/rook-ceph-external/rook-ceph-external.config --name=client.admin --keyring=/var/lib/rook/rook-ceph-external/client.admin.keyring --format json
2024-05-20 12:33:23.109766 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02m-zzttb" is a ceph pod!
2024-05-20 12:33:23.109808 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02c-6968d66b97-gcf7b" is a ceph pod!
2024-05-20 12:33:23.109877 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-a-85dc75b664-vln4n" is a ceph pod!
2024-05-20 12:33:23.109909 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02n-867c4b8cd-rr94c" is a ceph pod!
2024-05-20 12:33:23.109956 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02o-7bb7d9c5b5-5vjpq" is a ceph pod!
2024-05-20 12:33:23.110019 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02c-5qdnn" is a ceph pod!
2024-05-20 12:33:23.110159 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02d-svhsp" is a ceph pod!
2024-05-20 12:33:23.110177 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02n-qr8td" is a ceph pod!
2024-05-20 12:33:23.110222 D | ceph-crashcollector-controller: "rook-ceph-mds-myfs-b-7df8698c66-7php7" is a ceph pod!
2024-05-20 12:33:23.110286 D | ceph-crashcollector-controller: "rook-ceph-osd-1-ffb885fff-xbktr" is a ceph pod!
2024-05-20 12:33:23.110419 D | ceph-crashcollector-controller: "rook-ceph-mon-h-6c9b78cb4d-2g529" is a ceph pod!
2024-05-20 12:33:23.110460 D | ceph-crashcollector-controller: "rook-ceph-osd-6-7d8c87b949-t5q98" is a ceph pod!
2024-05-20 12:33:23.110474 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02a-777d6cdc4f-jr569" is a ceph pod!
2024-05-20 12:33:23.110481 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02m-777556b5dc-xm779" is a ceph pod!
2024-05-20 12:33:23.110495 D | ceph-crashcollector-controller: "rook-ceph-osd-4-5f95965c9b-6zcj9" is a ceph pod!
2024-05-20 12:33:23.110501 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02b-677995cffb-dzf76" is a ceph pod!
2024-05-20 12:33:23.110515 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02a-drds2" is a ceph pod!
2024-05-20 12:33:23.110539 D | ceph-crashcollector-controller: "rook-ceph-osd-5-549d98c7bd-xfqdd" is a ceph pod!
2024-05-20 12:33:23.110555 D | ceph-crashcollector-controller: "rook-ceph-mon-i-67bb88f5f6-lqxs4" is a ceph pod!
2024-05-20 12:33:23.110564 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02e-h54qz" is a ceph pod!
2024-05-20 12:33:23.110594 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02b-r7bq7" is a ceph pod!
2024-05-20 12:33:23.110630 D | ceph-crashcollector-controller: "rook-ceph-osd-prepare-ceph02o-vjg92" is a ceph pod!
2024-05-20 12:33:23.110639 D | ceph-spec: create event from a CR: "replicapool"
2024-05-20 12:33:23.110659 D | ceph-crashcollector-controller: "rook-ceph-osd-7-5c6fbccff4-l5gvj" is a ceph pod!
2024-05-20 12:33:23.110674 D | ceph-crashcollector-controller: "rook-ceph-mon-e-856c85f568-df9b5" is a ceph pod!
2024-05-20 12:33:23.110694 D | ceph-crashcollector-controller: "rook-ceph-mgr-a-7dd76c6d55-76ln6" is a ceph pod!
2024-05-20 12:33:23.110709 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:23.110727 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:23.110738 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02e-5b8d868687-69xbb" is a ceph pod!
2024-05-20 12:33:23.110764 D | ceph-crashcollector-controller: "rook-ceph-osd-0-67d54c6c5b-zqq2x" is a ceph pod!
2024-05-20 12:33:23.110770 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:23.110775 D | ceph-cluster-controller: create event from a CR
2024-05-20 12:33:23.110780 D | ceph-crashcollector-controller: "rook-ceph-osd-3-7dc67bf67d-vxmng" is a ceph pod!
2024-05-20 12:33:23.110793 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2024-05-20 12:33:23.110814 D | ceph-crashcollector-controller: "rook-ceph-crashcollector-ceph02d-79fbc8fbfb-rkq2v" is a ceph pod!
2024-05-20 12:33:23.110845 D | ceph-crashcollector-controller: "rook-ceph-osd-2-59c55584c7-q65x5" is a ceph pod!
2024-05-20 12:33:23.110861 D | ceph-cluster-controller: node watcher: node "worker02q" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:23.110906 D | ceph-cluster-controller: node watcher: node "master02c" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:23.110944 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:23.110956 D | ceph-cluster-controller: node watcher: node "worker02f" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:23.110986 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:23.110998 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:23.111033 D | clusterdisruption-controller: create event from ceph cluster CR
2024-05-20 12:33:23.111041 D | ceph-cluster-controller: node watcher: node "worker02r" is not tolerable for cluster "rook-ceph", skipping
2024-05-20 12:33:23.111113 D | ceph-cluster-controller: node watcher: node "master02b" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:23.111157 D | ceph-spec: create event from a CR: "myfs"
2024-05-20 12:33:23.111186 D | ceph-cluster-controller: node watcher: cluster "rook-ceph" is not ready. skipping orchestration
2024-05-20 12:33:23.111198 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph"
2024-05-20 12:33:23.111237 D | ceph-cluster-controller: node watcher: node "worker02p" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:23.111272 D | ceph-cluster-controller: node watcher: node "ceph02o" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:23.111318 D | ceph-cluster-controller: node watcher: node "master02a" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:23.111354 D | ceph-cluster-controller: node watcher: node "worker02g" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:23.111385 D | ceph-cluster-controller: node watcher: node "worker02h" is not tolerable for cluster "rook-ceph-external", skipping
2024-05-20 12:33:23.111442 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:23.208068 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:23.208125 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:23.208151 D | ceph-crashcollector-controller: reconciling node: "worker02a"
2024-05-20 12:33:23.209308 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:23.312807 D | ceph-crashcollector-controller: reconciling node: "ceph02m"
2024-05-20 12:33:23.312873 D | ceph-spec: "ceph-file-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:33:23.312885 D | ceph-spec: "ceph-file-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:33:23.312926 D | ceph-spec: "ceph-block-pool-controller": CephCluster resource "rook-ceph" found in namespace "rook-ceph"
2024-05-20 12:33:23.312936 D | ceph-spec: "ceph-block-pool-controller": ceph status is "HEALTH_WARN", operator is ready to run ceph command, reconciling
2024-05-20 12:33:23.314396 D | clusterdisruption-controller: reconciling "rook-ceph-external/rook-ceph-external"
2024-05-20 12:33:23.314441 D | clusterdisruption-controller: reconciling "rook-ceph/rook-ceph"
2024-05-20 12:33:23.314591 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph-external"
2024-05-20 12:33:23.314619 D | ceph-spec: CephCluster "rook-ceph-external" status: "Connecting". "Attempting to connect to an external Ceph cluster"
2024-05-20 12:33:23.316122 D | ceph-spec: found existing monitor secrets for cluster rook-ceph-external
2024-05-20 12:33:23.316770 D | operator: reconciling rook-ceph/rook-ceph-operator-config
2024-05-20 12:33:23.316807 I | op-k8sutil: ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS="15" (configmap)
2024-05-20 12:33:23.316811 I | op-k8sutil: ROOK_LOG_LEVEL="DEBUG" (configmap)
2024-05-20 12:33:23.316817 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (configmap)
2024-05-20 12:33:23.316968 D | operator: webhook secret created reloading the manager to enable the webhook server
2024-05-20 12:33:23.316989 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:23.317065 I | operator: reloading operator's CRDs manager, cancelling all orchestrations!
2024-05-20 12:33:23.317200 E | ceph-file-controller: failed to reconcile failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:33:23.317219 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/replicapool". failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:33:23.317306 E | ceph-file-controller: failed to reconcile CephFilesystem "rook-ceph/myfs". failed to populate cluster info: failed to get mon secrets: context canceled
2024-05-20 12:33:23.317471 E | operator: failed to reconcile failed to stop device discovery daemonset: Delete "https://10.96.0.1:443/apis/apps/v1/namespaces/rook-ceph/daemonsets/rook-discover": context canceled
2024-05-20 12:33:23.318809 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:23.318834 D | ceph-crashcollector-controller: reconciling node: "ceph02c"
2024-05-20 12:33:23.319817 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:23.321047 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:23.321067 D | ceph-crashcollector-controller: reconciling node: "ceph02b"
2024-05-20 12:33:23.322015 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:23.323250 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:23.323271 D | ceph-crashcollector-controller: reconciling node: "ceph02n"
2024-05-20 12:33:23.324183 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:23.325325 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:23.325350 D | ceph-crashcollector-controller: reconciling node: "ceph02o"
2024-05-20 12:33:23.325490 D | ceph-spec: CephCluster "rook-ceph-external" status: "Progressing". "failed to populate external cluster info: context canceled"
2024-05-20 12:33:23.325900 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:23.325913 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:23.325935 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:23.325964 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:23.326048 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:23.326060 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:23.326101 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:23.326114 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:23.326422 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:23.326462 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:23.326563 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:23.326588 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:23.326814 D | ceph-spec: found 1 ceph clusters in namespace "rook-ceph-external"
2024-05-20 12:33:23.326825 D | ceph-cluster-controller: update event on CephCluster CR
2024-05-20 12:33:23.327580 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:23.328831 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:23.328855 D | ceph-crashcollector-controller: reconciling node: "ceph02d"
2024-05-20 12:33:23.330041 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:23.330431 E | ceph-spec: failed to update cluster condition to {Type:Progressing Status:False Reason:ClusterProgressing Message:failed to populate external cluster info: context canceled LastHeartbeatTime:2024-05-20 12:33:23.325323829 +0000 UTC m=+44.660488971 LastTransitionTime:2024-05-20 12:33:23.325323772 +0000 UTC m=+44.660488913}. failed to update object "rook-ceph-external/rook-ceph-external" status: Operation cannot be fulfilled on cephclusters.ceph.rook.io "rook-ceph-external": the object has been modified; please apply your changes to the latest version and try again
2024-05-20 12:33:23.330442 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:33:23.330456 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph-external/rook-ceph-external"
2024-05-20 12:33:23.330500 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2024-05-20 12:33:23.330532 I | ceph-cluster-controller: context cancelled, exiting reconcile
2024-05-20 12:33:23.330553 D | ceph-cluster-controller: successfully configured CephCluster "rook-ceph/rook-ceph"
2024-05-20 12:33:23.331621 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:23.331644 D | ceph-crashcollector-controller: reconciling node: "ceph02a"
2024-05-20 12:33:23.332594 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:23.333708 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": context canceled
2024-05-20 12:33:23.333731 D | ceph-crashcollector-controller: reconciling node: "worker02q"
2024-05-20 12:33:23.334584 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:23.409376 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 12:33:23.409423 E | ceph-crashcollector-controller: context canceled
2024-05-20 12:33:23.409447 D | ceph-crashcollector-controller: reconciling node: "worker02q"
2024-05-20 12:33:23.410514 D | ceph-spec: ceph version found "15.2.15-0"
2024-05-20 12:33:23.608073 D | op-k8sutil: kubernetes version fetched 1.26.9
2024-05-20 12:33:23.608377 D | clusterdisruption-controller: ceph "rook-ceph" cluster failed to check cluster health. failed to get status. : context canceled
2024-05-20 12:33:23.608406 D | clusterdisruption-controller: reconciling "rook-ceph-external/rook-ceph-external"
2024-05-20 12:33:23.608436 D | clusterdisruption-controller: reconciling "rook-ceph/rook-ceph"
2024-05-20 12:33:23.621704 D | cephclient: {"version":"ceph version 15.2.17 (8a82819d84cf884bd39c17e3236e0632ac146dc4) octopus (stable)"}
2024-05-20 12:33:23.621717 D | cephclient: {"version":"ceph version 15.2.17 (8a82819d84cf884bd39c17e3236e0632ac146dc4) octopus (stable)"}
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1985999]

goroutine 31282 [running]:
github.com/rook/rook/pkg/operator/ceph/cluster.(*ClusterController).configureExternalCephCluster(0xc000712680, 0xc003c76000)
	/home/runner/work/rook/rook/pkg/operator/ceph/cluster/cluster_external.go:140 +0x759
github.com/rook/rook/pkg/operator/ceph/cluster.(*ClusterController).initializeCluster(0xc000712680, 0xc003c76000)
	/home/runner/work/rook/rook/pkg/operator/ceph/cluster/cluster.go:185 +0xb3
github.com/rook/rook/pkg/operator/ceph/cluster.(*ClusterController).reconcileCephCluster(0xc000712680, 0xc0616b7400, 0x233deb8)
	/home/runner/work/rook/rook/pkg/operator/ceph/cluster/controller.go:372 +0x21a
github.com/rook/rook/pkg/operator/ceph/cluster.(*ReconcileCephCluster).reconcile(_, {{{_, _}, {_, _}}})
	/home/runner/work/rook/rook/pkg/operator/ceph/cluster/controller.go:258 +0x46d
github.com/rook/rook/pkg/operator/ceph/cluster.(*ReconcileCephCluster).Reconcile(0xc059cfb980, {0x2300f88, 0xc05a680360}, {{{0xc04c55a7c8, 0x12}, {0xc04c55a7b0, 0x12}}})
	/home/runner/work/rook/rook/pkg/operator/ceph/cluster/controller.go:217 +0xc5
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0xc059d960b0, {0x2300f88, 0xc05a680330}, {{{0xc04c55a7c8, 0x1e3a1e0}, {0xc04c55a7b0, 0x413a34}}})
	/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/internal/controller/controller.go:114 +0x26f
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc059d960b0, {0x2300ee0, 0xc059e37400}, {0x1d1b1c0, 0xc056c28fc0}) /home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/internal/controller/controller.go:311 +0x33e sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc059d960b0, {0x2300ee0, 0xc059e37400}) /home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/internal/controller/controller.go:266 +0x205 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2() /home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/internal/controller/controller.go:227 +0x85 created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2 /home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/internal/controller/controller.go:223 +0x357
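The trace bottoms out in `configureExternalCephCluster` at `cluster_external.go:140` with a nil pointer dereference, immediately after "failed to populate external cluster info: context canceled" was logged. A minimal, hypothetical Go sketch of that failure mode (the `clusterInfo`/`configureExternal` names are illustrative, not Rook's actual code): when the populate step is aborted by a canceled context, later code dereferences the never-filled-in struct unless it checks for nil first.

```go
package main

import "fmt"

// clusterInfo stands in for the external-cluster metadata the operator
// caches; a nil value models the "populate external cluster info" step
// having been aborted (e.g. by a canceled context) before it ran.
type clusterInfo struct {
	Monitors map[string]string
}

// configureExternal sketches the failure mode: dereferencing info without
// a nil check would panic with exactly this class of runtime error, while
// guarding turns the crash into an ordinary reconcile error the
// controller-runtime work queue can simply retry.
func configureExternal(info *clusterInfo) (int, error) {
	if info == nil {
		return 0, fmt.Errorf("external cluster info not populated")
	}
	return len(info.Monitors), nil
}

func main() {
	// Simulate the aborted-populate path: error instead of SIGSEGV.
	if _, err := configureExternal(nil); err != nil {
		fmt.Println("recoverable error:", err)
	}
	// Normal path once the info has been populated.
	n, _ := configureExternal(&clusterInfo{Monitors: map[string]string{"a": "10.0.0.1:6789"}})
	fmt.Println("monitors:", n)
}
```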