Hi, I have purged several OSDs from the cluster (scaled down the rook-operator, scaled down the osd.X deployment, then ran `ceph osd out` and `ceph osd purge`, and deleted the osd.X deployment), waited for the rebalance, then moved on to the next OSD. After working on this for a day or two and purging all the OSDs I wanted, I removed the devices those OSDs were using from the CephCluster CRD and scaled the operator back up, at which point it recreated all of the purged OSD deployments. These are now failing, of course, since the OSDs have been purged from Ceph; `ceph osd tree` does not contain any of the purged OSDs. I also tried purging the OSDs again using the rook-ceph kubectl plugin, but it complains that it cannot find them in the osd dump.
This is Rook v1.13.4 and Ceph 18.2.0.
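For reference, this is roughly the per-OSD removal sequence I followed; the OSD id, namespace, and deployment names below are placeholders and may differ in your cluster:

```sh
# Stop the operator so it doesn't reconcile while an OSD is being removed
kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=0

# Scale down the deployment for the OSD being removed (e.g. osd.5)
kubectl -n rook-ceph scale deployment rook-ceph-osd-5 --replicas=0

# From the toolbox pod: mark the OSD out and purge it from Ceph
ceph osd out osd.5
ceph osd purge 5 --yes-i-really-mean-it

# Delete the now-unused OSD deployment
kubectl -n rook-ceph delete deployment rook-ceph-osd-5

# Wait for the rebalance to finish before moving on to the next OSD
ceph -s
```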
-
@mliker do you have a matching OSD count in the CephCluster spec? Generally I have seen this error when someone removed an OSD but the OSD count in the CephCluster doesn't match the OSDs that are actually up.
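As a quick sanity check you can compare the two; a minimal sketch, assuming the default `rook-ceph` namespace and CR name:

```sh
# Number of OSDs Ceph actually reports (run from the toolbox pod)
ceph osd stat

# Inspect the CephCluster spec and compare against the expected OSD/device count
kubectl -n rook-ceph get cephcluster rook-ceph -o yaml
```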
Hi @subhamkrai, thanks for getting back to me. I realised that I hadn't cleaned up the drives themselves, so the prepare jobs were re-discovering them. Now the OSDs are no longer recreated when I scale up the rook operator.
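For anyone else hitting this: the drive cleanup I did follows the Rook teardown docs. A minimal sketch, with the device path as a placeholder you should double-check before wiping anything:

```sh
DISK="/dev/sdX"   # placeholder: the device the purged OSD was using

# Zap the partition table and wipe filesystem/LVM signatures so the
# osd-prepare job no longer detects an existing OSD on the disk
sgdisk --zap-all "$DISK"
wipefs --all --force "$DISK"

# Optionally clear the start of the disk as well
dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync

# For LVM-based OSDs you may also need to remove the leftover ceph-*
# volume groups / device-mapper entries on the host before the disk
# shows up as clean.
```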