-
Never mind, now it's OK. I guess it just took a while to resolve.
-
I had a healthy host-based cluster where I needed to expand RAM, and the hosts had no hot-add.
So, one node at a time, I scaled down its OSD deployment, drained the node, shut it down, expanded the RAM, started it back up, uncordoned it, and scaled the OSD deployment back up.
This worked for the first couple of nodes, but after expanding the RAM on the 3rd node and finishing all the steps, I see this in `ceph status`.
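For reference, the per-node procedure looks roughly like this (a sketch assuming a Rook-managed cluster; the node name, namespace, and OSD deployment name below are hypothetical and need to match your cluster):

```shell
# Hypothetical names -- substitute your own node, namespace, and OSD deployment.
NODE=node3
NS=rook-ceph
OSD_DEPLOY=rook-ceph-osd-2

# Optionally tell Ceph not to mark OSDs out while the node is down,
# so it does not start rebalancing data during the maintenance window.
kubectl -n "$NS" exec deploy/rook-ceph-tools -- ceph osd set noout

# Scale down the OSD deployment for this node, then drain it.
kubectl -n "$NS" scale deployment "$OSD_DEPLOY" --replicas=0
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data

# ... shut the host down, expand RAM, power it back on ...

# Bring the node back into service and restore the OSD.
kubectl uncordon "$NODE"
kubectl -n "$NS" scale deployment "$OSD_DEPLOY" --replicas=1
kubectl -n "$NS" exec deploy/rook-ceph-tools -- ceph osd unset noout
```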
And `ceph osd status` shows this.
I've tried unsetting these flags on every OSD. I'm not sure whether I should reference them by `osd.#` or just `#`, but I've tried both.
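One thing worth checking: cluster-wide flags and per-OSD flags use different subcommands. A sketch of both forms (the OSD id `osd.3` is a placeholder, and which flags are actually set on your cluster isn't visible here):

```shell
# Cluster-wide flags are set/cleared with set/unset:
ceph osd set noout
ceph osd unset noout

# Per-OSD flags (available since Luminous) use add-/rm- subcommands instead,
# so "unset" on an individual OSD silently does nothing:
ceph osd add-noout osd.3   # set noout on a single OSD
ceph osd rm-noout osd.3    # clear it again
ceph osd rm-noup osd.3
ceph osd rm-nodown osd.3

# ceph osd dump shows which flags are currently in effect.
ceph osd dump | grep -i flags
```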
How do I fix this issue?