Ceph Cluster keeps entering in "Progressing" status and simply not working. #13466
@alifiroozi80 could you get `ceph status` output?
Hello @subhamkrai

```
$ ceph -s
  cluster:
    id:     21fb4292-6775-4b24-bf4b-97ae1bdf76dd
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 6h)
    mgr: b(active, since 2d), standbys: a
    mds: 1/1 daemons up, 1 hot standby
    osd: 9 osds: 9 up (since 12h), 9 in (since 3w)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   12 pools, 265 pgs
    objects: 15.73k objects, 59 GiB
    usage:   187 GiB used, 83 GiB / 270 GiB avail
    pgs:     265 active+clean

  io:
    client:   8.8 KiB/s rd, 144 KiB/s wr, 3 op/s rd, 2 op/s wr
```
I meant: check the ceph status while the cephcluster is in `Progressing`.
Here you are:

```
ubuntu@master-1:~$ kubectl -n rook-ceph get cephcluster
NAME        DATADIRHOSTPATH   MONCOUNT   AGE    PHASE         MESSAGE                               HEALTH      EXTERNAL   FSID
rook-ceph   /var/lib/rook     3          135d   Progressing   Processing OSD 4 on node "worker-3"   HEALTH_OK              21fb4292-6775-4b24-bf4b-97ae1bdf76dd
ubuntu@master-1:~$ kubectl -n rook-ceph exec -it rook-ceph-tools-6db96f8f67-dbd8n -- ceph -s
  cluster:
    id:     21fb4292-6775-4b24-bf4b-97ae1bdf76dd
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 9h)
    mgr: a(active, since 2h), standbys: b
    mds: 1/1 daemons up, 1 hot standby
    osd: 9 osds: 9 up (since 16h), 9 in (since 3w)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   12 pools, 265 pgs
    objects: 15.77k objects, 59 GiB
    usage:   186 GiB used, 84 GiB / 270 GiB avail
    pgs:     265 active+clean

  io:
    client:   937 B/s rd, 18 KiB/s wr, 1 op/s rd, 0 op/s wr
```
Awesome
Hello folks,

I have been running Rook with my K8s cluster for over four months now. Until recently I had Rook `1.12.X` and Ceph `1.17.X`, and they were awesome. Then I updated Rook to `1.13.1` and Ceph to `1.18.2`, and everything seems OK, but every couple of seconds the `PHASE` of the `cephcluster` changes from `Ready` to `Progressing`. Meanwhile, no volumes can be initialized!
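Since the phase flaps every few seconds, it can help to script the check rather than eyeballing the table. A minimal sketch (the helper name `phase_of` is mine, not from this thread): it pulls the `PHASE` column out of the plain-text `kubectl get cephcluster` table, which in the default output format is the fifth column.

```shell
# Hypothetical helper: extract the PHASE column from the second line
# (the first data row) of `kubectl get cephcluster` output.
phase_of() {
  awk 'NR==2 {print $5}'
}

# Demonstration on a captured table; the MESSAGE column may contain
# spaces, but that does not affect column 5.
table='NAME        DATADIRHOSTPATH   MONCOUNT   AGE    PHASE         MESSAGE
rook-ceph   /var/lib/rook     3          135d   Progressing   Processing OSD 4 on node "worker-3"'
printf '%s\n' "$table" | phase_of   # prints "Progressing"
```

In practice, `kubectl -n rook-ceph get cephcluster rook-ceph -o jsonpath='{.status.phase}'` is more robust than parsing the human-readable table, since it reads the phase directly from the CR's status.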
The `MESSAGE` is different every time; here are a couple of them that I captured (the `OSD` and `node` are different every time):

Here are the Pods:

Here are the last couple of lines of the `rook-ceph-operator` deployment: it seems that it's OK, right? But no! It will not create Volumes while it's in the `Progressing` phase.

Here is the sample PVC:
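(The PVC manifest itself did not survive extraction. For context only, a typical block PVC against Rook's example `rook-ceph-block` StorageClass looks something like the following; the names and sizes here are illustrative, not the reporter's actual manifest.)

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc            # illustrative name, not from the issue
spec:
  storageClassName: rook-ceph-block   # Rook's example RBD StorageClass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```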
I applied that and saw that it was stuck at `Pending`, and meanwhile the `cephcluster` was in `Progressing`. After a couple of minutes it became `Ready`, and the PVC was created!

UPDATE: I downgraded Rook to `1.12.8` and left Ceph at `1.18.2`, and now everything works as expected.