[root@rook-ceph-tools-5cb7d9cf4d-sq52c /]# radosgw-admin period commit
failed to commit period: (2) No such file or directory
failed to commit period: (2) No such file or directory
2024-03-22T13:45:22.800+0000 7f97ec0b1a80 0 period failed to read sync status: (2) No such file or directory
2024-03-22T13:45:22.800+0000 7f97ec0b1a80 0 failed to update metadata sync status: (2) No such file or directory
[root@rook-ceph-tools-5cb7d9cf4d-sq52c /]# radosgw-admin period update --commit
failed to commit period: (2) No such file or directory
2024-03-22T13:46:50.865+0000 7f6c4c7f3a80 0 period failed to read sync status: (2) No such file or directory
2024-03-22T13:46:50.865+0000 7f6c4c7f3a80 0 failed to update metadata sync status: (2) No such file or directory
failed to commit period: (2) No such file or directory
[root@rook-ceph-tools-5cb7d9cf4d-sq52c /]# radosgw-admin period update --commit
failed to commit period: 2024-03-22T13:47:52.194+0000 7f8e252f6a80 0 period failed to read sync status: (2) No such file or directory
2024-03-22T13:47:52.194+0000 7f8e252f6a80 0 failed to update metadata sync status: (2) No such file or directory
(2) No such file or directory
failed to commit period: (2) No such file or directory
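For reference (purely inspection, not a fix), the current period and metadata sync state can be dumped from the toolbox with standard radosgw-admin commands:
[root@rook-ceph-tools-5cb7d9cf4d-sq52c /]# radosgw-admin period get
[root@rook-ceph-tools-5cb7d9cf4d-sq52c /]# radosgw-admin period list
[root@rook-ceph-tools-5cb7d9cf4d-sq52c /]# radosgw-admin metadata sync status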
Environment:
OS (e.g. from /etc/os-release): Ubuntu 20.04.6 LTS (Focal Fossa)
I would also like to know the answers to the following questions.
I performed a manual failover to DC2 (following https://docs.ceph.com/en/latest/radosgw/multisite/#setting-up-failover-to-the-secondary-zone) and then created a bucket and pushed some data into it on the DC2 cluster. However, that data was not synced back to DC1. To double-check, I pushed some data into the former master zone at DC1 and, to my surprise, it was synced to the DC2 cluster.
Is this the intended behavior?
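For reference, the failover procedure from the linked docs boils down to promoting the secondary zone and committing the period (a sketch of the documented steps, using my DC2 zone name; the gateways are restarted afterwards):
DC2 # radosgw-admin zone modify --rgw-zone=xshield-store-zone-b --master --default
DC2 # radosgw-admin period update --commit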
The radosgw-admin sync status command works only on DC2, not on DC1:
DC2 # radosgw-admin sync status
          realm c0e21ab6-57a2-4339-96b8-9aa4b7c5ea52 (xshield-store)
      zonegroup 63695c1b-316e-4836-a0a9-2021dc4c74b1 (xshield-store)
           zone 78a175ba-62f0-4ba1-8217-aea003125c1e (xshield-store-zone-b)
   current time 2024-03-22T15:52:46Z
zonegroup features enabled: resharding
                   disabled: compress-encrypted
  metadata sync no sync (zone is master)
      data sync source: f934eba3-d647-4543-aed1-216e317e31a2 (xshield-store)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source
DC1 # radosgw-admin sync status
          realm c0e21ab6-57a2-4339-96b8-9aa4b7c5ea52 (xshield-store)
      zonegroup 63695c1b-316e-4836-a0a9-2021dc4c74b1 (xshield-store)
           zone f934eba3-d647-4543-aed1-216e317e31a2 (xshield-store)
   current time 2024-03-22T15:53:51Z
zonegroup features enabled: resharding
                   disabled: compress-encrypted
  metadata sync failed to read sync status: (2) No such file or directory
2024-03-22T15:53:52.688+0000 7f338ed56a80  0 ERROR: failed to fetch datalog info
      data sync source: 78a175ba-62f0-4ba1-8217-aea003125c1e (xshield-store-zone-b)
                        failed to retrieve sync info: (5) Input/output error
What does "pull the latest realm configuration" mean - just metadata pull or the data too ? Reference:
Even though the Ceph part may work as per the documentation, here in Rook we also have CRDs. We may need to update the CRDs as well because, from the Rook operator's point of view, DC1 is still the master. @alimaredia any thoughts?
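For reference, the multisite objects the operator reconciles (assuming the default rook-ceph namespace) can be listed with:
kubectl -n rook-ceph get cephobjectrealms,cephobjectzonegroups,cephobjectzones,cephobjectstores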
Is this a bug report or feature request?
Bug Report
Deviation from expected behavior:
Ceph RGW Multisite -- reverting from failover is not working
Expected behavior:
How to reproduce it (minimal and precise):
xshield-store-keys in rook-ceph namespace (radosgw-admin sync status on site 2)
File(s) to submit:
cluster.yaml, if necessary
Helm chart values for rook-ceph-cluster v1.13.7 for DC 1
Helm chart values for DC 2
Logs to submit:
Failing back to DC1 is not working.
Environment:
Kernel (e.g. uname -a): Linux kubespray 5.4.0-174-generic #193-Ubuntu SMP Thu Mar 7 14:29:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Rook version (use rook version inside of a Rook Pod): 1.13.7
Storage backend version (e.g. for ceph do ceph -v): ceph version 18.2.2 (531c0d11a1c5d39fbfe6aa8a521f023abf3bf3e2) reef (stable)
Kubernetes version (use kubectl version): Client Version: v1.27.7, Kustomize Version: v5.0.1, Server Version: v1.27.7
Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift): Bare metal with kubespray
Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox): HEALTH_OK on both sites