Ability to set ceph dashboard rgw-api access/secret keys for multisite configuration #11047
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
unstale
This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.
any updates on this matter?
@thotz Thoughts on this?
BTW, is there a way to set the endpoint the manager uses to talk to radosgw? Or how is this info derived?
IMO, the user needs to be created only on the master zone, not on all zones, so that the conflict can be avoided. @bumarcell please check with the ceph dashboard devs about how the endpoint is figured out.
Do we want to keep this alive?
I'm not sure.. I was taken away by many other things and haven't come back to this again 🙈
What should the feature do:
Set/update the ceph rgw api credentials used by the dashboard to access rgw instances. The equivalent of:
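A sketch of those commands, assuming the standard `ceph dashboard` CLI (recent Ceph releases require reading the key material from a file via `-i`; the key values here are hypothetical placeholders):

```shell
# Write the hypothetical credentials to temp files, then hand them to the dashboard module.
echo -n "MYACCESSKEY" > /tmp/rgw-access-key
echo -n "MYSECRETKEY" > /tmp/rgw-secret-key
ceph dashboard set-rgw-api-access-key -i /tmp/rgw-access-key
ceph dashboard set-rgw-api-secret-key -i /tmp/rgw-secret-key
```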
Ideally, this would be controlled by the creation of a `secret` within the same namespace as the `cephcluster`. A secret per `cephobjectstore` is not needed, as the API credentials are "global" to the cluster.

What is the use case behind this feature:
As of 1.10.1, `enableRGWDashboard()` first creates a system user and then sets the ceph dashboard rgw-api credentials to match that user. This is problematic for a multisite setup of a non-master zone because 1) ceph, at least as of quincy, appears to be unable to hold credentials per rgw instance, so the rgw-api credentials must be used to access all rgw instances in the ceph cluster, and 2) a multisite zone has to be part of a realm, and the realm replicates all users, including system users. This means that all rgw instances within a cluster need to be accessible with the same system user credentials.

It is theoretically possible this could work if the non-master zone is the first rgw instance in a cluster and the realm sync actually works before rook attempts to create the `dashboard-admin` user. However, after more than a week of testing, this has always resulted in a failure to sync the credentials from the existing `dashboard-admin` user in the realm, and rook never finishes zone configuration. Even if this is resolvable through logic/order-of-operations changes, the ability to set/change the global rgw-api credentials is desirable when multisite is in use.

I cannot explain why the multisite integration test run under GHA with two cephclusters within the same k8s cluster is working. I have only tested with two physically different k8s test clusters, which always results in rook failing at `dashboard-admin` user creation. I'm not sure if this is because the rgw is already in a realm in which the user already exists, or because the realm/zonegroup is in a non-working state because the period was not committed.

This means that in order for the dashboard to be able to access a non-master (or non-first zone in the realm/zonegroup) rgw instance, something needs to change from the current behavior. There are at least these possible solutions:
- … `dashboard-admin` user to avoid collisions within the realm (I did not test hacking this into the operator).
- … add a new set of keys to the `dashboard-admin` user when a new zone is created (I tested adding multiple sets of keys to the `dashboard-admin` user after the zone was manually kicked into syncing).
user after the zone was manually kicked into syncing).As all the users are replicated between rgw instances within the same realm, option \2 and \4 both result in the same set of keys being able to access all rgw instances. I think I slightly prefer option \4 as it results in only one set of credentials to need to consider / rotate.
Environment: