Cannot create CephObjectStore with external ceph cluster #13827
Comments
@achernya Did you create the object store with the `object-external.yaml` example? I suspect you created the object store with `object.yaml`, which is not for external cluster configuration. The failure looks like it comes from attempting to create a CRUSH rule, which suggests Rook is trying to fully create pools in the external cluster, which it doesn't have access to. See also the Connect to an external object store topic.
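For reference, a minimal sketch of the `object-external.yaml` style of CephObjectStore, assuming an RGW endpoint already running outside the cluster; the store name and IP below are placeholders, not values from this issue:

```sh
# Hedged sketch: an external-style CephObjectStore that points Rook at an
# existing RGW endpoint instead of provisioning pools/daemons itself.
kubectl apply -f - <<EOF
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: external-store          # placeholder name
  namespace: rook-ceph-external
spec:
  gateway:
    port: 80
    externalRgwEndpoints:
      - ip: 192.168.1.10        # placeholder external RGW address
EOF
```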
I used `object.yaml`. It sounds like this is potentially an unsupported configuration, and I should instead provision rgw externally and then use `object-external.yaml`.
I believe (but am not certain) that the configuration you describe is possible. It looks like the current issue may be that the Rook cluster might not have an admin key, which is necessary to set things up for running against an external cluster. It's also possible that there are some internal issues with Rook in this area. When you ran this step, did you specify the Ceph admin key and keyring? https://rook.io/docs/rook/latest-release/CRDs/Cluster/external-cluster/?h=key#1-create-all-users-and-keys @parth-gr might have some additional thoughts about this as well.
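For context, the export step from the linked docs looks roughly like the sketch below; the pool name and namespace are placeholders, and it assumes the script is run on a host with the external cluster's admin keyring available:

```sh
# Hedged sketch of the user/key export step from the external-cluster docs;
# this creates the restricted users (e.g. client.healthchecker) and prints
# the env vars consumed by the import step.
python3 create-external-cluster-resources.py \
  --rbd-data-pool-name replicapool \
  --namespace rook-ceph-external \
  --format bash
```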
@BlaineEXE you are correct, the admin key is not present. I did not specify the admin key and keyring. I ran the export script with […]
@BlaineEXE I tried passing […] and there is no output change.
@achernya you need to pass the […] to the export script.
@parth-gr as I mentioned in my initial comment, I do not have an existing radosgw configuration for this external ceph cluster, and my goal is to get rook to provision the radosgw inside the k8s cluster.
@achernya first of all, I would like to ask why you want this type of configuration. If it's an external ceph cluster, it won't be rook's responsibility to manage its daemons, and I believe there are checks in the code along the lines of "if it is external, skip its management". So if you want to test something outside of that, I would say we don't support it. But if you are interested in knowing how the creation could be made possible: grant the external user wider caps, like `"mon": "allow *"`, and then I think the rgw pool creation will succeed.
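A hedged sketch of widening those caps, assuming `client.healthchecker` (the user created by the export script) is the identity the operator connects with; note this effectively makes the user admin-equivalent:

```sh
# Grant broad mon/mgr/osd caps so the operator can create pools and CRUSH
# rules in the external cluster; the entity name and exact cap set here
# are assumptions based on the export script's defaults.
ceph auth caps client.healthchecker \
  mon 'allow *' mgr 'allow *' osd 'allow *'
```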
currently the script requires both the v2 and v1 ports in order to enable the v2 port, but that is not a necessary condition, so removing the check, and enabling it when only v2 is present, to successfully configure with v2 only. part-of: rook#13827 Signed-off-by: parth-gr <partharora1010@gmail.com>
sometimes users want to use the admin power to create some resources in the external ceph cluster, so adding a way to use the admin privilege. part-of: rook#13827 Signed-off-by: parth-gr <partharora1010@gmail.com>
In my environment, I have a hyper-converged setup where the hypervisors host VMs with ceph-rbd storage, and I want the same ceph cluster to be used by the k8s environment. My underlying hypervisors (proxmox) don't set up rgw, since it would want to take advantage of loadbalancers. I was hoping to run the rgw portions of the system in k8s, where my loadbalancers already exist and can be easily set up.
From my strace in my initial report, the creation command was explicitly looking for the client.admin keyring. That leads me to believe that simply granting client.healthchecker these privileges is necessary, but not sufficient, to make this work.
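A quick way to confirm which client entities the external cluster actually has (a sketch, run from a host with access to the external cluster):

```sh
# List auth entities and keep only the client.* lines; entity names appear
# at the start of a line in `ceph auth ls` output.
ceph auth ls | grep -E '^client\.'
```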
@achernya can you share the logs again? What is it complaining about now? Also, restart the rook operator pod after these privilege changes; sometimes it requires a reboot of the node where the operator pod is running.
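For example, one way to restart the operator so it picks up the new caps (a sketch, assuming the default `rook-ceph` namespace and operator deployment name):

```sh
# Restart the Rook operator pod; the deployment and namespace names assume
# a default install and may differ in this environment.
kubectl -n rook-ceph rollout restart deployment/rook-ceph-operator
```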
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
Is this a bug report or feature request?
Deviation from expected behavior:
I have an external ceph cluster that I imported using the instructions at https://rook.io/docs/rook/v1.13/Getting-Started/intro/. The cluster has rbd and cephfs services installed and exposed, and those were imported successfully. However, this ceph cluster does not have an existing rgw running.
I then went and followed the instructions on https://rook.io/docs/rook/latest-release/CRDs/Object-Storage/ceph-object-store-crd/ to create a CephObjectStore, placing the resource in the `rook-ceph-external` namespace. This resulted in the operator having the following logs:
I wasn't sure what `exit status 13` meant, so I enabled debug logs, which didn't help, as `CephToolCommand.Run` doesn't seem to log its output anywhere I can tell inside `createReplicationCrushRule`. I ended up strace'ing outside the operator container to figure out what ceph command the operator was running, which turned out to be […]. If I run that command myself, I get […], which makes sense: the external cluster only created `client.healthchecker`, not `client.admin`.
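For anyone reproducing this, a sketch of the kind of strace invocation described above, run on the node hosting the operator; the process lookup assumes the operator binary runs as `rook ceph operator`, which may differ:

```sh
# Attach to the operator process and log exec'd child commands so the
# exact `ceph` invocation (and its arguments) becomes visible.
OPERATOR_PID=$(pgrep -f 'rook ceph operator' | head -n 1)
strace -f -e trace=execve -s 4096 -p "$OPERATOR_PID"
```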
None of the documentation makes it clear whether this is a supported configuration, and the error reporting leaves a bit to be desired if it is not. It is not clear to me if I should just change the envvars I pass to `import-external-cluster.sh` to set `ROOK_EXTERNAL_ADMIN_SECRET`, and what the downsides of doing that could be; a rough sketch of that approach is below.
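A hedged sketch of that env-var approach, assuming `import-external-cluster.sh` consumes `ROOK_EXTERNAL_ADMIN_SECRET` from the environment as the docs suggest:

```sh
# Fetch the external cluster's admin key and expose it to the import
# script; run against the external cluster with an admin keyring in place.
export ROOK_EXTERNAL_ADMIN_SECRET="$(ceph auth get-key client.admin)"
. import-external-cluster.sh
```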
Expected behavior:
CephObjectStore is created successfully.
How to reproduce it (minimal and precise):
File(s) to submit:
`cluster.yaml`, if necessary
Logs to submit:
Operator's logs, if necessary
Crashing pod(s) logs, if necessary
To get logs, use `kubectl -n <namespace> logs <pod name>`. When pasting logs, always surround them with backticks or use the insert code button from the GitHub UI. Read GitHub documentation if you need help.
Cluster Status to submit:
Output of kubectl commands, if necessary
To get the health of the cluster, use `kubectl rook-ceph health`. To get the status of the cluster, use `kubectl rook-ceph ceph status`. For more details, see the Rook kubectl Plugin.
Environment:
- OS / kernel (`uname -a`): 6.1.0-18-cloud-amd64
- Rook version (`rook version` inside of a Rook Pod): rook: v1.13.3
- Ceph version (`ceph -v`): ceph version 17.2.7 (2dd3854d5b35a35486e86e2616727168e244f470) quincy (stable)
- Kubernetes version (`kubectl version`): v1.29.2
- Ceph health (`ceph health` in the Rook Ceph toolbox): HEALTH_OK