Cannot make use of RADOS namespace for external cluster with EC block pool #13633
Comments
Any ideas? I would probably go with a replicated pool for block storage, otherwise. |
@bauerjs1 We currently do not support a RADOS namespace with an EC block pool; I think even Ceph does not.
That can be done, but it's just the Python script check; we need that support from the Ceph backend. If you have any recommendations on how to use it, please let us know. cc @travisn |
Yes, afaik Ceph does not support namespaces in EC block pools.
I am not sure what you mean here. However, I am unsure if that construct of having a namespaced metadata pool but a non-namespaced data pool would make sense at all. I am far from being a Ceph expert and that just felt most intuitive to me, tbh. |
I tried creating a rados namespace in the EC pool and it was successful. Then I created an EC StorageClass (updating the cluster-id through the rados namespace status) and created the PVC, but I got this warning on the PVC and it is still in Pending state. Not sure how it worked for you @bauerjs1
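For reference, an EC-backed RBD StorageClass of this kind could look roughly like the following sketch; the pool names come from this thread, while the provisioner, the clusterID placeholder, and the remaining parameters are assumptions, and the CSI secret parameters are omitted for brevity:

```yaml
# Sketch only: RBD StorageClass for an erasure-coded data pool.
# clusterID would be the value reported in the CephBlockPoolRadosNamespace status;
# provisioner and other parameters are assumptions, CSI secret parameters omitted.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-block-ec
provisioner: rook-ceph.rbd.csi.ceph.com   # usually <operator-namespace>.rbd.csi.ceph.com
parameters:
  clusterID: <clusterID-from-radosnamespace-status>
  pool: ceph-block-metadata      # replicated metadata pool
  dataPool: ceph-block           # erasure-coded data pool
  imageFormat: "2"
  imageFeatures: layering
reclaimPolicy: Delete
allowVolumeExpansion: true
```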
cc @Madhu-1 should this need a fix in CSI? |
Ohh I see, now even the rados namespace is in Failure state, so we can't create a rados namespace on an EC pool. |
Yes, unfortunately Ceph doesn't support this. So the idea was to create the namespaces only in the metadata pool (which is always replicated and not erasure-coded). I can confirm that Rook's RADOS namespace objects are successfully created on the metadata pool:

apiVersion: ceph.rook.io/v1
kind: CephBlockPoolRadosNamespace
metadata:
  name: staging
  namespace: storage
spec:
  blockPoolName: ceph-block-metadata
  name: staging
status:
  info:
    clusterID: 429b...
  phase: Ready

However, there is currently no possibility to pass this to the … |
currently added the support for rados namespace for rbd ec pools upstream closes: rook#13633 Signed-off-by: parth-gr <partharora1010@gmail.com>
Sorry for the delay @parth-gr, I've been quite busy with other stuff over the past few weeks, but I'm currently on it again. I will let you know as soon as I have tested it. |
I suppose it's a different problem, but I am now getting the error … so I am not yet able to tell if this works now. PR #8083 seems to be the origin of the error message, but I have no clue why this fails or how to resolve it. Since I enabled encryption, I passed the flag …

Networking config:

network:
  connections:
    compression:
      enabled: false
    encryption:
      enabled: true
    requireMsgr2: false
|
@bauerjs1 so you were able to test the original change for ec pools? |
Yes I tried to test it but I'm afraid I can't tell you whether the original issue is solved because the script still fails, as mentioned above |
@bauerjs1 can you show the output of |
This is the output:

{
"election_epoch": 12,
"quorum": [
0,
1,
2
],
"quorum_names": [
"a",
"b",
"c"
],
"quorum_leader_name": "a",
"quorum_age": 505050,
"features": {
"quorum_con": "4540138322906710015",
"quorum_mon": [
"kraken",
"luminous",
"mimic",
"osdmap-prune",
"nautilus",
"octopus",
"pacific",
"elector-pinging",
"quincy",
"reef"
]
},
"monmap": {
"epoch": 3,
"fsid": "49b5d4a6-a0bb-4c8f-9736-1f57ba3a5425",
"modified": "2024-02-22T12:29:01.352672Z",
"created": "2024-02-22T12:28:22.909053Z",
"min_mon_release": 18,
"min_mon_release_name": "reef",
"election_strategy": 1,
"disallowed_leaders: ": "",
"stretch_mode": false,
"tiebreaker_mon": "",
"removed_ranks: ": "",
"features": {
"persistent": [
"kraken",
"luminous",
"mimic",
"osdmap-prune",
"nautilus",
"octopus",
"pacific",
"elector-pinging",
"quincy",
"reef"
],
"optional": []
},
"mons": [
{
"rank": 0,
"name": "a",
"public_addrs": {
"addrvec": [
{
"type": "v2",
"addr": "10.233.5.247:3300",
"nonce": 0
}
]
},
"addr": "10.233.5.247:3300/0",
"public_addr": "10.233.5.247:3300/0",
"priority": 0,
"weight": 0,
"crush_location": "{}"
},
{
"rank": 1,
"name": "b",
"public_addrs": {
"addrvec": [
{
"type": "v2",
"addr": "10.233.44.101:3300",
"nonce": 0
}
]
},
"addr": "10.233.44.101:3300/0",
"public_addr": "10.233.44.101:3300/0",
"priority": 0,
"weight": 0,
"crush_location": "{}"
},
{
"rank": 2,
"name": "c",
"public_addrs": {
"addrvec": [
{
"type": "v2",
"addr": "10.233.24.101:3300",
"nonce": 0
}
]
},
"addr": "10.233.24.101:3300/0",
"public_addr": "10.233.24.101:3300/0",
"priority": 0,
"weight": 0,
"crush_location": "{}"
}
]
}
}

I see that the mons have only v2 ports enabled, but I did not find a way to change that. I thought it would be … |
This is moreover a Ceph configuration, nothing related to the external script. Currently the external script checks either that v1 exists or that both v1 and v2 exist. @travisn should we support the case where only v2 exists? |
Since this cluster is set up by Rook, is there a possibility to enable v1 ports, e.g. in …? |
With the network encryption enabled, the v2 ports are enabled. If you change it to false, the v1 ports should be available.
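For reference, a minimal sketch of the relevant CephCluster networking excerpt with encryption switched off, which per the comment above should make the v1 ports available again; the field names simply mirror the config quoted earlier in this thread:

```yaml
# Sketch only: excerpt of the CephCluster spec.network section.
network:
  connections:
    compression:
      enabled: false
    encryption:
      enabled: false   # with encryption disabled, the mons should expose v1 (6789) ports again
    requireMsgr2: false
```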
|
So this is for the external mode, let's discuss. |
Hm, I still don't understand why v1 ports are required. Doesn't this render encryption unusable for external clusters? According to the docs I'd just have to use the … |
Sorry for the delay, I'll get to this in the next couple of days, hopefully. Thanks in advance for the effort! |
Sorry for the late feedback. I've commented on both PRs. Still having several issues and errors with the script. Since my source cluster is also created by Rook, I wonder if it is possible to do everything in a declarative way, which would be totally awesome! Afaik, the script mainly creates new users and keys. Can this be done by Rook CRs like …? |
No, we don't have admin privileges with the Rook client in external mode, so they can't create anything. |
Yea, but the source cluster isn't running in external mode, only the consumer cluster. Of course I'd need to create the required CRs in the source cluster. Wouldn't that be possible? Is there any documentation on what the "consumer" Rook needs in the source cluster? I can also open a new issue on that, if you want, so we don't mix up too many topics here |
When using an erasure-coded data pool for RBDs, there must exist an additional replicated metadata pool. Since Ceph does not support RADOS namespaces in EC pools, I only created the namespace in the metadata pool. When I pass the flags

--rbd-data-pool-name ceph-block --rbd-metadata-ec-pool-name ceph-block-metadata --rados-namespace staging

(afaik the metadata pool can't be EC, so that flag name is a bit misleading) to create-external-cluster-resources.py, the script looks for the namespace staging in the EC data pool ceph-block (where it cannot exist) instead of the metadata pool.

How can I use tenant isolation with RADOS namespaces for external clusters when the data pool for RBDs is erasure-coded?
Thanks in advance!
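For reference, a minimal sketch of how the EC data pool plus replicated metadata pool pair described above is typically declared in Rook; the chunk counts and failure domains are illustrative assumptions, not the reporter's actual manifests:

```yaml
# Sketch only: erasure-coded data pool and replicated metadata pool for RBD.
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: ceph-block
  namespace: storage
spec:
  failureDomain: host        # assumption
  erasureCoded:
    dataChunks: 2            # assumption
    codingChunks: 1          # assumption
---
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: ceph-block-metadata
  namespace: storage
spec:
  failureDomain: host        # assumption
  replicated:
    size: 3                  # assumption
```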
Is this a bug report or feature request?
Deviation from expected behavior:
The create-external-cluster-resources.py script fails with …

Expected behavior:
To my understanding, the script should look for the provided namespace in the metadata block pool if the data pool is erasure-coded (please correct me if I am wrong here).
How to reproduce it:
Apply the below resources in the source cluster and try to extract information for a Rook external consumer cluster with …
File(s) to submit:
Resources created in the source cluster:
Environment:
Source cluster:
- Kernel version: 5.15.0
- Rook version: 1.13.3
- Ceph version: 18.2.1
- Kubernetes version: 1.25