Multus and CSI #5356
Comments
CSI pods now utilize multus networking and connect to public network specified in the CephCluster CR. Closes: rook#5356 Signed-off-by: rohan47 <rohgupta@redhat.com>
Question: does this mean that if I'm running an fio pod using CSI, the fio pod would automatically get the multus annotation and receive an extra network interface on the fly? I wasn't sure whether multus interfaces could be hot-plugged. Does this mean I don't have to change application YAMLs to know about Multus? If so, that would be great news. Thanks for the help from Sebastien and Rohan. -ben
The fix for this issue will only apply multus annotations to the CSI component pods.
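For readers unfamiliar with how Multus attaches extra interfaces, the fix presumably amounts to adding the standard Multus network-selection annotation to the CSI pod templates. A minimal sketch, assuming a NetworkAttachmentDefinition named `public-net` in the `rook-ceph` namespace (both names are hypothetical):

```yaml
# Sketch only: the NAD name "rook-ceph/public-net" is an assumed placeholder.
# k8s.v1.cni.cncf.io/networks is the standard Multus network-selection annotation.
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: rook-ceph/public-net
```

Pods carrying this annotation get an additional interface on the referenced network at creation time; it is not hot-plugged into already-running pods.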
Is the fio binary inside the csi image that you are using? @bengland2
The fio binary is inside the https://quay.io/repository/cloud-bulldozer/fio image. Sorry, it's not clear to me what you mean by a "CSI image". fio does not know about Ceph; it just uses a mountpoint on a volume handed to it by Kubernetes, and the mountpoint is created from a storage class, either rbd or cephfs. Isn't that what CSI does: provide access to storage resources independent of the storage class implementation?
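The setup described above can be sketched as a PVC provisioned from an RBD storage class and an fio pod that mounts it. The PVC name, pod name, mount path, and storage class name below are all assumptions for illustration:

```yaml
# Hedged sketch of the fio-on-CSI setup described above.
# "rook-ceph-block" is an assumed rbd storage class name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fio-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  storageClassName: rook-ceph-block
---
apiVersion: v1
kind: Pod
metadata:
  name: fio
spec:
  containers:
    - name: fio
      image: quay.io/cloud-bulldozer/fio
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data   # fio runs against this mountpoint
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: fio-pvc
```

The fio pod itself never talks to Ceph directly; the CSI driver provisions and attaches the volume, and fio only sees the filesystem at the mount path.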
Another problem: OSD pods come up with two multus interfaces, one public and one cluster, but do not use them. This is because the "cluster network" parameter appears not to be set. Rohan seems to think that rook is not handling the whereabouts NetworkAttachmentDefinition correctly. There is a workaround, though: using either the ceph config overrides ConfigMap or the "ceph config-key" feature, we can set the "cluster network" parameter and restart the OSDs to force them to use it. I have actually seen this work; rook.io should just be doing it automatically (according to Seb). Does this need a separate GitHub issue, or can we piggyback on top of this one? ;-)
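The ConfigMap workaround mentioned above can be sketched as follows. This assumes Rook's config-override ConfigMap mechanism; the namespace and the CIDR are placeholders that must match your environment (the CIDR should match the cluster NetworkAttachmentDefinition's subnet):

```yaml
# Sketch of the ceph config override workaround; CIDR is an example placeholder.
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: rook-ceph   # the namespace your Rook cluster runs in
data:
  config: |
    [global]
    cluster network = 192.168.20.0/24   # subnet of the cluster NAD
```

After applying this, the OSD pods need to be restarted so the daemons pick up the new `cluster network` setting and move replication traffic onto the multus interface.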
Today I got a multus cluster to technically come up (HEALTH_OK with 12 OSDs), but when I tried to run an fio workload the PVC would not bind, and I got this error: http://pastebin.test.redhat.com/885502. Is this what you expect to happen? Also, I do not understand the mechanics of how kernel Ceph modules learn how to reach the correct Multus subnet when they are not part of the OpenShift network namespace. Specifically, when I run the fio benchmark with a Ceph storage class, the pod must access a kernel RBD or CephFS mountpoint, and those are implemented by kernel modules that the Ceph CSI driver does not directly control. But doesn't this problem already exist, since the SDN network is part of a network namespace too? I understand how Ceph works on bare metal much better; I haven't learned K8s networking yet.
I tried creating just a cluster network and no public network, which is an unsupported configuration; it fails with a crash. Could we support this configuration? It would allow some benefit from Multus even before we resolve this issue.
Two things:
Sebastien, if multus only specifies the cluster network, the implication is that we continue to use the SDN network as the public network, but at least you still have more than one network (and more than one physical NIC port) to use this way. I would prefer to have the public network on Multus as well, but until Ceph-CSI can set that up, I thought this might be a short-term solution.
Ok, so basically: slow client network but fast replication network 🤔. I believe it's an acceptable short-term solution.
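The cluster-network-only configuration discussed above would be expressed in the CephCluster CR's network section. A minimal sketch, assuming a NetworkAttachmentDefinition named `cluster-net` (the name is a placeholder); note the thread reports this configuration currently crashes:

```yaml
# Sketch of a CephCluster network section putting only the cluster
# (replication) network on Multus; client traffic stays on the SDN.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  network:
    provider: multus
    selectors:
      cluster: cluster-net   # assumed NAD for OSD replication traffic
      # no "public" selector: clients keep using the default SDN network
```

With no `public` selector, only OSD-to-OSD replication would ride the Multus interface, which matches the "slow client network, fast replication network" trade-off described above.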
The current issue that we are facing while using multus and CSI is that the CSI templates need to have their annotation updated if Multus is detected as a provider.