
VMs running on RBD are very slow #48

Open
hani-s opened this issue Jun 5, 2017 · 5 comments

@hani-s

hani-s commented Jun 5, 2017

Hi,

I just want to verify whether anyone else is seeing very low speeds when running VMs on an RBD backend, using either this plugin or the other available plugin.

I have done a test using two servers:
1- Proxmox 4.3
2- XenServer 7.1 with this plugin

I connected both to the same RBD pool and managed to create a VM on each server in that pool.

The storage speed I am getting on the KVM-based VM on Proxmox is more than 20 times faster than on the one running on XenServer.

I disabled caching on the Proxmox KVM disk before running the tests.
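For what it's worth, this is roughly how the disk was attached with caching disabled on the Proxmox side (the VM ID, bus, and volume name below are placeholders, not my exact values):

```
# Reattach the RBD-backed disk with host page cache disabled (cache=none)
qm set 100 --scsi0 rbd-pool:vm-100-disk-1,cache=none

# Check the resulting disk line in the VM config
qm config 100 | grep scsi0
```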

Is this expected, or is something wrong with my setup?

I thought I might lose 20 to 30% in speed compared to KVM-based VMs, but the speeds I am getting are unusable.

Note: I tried enabling caching on XenServer to improve the speed, but that modified the server so heavily that I won't use it for anything serious anymore.
Note 2: I am using a 40Gb InfiniBand connection between the client and Ceph (I had to install the InfiniBand drivers on XenServer).

hani-s changed the title from "MV running on RBD are very slow" to "VMs running on RBD are very slow" on Jun 5, 2017
@rposudnevskiy
Owner

Hi,
I don't have experience using XenServer with InfiniBand, but it seems there are some issues with using Open vSwitch over InfiniBand:
https://discussions.citrix.com/topic/383014-xenserver-70-mellanox-connectx-3-nic-infiniband/?p=1964813
I'm not sure it will help you, but you can try changing the network backend from Open vSwitch (the default) to Linux Bridge:
https://www.citrix.com/blogs/2011/12/23/how-to-change-xenserver-network-backend-to-linux-bridge/
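If it helps, the switch described in that post boils down to roughly the following on the XenServer host (a reboot is required afterwards):

```
# Switch the XenServer network backend from openvswitch to Linux Bridge
xe-switch-network-backend bridge

# Reboot the host for the change to take effect
reboot
```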

@nate-byrnes

While I am not using IB with my cluster, I found that when I doubled the RAM on my dom0s, I saw much better performance from my guest VMs running on RBD. YMMV....
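In case it helps anyone, on XenServer 7.x the dom0 memory can be raised roughly like this (8192M here is only an example value; adjust for your host, and a reboot is needed):

```
# Set dom0 memory on XenServer 7.x (example value; takes effect after a reboot)
/opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=8192M,max:8192M
reboot
```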

@nate-byrnes

nate-byrnes commented Nov 17, 2017 via email

@ghost

ghost commented Nov 17, 2017

Yeah, I deleted my comment because I found that :) I will try whether this makes things better for me as well - thank you!

@starcraft66

starcraft66 commented Dec 24, 2018

I am experiencing the same problem. When testing against the same Ceph cluster, disk I/O was much, much faster when running Proxmox.

For example, when installing Debian onto a VM on RBD storage, dpkg is painfully slow because of all the constant fsyncing it does. Doing the same on Proxmox results in a very fast installation.
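A quick way to compare the two setups is a small fio run inside the guest that issues an fdatasync after every write, which is roughly the I/O pattern dpkg produces (file name and size here are arbitrary):

```
# Synchronous 4k writes with an fdatasync after every write,
# approximating dpkg's fsync-heavy behaviour
fio --name=sync-write-test --filename=/tmp/fio-sync-test \
    --ioengine=psync --rw=write --bs=4k --size=256m --fdatasync=1
```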
