haxm much slower than kvm? #40
PC with HAXM: i7-6820HQ, 16 GB RAM, NVMe SSD - Windows 7, HAXM 7.1, QEMU 2.11 - 64 bit
PC with KVM: i7-5500U, 16 GB RAM, SATA SSD - Ubuntu 18.04, kernel 4.15, QEMU 2.11 - 64 bit
A simple Ubuntu 16.04 installation takes 16 minutes on the HAXM PC, but only 7 minutes on the KVM PC...
Is there a technical reason for that, or is it more a bug / inefficiency somewhere?
Also, I noticed that HAXM is much slower than VMware on the 6820HQ machine, at least in this use case: 7 Java 8 programs running under Ubuntu 16.04 with light network activity among themselves and to the outside, without any graphics (not even a framebuffer, only strict text mode). On VMware, CPU utilization is about 10% (if not less), while on HAXM it is 60-90% on the 2 virtual cores.
Recent patches have changed the instruction emulator (#42) and reduced the number of feature checks performed while loading/saving state on VM entry/exit (#63). Although the emulator changes probably won't improve performance, the feature-check patches most certainly will. Out of sheer curiosity: if you still have the same hardware around and building+installing the drivers is easy for you, could you rerun the tests? Just to see whether we are below 16 minutes now. @raphaelning We could really use a set of microkernels for benchmarking HAXM. I assume the sources of performance penalties are:
These three are always triggered by a VM-exit event. So we could have a set of microkernels, each repeatedly triggering a VM exit with a specific exit reason, and compare the average execution time against other hypervisors. A minimal sketch of the idea follows below.
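To make this concrete, here is a minimal sketch of the measurement loop (my illustration, not project code). CPUID unconditionally causes a VM exit, so timing a tight CPUID loop with RDTSC approximates the round-trip cost of that exit reason. A real version would be a bare-metal microkernel image; this user-space variant run inside a guest is only a rough stand-in, and the iteration count is arbitrary.

```c
/* Guest-side sketch: average the cost of one VM-exit reason.
 * CPUID always traps to the hypervisor, so (end - start) / N approximates
 * the round-trip cost of a CPUID exit, including hypervisor handling. */
#include <stdint.h>
#include <stdio.h>
#include <cpuid.h>      /* __get_cpuid (GCC/Clang) */
#include <x86intrin.h>  /* __rdtsc */

#define ITERATIONS 1000000ULL

int main(void)
{
    unsigned int a, b, c, d;
    uint64_t start = __rdtsc();

    for (uint64_t i = 0; i < ITERATIONS; i++)
        __get_cpuid(0, &a, &b, &c, &d);   /* one forced VM exit per call */

    uint64_t cycles = __rdtsc() - start;
    printf("avg cycles per CPUID exit: %llu\n",
           (unsigned long long)(cycles / ITERATIONS));
    return 0;
}
```

Running the same binary under HAXM, KVM, and VMware would show directly how much each hypervisor pays per exit.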
Sadly, I have no suitable build environment :-( But I still have access to the same hardware, so if you give me the binaries, I can test them.
Thanks for your reply. A driver binary built from the latest code is provided below; you can follow these steps to test the performance.
@mifritscher We really appreciate you taking the time to run these tests. Note that the test driver installed by …
and then re-enable the original/official driver:
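(A rough sketch of what those steps look like, assuming a standard install: HAXM's driver runs as the `intelhaxm` Windows service, and the driver file is `IntelHaxm.sys` under `%SystemRoot%\System32\drivers`. Exact names and paths may differ on your setup.)

```
sc stop intelhaxm
:: replace %SystemRoot%\System32\drivers\IntelHaxm.sys
:: with the desired (test or original) binary, then:
sc start intelhaxm
```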
@AlexAltea Thanks for the inspiration. I just had a look at the kvm-unit-tests project and found this: https://github.com/rhdrjones/kvm-unit-tests/blob/master/x86/vmexit.c It seems to implement the same idea, except that a single microkernel image is used to test all VM exit reasons. Although we can't reuse the code, we can definitely learn a lot from its framework, which also enables writing other microkernel-based unit tests that can be run in QEMU, e.g.: https://github.com/rhdrjones/kvm-unit-tests/blob/master/x86/emulator.c
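For reference, running that test under KVM typically follows the project's standard workflow (commands recalled from the kvm-unit-tests README; the runner script location may differ between versions):

```
./configure
make
./x86/run x86/vmexit.flat
```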
OK, I executed the test case "install 16.04 in a fully automated environment (via preseed)". Big problem: the system seemed to work (it loaded the preseed.conf, exited after a reasonable time, and the size of the image and the extracted files are OK as well), but there was no graphical output after "loading from rom". Command line:
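(Not the exact command used; an illustrative HAXM-accelerated invocation of this kind, with hypothetical image/ISO names:)

```
qemu-system-x86_64 -accel hax -m 2048 \
    -drive file=ubuntu-16.04.img,format=qcow2 \
    -cdrom ubuntu-16.04-server-amd64.iso -boot d -vga std
```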
Great to see a nearly 10% performance improvement. :-)
That's bad news. I'll look into it later, as there's a non-zero chance that my emulator might have broken something: just recently I found a …
EDIT: A patch is available at #67.
@raphaelning Oh, I totally forgot about kvm-unit-tests; I'll definitely run some benchmarking tests with it. Thank you!
If you update the binaries, I would be keen to retest ;-)
Some fresh data (setup of Ubuntu 18.04, additional cleanup, additional creation of a squashfs image): … So it seems HAXM got faster over the last months :-)
Thanks, that's good to know! We (@junxiaoc for the most part) have identified some inefficient VMCS read/write logic that slows down the VM exit code path. After we fix that, hopefully you'll see even better performance data. |
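For the curious, the kind of fix in question can be sketched generically as caching VMCS fields so the hot path issues each VMREAD at most once per exit (hypothetical names below; an illustration, not HAXM's actual code):

```c
/* Generic illustration of VMCS read caching (not HAXM code).
 * VMREAD is comparatively expensive, so read each field once per exit
 * and serve repeat accesses from memory. */
#include <stdint.h>
#include <stdbool.h>

uint64_t vmcs_read(uint32_t field);   /* assumed low-level VMREAD wrapper */

struct cached_field {
    uint32_t field;
    uint64_t value;
    bool     valid;
};

static uint64_t cached_read(struct cached_field *c)
{
    if (!c->valid) {
        c->value = vmcs_read(c->field);  /* the only real VMREAD */
        c->valid = true;
    }
    return c->value;
}

/* Called once at the top of the VM-exit handler, so stale values from the
 * previous exit are never served. */
static void invalidate(struct cached_field *c)
{
    c->valid = false;
}
```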
Using QEMU 4.0.0 and HAXM 7.5.1, the installation took 19:38 and the squashfs creation took 7:47. Note: this was done with only 1 guest CPU, as SMP 2 crashes early (see #205).