Before I dive into the details of my performance test results, I would like to take this occasion to announce on this forum that Firecracker is officially and fully supported by the OSv unikernel as of the latest 0.53.0 release (nicknamed "Firebird"; for details please read here). OSv can boot in as little as 5 ms per its bootchart and 10 ms per Firecracker's guest boot time measurement. Maybe it is worth mentioning on https://firecracker-microvm.github.io/ (section "What operating systems are supported by Firecracker?") that besides Linux, OSv can boot on Firecracker as well ;-) As far as I am aware, OSv is the only unikernel, and possibly the only OS besides Linux, that can claim this at this point in time.
As far as the performance comparison between OSv running on Firecracker vs. QEMU/KVM goes, I must first say that in at least one aspect Firecracker beats QEMU: file I/O. I have not done any other elaborate file I/O tests, but, for example, mounting a ZFS filesystem is at least 5 times faster on Firecracker: on average 60 ms on Firecracker vs. 260 ms on QEMU.
As far as networking goes, OSv performs somewhat worse on Firecracker than on QEMU: between 50% and 90% of the QEMU performance in terms of requests per second, depending mostly on the number of vCPUs and the type of application used in the test.
My tests measured the number of REST API requests handled per second by a typical microservice app implemented in Rust (using hyper), in Go, and in Java (using Vert.x). Each app in essence implements a simple todo REST API returning a JSON payload 100-200 characters long.
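To make the workload concrete, here is a minimal sketch of the kind of handler each of those apps implements. This is illustrative only, not the actual benchmarked code; the route, struct fields, and todo items are made up, but the payload ends up in the 100-200 character range mentioned above.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"net/http/httptest"
)

type todo struct {
	ID    int    `json:"id"`
	Title string `json:"title"`
	Done  bool   `json:"done"`
}

// todoJSON builds the small JSON payload the service returns.
func todoJSON() []byte {
	todos := []todo{
		{ID: 1, Title: "Write the report", Done: false},
		{ID: 2, Title: "Review the results", Done: true},
	}
	b, err := json.Marshal(todos)
	if err != nil {
		log.Fatal(err)
	}
	return b
}

// todosHandler serves the todo list as application/json.
func todosHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	w.Write(todoJSON())
}

func main() {
	// Exercise the handler in-process; a real service would instead call
	// http.HandleFunc("/todos", todosHandler) and http.ListenAndServe.
	rec := httptest.NewRecorder()
	todosHandler(rec, httptest.NewRequest("GET", "/todos", nil))
	fmt.Println(rec.Body.String())
}
```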
The test setup looked like this:
Host:
- MacBook Pro with an Intel i7 4-core CPU with hyperthreading (8 CPUs reported by lscpu) and 16 GB of RAM, running Ubuntu 18.10
- firecracker 0.15.0
- QEMU 2.12.0
Client machine:
- similar to the one above, with wrk as the test client firing requests using 10 threads and 100 open connections for 30 seconds, in 3 series run one after another (please see this test script: https://github.com/wkozaczuk/unikernels-v-containers/blob/master/test-restapi-with-wrk.sh).
The host and client machine were connected directly to a 1 Gbit Ethernet switch, and the host exposed the guest IP using a bridged TAP NIC.
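For reference, the setup can be reproduced roughly like this. The interface names, guest IP, and port are placeholders, not the exact values from my runs; the linked script has the exact wrk invocation.

```shell
# On the host: bridge the guest's TAP device (run as root; names are
# placeholders -- adjust to your setup).
ip link add name br0 type bridge
ip tuntap add dev tap0 mode tap
ip link set dev tap0 master br0
ip link set dev tap0 up
ip link set dev br0 up

# On the client machine: 10 threads, 100 connections, 30 seconds,
# matching the parameters described above.
wrk -t10 -c100 -d30s http://172.16.0.2:8080/todos
```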
Here are the raw req/sec results (three wrk runs per configuration):

| Workload | Firecracker req/sec (3 runs) | QEMU req/sec (3 runs) |
|---|---|---|
| Go 1 CPU | 16422.33 / 16540.24 / 16721.56 | 23300.26 / 23874.74 / 24313.06 |
| Go 2 CPU | 26676.68 / 28100.00 / 28538.35 | 33581.87 / 35475.22 / 37089.26 |
| Rust 1 CPU | 23379.86 / 23477.19 / 23604.27 | 41100.07 / 43455.34 / 43927.73 |
| Rust 2 CPU | 46128.15 / 46590.41 / 46973.84 | 48076.98 / 49120.31 / 49298.28 |
| Java 1 CPU | 20191.95 / 21384.60 / 21705.82 | 41049.41 / 43622.81 / 44777.60 |
| Java 2 CPU | 40625.69 / 40876.17 / 43766.45 | 45746.48 / 46224.42 / 46245.95 |
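To put the gap into one number per configuration, averaging the three runs and taking the FC/QEMU ratio reproduces the 50-90% range mentioned above (the samples below are copied verbatim from the runs; the little script itself is just my summary arithmetic, not part of the benchmark):

```go
package main

import "fmt"

// avg returns the mean of a slice of wrk Requests/sec samples.
func avg(xs []float64) float64 {
	s := 0.0
	for _, x := range xs {
		s += x
	}
	return s / float64(len(xs))
}

// ratio returns mean Firecracker throughput as a fraction of mean QEMU throughput.
func ratio(fc, qemu []float64) float64 {
	return avg(fc) / avg(qemu)
}

func main() {
	// Requests/sec from the three wrk runs listed above.
	results := []struct {
		name     string
		fc, qemu []float64
	}{
		{"Go 1 CPU", []float64{16422.33, 16540.24, 16721.56}, []float64{23300.26, 23874.74, 24313.06}},
		{"Go 2 CPU", []float64{26676.68, 28100.00, 28538.35}, []float64{33581.87, 35475.22, 37089.26}},
		{"Rust 1 CPU", []float64{23379.86, 23477.19, 23604.27}, []float64{41100.07, 43455.34, 43927.73}},
		{"Rust 2 CPU", []float64{46128.15, 46590.41, 46973.84}, []float64{48076.98, 49120.31, 49298.28}},
		{"Java 1 CPU", []float64{20191.95, 21384.60, 21705.82}, []float64{41049.41, 43622.81, 44777.60}},
		{"Java 2 CPU", []float64{40625.69, 40876.17, 43766.45}, []float64{45746.48, 46224.42, 46245.95}},
	}
	for _, r := range results {
		fmt.Printf("%-11s FC/QEMU = %.1f%%\n", r.name, 100*ratio(r.fc, r.qemu))
	}
}
```

The worst case is Java on 1 vCPU (just under 50%) and the best is Rust on 2 vCPUs (about 95%).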
For more detailed results please see the files where I captured full output from wrk - https://github.com/wkozaczuk/unikernels-v-containers/tree/master/test_results/remote/OSv_firecracker and https://github.com/wkozaczuk/unikernels-v-containers/tree/master/test_results/remote/OSv_qemu.
Would you have any insight into what might be the reason for the relatively slower performance on Firecracker? I think I have disabled rate limiting, which is what this script does: https://github.com/cloudius-systems/osv/blob/master/scripts/firecracker.py#L23-L97. It could also be that the virtio-mmio implementation on the OSv side is not very well optimized; with QEMU, OSv uses virtio-pci.
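If I read the Firecracker API right, rate limiting is only applied when the optional `rx_rate_limiter`/`tx_rate_limiter` fields are present in the `PUT /network-interfaces/{iface_id}` request, so a network interface configured like this should be unlimited (the names and MAC below are illustrative, not from my actual run):

```json
{
  "iface_id": "eth0",
  "host_dev_name": "tap0",
  "guest_mac": "AA:FC:00:00:00:01"
}
```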
Any help will be greatly appreciated.