client: Support multiple connections
Reimplement io.copy() using multiple connections and a thread pool.
Every worker thread opens its own connections to the source and
destination backends, and processes I/O requests submitted to a work
queue by the main thread.
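The flow looks roughly like the sketch below. This is a minimal,
self-contained illustration of the pattern rather than the actual client
code: plain local files stand in for the source and destination backends,
and the request size and pool size are assumed values.

    import os
    import queue
    import threading

    CHUNK = 8 * 1024**2   # assumed size of one I/O request (8 MiB)
    WORKERS = 4           # assumed pool size

    def copy(src_path, dst_path):
        size = os.path.getsize(src_path)
        work = queue.Queue(maxsize=WORKERS * 2)

        def worker():
            # Every worker opens its own "connections" to the source and
            # destination (local files here, HTTP or unix socket in the
            # real client). Error handling is omitted for brevity.
            src = os.open(src_path, os.O_RDONLY)
            dst = os.open(dst_path, os.O_WRONLY | os.O_CREAT)
            try:
                while True:
                    offset = work.get()
                    if offset is None:   # sentinel: no more work
                        break
                    buf = os.pread(src, CHUNK, offset)
                    os.pwrite(dst, buf, offset)
            finally:
                os.close(src)
                os.close(dst)

        threads = [threading.Thread(target=worker) for _ in range(WORKERS)]
        for t in threads:
            t.start()

        # The main thread submits I/O requests to the work queue and then
        # tells the workers to stop.
        for offset in range(0, size, CHUNK):
            work.put(offset)
        for _ in threads:
            work.put(None)
        for t in threads:
            t.join()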
Multiple connections are used only when accessing an imageio daemon that
reports the number of allowed readers and writers. When accessing an old
imageio daemon we fall back to a single connection for reading or
writing. Reading image extents is always done in a separate thread, so
we can read data from an image while fetching the next extent.
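Choosing the number of connections might look like the sketch below; the
option names max_readers and max_writers and the default pool size are
assumptions based on the description above, not necessarily the daemon's
exact API.

    MAX_WORKERS = 4  # assumed default pool size

    def choose_workers(server_options, writing):
        # Old daemons do not report reader/writer limits; fall back to a
        # single connection so we never exceed what the server supports.
        key = "max_writers" if writing else "max_readers"
        if key not in server_options:
            return 1
        return min(MAX_WORKERS, server_options[key])

For example, a daemon reporting {"max_readers": 8, "max_writers": 2}
would get 4 connections for download and 2 for upload, while an old
daemon reporting neither would get 1.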
Testing in the scale lab shows up to 101% improvement compared with
imageio 2.0.6.
All tests were done with a 100 GiB image containing 48 GiB of data. Rate
is the virtual copy rate (virtual size / elapsed seconds).
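For example, the 2.0.6 raw upload baseline below works out to
100 GiB / 170.78 s = 102400 MiB / 170.78 s, or about 599.6 MiB/s, which
is the value in the rate column.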
Used upload_disk.py and download_disk.py from ovirt-engine-sdk with
a small tweak to select the current host for the transfer.
The server has 64 CPUs and 250 GiB of RAM. During the test 47 idle VMs
were running, but CPU usage was mostly idle.
Storage is iSCSI over a 10Gbit network. The management network is only
1Gbit, so I did only a few tests for remote transfers. iSCSI multipathing
was not configured, so this system can probably show better performance.
For reference I also include timings of uploading the image directly
from local storage to a volume and downloading a volume to local storage
using qemu-img convert.
Higher change(%) values are better.
version  test     format  trans    workers  time(s)  rate(MiB/s)  change(%)
============================================================================
2.0.6    upload   raw     unix     1        170.78   599.62       (baseline)
2.0.6    downld   raw     unix     1        320.38   319.62       (baseline)
----------------------------------------------------------------------------
2.0.8    upload   raw     unix     4        84.71    1208.32      +101
2.0.8    upload   raw     unix     2        94.86    1075.20      +79
2.0.8    upload   raw     unix     1        151.67   675.15       +12
2.0.8    upload   qcow2   unix     4        89.25    1146.88      -
2.0.8    downld   raw     unix     4        168.02   609.45       +90
2.0.8    downld   raw     unix     1        296.69   345.14       +8
2.0.8    downld   qcow2   unix     4        182.35   564.65       -
2.0.8    downld   qcow2   unix     1        306.01   334.62       -
----------------------------------------------------------------------------
2.0.8    upload   raw     http[1]  4        471.47   217.19       -
2.0.8    downld   raw     http[1]  4        510.41   200.62       -
----------------------------------------------------------------------------
-        convert up[2] from raw    8        138.29   740.47       +23
-        convert up[2] from qcow2  8        90.74    1128.49      +88
-        convert down[3] to raw    8        194.08   527.61       +65
-        convert down[3] to qcow2  8        249.36   410.65       +28
[1] Using the 1Gbit management network
[2] Convert image to qcow2 disk:
qemu-img convert -f {raw|qcow2} -O qcow2 -t none -W {image} {/dev/vg/lv}
[3] Convert qcow2 disk to image:
qemu-img convert -f qcow2 -O {raw|qcow2} -T none -W {/dev/vg/lv} {image}
Here are transfer stats from one of the workers uploading a raw image
with 4 workers on the imageio side.
connection 1 ops, 84.369468 s
dispatch 138 ops, 84.253179 s
operation 138 ops, 84.217723 s
read 1558 ops, 7.878163 s, 12.06 GiB, 1.53 GiB/s
write 1558 ops, 75.615316 s, 12.06 GiB, 163.38 MiB/s
zero 27 ops, 0.639572 s, 13.09 GiB, 20.47 GiB/s
flush 1 ops, 0.017855 s
We can see that the bottleneck is writing data to storage: writes took
75.6 of the 84.2 seconds spent in operations, while reads took only 7.9
seconds.
Change-Id: Id1ffad521ca5349da7cace4d49d261fde081f48d
Bug-Url: https://bugzilla.redhat.com/1591439
Signed-off-by: Nir Soffer <nsoffer@redhat.com>