Low performance when zpool is based on iSCSI disk based on zvol/zfs #4211
I don't understand why the write performance is (much) faster behind the iSCSI export. Does this come from some write buffering? Did you test with fsync?
@kaazoo Have you tried running an iSCSI transfer for a long time (at least one hour)? Until the ZIL is full you will get a write performance boost, because data is being written to RAM if you are not writing synchronously. Then you can experience a sudden drop, because the data has to be moved to HDD. This is called the write throttle. You can read about it here: Moreover, iSCSI can aggregate a few operations from the queue into one, so you get more IOPS on the initiator side. Run a fully random test (fio) to rule this out. Regarding the slower reads: have you disabled read-ahead on the initiator side?
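A fully random fio test of the suggested kind might look like this (device path, block size, and run parameters are illustrative, not from this thread):

```
# Fully random 4K read/write mix against the iSCSI-backed device;
# --direct=1 bypasses the page cache, and random access prevents the
# target from merging adjacent requests into larger sequential ones.
fio --name=randrw --filename=/dev/sdX --rw=randrw --bs=4k \
    --direct=1 --ioengine=libaio --iodepth=32 \
    --runtime=3600 --time_based
```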
@zielony360
That doesn't seem to have an impact when testing with dd.
On the target, I switched to a zpool with 2x raidz2 (8 disks each, ashift=12). dd locally on target:
dd on initiator:
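A dd test of this shape typically looks like the following (device paths and sizes are illustrative):

```
# Sequential write; oflag=direct bypasses the page cache on the writing side.
dd if=/dev/zero of=/dev/zvol/tank/vol0 bs=1M count=102400 oflag=direct
# Sequential read back from the same device.
dd if=/dev/zvol/tank/vol0 of=/dev/null bs=1M count=102400 iflag=direct
```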
As I have no experience with fio so far, I followed http://n4f.siftusystems.com/index.php/2013/07/14/zfs-zvol-blocksize-and-esxi/ (representative command lines for all four tests are sketched after this list). fio sequential write on initiator:
fio sequential read on initiator:
fio random write on initiator:
fio random read on initiator:
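The four tests above boil down to the four basic fio access patterns; the command lines would have been along these lines (filename, block size, and queue depth are assumptions):

```
# Sequential write, sequential read, random write, random read.
for RW in write read randwrite randread; do
    fio --name="$RW" --filename=/dev/sdX --rw="$RW" --bs=128k \
        --direct=1 --ioengine=libaio --iodepth=16 --size=10G
done
```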
I'm not sure if my testing is correct, but it still seems that reading is performing worse than writing:
sequential (buffered?) write:
sequential (unbuffered?) write:
random write:
sequential (buffered?) read:
sequential (unbuffered?) read:
random read:
Do you think these results are OK or should they be better?
In order to have something to compare against, I created an ext4 filesystem instead of ZFS on the initiator. ext4 doesn't seem to support a 128 KB blocksize (the maximum seems to be 64 KB), so I just went with the standard 4 KB, although this is not optimal.
I switched on readahead again:
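Read-ahead on the initiator side is typically toggled with blockdev (the device and value below are illustrative):

```
# Re-enable read-ahead on the imported iSCSI disk (value in 512-byte sectors).
blockdev --setra 8192 /dev/sdX
# Verify the current setting.
blockdev --getra /dev/sdX
```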
dd on initiator:
fio sequential write on initiator:
fio sequential read on initiator:
fio random write on initiator:
fio random read on initiator:
sequential (buffered?) write:
sequential (unbuffered?) write:
random write:
sequential (buffered?) read:
sequential (unbuffered?) read:
random read:
The results of ZFS vs. ext4 differ a lot.
I did further testing with 128 KB blocksize for zvol and fio. The following commands were used for fio:
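Presumably the same fio pattern as above with an explicit 128K block size, for example:

```
# 128K blocks matching the 128K volblocksize of the zvol.
fio --name=randwrite --filename=/dev/sdX --rw=randwrite --bs=128k \
    --direct=1 --ioengine=libaio --iodepth=16 --size=10G
```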
Then I also tested fio with 4 KB blocksize on 128 KB zvol blocksize. The following commands were used for fio:
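That is, the fio block size drops to 4K while the zvol keeps its 128K volblocksize, e.g.:

```
# 4K blocks against the 128K-volblocksize zvol; each random 4K write
# forces a 128K read-modify-write cycle on the zvol side.
fio --name=randwrite --filename=/dev/sdX --rw=randwrite --bs=4k \
    --direct=1 --ioengine=libaio --iodepth=16 --size=10G
```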
ext4 seems to perform much better than ZFS when running on an iSCSI disk.
We're seeing this too, and what's interesting is that the read speeds seem to indicate disk reads rather than ARC hits, unlike the raw ZFS results on the target. Could you post the contents of scst.conf and the saveconfig.json from LIO?
Curious that this is still open from 2016? It was among the first few hits on Google for ZFS and slow iSCSI.
A few recent commits seem to have smoothed out zvols a bit, but we are nowhere near the 0.6.4 numbers these days. Zvols in general suffer from performance issues due to copy-on-write on binary targets, but they were much faster in the past.

Would love to see some realistic numbers for that reason. Right now I'm looking at a deadlocked zvol, which happened as soon as I set sync=default. Why? Who knows; likely something I broke, but I don't even know where to start. Not only did it hang that zvol, but the entire pool. There are, I thought, at least two timers that should have kicked the process by now, but they haven't. The point is, I'd agree: zvols still have some rather odd bugs.
I have a setup with a head node (iSCSI initiator) and multiple storage appliances (iSCSI targets). Each target is standard server hardware (16x / 24x HGST 8 TB SAS3). All systems run Ubuntu 14.04 and ZFS 0.6.5.4.
LIO (Linux IO target) is used for the target. Open-iSCSI is used for the initiator.
Target and initiator are connected via 10 GBit Ethernet.
On the target, for example, a single raidz2 vdev with 16 disks is configured.
With a single big zvol (for example volblocksize=128K) I get the following performance locally (compression disabled, dd 100 GB):
write: 1100-4900 IOPS, 140-600 MB/s
read: 5100-6200 IOPS, 650-780 MB/s
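For context, a zvol of that shape would be created roughly like this (pool name, volume name, and size are made up), then exercised with ~100 GB of dd traffic as described above:

```
# Big zvol with 128K volblocksize and compression disabled.
zfs create -V 10T -o volblocksize=128K -o compression=off tank/vol0
```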
That zvol is exported via iSCSI and used as a vdev on the head node. According to 'zdb' ashift is set to 17. I get the following performance (compression disabled, dd 100 GB):
write: 5600-7300 IOPS, 710-910 MB/s
read: 1600-2300 IOPS, 205-300 MB/s
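The LIO side of such a setup is usually configured with targetcli, along these lines (IQN and names are hypothetical):

```
# Expose the zvol as a block backstore and attach it as a LUN.
targetcli /backstores/block create name=vol0 dev=/dev/zvol/tank/vol0
targetcli /iscsi create iqn.2016-01.com.example:target1
targetcli /iscsi/iqn.2016-01.com.example:target1/tpg1/luns create /backstores/block/vol0
```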
I already tried changing various parameters, without improvement.
on target:
on initiator:
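Typical candidates on each side would be ZFS module tunables and open-iscsi session settings; the values below are illustrative, not the poster's actual changes:

```
# Target: example ZFS module tunable (1 disables ZFS file-level prefetch).
echo 1 > /sys/module/zfs/parameters/zfs_prefetch_disable
# Initiator: example open-iscsi settings in /etc/iscsi/iscsid.conf:
#   node.session.cmds_max = 1024
#   node.session.queue_depth = 128
```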
When I export raw disks on the iSCSI target instead of a single zvol, performance on the initiator side is as expected.