Description
Hi all,
I have a raidz2 array with 4x 2TB drives (non-Advanced-Format, i.e. 512-byte sectors).
At the beginning of each disk there is a small ext4 md raid-1 partition, and the rest of each disk is assigned to the raidz2 pool. I used ashift=12 at pool creation.
My workstation is a quad-core at 3 GHz with 4 GB of RAM, running 64-bit Gentoo.
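In case it is relevant, these are the commands I can run to double-check the sector size and partition alignment (pool name 'pool' and /dev/sda as in my setup; output not pasted here):
# zdb | grep ashift
# parted /dev/sda unit s print
zdb dumps the cached pool configuration and should report ashift: 12 for the raidz2 vdev, and parted in sector units shows whether the -part2 partitions start on a sector number divisible by 8 (4K-aligned).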
For some reason, ZFS reads and writes are limited to around 78 MB/s.
The same disks, previously running md raid-5, gave around 150 MB/s write speed and 250 MB/s read speed.
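If per-device numbers would help, I can watch throughput while the tests run with zpool iostat and with iostat from sysstat:
# zpool iostat -v pool 1
# iostat -mx 1
The first shows per-vdev and per-disk bandwidth as ZFS sees it, the second shows utilisation and request sizes at the block-device level.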
I ran the following test.
Create an 8 GB file on ZFS:
# rm /data/test; sync; dd if=/dev/zero of=/data/test bs=4k count=$((8*256*1024))
2097152+0 records in
2097152+0 records out
8589934592 bytes (8,6 GB) copied, 109,603 s, 78,4 MB/s

Read it back:
# dd if=/data/test of=/dev/null bs=4k count=$((8*256*1024))
2097152+0 records in
2097152+0 records out
8589934592 bytes (8,6 GB) copied, 109,499 s, 78,4 MB/s

Test the read performance of the disks and controller:
dd if=/dev/sda2 of=/dev/null bs=4k count=$((2*256*1024)) &
dd if=/dev/sdb2 of=/dev/null bs=4k count=$((2*256*1024)) &
dd if=/dev/sdc2 of=/dev/null bs=4k count=$((2*256*1024)) &
dd if=/dev/sdd2 of=/dev/null bs=4k count=$((2*256*1024)) &
wait
524288+0 records in
524288+0 records out
2147483648 bytes (2,1 GB) copied, 17,4306 s, 123 MB/s
524288+0 records in
524288+0 records out
2147483648 bytes (2,1 GB) copied, 17,8363 s, 120 MB/s
524288+0 records in
524288+0 records out
2147483648 bytes (2,1 GB) copied, 18,5661 s, 116 MB/s
524288+0 records in
524288+0 records out
2147483648 bytes (2,1 GB) copied, 23,6243 s, 90,9 MB/s

I guess this proves my controller can go higher.
I do see, though, that sdd is slower than the others.
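To rule out sdd itself, I can also check the raw sequential read speed of each drive with hdparm (buffered device reads, bypassing ZFS and md entirely):
# for d in /dev/sd[abcd]; do hdparm -t $d; done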
I tried taking it out of the pool:
# zpool offline pool scsi-SATA_WDC_WD20EADS-00_WD-WCAVY1168223-part2
# zpool status
  pool: pool
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 28K in 0h0m with 0 errors on Sun Sep 8 14:03:53 2013
config:

        NAME                                                 STATE     READ WRITE CKSUM
        pool                                                 DEGRADED     0     0     0
          raidz2-0                                           DEGRADED     0     0     0
            scsi-SATA_SAMSUNG_HD204UIS2H7JD6B102943-part2    ONLINE       0     0     0
            scsi-SATA_SAMSUNG_HD204UIS2H7JD6B103015-part2    ONLINE       0     0     0
            scsi-SATA_ST2000DL003-9VT_5YD3JJYZ-part2         ONLINE       0     0     0
            scsi-SATA_WDC_WD20EADS-00_WD-WCAVY1168223-part2  OFFLINE      0     0     0

errors: No known data errors
# dd if=/data/test of=/dev/null bs=4k count=$((8*256*1024))
2097152+0 records in
2097152+0 records out
8589934592 bytes (8,6 GB) copied, 110,078 s, 78,0 MB/s

Reading with only 3 disks gives exactly the same performance as with all 4.
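One thing I am not sure about is whether the 4K block size I used for dd matters. I can check the relevant dataset properties and re-read the file with a larger block size to see whether the small request size is the limit:
# zfs get recordsize,compression,primarycache pool
# dd if=/data/test of=/dev/null bs=1M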
I tried the same test on my md raid-1 array (the first partition of the same disks):
# rm /test; sync; dd if=/dev/zero of=/test bs=4k count=$((8*256*1024))
2097152+0 records in
2097152+0 records out
8589934592 bytes (8,6 GB) copied, 87,5838 s, 98,1 MB/s
# dd if=/test of=/dev/null bs=4k count=$((8*256*1024))
2097152+0 records in
2097152+0 records out
8589934592 bytes (8,6 GB) copied, 71,0247 s, 121 MB/s

Any ideas why ZFS is limiting my I/O to about 78 MB/s?
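If it helps with diagnosing this, I can also report the ARC size limit and the prefetch setting, since this box only has 4 GB of RAM (paths as used by ZFS on Linux):
# grep -w -e size -e c_max /proc/spl/kstat/zfs/arcstats
# cat /sys/module/zfs/parameters/zfs_prefetch_disable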
Kind regards,
Costa