So here's the thing: each 8k volume block carries its own parity. That's how RAID-Z works. If you have 4k Advanced Format disks (zdb shows "ashift: 12"), then each 8k block consists of two 4k data sectors and two 4k parity sectors. You're expecting four disks of data and two disks of parity all the way across the pool; that's not what happens.
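To make that concrete, here is a minimal sketch of the per-block accounting, assuming volblocksize=8k, ashift=12 (4k sectors), and RAID-Z's padding of each allocation to a multiple of nparity+1 sectors; the sector counts are illustrative, not zdb output:

# One 8k zvol block on ashift=12 RAIDZ2:
data=2     # 8192 bytes / 4096-byte sectors
parity=2   # RAIDZ2 writes 2 parity sectors per allocation, however narrow
echo $(( (data + parity) * 4096 ))   # 16384 bytes -- already 100% overhead
# RAID-Z additionally pads each allocation to a multiple of (nparity + 1) = 3
# sectors, so the 4 sectors round up to 6:
echo $(( 6 * 4096 ))                 # 24576 bytes per 8k block, ~3x -- consistent with the ~30GB seen below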
Switch to a block size of 16k or larger and you should be in better shape for space usage, at the expense of a worse read-modify-write cycle when smaller writes hit the volume. With a filesystem's default recordsize of 128k you're already covered there.
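For reference, volblocksize can only be set when the zvol is created; a sketch (the test16k name is hypothetical):

zfs create -V 10G -o compression=off -o volblocksize=16k tank/backup/test16k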
Thanks for the quick explanation @dweeezil and @DeHackEd. Since this is just a matter of documentation, I've marked it as such and will close the issue.
Test setup:
Fedora 19, zfs 0.6.2 release, 6 disks in RAIDZ2 with SSD as L2ARC device.
This setup should incur a 50% overhead in pool allocation (4 data disks, 2 parity).
Allocated pool space before experiment:
8996065210368
Create a 10G zvol:
zfs create -V 10G -o compression=off tank/backup/test
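No volblocksize is given, so the zvol gets the 8k default, which can be confirmed with:

zfs get volblocksize tank/backup/test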
Fully write the volume once:
dd_rescue -Z 0 /dev/zvol/tank/backup/test
Allocated pool space after write:
9028783779840
As can be seen, the difference is 32718569472 bytes (~30.5 GiB), roughly double the expected amount of ~15 GiB.
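For reference, the arithmetic (the 6/4 factor being the nominal RAIDZ2 overhead of this 6-disk pool):

echo $(( 9028783779840 - 8996065210368 ))   # 32718569472 bytes actually allocated (~30.5 GiB)
echo $(( 10 * 1024**3 * 6 / 4 ))            # 16106127360 bytes expected (~15 GiB)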
"zfs get all" output of the zvol after writing:
As a second test, a 10GB file was created on a ZFS filesystem on the same pool.
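The exact command isn't recorded here; a hypothetical equivalent (path assumed):

dd if=/dev/zero of=/tank/backup/testfile bs=1M count=10240

With the filesystem's 128k default recordsize, each record spans enough 4k sectors that parity and padding stay near the nominal 6/4 ratio, which is why this case behaves as expected.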
Allocated pool space before file creation:
8996065517568
Allocated pool space after file creation:
9012204183552
The difference is 16138665984 bytes (~15 GiB), the expected amount.
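Again for reference:

echo $(( 9012204183552 - 8996065517568 ))   # 16138665984 bytes (~15 GiB), close to 10 GiB * 6/4 = 16106127360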