Describe the problem you're observing
The usable capacity of a dRAID pool as shown by zfs list is lower than expected.
Example pool is draid2:6d:24c:0s with 1TB drives.
ms_count is 1396 and ms_size is 16GiB, for 23983097380864 bytes of raw space.
deflate_ratio should be 0.75 (6/8), which gives 17987323035648 bytes before slop space.
Subtracting 128GiB of slop leaves 17849884082176 bytes.
zfs list -p shows 15835678638080 bytes (USED+AVAIL).
root@truenas[~]# zfs list -p test
NAME USED AVAIL REFER MOUNTPOINT
test 2095104 15835676542976 523776 /test
If we instead set deflate_ratio to 4/6 (0.666...) and apply the SPA_MINBLOCKSHIFT quantization to get 0.6660, we get 15973117591552 bytes; subtracting 128GiB gives 15835678638080 bytes, which is exactly what zfs list -p shows.
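To make the arithmetic reproducible, here is a small stand-alone program that redoes both calculations. The 48-sector allocation size for a 128K block and the (1 << 17) / (asize >> SPA_MINBLOCKSHIFT) form of the stored ratio are my reading of the dRAID/vdev code, so treat this as a sketch of the accounting rather than the exact kernel path:

/*
 * Sketch of the space accounting above. ms_count, ms_size and the slop are
 * the values reported; the 48-sector allocation for a 128K block and the
 * (1 << 17) / (asize >> SPA_MINBLOCKSHIFT) ratio are my reading of the
 * dRAID/vdev code, not verified against it.
 */
#include <stdio.h>
#include <stdint.h>

#define SPA_MINBLOCKSHIFT 9

int main(void)
{
    uint64_t raw  = 1396ULL * (16ULL << 30);  /* ms_count * ms_size        */
    uint64_t slop = 128ULL << 30;             /* slop space observed above */

    /* Naive expectation: 6 of the 8 columns in each group hold data. */
    uint64_t expected = raw * 6 / 8;

    /*
     * What the accounting appears to do: a 128K block is 32 4K sectors,
     * which dRAID rounds up to 6 full rows of 6 data + 2 parity sectors,
     * i.e. 48 sectors (196608 bytes) allocated.
     */
    uint64_t asize = 48ULL << 12;
    uint64_t ratio = (1ULL << 17) / (asize >> SPA_MINBLOCKSHIFT);  /* 341 */
    uint64_t deflated = (raw >> SPA_MINBLOCKSHIFT) * ratio;

    printf("expected (6/8):     %llu  (minus slop: %llu)\n",
        (unsigned long long)expected,
        (unsigned long long)(expected - slop));
    printf("deflate_ratio:      %llu/512 = %.4f\n",
        (unsigned long long)ratio, (double)ratio / 512.0);
    printf("deflated (341/512): %llu  (minus slop: %llu)\n",
        (unsigned long long)deflated,
        (unsigned long long)(deflated - slop));
    return 0;
}

Its output matches the numbers above: 17849884082176 bytes for the naive 6/8 expectation, and 15835678638080 bytes once the 341/512 (0.6660) ratio is applied, which is exactly what zfs list -p reports.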
Are vdc_ndata and vdc_groupwidth getting swapped somewhere?
Even in the human-readable zfs list output, the result seems off:
root@truenas[~]# zfs list test
NAME USED AVAIL REFER MOUNTPOINT
test 2.00M 14.4T 512K /test
You would expect closer to 16TiB from this configuration.
Describe how to reproduce the problem
With 24x 1TB disks:
zpool create test draid2:6d:24c:0s -o ashift=12 /dev/loop{0..23}
zfs list test -p
With dRAID there are some additional non-obvious considerations about how zfs list reports available space. The discrepancy here is caused by the requirement that dRAID (unlike raidz) always writes full stripes, combined with the internal assumption that the average block size will be 128K.
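To make the full-stripe point concrete, here is a back-of-the-envelope model of the rounding (my own sketch, not the actual vdev_draid_asize() code): with ashift=12 a 128K block is 32 data sectors, dRAID rounds that up to whole rows of 6 data sectors, so 6 rows of 6 data + 2 parity sectors are allocated and the effective data fraction is 32/48 = 2/3 instead of 6/8.

/*
 * Back-of-the-envelope model of full-stripe rounding for draid2:6d with
 * ashift=12; a sketch of the idea, not the actual vdev_draid_asize() code.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t ashift  = 12;              /* 4K sectors                        */
    uint64_t ndata   = 6, nparity = 2;  /* draid2:6d                         */
    uint64_t psize   = 128ULL << 10;    /* assumed average block size (128K) */

    uint64_t sectors = psize >> ashift;                      /* 32 */
    uint64_t rows    = (sectors + ndata - 1) / ndata;        /*  6 */
    uint64_t asize   = (rows * (ndata + nparity)) << ashift; /* 48 sectors   */

    printf("psize %llu -> asize %llu, data fraction %.4f (not %.4f)\n",
        (unsigned long long)psize, (unsigned long long)asize,
        (double)psize / (double)asize,
        (double)ndata / (double)(ndata + nparity));
    return 0;
}

Under this model, blocks whose sector count is an exact multiple of the data width would get the full 6/8 efficiency; the reported capacity, however, is based on the assumed 128K average block size, which for 6 data columns lands on the unlucky 32-sector case.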
Thank you, that clears it up. Following the same logic you outlined in the linked issue, I was able to arrive at the same 0.6660 deflate_ratio with 6 data disks.