dRAID usable capacity underestimate #15467

Closed
edgarsuit opened this issue Oct 30, 2023 · 2 comments
Labels
Type: Documentation (Indicates a requested change to the documentation)

Comments

@edgarsuit

System information

Type                  Version/Name
Distribution Name     TrueNAS SCALE
Distribution Version  v23.10
Kernel Version        6.1.55-production+truenas
Architecture          x86
OpenZFS Version       2.2.0

Describe the problem you're observing

The usable capacity of a dRAID pool as shown by zfs list is lower than expected.

Example pool is draid2:6d:24c:0s with 1TB drives.

ms_count is 1396, ms_size is 16GiB.

deflate_ratio should be 0.75 (6 data disks out of an 8-disk group width).

This gives us 1396 × 16GiB × 0.75 = 17987323035648 bytes before slop space.

Subtracting 128GiB of slop space from this leaves 17849884082176 bytes.
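
For reference, here is the arithmetic above as a small Python sketch (all numbers are taken from this report; the 128GiB slop figure is just carried over from the subtraction above):

# Naive expectation: ms_count * ms_size scaled by ndata/groupwidth, minus slop.
GiB = 1 << 30
ms_count = 1396
ms_size = 16 * GiB
deflate_ratio = 6 / 8                    # expected for draid2:6d
raw = ms_count * ms_size                 # 23983097380864 bytes
expected = int(raw * deflate_ratio)      # 17987323035648 bytes before slop
print(expected, expected - 128 * GiB)    # 17987323035648 17849884082176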

zfs list -p shows 15835678638080 bytes (USED+AVAIL).

root@truenas[~]# zfs list -p test
NAME     USED           AVAIL   REFER  MOUNTPOINT
test  2095104  15835676542976  523776  /test

If we go back and set deflate_ratio to 4/6 (0.666...) and apply the SPA_MINBLOCKSHIFT quantization to get 0.6660, we get 15973117591552 bytes; subtracting 128GiB gives 15835678638080, which is what zfs list -p shows.
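
For completeness, a small Python sketch of that correction, assuming the deflate ratio is simply rounded down to a multiple of 1/512 (SPA_MINBLOCKSHIFT = 9) as described above:

# Quantize 4/6 down to 341/512 = 0.666015625 (~0.6660), then recompute usable space.
GiB = 1 << 30
deflate = (4 * 512) // 6                 # 341
raw = 1396 * 16 * GiB                    # ms_count * ms_size
usable = (raw >> 9) * deflate            # i.e. raw * 341/512 = 15973117591552
print(usable, usable - 128 * GiB)        # 15973117591552 15835678638080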

Are vdc_ndata and vdc_groupwidth getting swapped somewhere?

Even looking at the human-readable zfs list output, the result seems off:

root@truenas[~]# zfs list test
NAME   USED  AVAIL  REFER  MOUNTPOINT
test  2.00M  14.4T   512K  /test

You would expect closer to 16TiB from this configuration.

Describe how to reproduce the problem

With 24x 1TB disks (loop devices in this example):
zpool create test draid2:6d:24c:0s -o ashift=12 /dev/loop{0..23}
zfs list test -p

@edgarsuit added the Type: Defect (Incorrect behavior, e.g. crash, hang) label on Oct 30, 2023
@behlendorf added the Type: Documentation (Indicates a requested change to the documentation) label and removed the Type: Defect label on Oct 30, 2023
@behlendorf
Contributor

With dRAID there are some additional non-obvious considerations about how zfs list reports available space. The discrepancy here is caused by the requirement that dRAID (unlike raidz) always writes full stripes, combined with the internal assumption that the average block size will be 128K.
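
A rough illustration of that effect, assuming draid2:6d with ashift=12 (4KiB sectors, 8-wide stripes of 6 data + 2 parity) and that every block is padded out to full stripes; the efficiency() helper is just for this sketch:

# A 128K block occupies ceil(32/6) = 6 full stripes (192K raw), so its data
# efficiency is 128/192 = 2/3, not the nominal 6/8; only much larger blocks
# approach 0.75.
ashift, ndata, nparity = 12, 6, 2

def efficiency(psize):
    sectors = psize >> ashift                     # 4K data sectors in the block
    rows = -(-sectors // ndata)                   # full stripes, rounded up
    asize = rows * (ndata + nparity) << ashift    # raw space actually allocated
    return psize / asize

for kib in (4, 16, 128, 1024):
    print(f"{kib:5d} KiB block: {efficiency(kib << 10):.3f}")
# prints roughly 0.125, 0.500, 0.667, 0.744

Quantized to 1/512 steps, the 2/3 figure at 128K becomes the 341/512 = 0.6660 deflate ratio worked out above.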

@edgarsuit
Author

Thank you, that clears it up. Following the same logic you outlined in the linked issue, I was able to arrive at the same 0.6660 deflate_ratio with 6 data disks.

This issue was closed.