Freespace >2TB not supported in 32-bit userspace #7122
Comments
I confirmed the reverse on a dev box: running a 64-bit root with a 64-bit kernel, simply executing a 32-bit df binary demonstrates the issue.
In the Lustre client, we dynamically scale the …
@adilger thanks for the pointer, would you mind reviewing the proposed fix in #7937? It doesn't incorporate the kernel thread detection since that wasn't an issue in this case; it was primarily being caused by overflowing the …
@mrjoel Assuming that coreutils and the nfsd userland utilities are built with -D_FILE_OFFSET_BITS=64, you are likely affected by this kernel bug:
@ryao as I understand it, this issue is specifically about the 32-bit userspace case (all of userspace being 32-bit).
When handling a 32-bit statfs() system call, the returned fields, although 64-bit in the kernel, must be limited to 32 bits or an EOVERFLOW error will be returned. This is less of an issue for block counts since the default reported block size is 128 KiB. But since it is possible to set a smaller block size, these values will be scaled as needed to fit in a 32-bit unsigned long. Unlike most other filesystems, the total possible file counts are more likely to overflow because they are calculated based on the available free space in the pool. To prevent this, the reported value must be capped at 2^32-1. This applies only to statfs(2) reporting; there are no changes to the internal ZFS limits.

Reviewed-by: Andreas Dilger <andreas.dilger@whamcloud.com>
Reviewed-by: Richard Yao <ryao@gentoo.org>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue openzfs#7927
Closes openzfs#7122
Closes openzfs#7937
System information

Type | Version/Name
---|---
Distribution Name | Debian
Distribution Version | stretch/9.3 (plus some stretch-backports)
Linux Kernel | 4.14.0-0.bpo.3-amd64
Architecture | i386 (32-bit userspace, 64-bit kernel)
ZFS Version | 0.7.5-1~bpo9+1
SPL Version | 0.7.5-1~bpo9+1
Describe the problem you're observing
An apparent size/type mismatch occurs when using 32-bit userspace with a 64-bit kernel and a filesystem that has more than 2 TB of free space. It prevents the NFS server from properly exporting the filesystem, and df generates the error below.

df: /srv: Value too large for defined data type

Describe how to reproduce the problem
Import and mount a ZFS filesystem with more than 2 TB of available space. From a 64-bit userspace, execute the df command. From the same running system, in a 32-bit userspace, execute the df command and observe the error. The particular system I'm seeing this on has a 32-bit root, and I'm running a 64-bit df binary directly for confirmation. I'm unable to test right now, but I expect the same issue would be exhibited with a 64-bit root and a 32-bit df binary.
Just to confirm the issue is isolated to ZFS, I created a sparse-file-backed 3 TB loopback-mounted filesystem to present more than 2 TB of free space, and it worked fine with ext4.
As a workaround for now, I've created a 900 GB dummy space-hogging zvol, which forces my available free space under 2 TB. Once that is done, things all seem to start working again. The total size and used values seem to be mapped correctly; only the avail threshold causes the error.