System information
Distribution Name    | Debian
Distribution Version | stretch/9.3 (plus some stretch-backports)
Linux Kernel         | 4.14.0-0.bpo.3-amd64
Architecture         | i386 (32-bit userspace, 64-bit kernel)
ZFS Version          | 0.7.5-1~bpo9+1
SPL Version          | 0.7.5-1~bpo9+1
Describe the problem you're observing
An apparent size/type mismatch occurs with a 32-bit userspace and a 64-bit kernel when a mounted filesystem has more than 2TB of free space. It prevents the NFS server from properly exporting the filesystem, and df reports the error below.
df: /srv: Value too large for defined data type
Describe how to reproduce the problem
Import and mount a ZFS filesystem with more than 2TB of available space. From a 64-bit userspace, run the df command; from the same running system, run df in a 32-bit userspace and observe the error. The particular system I'm seeing this on has a 32-bit root, and I'm testing a 64-bit df binary directly for confirmation. I'm unable to test the reverse right now, but I expect that a 64-bit root with a 32-bit df binary would exhibit the same issue.
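If it helps with reproduction, here is a minimal probe that queries the filesystem the same way df does, via statvfs(3). It is only an illustrative sketch: the path /srv is taken from the error above, and on the affected mount a 32-bit build (e.g. gcc -m32) is expected to fail with EOVERFLOW ("Value too large for defined data type"), matching the df message, while a 64-bit build should succeed.

```c
/* statvfs_probe.c - minimal sketch of what df does: query filesystem
 * statistics and report whether the call overflows. Build 32-bit
 * (e.g. gcc -m32 statvfs_probe.c) to mirror the 32-bit userspace case.
 * The default path /srv is taken from the report above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/statvfs.h>

int main(int argc, char *argv[])
{
    const char *path = (argc > 1) ? argv[1] : "/srv";
    struct statvfs vfs;

    if (statvfs(path, &vfs) != 0) {
        /* On the affected setup this is expected to fail with EOVERFLOW:
         * "Value too large for defined data type". */
        fprintf(stderr, "statvfs(%s): %s\n", path, strerror(errno));
        return 1;
    }

    printf("fragment size : %lu\n", (unsigned long)vfs.f_frsize);
    printf("total blocks  : %llu\n", (unsigned long long)vfs.f_blocks);
    printf("avail blocks  : %llu\n", (unsigned long long)vfs.f_bavail);
    return 0;
}
```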
Just to confirm that the issue is isolated to ZFS, I created a 3TB loopback-mounted filesystem backed by a sparse file to present more than 2TB of free space, and it worked fine with ext4.
As a workaround for now, I've created a 900GB dummy space-hogging zvol, which forces my available free space under 2TB. Once that is done, everything seems to start working again. The total size and used values appear to be reported correctly; only the available space crossing the threshold triggers the error.
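For what it's worth, the ~2TB threshold lines up with a 32-bit block counter overflowing: if free space is reported in 512-byte units, 2^32 such units is exactly 2 TiB. The snippet below is just that back-of-envelope arithmetic; the 512-byte unit is my assumption, not something I've confirmed in the ZFS code.

```c
/* Back-of-envelope check of the ~2TB threshold.
 * Assumption (not verified in the ZFS code): free space is reported in
 * 512-byte units and must fit a 32-bit counter on the 32-bit path. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t max_units = (uint64_t)1 << 32; /* values a 32-bit counter can hold */
    uint64_t unit_size = 512;               /* assumed reporting unit, bytes */
    uint64_t limit = max_units * unit_size; /* 2^41 bytes */

    printf("overflow limit: %llu bytes = %llu TiB\n",
           (unsigned long long)limit,
           (unsigned long long)(limit >> 40));
    return 0;
}
```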