Because it is hard to take back speculative preallocation, cases
where there are large, slow-growing log files on a nearly full
filesystem may cause premature ENOSPC. Hence, as the filesystem nears
full, the maximum dynamic prealloc size is reduced according to this
table (based on a 4k block size):
| Free space | Max prealloc size |
| --- | --- |
| >5% | full extent (8GB) |
| 4-5% | 2GB (8GB >> 2) |
| 3-4% | 1GB (8GB >> 3) |
| 2-3% | 512MB (8GB >> 4) |
| 1-2% | 256MB (8GB >> 5) |
| <1% | 128MB (8GB >> 6) |
This should reduce the amount of space held in speculative
preallocation for such cases.
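For illustration, here is a minimal sketch in C of the throttle the table describes. This is not the kernel code: the function name `max_prealloc_bytes` and its percentage-based interface are invented for this example, and the real logic inside XFS works on filesystem block counts rather than percentages.

```c
/*
 * Minimal sketch of the free-space throttle from the table above.
 * Each 1% of free space lost below 5% halves the 8GB ceiling,
 * bottoming out at 128MB (8GB >> 6) below 1% free.
 */
#include <stdio.h>
#include <stdint.h>

static uint64_t max_prealloc_bytes(double free_pct)
{
    const uint64_t full = 8ULL << 30;   /* 8GB: one full extent at 4k blocks */

    if (free_pct >= 5.0)
        return full;                    /* plenty of space: no throttle */

    /* 4-5% -> >>2, 3-4% -> >>3, 2-3% -> >>4, 1-2% -> >>5, <1% -> >>6 */
    int shift = 6 - (int)free_pct;
    return full >> shift;
}

int main(void)
{
    double samples[] = { 6.0, 4.5, 3.5, 2.5, 1.5, 0.5 };
    for (int i = 0; i < 6; i++)
        printf("%.1f%% free -> %llu MB max prealloc\n", samples[i],
               (unsigned long long)(max_prealloc_bytes(samples[i]) >> 20));
    return 0;
}
```

Running this reproduces the table exactly: 8192, 2048, 1024, 512, 256, and 128 MB for the six sample free-space levels.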
This breaks the `du` command, and here is why.

By default, `./mc cp` resumes previously aborted downloads. Once the download finishes, the size `du` reports is bigger than the actual data in the file:
```
master!mc *> du -sh newfile
400M    newfile
master!mc *> ls -lh newfile
-rw-rw-r-- 1 harsha harsha 256M Mar 11 13:19 newfile
```
So you don't really have 400M worth of data on disk; rather, 400MB worth of blocks have been allocated to hold 256MB worth of data. The extra allocated space beyond EOF is exactly what XFS dynamic speculative preallocation is supposed to provide.

For large objects this can lead to premature ENOSPC conditions even when the object size does not exceed the available disk space.

To see the file's logical (apparent) size instead of the allocated blocks, pass `--apparent-size` to `du`:
```
master!mc *> du -sh newfile --apparent-size
256M    newfile
```
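If you want to read both numbers programmatically rather than through `du`, they come straight out of `stat(2)`: `st_size` is the apparent size, and `st_blocks` counts allocated 512-byte units. A minimal sketch, assuming the `newfile` from the transcript above is in the current directory:

```c
/*
 * Show where the two du numbers come from:
 *   st_size          -> apparent size (du --apparent-size, ls -l)
 *   st_blocks * 512  -> space actually allocated (plain du)
 */
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;

    if (stat("newfile", &st) != 0) {
        perror("stat");
        return 1;
    }

    printf("apparent size: %lld bytes\n", (long long)st.st_size);
    printf("allocated:     %lld bytes\n", (long long)st.st_blocks * 512);
    return 0;
}
```

Plain `du` reports the second number; `du --apparent-size` reports the first.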
The relevant XFS kernel patch is "xfs: dynamic speculative EOF preallocation".