
Support fallocate(2) #326

Open · behlendorf opened this issue Jul 18, 2011 · 48 comments

@behlendorf (Contributor) commented Jul 18, 2011

As observed by xfstests 075, fallocate(2) is not yet supported.

"fallocate is used to preallocate blocks to a file. For filesystems which support the fallocate system call, this is done quickly by allocating blocks and marking them as uninitialized, requiring no IO to the data blocks. This is much faster than creating a file by filling it with zeros."

QA output created by 075
brevity is wit...

-----------------------------------------------
fsx.0 : -d -N numops -S 0
-----------------------------------------------
fsx: main: filesystem does not support fallocate, disabling
: Operation not supported

-----------------------------------------------
fsx.1 : -d -N numops -S 0 -x
-----------------------------------------------

-----------------------------------------------
fsx.2 : -d -N numops -l filelen -S 0
-----------------------------------------------
fsx: main: filesystem does not support fallocate, disabling
: Operation not supported

-----------------------------------------------
fsx.3 : -d -N numops -l filelen -S 0 -x
-----------------------------------------------                                             

@adilger (Contributor) commented Jul 29, 2011

It is potentially difficult to meaningfully implement fallocate() for ZFS, or any true COW filesystem. The intent of fallocate is to pre-allocate/reserve space for later use, but with a COW filesystem the pre-allocated blocks cannot be overwritten without allocating new blocks, writing into the new blocks, and releasing the old blocks (if not pinned by snapshots). In all cases, having fallocated blocks (with some new flag that marks them as zeroed) cannot be any better than simply reserving some blocks out of those available for the pool, and somehow crediting a dnode with the ability to allocate from those reserved blocks.

@behlendorf (Contributor, Author) commented Jul 30, 2011

Exactly. Implementing this correctly would be tricky and perhaps not that valuable, since fallocate(2) is Linux-specific. I would expect most developers to use the more portable posix_fallocate(), which presumably falls back to an alternate approach when fallocate(2) isn't available. I'm not aware of any code that will be too inconvenienced by not having fallocate(2) available... other than xfstests, apparently.

@RedBeard0531 commented Aug 30, 2011

Well, you could in theory do something tricky like just creating a sparse file of the correct size. This would avoid the wasted space of storing the zeroed-out data, which wouldn't be reusable anyway due to COW. It would unfortunately break the contract that you won't get ENOSPC, but you can't give that guarantee with COW anyway, and you would be less likely to hit it after using an enhanced posix_fallocate(), since no space would be wasted on the zeroed pages. Out of curiosity, would there be any difference in the final on-disk layout of a sparse file that is filled in vs. a file that is first allocated by zero-filling?

I work on mongodb and we use posix_fallocate to quickly allocate large files that we can then mmap. It seems to be the quickest way to preallocate files and have a high probability of contiguous allocations (which again isn't possible due to COW). While I doubt anyone will try to run mongodb on zfs-linux anytime soon (my interest in the project is for a home server), I just wanted to give feedback from a user-space developer's point of view.
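For context, a minimal sketch of the preallocate-then-mmap pattern described above (the file name and size are made up, and error handling is trimmed):

#include <fcntl.h>
#include <sys/mman.h>
#include <stdio.h>
#include <string.h>

int main(void) {
  const size_t len = 64 * 1024 * 1024;
  int fd = open("datafile.0", O_RDWR | O_CREAT, 0644);
  if (fd == -1) {
    perror("open()");
    return 1;
  }
  /* posix_fallocate() returns an error code directly; it does not set errno */
  int err = posix_fallocate(fd, 0, (off_t)len);
  if (err != 0) {
    fprintf(stderr, "posix_fallocate(): %s\n", strerror(err));
    return 1;
  }
  /* map the now fully-sized file for in-memory access */
  void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  if (p == MAP_FAILED) {
    perror("mmap()");
    return 1;
  }
  return 0;
}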

@ryao (Contributor) commented Apr 23, 2012

Commit cb2d190 should have closed this.

@behlendorf (Contributor, Author) commented Apr 24, 2012

I was leaving this issue open because the referenced commit only added support for FALLOC_FL_PUNCH_HOLE. There are still other fallocate flags which are not yet handled.

@cwedgwood (Contributor) commented Nov 26, 2012

@dechamps this doesn't seem to be working for 3.6.x. Looking at your patch, it appears this is expected. Is there an update for recent kernels?

11570 open("holes", O_RDWR|O_CREAT|O_TRUNC|O_CLOEXEC, 0644) = 3
11570 write(3, "\252\252"..., 4194304) = 4194304
11570 fallocate(3, 03, 65536, 196608)   = -1 EOPNOTSUPP (Operation not supported)

3 = fd
03 = mode, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE

@dechamps (Contributor) commented Nov 26, 2012

My patch only implements FALLOC_FL_PUNCH_HOLE alone, which is not a valid call to fallocate(). It never worked on any kernel, and will never work until someone implements FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE. Right now it's just a placeholder, basically.

@cwedgwood (Contributor) commented Nov 26, 2012

@dechamps thanks for that clarification

even with FALLOC_FL_PUNCH_HOLE (only):

12260 open("holes", O_RDWR|O_CREAT|O_TRUNC|O_CLOEXEC, 0644) = 4
12260 write(4, "\252\252"..., 4194304) = 4194304
12260 fallocate(4, 02, 65536, 196608)   = -1 EOPNOTSUPP (Operation not supported)

02 = mode, FALLOC_FL_PUNCH_HOLE

@RJVB commented Jan 14, 2014

Apparently fallocate is still not supported on zfs?
But could that be the reason that I cannot seem to use fallocate at all on my systems that have a zfs root, not even on the /boot partition which is on a good ole ext3 slice?

@behlendorf (Contributor, Author) commented Jan 14, 2014

@RJVB fallocate() for ZFS filesystems has not yet been implemented; however, that won't have any impact on an ext3 filesystem.

@morsik commented Apr 4, 2014

You can always use:
dd if=/dev/zero of=bigfile bs=1 count=0 seek=100G
Works immediately.
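For what it's worth, coreutils' truncate(1) does the same thing (a sparse file with no data blocks allocated) a little more readably:

truncate -s 100G bigfile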

@behlendorf (Contributor, Author) commented Oct 3, 2014

As of 0.6.4 the FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE behavior of fallocate(2) is supported. But as noted above, for a variety of reasons implementing a meaningful fallocate(2) to reserve space is problematic for a COW filesystem.
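For anyone landing here, a minimal sketch of the now-supported call, mirroring the strace output earlier in the thread (file name, offset, and length are illustrative):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>

int main(void) {
  int fd = open("holes", O_RDWR);
  if (fd == -1) {
    perror("open()");
    return 1;
  }
  /* deallocate 192 KiB starting at 64 KiB without changing the file size */
  if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 65536, 196608) == -1) {
    perror("fallocate()");
    return 1;
  }
  return 0;
}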

@CMCDragonkai commented Nov 7, 2017

I'm using 0.7.2-1, and I noticed that if you run posix_fallocate on a file that is already the same size as the length specified, it returns EBADF. This doesn't happen when I do it on tmpfs.

//usr/bin/env make -s "${0%.*}" && ./"${0%.*}" "$@"; s=$?; rm ./"${0%.*}"; exit $s

#include <fcntl.h>
#include <sys/stat.h>   /* S_IRUSR, S_IWUSR */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>     /* close() */

int main () {
  int fd = open("./randomfile", O_WRONLY | O_CREAT, S_IRUSR | S_IWUSR);
  if (fd == -1) {
    perror("open()");
    return 1;
  }
  /* posix_fallocate() returns the error code directly instead of setting errno */
  int status = posix_fallocate(fd, 0, 100);
  if (status != 0) {
    printf("%s\n", strerror(status));
  }
  close(fd);
  return 0;
}

Running the above on an empty or non-existent file works fine; as soon as you run it again, it fails with EBADF. This is a bit of strange behaviour.

@behlendorf (Contributor, Author) commented Nov 7, 2017

@CMCDragonkai that does seem odd. Can you please open a new issue with the above comment so we can track and fix it?

@pandada8 commented Nov 12, 2017

Is allocating disk space (mode = 0) supported now? I notice fallocate still returns EOPNOTSUPP.

BTW, will fallocate generate less fragmentation than just truncate in random-write scenarios?

@DeHackEd (Contributor) commented Nov 12, 2017

No, because ZFS's copy-on-write semantics just plain don't allow that.

@shodanshok commented Jun 1, 2018

@behlendorf While it is not possible (due to CoW) to have a fully working fallocate, it would be preferable to have at least a partially working implementation: some applications[1] use fallocate to create very big files, and on filesystems without fallocate support this is a very slow operation. Granted, ZFS and its CoW semantics defeat one of fallocate's main features (i.e. really reserving space in advance), but paying for the slow (and SSD-wearing) "fill the entire file with zeros" fallback is also quite bad.

Would you consider implementing a "fake" fallocate, where fallocate returns success but no real allocation is done? After all, even after a "real" fallocate, reserved space is not guaranteed, as any snapshot can eat into the actually available disk space.

[1] One such application is virt-manager: RAW disk images are, by default, fully fallocated. Depending on disk size, this means GBs or TBs of null data (zeroes) written to HDDs/SSDs.

@shodanshok commented Jun 1, 2018

@RJVB On filesystems supporting fallocate, the filesystem reserves len/blocksize blocks and marks them as uninitialized. This has the following consequences:

  • as the blocks are marked reserved/allocated, the user-space application which called fallocate knows that sufficient space is available to write to all of them;

  • as no user data is written (only some very terse metadata is flushed to disk), fallocate returns almost immediately, enabling very fast file allocation.

Point no. 1 (space reservation) is at odds with ZFS because, as a CoW filesystem, it by its very nature continuously allocates new data blocks while keeping track of past ones via snapshots. This means that you can't really count on fallocate to guarantee sufficient disk space to write all blocks, unless you tap into the reservation and/or quota properties. However, if I remember correctly, these properties only apply to an entire dataset, rather than to a single file.

And here comes point no. 2, fast file allocation. On platforms where fallocate is not natively supported, both the user-space application and the libc function can force a full file allocation by writing zeroes for the whole length. This is very slow, causes unnecessary wear on SSDs, and is basically useless on ZFS. Hence my suggestion to always return "true" for fallocate, even when doing nothing (i.e. faking a successful fallocate).
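To make the cost concrete, here is a minimal sketch (not glibc's actual implementation) of the kind of zero-filling fallback just described; every byte of the requested range gets written:

#include <unistd.h>
#include <string.h>

/* emulate preallocation by writing zeroes across [offset, offset+len) */
static int fallback_prealloc(int fd, off_t offset, off_t len) {
  char zeros[4096];
  memset(zeros, 0, sizeof(zeros));
  off_t end = offset + len;
  for (off_t pos = offset; pos < end; pos += (off_t)sizeof(zeros)) {
    size_t n = (end - pos) < (off_t)sizeof(zeros) ? (size_t)(end - pos) : sizeof(zeros);
    if (pwrite(fd, zeros, n, pos) != (ssize_t)n)
      return -1;  /* including the ENOSPC case */
  }
  return 0;
}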

Opinions are welcome!

@shodanshok commented Jun 2, 2018

@RJVB

And as to COW systems being at odds with space reservation: btrfs supports it and AFAIK that's a COW filesystem too

fallocate on BTRFS behaves differently than on non-CoW filesystems: while it really allocates blocks for the selected file, any rewrite (after the first write) triggers a new block allocation. This means that file fragmentation is only slightly reduced, and it can potentially expose some rough corners on a near-full filesystem.

Why is it basically useless? You still get the space reservation, no?

If you tap into the existing quota/reservation system (which, anyway, operates on datasets rather than single files), yes, you end up with a working space reservation. But if you only count on the fallocated reserved blocks, any snapshot pinning old data can effectively cause an out-of-space condition even when writing to fallocated files. Something similar to this (illustrated below):

  • fallocate a file by writing all zeroes to it
  • create a snapshot
  • rewrite your fallocated file
  • new space is allocated, which can result in an ENOSPC condition.
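A rough shell illustration of that scenario (pool/dataset names are made up):

dd if=/dev/zero of=/tank/data/vm.img bs=1M count=8192       # "preallocate" by writing zeros
zfs snapshot tank/data@pin                                  # snapshot now pins those blocks
dd if=/dev/urandom of=/tank/data/vm.img bs=1M conv=notrunc  # rewrite: COW allocates new blocks
# with the old blocks held by the snapshot, a nearly full pool can hit ENOSPC here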

My point is that doing the write in the driver might be more efficient. I don't disagree with your suggestion but software that nowadays falls back to the brute-force method because fallocate() fails might start behaving unexpectedly. Maybe a driver parameter that can be controlled at runtime could activate a low-level actual write-zeroes-to-disk implementation?

I really fail to see why a user-space application should fail when presented with a sparse file rather than a preallocated one. However, as you suggest, simply let the option be user-selectable. In short, while a BTRFS-like fallocate would be the ideal solution, even a fake (user-selectable) implementation would be desirable.

@panlinux commented Oct 8, 2019

I just hit this on the upcoming Ubuntu Eoan 19.10 while creating a VM using virt-manager. I noticed it was super slow compared to my Bionic 18.04 system, so I checked and found that the qemu-img call was now using fallocate. The command line was something like this:

qemu-img create -f qcow2 -o preallocation=falloc with-prealloc-image.qcow2 40G

That on an ext4 system takes half a second. On a system with zfs (tried both 0.7.x and 0.8.x) it takes over a minute.

@rlaager (Contributor) commented Oct 9, 2019

TL;DR: Make zpl_fallocate_common() return 0 (success) when mode == 0, at least when compression is enabled.

Since this issue was opened, there are several new modes to fallocate(2). FALLOC_FL_COLLAPSE_RANGE and FALLOC_FL_INSERT_RANGE are probably the hardest to implement. They are not the point of my comment today, so I'm not going to discuss those further in this comment.

FALLOC_FL_ZERO_RANGE sounds the same as FALLOC_FL_PUNCH_HOLE except that it leaves a preallocation in place (and FALLOC_FL_KEEP_SIZE is optional). If my proposal is adopted, it is likely that FALLOC_FL_ZERO_RANGE could be handled the same as FALLOC_FL_PUNCH_HOLE (with the caveat about FALLOC_FL_KEEP_SIZE). This is not the main point of my comment, so I won't discuss this further in this comment either.

I'm mainly here to discuss the mode == 0 case for pre-allocating disk space. Currently, ZFS returns EOPNOTSUPP in this case. An application has three choices of how to handle this:

  • FAIL: The application fails, cleanly or uncleanly. Real (i.e. non-test) applications don't do this, as fallocate() is not implemented on anywhere near all filesystems. Therefore, we can ignore this scenario.
  • IGNORE: The application ignores the failure. It is treating fallocate() as a hint.
  • ZERO: The application falls back to pre-allocating the space by writing data. In practice, the data will always be all zeros, because that is what a filesystem would return if the fallocate() was successful. This is the behavior implemented by glibc's posix_fallocate() wrapper.

I'm proposing that ZFS fake the pre-allocation. I know that lying to userspace feels wrong, but...

For applications in the IGNORE case, faking the pre-allocation is irrelevant. They gave a hint and would ignore the failure anyway. So there is absolutely no change in that scenario.

The scenario that changes is the ZERO case. In the ZERO case, the application will then fall back to writing zeros, which is expensive. When ZFS has compression enabled, those zeros will immediately be thrown away, so the application isn't achieving anything at all with the fallback. ZFS performance suffers dramatically relative to other filesystems, leading to complaints e.g. with virtual machine disk allocation.
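As an illustration of that point (dataset name is made up): with compression enabled, ZFS stores a zero-filled file as next to nothing, so the fallback's work is simply discarded.

zfs create -o compression=lz4 tank/test
dd if=/dev/zero of=/tank/test/zeros bs=1M count=1024   # application's zero-fill fallback
du -h /tank/test/zeros                                 # reports roughly 0 instead of 1.0G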

Thus, at a minimum, there should be zero downside to faking the pre-allocation when compression is turned on. I propose we do at least that.

When compression is off, the pre-allocation is still worthless at best and counterproductive at worst. The pre-allocation is intended to increase performance (by allocating a contiguous range of blocks, or a contiguous extent) and/or guarantee that writes will not fail with ENOSPC. With ZFS, the attempt to allocate contiguous space is pointless, as it will not improve the placement of future (over)writes. Likewise, writing data does not guarantee that (over)writes can succeed, and in fact may be counterproductive, as it can lead to the filesystem running out of space/quota if the zeros are retained in a snapshot.

The only reason I can see for not faking the preallocation in the non-compression case is technical correctness. If we really want to maintain traditional semantics, even though they're useless or even counterproductive, then we can limit the fake preallocation to the compression case. But if it were my call, I'd fake it in both cases. I believe that is the pragmatic solution.

If there is some dream of eventually supporting real preallocation in some way, that's fine. But let's not let perfect and someday be the enemy of good enough right now. We can always replace the fake preallocation with real preallocation if someone figures out a way to do it in ZFS.

@behlendorf (Contributor, Author) commented Oct 9, 2019

@rlaager you make an excellent case. While I'd rather not lie to user space, I do have to agree that it is the pragmatic thing to do for the mode == 0 case. Not supporting this functionality has been causing more harm than good. If someone is interested in authoring a PR for this I'd be happy to help with any design work and code review.

@shodanshok commented Oct 9, 2019

@rlaager I 100% agree with your proposal, which is a more detailed version of what I proposed here. The same dilemma applies: should we lie to userspace? In this specific case (and under the specific constraints you described) I think the answer is "yes".

@behlendorf what do you think about having a zfs property or module option to control the new fallocate-faking behavior?

@adilger (Contributor) commented Oct 9, 2019

@rlaager it probably makes sense to go further and do nothing for fallocate(mode=0) regardless of whether compression is enabled or not, with a fallback of a per-fileset tunable (default enabled) to control whether fallocate() will fake the preallocation or return EOPNOTSUPP in case someone finds a strange corner case that is harmed by this behavior.

My reasoning is that even if there was a mechanism to have fallocate() reserve space for that file in the pool for preallocated blocks, this would only work for the first write. After the first write to the file, the "space reservation" aspect would be gone and any subsequent overwrite would be no different than if fallocate() had done nothing at all (beyond modifying the file size if FALLOC_FL_KEEP_SIZE is not specified).

The other main reason to use fallocate(mode=0) is to avoid fragmentation of allocated blocks in the file. However, since ZFS is COW there is again no benefit to having the blocks preallocated, since overwriting them will again result in new block allocations.

It probably makes sense to return ENOSPC if the fallocate(mode=0) size is larger than the amount of free space in the filesystem (or maybe 95%) to avoid cases where someone tries to reserve a 1TB file in a 100GB pool. That is probably the best that could be done, given that even elaborate space reservations in the pool tied to a specific file would fail if one or more snapshots is pinning the reserved space.
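A hedged sketch (plain C, not actual OpenZFS code) of the combined behavior being suggested, i.e. accept mode=0 as a no-op but refuse requests that obviously cannot fit:

#include <errno.h>

/* toy model of the suggested policy for fallocate(mode=0) on ZFS */
static int fallocate_mode0_policy(long long requested, long long avail) {
  /* keep a little slack (here 5%) rather than promising the last byte */
  if (requested > avail - avail / 20)
    return -ENOSPC;
  /* no real reservation is possible under COW: report success and
     leave the file sparse */
  return 0;
}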

I'd also agree that FALLOC_FL_ZERO_RANGE is not really different from PUNCH_HOLE for ZFS, so it could easily be added. I don't know of any users of INSERT_RANGE and COLLAPSE_RANGE, and they are complex to implement, so they probably do not deserve much attention.

@rlaager (Contributor) commented Oct 9, 2019

with a fallback of a per-fileset tunable (default enabled) to control whether fallocate() will fake the preallocation or return EOPNOTSUPP in case someone finds a strange corner case that is harmed by this behavior.

I recommend against a tunable to turn it off. If we add a tunable, it's really, really hard to know that nobody is using it, and thus really, really hard to remove it. If we wait until a problem presents itself, we can solve it then, by adding a tunable or something else (depending on the problem). Also, a tunable requires more work and more test cases (to ensure it actually works).

If we do add a tunable, I'd suggest a module parameter, as those present less of a user-visible compatibility concern, at least in my mind, than a filesystem parameter.

It probably makes sense to return ENOSPC if the fallocate(mode=0) size is larger than the amount of free space in the filesystem (or maybe 95%) to avoid cases where someone tries to reserve a 1TB file in a 100GB pool.

Yes, that makes sense. I hadn't considered that. Thanks!

@CAFxX commented Oct 10, 2019

My reasoning is that even if there was a mechanism to have fallocate() reserve space for that file in the pool for preallocated blocks, this would only work for the first write. After the first write to the file, the "space reservation" aspect would be gone and any subsequent overwrite would be no different than if fallocate() had done nothing at all (beyond modifying the file size if FALLOC_FL_KEEP_SIZE is not specified).

Well, unless userspace calls fallocate() before subsequent writes. I think there's an argument to be made about actually reserving free space for the first write, so as to not lie to userspace.

@shodanshok commented Oct 10, 2019

@CAFxX I think it would be great to leave the implementation as simple as possible. On ZFS, even reserving space for the first write will be practically useless, as snapshots can "eat" into the available space without notice.

@rlaager what is the general logic for a module option vs. a zfs property? As it stands now, it seems to me that we have some confusion about the two (e.g. caching is controlled via a property, while prefetch is controlled via a module option).

@CAFxX commented Oct 10, 2019

On ZFS, even reserving space for the first write will be practically useless, as snapshots can "eat" into the available space without notice.

This is definitely the case right now, but I would expect that a "real" reservation of free space - if properly implemented - wouldn't allow snapshots to consume the reserved space.

I understand the desire to keep things simple, at least at the beginning, but lying to userspace is a pretty slippery slope, especially if the argument goes "we're already lying now in other cases, we may as well lie in this one".

Moreover, the argument that "we can add it later if someone figures out how to do it in ZFS" is pretty bogus, as at that point the API contract is broken and some user most likely will have come to depend on the broken contract, so you will likely need to bolt on the correct behavior via a different API altogether. To be pragmatic, this wouldn't apply if the "broken" behavior was non-default, behind some configuration.

@shodanshok commented Oct 10, 2019

@CAFxX I understand your concerns.

However, due to snapshots/subvolumes, the API contract is already broken on CoW filesystems (even on BTRFS, which actually honors preallocation for the first write): posix_fallocate has a "write-all-zeroes" fallback, which is not sufficient to guarantee that free space will always exist for that very same preallocation (especially when compression is on).

So, at the moment, we have the worst of both worlds:

  • preallocating is very slow (and can cause unnecessary wear on SSD if compression=off);
  • but no preallocation is really done!

On ZFS, if one really wants to preallocate some space (i.e. to be sure there will always be enough free space to handle the preallocation), they must use the quota and/or reservation/refreservation properties.
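For example (dataset name made up), reserving space at the dataset level looks like this:

zfs set refreservation=40G tank/vmimages
zfs get reservation,refreservation,quota tank/vmimages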

@CAFxX commented Oct 10, 2019

I understand and I'm not saying we should not do this. I am wholeheartedly in favor of having this as the non-default behavior, so we get the best of all worlds:

  • a workaround now (by opting-in to the broken behavior) for users that need it
  • a reliable path forward (by not further tainting public APIs with incompatible behaviors)

@rlaager (Contributor) commented Oct 10, 2019

the argument that "we can add it later if someone figures out how to do it in ZFS" is pretty bogus, as at that point the API contract is broken and some user most likely will have come to depend on the broken contract

That's a risk with API contracts in general, but not likely in this case.

I'm not sure how an application would come to rely on ZFS faking fallocate(2) pre-allocation, especially in a world where other (far more popular) filesystems actually implement pre-allocation. If an application would suddenly break once ZFS truly implemented fallocate(2) preallocation, it would already be broken on ext4 and XFS today, and those are the default filesystems in major distros.

@GregorKopka (Contributor) commented Oct 11, 2019

IMHO zfs should collapse full-block zero writes into a hole anyway, always, regardless of whether compression is on or not, simply because I see no point in storing null data as an on-disk block in a CoW system.

For fallocate to reserve space, it would need to be turned into increasing the reservation/refreservation properties of the filesystem. This is a can of worms that I wouldn't like to open, as it implies an interesting decision tree revolving around what magic needs to be attached to that file to reduce the reservation in case of truncation, deletion (or whatever else might make sense in this context). While magic might be useful at times... I would guess adding this would just increase support requests without adding any benefit for the user.

Failing the fallocate in case there isn't enough zfs AVAIL sounds like a reasonable plan.

@theAkito commented Oct 22, 2019

Interesting.

@von-copec commented Nov 13, 2019

If I may: consider what happens in the case of another filesystem (i.e. ext4) on a zvol, whether directly or indirectly (in an HVM); I would consider the semantics of doing a native fallocate in that case to be correct.

@rlaager (Contributor) commented Nov 13, 2019

@von-copec We are only talking about fallocate() in the ZFS POSIX layer (ZPL) filesystems. If someone puts ext4 on top of a zvol or a file, then ext4 still behaves exactly as it always has.

@von-copec commented Nov 13, 2019

@von-copec We are only talking about fallocate() in the ZFS POSIX layer (ZPL) filesystems. If someone puts ext4 on top of a zvol or a file, then ext4 still behaves exactly as it always has.

I understand; I was attempting to emphasize that the behavior of another filesystem layer versus the ZPL would be considered correct when it is on top of a (sparse) ZVOL, and so the ZPL doing the same thing would have the "same amount of correctness".

@adilger (Contributor) commented Jun 5, 2020

An update on this topic. In the course of implementing fallocate(mode=0) for Lustre-on-ZFS (https://review.whamcloud.com/36506) the dmu_prealloc() function was being used to implement the preallocation. While this patch isn't working yet, it is informative on this topic. The dmu_prealloc() function has been in ZFS for a long time, but is only used on Illumos to preallocate space in a ZVOL for a core dump, so that the core dump can later be written directly into the ZVOL blocks without invoking ZFS, in case ZFS itself is broken by the crash:

zvol_dump_init->zvol_prealloc->dmu_prealloc->dmu_buf_will_not_fill()

This sets DB_NOFILL on every dbuf. The interesting thing is that dmu_prealloc() actually preallocates the blocks on disk, and the DB_NOFILL->WP_NOFILL results in the leaf (data) blocks being marked in dmu_write_policy() with ZIO_COMPRESS_OFF and ZIO_CHECKSUM_OFF. This is essentially what fallocate(mode=0) wants, namely to have reserved space that is not compressed and can be overwritten (at least once, anyway) without running out of space.

Several open questions exist, since there is absolutely no documentation anywhere about this code:

  • what does dmu_prealloc() do to blocks that were previously allocated? fallocate(mode=0) must not modify existing blocks, only allocate new blocks.
  • the DB_NOFILL appears to make reads of these buffers return zero, which seems correct, so long as it is cleared when the blocks are overwritten. Otherwise, it would not be good if normal writes cannot be read back.
  • can the DB_NOFILL buffers be overwritten by normal DMU writes, clearing the DB_NOFILL state?
  • does the ZIO_CHECKSUM_OFF flag persist if the block is overwritten via normal DMU IO? That would be unfortunate, as it means dmu_prealloc() blocks would not be safe for user data, but could be fixed with a new WP_* flag.

This seems to be a path toward implementing fully-featured ZFS fallocate(mode=0), possibly with some digging in the guts of the code if the semantics are not quite as needed.

If this doesn't work out, it still seems practical to go the easy route, for which I've made a simple patch that implements what was previously described here and could hopefully be landed with a minimum of fuss. I don't have any idea how long it would take the dmu_prealloc() approach to finish, but it would need the changes in my patch anyway.

@adilger (Contributor) commented Jun 5, 2020

Several open questions exist, since there is absolutely no documentation anywhere about this code:
Probably an open door, but have you tried to answer your questions by poking around under an Illumos implementation?

Yes, the Illumos implementation references this function exactly once, in the code path referenced above, but no actual comments exist in the code that describe these functions.

@shodanshok commented Jun 5, 2020

@adilger Am I right that this preallocation would use the preallocated blocks for the first write only? If so, this seems somewhat similar to the BTRFS approach, and I am missing why an application (Lustre, in this case) should expect fallocate to be really honored, considering that:

  • a snapshot can easily eat into the pool's free space, leading to ENOSPC even if the preallocation was successful;

  • rewriting the file will cause ongoing fragmentation, negating (over time) any performance benefit from the previous allocation.

Disabling compression and checksums seems a way too high price to pay for the very limited benefit (if any) which can be obtained by "true" preallocation on ZFS.

Considering that posix_fallocate simply writes zeroes on a filesystem not supporting fallocate, and that those zeroes would be collapsed into a sparse file if compression is enabled, I would simply suggest creating a module option/pool property/flag "faking" true fallocate (returning 0 but ignoring the operation entirely). I know this sounds bad because it breaks the "contract" of the fallocate API itself; however, no such guarantee exists for a compressing, CoW filesystem (which seems similar to what you are proposing here, right? #10408)

@adilger (Contributor) commented Jun 5, 2020

@shodanshok, I understand and agree that all of those issues exist.

Lustre is a distributed parallel filesystem that layers on top of ZFS, so it isn't the thing that is generating the fallocate() request. It is merely passing on the fallocate() request from a higher-level application down to ZFS, after possibly remapping the arguments appropriately.

"I would simply suggest to create a module option/pool property/flag
"faking" true fallocate (returning 0 but ignoring the operation entirely)"

I've essentially done exactly that with my PR #10408. However, while this probably works fine for a large majority of use cases, it would fail if e.g. an application is trying to fallocate multiple files in advance of writing, or in parallel, but there is not actually enough free space in the filesystem. In that case, each individual fallocate() call would verify that enough space is available, but the aggregate of those calls would not be. Fixing this would need "write once" semantics for reserved blocks (similar to what dmu_prealloc() provides), or at least an in-memory grant that reserves space from statfs() for the dnode and is released as dbufs are written to the dnode. That would at least avoid obvious multi-file issues, but would not prevent other writers from consuming this space. It would also get tricky with fallocate() over a non-sparse file, and with whether writes are overlapping, etc., so it would not be the preferred solution IMHO.

behlendorf pushed a commit that referenced this issue Jun 18, 2020
Implement semi-compatible functionality for mode=0 (preallocation)
and mode=FALLOC_FL_KEEP_SIZE (preallocation beyond EOF) for ZPL.

Since ZFS does COW and snapshots, preallocating blocks for a file
cannot guarantee that writes to the file will not run out of space.
Even if the first overwrite was guaranteed, it would not handle any
later overwrite of blocks due to COW, so strict compliance is futile.
Instead, make a best-effort check that at least enough free space is
currently available in the pool (with a bit of margin), then create
a sparse file of the requested size and continue on with life.

This does not handle all cases (e.g. several fallocate() calls before
writing into the files when the filesystem is nearly full), which
would require a more complex mechanism to be implemented, probably
based on a modified version of dmu_prealloc(), but is usable as-is.

A new module option zfs_fallocate_reserve_percent is used to control
the reserve margin for any single fallocate call.  By default, this
is 110% of the requested preallocation size, so an additional 10% of
available space is reserved for overhead to allow the application a
good chance of finishing the write when the fallocate() succeeds.
If the heuristics of this basic fallocate implementation are not
desirable, the old non-functional behavior of returning EOPNOTSUPP
for calls can be restored by setting zfs_fallocate_reserve_percent=0.

The parameter of zfs_statvfs() is changed to take an inode instead
of a dentry, since no dentry is available in zfs_fallocate_common().

A few tests from @behlendorf cover basic fallocate functionality.

Reviewed-by: Richard Laager <rlaager@wiktel.com>
Reviewed-by: Arshad Hussain <arshad.super@gmail.com>
Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Co-authored-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Andreas Dilger <adilger@dilger.ca>
Issue #326
Closes #10408
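For anyone trying the new behavior, a quick check of the module option (paths and sizes are illustrative; the sysfs location follows the usual Linux module-parameter convention):

cat /sys/module/zfs/parameters/zfs_fallocate_reserve_percent   # 110 by default
fallocate -l 40G /tank/vmimages/disk.img                       # returns quickly; the file stays sparse
# setting the option to 0 restores the old EOPNOTSUPP behavior:
echo 0 > /sys/module/zfs/parameters/zfs_fallocate_reserve_percent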
lundman added a commit to openzfsonosx/openzfs that referenced this issue Jun 19, 2020, cherry-picking the same change (Issue openzfs#326, Closes openzfs#10408).