Error "file inode changed" on ZFS snapshot #6652
Thanks for the interesting report. Yeah, I guess just mounting the stuff beforehand is the best way to fix this. It would be bad if we had to add an additional stat for each directory just to work around this. And then we might even have more timing issues if the mount takes some time and the next stat still would not yield the final/stable values.
Considering this likely happens with any "auto mounter", I guess we should add this to the docs / FAQ.
Previously, we would use .zfs/snapshot, but there seems to be an error caused by how ZFS automounts those paths; see borgbackup/borg#6652. So this PR moves away from backing up the paths in .zfs/snapshot, and instead we now explicitly mount the snapshots. As an added benefit, this gives me a way to do some cool things:
- I can now tag ZFS volumes with net.prussin:backup and they will be automatically backed up, rather than having to update my NixOS config to add volumes to the backup
- I can now structure the borg backup to match my ZFS volume layout instead of matching the structure of where the ZFS volumes are mounted, which makes more sense to me and makes consuming the backups simpler
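A rough sketch of the tag-based selection that PR describes: list datasets whose `net.prussin:backup` user property is set, then mount each dataset's newest snapshot explicitly instead of touching `.zfs/snapshot`. The property values, mountpoint base, and snapshot naming here are assumptions, not the PR's actual code.

```shell
#!/bin/sh
# Hedged sketch: discover datasets opted in via a ZFS user property and
# mount their latest snapshots explicitly (avoids the .zfs automount issue).
set -eu

# `zfs get` without a dataset argument reports all datasets; a dash means
# the property is unset.
datasets=$(zfs get -H -o name,value -t filesystem net.prussin:backup \
           | awk '$2 != "-" && $2 != "off" {print $1}')

for ds in $datasets; do
    # Newest snapshot of this dataset, e.g. tank/data@auto-2022-05-10
    snap=$(zfs list -H -t snapshot -o name -S creation -d 1 "$ds" | head -n 1)
    [ -n "$snap" ] || continue
    # Turn "tank/data@auto-2022-05-10" into a flat mountpoint name.
    mnt="/mnt/zfs/$(echo "$snap" | tr '/@' '__')"
    mkdir -p "$mnt"
    mount -t zfs -o ro "$snap" "$mnt"   # explicit mount, no automount stub
done
```

The flattening via `tr` is just one way to get a unique mountpoint per snapshot; any stable mapping works.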
I got the same error when migrating from borgbackup 1.17 to 1.21 with a CIFS mount via autofs. After changing back to version 1.17, the error does not occur any more.
@mwalliczek that's because 1.1.x works a lot based on filenames and doesn't check much (so it's quite open to all sorts of race conditions, but also tolerant of autofs / automounters, I guess). 1.2.x works based on fds (file descriptors) and makes pretty sure that it only opens what it intended to open, not something different suddenly appearing at the same place. Good for avoiding race conditions, bad for automounters. Can you just mount before running borg?
Is it too much to ask to disable this behavior, i.e. use filenames instead of inodes, via a filesystem option during borg create?
@iansmirlis the problem is the magic behaviour of the mountpoint, and the solution was already found; see the posts above.
@ThomasWaldmann, sure, thanks for the clarification. See, my issue is that automount is there for a reason, and it behaves like this, magic or not, for a reason too. borg also has good reasons to work on fds; however, in this case I have to manually take care of mounting and unmounting snapshots, without any actual gain, i.e. I do not see a way to have a race condition on a read-only ZFS snapshot. IMHO it would be more convenient for me to have the option to disable this behavior, instead of manipulating mounts. Having said that, I will not insist. You are far more experienced and can judge whether this is clean behavior.
I propose a command line switch to select the behavior. Can't the whole issue be solved by adding an option like `--nofdcheck` and …
I'm so glad you posted this. Until I found this, I had no clue as to what was causing the error. I thought about it a little bit, and instead of going to the trouble of mounting to some other location, I tried the following, and it worked: all it took was a chdir to the target location (e.g., `cd ${TARGET}`) before running borg. Laziness. That's the stuff.
@fbicknel thanks for adding that here. maybe even a `ls -d ${TARGET}` would work (just one command and not changing the cwd)?
I did try an `ls ${TARGET}`, but that didn't work. ~~Not sure if I did something wrong there or what.~~ EDIT: See posts below. A trailing `/` will allow this to work.
And if I had known GH was ignoring my markdown because I replied by email, I never would have done that. :)
@fbicknel What about …
I tried it. It works, too. So I guess take your choice. |
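The workarounds discussed above boil down to forcing the ZFS automount before borg opens the snapshot path. A small sketch, with a hypothetical `${TARGET}` path:

```shell
#!/bin/sh
# Force the ZFS snapshot automount before borg touches the path.
# TARGET is a hypothetical .zfs/snapshot path, not from the thread verbatim.
TARGET=/tank/data/.zfs/snapshot/daily-2023-01-19

# Either of these traverses into the directory and triggers the automount:
( cd "${TARGET}" )              # the chdir trick (in a subshell, cwd unchanged)
ls -d "${TARGET}/" > /dev/null  # trailing slash forces traversal; per the
                                # thread, plain `ls ${TARGET}` did not suffice

borg create /path/to/repo::'archive-{now}' "${TARGET}"
```

The repo path and archive name are placeholders; the point is only that the traversal happens before `borg create` runs.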
I'd like to back up directly from the snapshot location. Is it indeed the case that file inodes will be unstable when doing backups from, say, …? Is the only alternative then to use …? I'm particularly concerned about the answer to these two FAQs: "I am seeing 'A' (added) status for an unchanged file!?" and "It always chunks all my files, even unchanged ones!".
Perfect! I checked the inode of a single file in two different snapshots and they are the same. However, it is difficult to tell if that will always be the case. In any case, point 2 means that I should back up from the same path even when using relative paths as I was doing.
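The check described in that comment can be repeated with `stat`. The dataset, snapshot names, and file path below are hypothetical:

```shell
#!/bin/sh
# Compare a file's inode number across two snapshots (hypothetical paths).
f1=/tank/data/.zfs/snapshot/snap-a/some/file
f2=/tank/data/.zfs/snapshot/snap-b/some/file

i1=$(stat -c %i "$f1")   # GNU stat; on FreeBSD use `stat -f %i` instead
i2=$(stat -c %i "$f2")

if [ "$i1" = "$i2" ]; then
    echo "inode stable across snapshots"
else
    echo "inode changed: $i1 -> $i2"
fi
```

Note that statting the files also traverses into the snapshots and triggers the automount, so a check like this doubles as a warm-up before running borg.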
Just to add here why I didn't use that approach (or …
Have you checked borgbackup docs, FAQ, and open GitHub issues?
Yes. #6650 is about the same problem, but I'm not backing up a network filesystem.
Is this a BUG / ISSUE report or a QUESTION?
QUESTION (originally a BUG report; see the update below)
System information. For client/server mode post info for both machines.
Your borg version (borg -V).
borg 1.2.0
Operating system (distribution) and version.
Ubuntu 20.04.4 LTS
Hardware / network configuration, and filesystems used.
Core i3 7320
2x 16GB Kingston DDR4-2400 ECC RAM
Filesystem for backup source: ZFS (snapshot)
Filesystem for backup target: ext4
Backup is not sent over network
How much data is handled by borg?
Around 2 TB per archive, around 152 TB in repository (2.5 TB deduplicated)
Full borg commandline that led to the problem (leave away excludes and passwords)
Describe the problem you're observing.
UPDATE: As it turns out, this is probably intentional behaviour by ZFS. While investigating, I found out that when accessing snapshots via the hidden .zfs directory, a temporary mount is created. That leads to the error described below, because the inode changes between the unmounted and the mounted snapshot.
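The behaviour described in the update can be observed directly: the snapshot directory's inode differs before and after ZFS automounts it. A sketch with hypothetical paths (requires a real ZFS system to show a difference):

```shell
#!/bin/sh
# Observe the inode change caused by the .zfs/snapshot automount.
# The dataset and snapshot name are hypothetical.
snapdir=/tank/data/.zfs/snapshot/daily

umount "$snapdir" 2>/dev/null || true   # start from the unmounted stub
i_before=$(stat -c %i "$snapdir")       # inode of the placeholder directory
ls "$snapdir/" > /dev/null              # traversal triggers the automount
i_after=$(stat -c %i "$snapdir")        # inode of the mounted snapshot root

echo "before=$i_before after=$i_after"  # expected to differ, per the update
```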
My fix is to not rely on the automount feature, but instead mount the snapshots manually. I changed my backup script to mount all snapshots I want to back up into
/mnt/zfs/<snapshot name>
and the problem is fixed. Maybe this info helps someone; that's why I didn't delete the issue, but changed it to be a question.
Original Text:
As far as I can see, starting with borgbackup 1.2.0 the backup keeps failing with "file inode changed" errors:
Can you reproduce the problem? If so, describe how. If not, describe troubleshooting steps you took before opening the issue.
Yes. It doesn't happen for all snapshots on every invocation, as ZFS seems to recycle inodes. But on each invocation it happens for 2 to 6 of the 10 snapshots I'm backing up.
Include any warning/errors/backtraces from the system logs