
Please support read-write snapshots #105

Closed
rrthomas opened this issue Aug 4, 2014 · 4 comments

rrthomas commented Aug 4, 2014

I want to use snapper to create a "Time Machine" that can take me arbitrarily far back in time. This works fine provided I don't run out of disk space. In my early trials I ran out of disk space rapidly because of large, changing files. I reorganized my user directory so that most of these now live in ~/.local/var, which is a symlink to /usr/local/var/$HOME, outside my @home volume. Unfortunately, I still find myself filling my (reasonably capacious) disk from time to time with "mistakes", and currently all I can do is erase all my snapshots and restart the Time Machine.

If I could use read-write snapshots (as snapper once supported, before btrfs added support for read-only snapshots), then I would be able to fix my mistakes by hand: potentially tedious, but at least I'd have the choice. Deleting a few large files in a number of snapshots is not so bad after all.

@aschnell (Member) commented

Read-write snapshots have several disadvantages: merely accessing such a snapshot can cause atime updates, which in turn allocate disk space, and snapper's filelist cache gets out of sync.
So I do not intend to implement creation of read-write snapshots.

But with newer btrfs tools you can set the read-only status of a subvolume, e.g. btrfs property set /.snapshots/1/snapshot ro false. The kernel has had the ioctl for a long time. For your use case that should be fine.
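The suggested property toggle around a manual cleanup might look like this (the path assumes the default snapper layout under /.snapshots, and the commands must run as root):

```
btrfs property get /.snapshots/1/snapshot ro        # shows the current status
btrfs property set /.snapshots/1/snapshot ro false  # make it writable
# ... delete the offending files by hand ...
btrfs property set /.snapshots/1/snapshot ro true   # restore read-only status
```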

@rrthomas (Author) commented

Thanks very much for the tip. Typically I will want to run "find" over all my snapshots to discover which ones contain a particular large file, then make all of those read-write and delete the file. Obviously it would be easier if they were all read-write in the first place (which I guess I could achieve with suitable extensions to my snapper config). Does it make a difference if I run with noatime, and what is the implication of the filelist cache getting out of sync?
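The search described above could be sketched like this; snapshots_containing and its arguments are hypothetical names, not snapper or btrfs interfaces, and the */snapshot layout assumes the default snapper configuration:

```shell
#!/bin/sh
# List every snapper-style snapshot under $1 that contains a file whose
# name matches $2.
snapshots_containing() {
    snaproot=$1
    name=$2
    for snap in "$snaproot"/*/snapshot; do
        [ -d "$snap" ] || continue
        # Any match at all means this snapshot holds the large file.
        if find "$snap" -name "$name" | grep -q .; then
            printf '%s\n' "$snap"
        fi
    done
}
```

Each listed snapshot could then be made writable with "btrfs property set <path> ro false", cleaned up, and set back with "ro true".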

@aschnell (Member) commented

Mounting with noatime should avoid disk space being allocated merely by looking at snapshots.
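In /etc/fstab this amounts to adding noatime to the filesystem's mount options; the UUID and subvolume below are placeholders, not values from this thread:

```
# /etc/fstab entry for a btrfs root mounted with noatime
UUID=0000-0000-0000  /  btrfs  defaults,noatime,subvol=@  0  0
```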

The filelist cache getting out of sync will of course make the snapper status and diff commands display wrong data. During undochange it can lead to errors, e.g. when snapper tries to copy a file from a snapshot back to the system but the file no longer exists in the snapshot.

@rrthomas (Author) commented

Thanks very much. Since I do not intend to use these snapshots for rollback (at least, not via the rollback command: my typical use is to roll back a single file manually, copying it by hand if necessary), the problem with the cache is not serious for me, so noatime should help.
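The manual per-file rollback described here is just a copy out of the snapshot tree. A minimal sketch, where restore_file and all paths are illustrative rather than snapper commands:

```shell
#!/bin/sh
# Copy one file back out of a snapshot. --reflink=auto makes the copy
# cheap on btrfs and silently falls back to a normal copy elsewhere
# (GNU cp only).
restore_file() {
    snap=$1      # e.g. /.snapshots/42/snapshot
    relpath=$2   # e.g. home/rrthomas/important.txt
    dest=$3      # where the restored copy should land
    cp -a --reflink=auto "$snap/$relpath" "$dest"
}
```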
