Ability to exclude a dataset from a recursive snapshot... #68
Comments
That would be nice. I have a backup dataset there which shouldn't be snapshotted, because the pool would be depleted in no time.
@sean- @otraupe I agree, this would be a very handy feature. I currently work around it by naming the desired targets explicitly in a single call. However, while this would be nice, I don't see a clear path toward making it work when creating snapshots; destroying snapshots shouldn't be a problem. How do you see the syntax working for something like this? ---Alex
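The workaround Alex describes might look like the sketch below. The `-a TTL` option and the dataset names are assumptions (check your zfsnap version's README for the exact syntax), and `zfsnap` is stubbed so the sketch runs without a real pool:

```shell
#!/bin/sh
# Sketch: instead of one recursive call that would also hit tank/backup,
# name the datasets you want snapshotted explicitly in one invocation.
# "zfsnap" is stubbed here so this runs without a real pool; remove the
# stub on a live system. Dataset names are hypothetical.
zfsnap() { echo "zfsnap $*"; }   # stub: print the command instead of running it

# Snapshot everything except tank/backup by listing targets explicitly:
zfsnap -a 1w tank/home tank/var tank/srv
```

This keeps the excluded dataset out of the snapshot run entirely, at the cost of having to maintain the target list by hand as datasets come and go.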
Alex, thanks for the reply! Could you, script-wise, translate the recursive call to the zfs [...]? ---Ole (On 17.11.2015 at 11:04, Alex Waite wrote:)
Hello all, we would love to see an exclude-list option. Even though we know we can destroy the unwanted snapshots with a simple script, having an option to exclude some datasets from zfssnap would be a very big plus for us 👍. The option could be something similar to rsync's --exclude (see the rsync man page). This would be great because we could provision a single ASCII file listing the datasets that we don't want snapshotted. Thanks, Guillaume.
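The exclude-file idea can be sketched in plain shell: filter the dataset list against a file of patterns before snapshotting, in the spirit of rsync's `--exclude-from`. The dataset names and the exclude-file path below are hypothetical, and the dataset list is hard-coded where a real script would read `zfs list`:

```shell
#!/bin/sh
# Sketch of an exclude file: one dataset name per line, filtered out of
# the snapshot candidates with grep. In real use the candidate list
# would come from: zfs list -H -o name -r tank
datasets="tank/home
tank/backup
tank/swap
tank/var"

# Hypothetical exclude file
cat > /tmp/zfsnap-exclude.txt <<'EOF'
tank/backup
tank/swap
EOF

# Keep only datasets that do not exactly match an exclude line
# (-x: whole-line match, -f: read patterns from file, -v: invert)
echo "$datasets" | grep -v -x -f /tmp/zfsnap-exclude.txt
```

Note that exact-line matching excludes only the named datasets themselves; rsync-style wildcard patterns, or excluding a dataset's children too, would need slightly more pattern handling.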
the other script uses the [...]
I was just looking for such an option in zfsnap. I would fully support @bougui's suggestion. That syntax (but without the equals sign) is also used by tar and tarsnap. From FreeBSD's tar(1):
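The `--exclude PATTERN` behaviour the commenters want zfsnap to mimic can be seen in tar itself. A quick demonstration (the paths under /tmp are throwaway examples):

```shell
#!/bin/sh
# Demonstrate tar's --exclude flag, the model proposed for zfsnap:
# everything under tar-demo is archived except the "skip" directory.
mkdir -p /tmp/tar-demo/keep /tmp/tar-demo/skip
touch /tmp/tar-demo/keep/a /tmp/tar-demo/skip/b

# Create the archive, excluding any path component named "skip"
tar -cf /tmp/tar-demo.tar --exclude 'skip' -C /tmp tar-demo

# List the archive contents: keep/a is present, skip/ is not
tar -tf /tmp/tar-demo.tar
```

A repeatable `--exclude PATTERN` flag (plus an `--exclude-from FILE` variant) would map naturally onto filtering the dataset list that a recursive zfsnap run walks.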
It would be great to have this in zfsnap too!
We have a zpool with a ton of datasets that come and go, but we also have swap on the same zpool. We want to use the recursive snapshot feature without snapshotting the swap dataset. I thought this option existed, but maybe not?
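Until an exclude option exists, the usual workaround for the swap case is to take the recursive snapshot and then immediately destroy it on the dataset you wanted excluded. A hedged sketch, with `zfs` stubbed so it runs without a real pool (remove the stub on a live system; the pool and dataset names are hypothetical):

```shell
#!/bin/sh
# Workaround sketch: recursive snapshot, then destroy the one snapshot
# we didn't want. "zfs" is stubbed to print commands instead of
# executing them; names (tank, tank/swap) are hypothetical.
zfs() { echo "zfs $*"; }   # stub: print instead of executing

snap="daily-$(date +%Y-%m-%d)"
zfs snapshot -r "tank@$snap"     # snapshot every dataset under tank
zfs destroy "tank/swap@$snap"    # drop the unwanted swap snapshot
```

The drawback is that the swap snapshot briefly exists (and briefly pins swap blocks), which is usually tolerable but not equivalent to never creating it.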