
znapzendzetup disable on child #154

Closed
odoucet opened this issue Jun 10, 2015 · 21 comments


odoucet commented Jun 10, 2015

I've created a znapzend policy on 'mypoolname/user1' with recursive=ON.
I have several child datasets under it:

mypoolname/user1/webdata
mypoolname/user1/sqldata
mypoolname/user1/olddata

I would like to disable znapzend (temporarily or permanently) on a specific child, let's say "olddata". This cannot be done:

znapzendzetup disable mypoolname/user1/olddata
ERROR: cannot disable backup config for mypoolname/user1/olddata. Did you create it?

If I try to set the property myself, it does not help:

zfs set org.znapzend:enabled=off mypoolname/user1/olddata

And now "disable" shows a different message:

znapzendzetup disable mypoolname/user1/olddata
ERROR: property recursive not set on backup for mypoolname/user1/olddata
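For context, znapzend keeps its whole backup plan as a set of `org.znapzend:*` ZFS properties on the configured parent dataset, so inspecting the child shows why the disable fails: only the one property set by hand exists there, not a full plan. A diagnostic sketch (the `zfs` function below is a simulation standing in for the real command, in the spirit of znapzend's own simulated-command tests; the real invocation is shown in the comment):

```shell
#!/bin/sh
# Simulated `zfs get` output for the child: it carries only the single
# property that was set by hand, not a complete znapzend plan.
zfs() {
    # a real invocation would be: zfs get -o property,value,source all <dataset>
    cat <<'EOF'
PROPERTY                 VALUE  SOURCE
org.znapzend:enabled     off    local
EOF
}

# Count the org.znapzend properties present on the child; a complete plan
# would carry several of them (e.g. enabled, recursive, source plan, ...).
count=$(zfs get all mypoolname/user1/olddata | grep -c 'org.znapzend')
echo "znapzend properties found: $count"
```

With only one property present, znapzendzetup has no plan to operate on, hence the "Did you create it?" and "property recursive not set" errors.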
Collaborator

hadfl commented Jun 10, 2015

As of now, znapzend does not support excluding child datasets when running with a recursive policy. You'll have to set up all child datasets individually (non-recursive) if you want to exclude some of them...
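The per-dataset workaround described above can be scripted. A minimal sketch, assuming the dataset names from this issue; the SRC/DST plans and the backup destination are illustrative only (check `znapzendzetup --help` for your version's exact syntax), and the commands are echoed rather than executed:

```shell
#!/bin/sh
# Children of mypoolname/user1 that should get their own non-recursive
# plans; in real use this list would come from:
#   zfs list -r -H -o name mypoolname/user1
children="mypoolname/user1/webdata mypoolname/user1/sqldata"

for ds in $children; do
    # echo instead of executing, so this only prints the per-dataset
    # setup commands; plans and destination are illustrative.
    echo znapzendzetup create SRC '7d=>1d' "$ds" DST '30d=>1d' "backup/$(basename "$ds")"
done
```

olddata is simply left out of the list, which is the "exclusion" in this scheme.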

Author

odoucet commented Jun 10, 2015

This can be labeled as a feature request, then, because I see znapzend already loops over each child dataset for zfs send/recv. It may be simple to skip one if org.znapzend:enabled=off.
I'll dig into the source code and see if I can make a pull request for this.

EDIT: the snapshot itself is recursive and will also be taken on the child in question... will see what can be done.

@jjlawren

@odoucet, did you get anywhere with this?

I think the request would be interesting even if it only allowed overriding the DST option. I can live with local snapshots for transient datasets as long as they're not sent to a backup server.

Author

odoucet commented Apr 11, 2016

Did not make any progress at all... I was busy with something else, then totally forgot about this one.

Contributor

morph027 commented Nov 1, 2016

Right now, I'm keeping two separate dataset trees for this.

e.g.

tank/critical
tank/non-critical

tank/critical is znapped recursively; the other won't get znapped at all. To exclude a child, move it into the non-snapshotted tree:

zfs rename -p tank/critical/disable-me tank/non-critical/disable-me

@johnramsden

This would be a killer feature.

Contributor

flixman commented Sep 10, 2018

I am not good at Perl, but I understand that the way to go is to take the regular snapshot and, right after that (if recursive), check which child ZFS datasets have the flag, i.e. org.znapzend:enabled, set to off... no?

Owner

oetiker commented Sep 11, 2018

Yes, the send operations are 'per fs' anyway, so I think the change should not be all that big.

Contributor

flixman commented Sep 23, 2018

@oetiker: I have written a possible patch addressing this issue, but I do not have a setup around to test it. Do you? How can I send it to you? (The whole diff is 50 lines, but I am not very skilled in Perl, and I guess it can be made smaller.) If you do not have any testbed around I will create a VM and so on... but yeah, Sunday morning, I guess... :-D

Owner

oetiker commented Sep 23, 2018

The tests we have are all included with the package... they work by using simulated versions of the zpool, zfs, and ssh commands...

Owner

oetiker commented Sep 23, 2018

If you create a PR I will be glad to review it.

oetiker pushed a commit that referenced this issue Oct 2, 2018
As the sending of the ZFS streams is per stream, we can exclude child ZFS datasets from being snapshotted/sent by setting the property "org.znapzend:enabled" to "off" on them. For the dataset being processed, in case the recursive flag is set, its children are checked and their recently created snapshots removed.
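The behaviour that commit describes can be sketched in shell terms. This is a pure simulation (the dataset names, snapshot name, and the `zfs_enabled` helper are all illustrative stand-ins, in the spirit of znapzend's simulated-command test setup; the real implementation is Perl inside znapzend):

```shell
#!/bin/sh
# Simulated state: which datasets exist, and which have
# org.znapzend:enabled=off set locally.
DATASETS="tank/data tank/data/webdata tank/data/sqldata tank/data/olddata"
DISABLED="tank/data/olddata"

zfs_enabled() {
    # stands in for: zfs get -H -o value org.znapzend:enabled "$1"
    for d in $DISABLED; do
        if [ "$d" = "$1" ]; then echo off; return; fi
    done
    echo on
}

SNAP="znapzend-example-snapshot"

# 1) the recursive snapshot of the parent covers every child...
for ds in $DATASETS; do
    echo "created: $ds@$SNAP"
done

# 2) ...then children with enabled=off get their fresh snapshot destroyed,
# so they are neither retained nor sent to any destination.
for ds in $DATASETS; do
    if [ "$(zfs_enabled "$ds")" = "off" ]; then
        echo "destroyed: $ds@$SNAP"
    fi
done
```

The net effect matches the commit message: one atomic recursive snapshot, followed by pruning on the excluded children.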
Contributor

flixman commented Oct 3, 2018

I have discovered a bug in my commit: it seems that when znapzend gets reloaded, it refreshes all the plans. As it finds the property org.znapzend:enabled on a child ZFS dataset, it considers that to be a configured dataset... but as there is only that one property, it reports it and fails. I see two ways to solve this; maybe you can advise on one (or propose another)?

a) In Config.pm, line 71: instead of dying if a dataset has missing properties, I feel more inclined to report it as a warning and continue with the next dataset.

b) Define their own backup plans for the children ZFS datasets. Might this pose a problem if the parent dataset has the recursive flag set but the children do not?

@speedytom

Hello everyone,
I have a new solution based on item a) from the previous comment. From my point of view, the best compromise in our situation is to add a new flag. It is a kind of workaround, since this part of the source code is not object-oriented. I used "invisible=on" for an inheriting child in my patch. I attached it here:

0001-The-second-patch-for-znapzendzetup-disable-on-child-.patch.txt

Please have a look at it; I am not a Perl geek.
Thanks for your feedback.

Owner

oetiker commented Oct 9, 2018

Hi @speedytom, please create a PR; then it is much simpler to look at your idea.

Contributor

flixman commented Oct 10, 2018

@speedytom Hi, one question: doesn't your solution require fully populating the datasets that are to be skipped? The idea, AFAIU, is to discard the descendant datasets with the minimum required actions, and I think making full use of recursive datasets is a must.

I checked in a patch a few days ago that addresses this by:
a) If any dataset is missing properties, this is no longer an error but a warning. It is reported and that dataset is skipped (and this only happens when znapzend is started or reloaded, so it is not very intrusive).
b) In the snapshotting process: after taking the snapshot on a dataset with the "recursive" flag on, descendant datasets are checked for the "enabled" flag set to "off", and their snapshots are marked for removal (and thus not sent).

I think a cache should be populated with this information (the disabled descendants of a recursive dataset), so that these are only traversed when checking the backup, but... yeah, still no time for it.

Owner

oetiker commented Oct 10, 2018

@flixman how about detecting that the enabled=off property is the only one set and then NOT complaining about that fact, as it is a valid state?
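The check suggested here could look roughly like this. A sketch only: the property set is simulated as a space-separated string, the `classify` helper and its return strings are hypothetical, and the actual fix would live in znapzend's Perl Config code:

```shell
#!/bin/sh
# Decide what to do with a dataset based on which org.znapzend:*
# properties it carries (simulated as a space-separated list of
# name=value pairs; helper and categories are illustrative).
classify() {
    props="$1"
    case "$props" in
        "")
            echo "not configured" ;;
        "org.znapzend:enabled=off")
            # enabled=off as the ONLY property is a valid state:
            # a child excluded from its parent's recursive plan.
            echo "skip silently" ;;
        *org.znapzend:src_plan=*)
            echo "full plan" ;;
        *)
            # some org.znapzend properties but no plan: worth a warning
            echo "warn: incomplete config" ;;
    esac
}

classify "org.znapzend:enabled=off"                              # excluded child
classify "org.znapzend:enabled=on org.znapzend:src_plan=7d=>1d"  # configured dataset
classify "org.znapzend:enabled=on"                               # genuinely broken config
```

The point is that the "only enabled=off" case is distinguished from a genuinely incomplete config, so the reload no longer dies on excluded children.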

@speedytom

Thanks, guys, for your quick feedback; I appreciate it. I will try to explain my intent in detail. I wanted to distinguish between parent, child, and other datasets. From my point of view, znapzend works without any problem with the first patch. I wanted to fix znapzendzetup: I would like to see parent and child datasets when I run the command "znapzendzetup list". My patch should suggest, let's say, a new feature. The first step was to add a new flag to all child datasets. Maybe "invisible=on" isn't the right name for the new flag. What about the name "children"?

I have to say that I am a beginner with ZFS and znapzend. Your work is really great. Is it OK if I send you some patches from time to time?

Contributor

flixman commented Oct 10, 2018

@oetiker Good! I will prepare a PR for tomorrow afternoon :-)


stale bot commented Jun 28, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Jun 28, 2021

MrKich commented Jun 29, 2021

Still a wanted feature.

@stale stale bot removed the stale label Jun 29, 2021

stale bot commented Aug 28, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Aug 28, 2021
@stale stale bot closed this as completed Sep 4, 2021