
Destroying zero-sized snapshots is unexpected and undocumented #33

Open · adaugherity opened this issue Jan 6, 2016 · 6 comments

adaugherity commented Jan 6, 2016

I am using this on FreeBSD (installed via ports) to replace the Solaris auto-snapshot functionality, and it seems like a nice drop-in replacement, except for one thing: some snapshots unexpectedly disappeared, breaking my backup/replication. Example:

$ zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
backup               949G   894G   256K  /export/backup
backup/mysql         422G   894G  75.4G  /export/backup/mysql
backup/www           112G   894G  48.9G  /export/backup/www
...

I replicate this offsite via an incremental zfs send/receive of the root FS (zfs send -R -I snap1 backup@snap2; for details see https://github.com/adaugherity/zfs-backup). The child FSes (mysql, www) change frequently, but the root FS does not, and zfs-auto-snapshot was (un)helpfully removing these empty snapshots, preventing me from using them to easily replicate all children.
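For illustration, a minimal sketch of the failure mode (the receive side and host name are placeholders, not the exact zfs-backup invocation):

$ zfs send -R -I snap1 backup@snap2 | ssh offsite zfs receive -dFu backup
# If the zero-sized backup@snap1 has been destroyed, the incremental source is
# gone and this send fails -- even though backup/mysql@snap1 and
# backup/www@snap1 may still exist (they were never empty).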

After looking at the source I discovered the -k option, which does exactly what I need, but this is not mentioned in the README, nor is the fact that removing empty snapshots is the default (a difference from Solaris, AFAICT). I can see why someone might want this (and I don't expect you to change the default just to match Solaris), but it should be documented.
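The workaround in brief -- a sketch assuming the crontab-style invocation zfs-auto-snapshot <label> <count> used by the FreeBSD port (only -k itself is confirmed in this thread; check the script's usage output for the rest):

# Keep zero-sized snapshots instead of destroying them:
0 * * * * root /usr/local/sbin/zfs-auto-snapshot -k hourly 24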

bdrewery (Owner) commented Jan 6, 2016

I originally put it in because I thought the OpenSolaris version was doing it. A few months ago I was reviewing the old OpenSolaris code, trying to see whether it really did. I started to set up a system to test it on and haven't gotten back to it yet.

My intent was to stay very close to the original implementation. I don't like changing defaults after this long though. I understand the need for this and will ensure it is documented.

An alternative, I believe, is to use bookmarks in the backup script, which would remove the problem of missing snapshots. Though I'm not sure whether they can be recursive to fix the problem you're noting.
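For reference, a minimal sketch of the bookmark approach (names are placeholders; zfs send accepts a bookmark only as the -i source, not with -I, and whether it composes with -R is exactly the open question above):

$ zfs bookmark backup@snap1 backup#snap1
$ zfs destroy backup@snap1                  # the snapshot itself can now go away
$ zfs send -i backup#snap1 backup@snap2 | ssh offsite zfs receive -u backup
# The bookmark preserves only the incremental "from" point, not the snapshot's
# data, so it costs almost nothing to keep around.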

Thanks for the tip on your project. I've been wanting to integrate an automatic snapshot-backup feature (#15) and haven't had time to do so. I may just make it simpler to use yours, and document that.

bdrewery (Owner) commented Jan 6, 2016

bookmarks feature: https://www.illumos.org/issues/4369

adaugherity (Author)


To be fair, they may have changed it in OpenSolaris or Solaris 11 after I installed it on my Solaris 10 server -- I never upgraded to Solaris 11 and am now moving to FreeBSD.

I installed version 0.12 of SUNWzfs-auto-snapshot (with minor patches for Sol10), or specifically hg changeset 41:175684ca36d7 (2009-06-25). Since hg.opensolaris.org is gone, that may be meaningless now, but if you have the hg repo and feel like digging through the history, there you go.

dlangille

Would this situation explain why my 24 hourly snapshots are spread across 26 days?

tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-08-31-06h00    13.8M      -  9.80G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-01-13h00    1.91M      -  9.79G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-01-14h00    1.93M      -  9.74G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-02-18h00    53.3M      -  9.78G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-02-19h00    53.3M      -  9.78G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-03-05h00    63.3M      -  9.79G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-05-14h00    2.08M      -  9.69G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-05-19h00    1.94M      -  9.58G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-05-20h00    2.58M      -  9.58G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-05-21h00    1.94M      -  9.58G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-06-15h00    1.96M      -  9.58G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-06-16h00     468K      -  9.65G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-06-18h00     468K      -  9.65G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-07-04h00    4.18M      -  9.50G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-08-04h00    53.4M      -  9.50G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-15-05h00    3.28M      -  9.52G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-17-05h00    42.2M      -  9.81G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-19-05h00     648K      -  9.70G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-19-11h00     296K      -  9.70G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-19-13h00    2.05M      -  9.70G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-20-11h00    2.08M      -  9.70G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-20-13h00    2.06M      -  9.70G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-22-11h00     240K      -  9.73G  -
tank_fast/poudriere/data/packages@zfs-auto-snap_hourly-2019-09-26-14h00        0      -  9.73G  -
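That pattern is consistent with the zero-size pruning discussed above: each run destroys snapshots whose used space is 0, so the 24 surviving hourlies span however many days it took to accumulate 24 hours' worth of actual changes. A quick way to inspect this with stock zfs (dataset name taken from the listing above):

$ zfs list -H -t snapshot -o name,used,creation -s creation tank_fast/poudriere/data/packages
# Hours that produced no changes leave no surviving snapshot, so the gaps in
# creation times line up with idle periods.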

brenc commented Nov 2, 2019

@dlangille - I've seen this too, so I've started using -k to keep zero-sized snapshots. Otherwise this can be confusing.

dlangille

Whereas I prefer that the zero-sized snapshots be deleted; for me, there is no reason to keep them. I would rather have N snapshots reaching back in time than N snapshots that are all identical.
