
Error: You must initialize tank/virt for zrep after already sending data #9

Closed
darkpixel opened this issue Sep 30, 2015 · 13 comments

@darkpixel

I just spent several days backing up ~200 GB off-site using zrep init. Now that it's finished, I tried zrep sync and it's throwing Error: You must initialize tank/virt for zrep

Here's the full command:

root@usvansdnas01:~# zrep -t zrep-offsite sync tank/virt
DEBUG: overiding stale lock on tank/virt from pid 3501
zrep_sync could not find sent snap for tank/virt.
Error: You must initialize tank/virt for zrep
root@usvansdnas01:~# 
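For reference, the offsite replication was presumably set up along these lines (hosts and filesystems are taken from this thread; the exact init arguments are a reconstruction from zrep's usage text, not from the original session):

# initial full send (the multi-day transfer):
zrep -t zrep-offsite init tank/virt uslog00nas03.-redacted-.local backup-pool/usvansd/virt
# routine incremental sync -- the step that now fails:
zrep -t zrep-offsite sync tank/virt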

On the source:

root@usvansdnas01:~# zfs list -rt snapshot tank/virt | grep zrep
tank/virt@zrep-local_000000                       1.49G      -   132G  -
tank/virt@zrep-offsite_000000                     1.28G      -   133G  -
tank/virt@zrep-local_000001                       39.6M      -   133G  -
root@usvansdnas01:~# zfs get all tank/virt | grep zrep
tank/virt  zrep-local:savecount     5                              local
tank/virt  zrep-local:master        yes                            local
tank/virt  zrep-local:src-fs        tank/virt                      local
tank/virt  zrep-local:src-host      usvansdnas01                   local
tank/virt  zrep-offsite:savecount   5                              local
tank/virt  zrep-offsite:src-host    usvansdnas01                   local
tank/virt  zrep-local:dest-host     localhost                      local
tank/virt  zrep-offsite:lock-time   20150930150628                 local
tank/virt  zrep-offsite:dest-fs     backup-pool/usvansd/virt       local
tank/virt  zrep-offsite:dest-host   uslog00nas03.-redacted-.local  local
tank/virt  zrep-offsite:lock-pid    6418                           local
tank/virt  zrep-offsite:master      yes                            local
tank/virt  zrep-offsite:src-fs      tank/virt                      local
tank/virt  zrep-local:dest-fs       backup-pool/virt               local
root@usvansdnas01:~#


On the dest:

root@uslog00nas03:~# zfs list -rt snapshot backup-pool/usvansd/virt
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
backup-pool/usvansd/virt@zrep-offsite_000000      0      -   205G  -
root@uslog00nas03:~# zfs get all backup-pool/usvansd/officeshare | grep zrep
backup-pool/usvansd/officeshare  zrep-offsite:src-fs     tank/officeshare                 local
backup-pool/usvansd/officeshare  zrep-offsite:src-host   usvansdnas01                     local
backup-pool/usvansd/officeshare  zrep-offsite:savecount  5                                local
backup-pool/usvansd/officeshare  zrep-offsite:dest-host  uslog00nas03.-redacted-.local    local
backup-pool/usvansd/officeshare  zrep-offsite:dest-fs    backup-pool/usvansd/officeshare  local
root@uslog00nas03:~#

You'll notice I have another backup to a 'local' pool, and that is working fine.
@darkpixel
Author

Also from the source NAS:

root@usvansdnas01:~# zrep -t zrep-offsite status
tank/virt                                      last synced [NEVER]
root@usvansdnas01:~# zrep -t zrep-local status
tank/virt                                      last synced Wed Sep 30 13:14 2015
root@usvansdnas01:~# 

@ppbrown
Member

ppbrown commented Oct 1, 2015

Hmm...

How can you be sure it successfully completed the sync for zrep-offsite?


@darkpixel
Author

I'm basing that on the dataset being present on the destination, along with the zrep-offsite_000000 snapshot.
If a transfer fails partway through, ZFS removes the incomplete snapshot.
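One way to double-check that independently of zrep is to compare the snapshot's guid property on both ends; a completed zfs receive leaves the destination snapshot with the same GUID as the source (plain zfs commands, hosts and names as above):

# on the source:
zfs get -H -o value guid tank/virt@zrep-offsite_000000
# on the destination -- the value should be identical if the receive completed:
ssh uslog00nas03 zfs get -H -o value guid backup-pool/usvansd/virt@zrep-offsite_000000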

@ppbrown
Member

ppbrown commented Oct 1, 2015

It would be nice to see the output of the initial (failed) sync, if you have it.


@ppbrown
Member

ppbrown commented Oct 1, 2015

Also, I'd like to know if the remote sent snapshot has the zrep-offsite:sent property set.
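Checking that should look something like this (the property name follows the zrep-offsite tag used in this thread; -H and -o are ordinary zfs get flags):

ssh uslog00nas03 zfs get -H -o value zrep-offsite:sent backup-pool/usvansd/virt@zrep-offsite_000000
# zrep records a value there after a successful send; a bare '-' means it was never set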


@darkpixel
Author

Another tech ran the initial sync; he told me it completed.

Here's the snapshot on the receiving end:

root@uslog00nas03:~# zfs get all backup-pool/usvansd/virt@zrep-offsite_000000
backup-pool/usvansd/virt@zrep-offsite_000000  zrep-offsite:savecount  5                          inherited from backup-pool/usvansd/virt
backup-pool/usvansd/virt@zrep-offsite_000000  zrep-offsite:src-host   usvansdnas01               inherited from backup-pool/usvansd/virt
backup-pool/usvansd/virt@zrep-offsite_000000  zrep-offsite:dest-host  uslog00nas03.-redacted-.local  inherited from backup-pool/usvansd/virt
backup-pool/usvansd/virt@zrep-offsite_000000  zrep-offsite:dest-fs    backup-pool/usvansd/virt   inherited from backup-pool/usvansd/virt
backup-pool/usvansd/virt@zrep-offsite_000000  zrep-offsite:src-fs     tank/virt                  inherited from backup-pool/usvansd/virt
root@uslog00nas03:~# 

@darkpixel
Author

  • The other tech is fairly new to Linux, so I can't vouch for it completing successfully.

I should be able to manually sync the two filesystems if zrep can't recover from this error state--I'm mostly curious if I can set a few properties after the manual sync is complete to convince zrep to take over syncing again. ;)
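If it comes to that, the manual catch-up would presumably be an incremental send from the last common snapshot (hosts and datasets from this thread; the catch-up snapshot name is made up, and whether zrep will then resume cleanly is exactly the open question):

zfs snapshot tank/virt@manual-catchup
# -I sends the new snapshot plus any intermediates since the last common one;
# add -F to the receive if the destination has been modified since:
zfs send -I tank/virt@zrep-offsite_000000 tank/virt@manual-catchup | ssh uslog00nas03.-redacted-.local zfs recv backup-pool/usvansd/virt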

@ppbrown
Member

ppbrown commented Oct 1, 2015

HE LIIIEEESSS! :p

In future, you should save the initial sync output.
Particularly from that tech :p

If you look at the _sync() function, you will see that it attempts an ssh to set the :sent property and does an EXPLICIT status check. Surely it failed, since the property isn't set; the script will clearly complain when it fails.

The only other possibility is that your systems are silently failing, yet reporting success, on calls such as

ssh uslog00nas03 zfs set zrep-offsite:sent=123455 backup-pool/usvansd/virt@zrep-offsite_000000

If that is happening, I suggest you be really, really worried.
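That leg is easy to test in isolation, along these lines (same host and property as the call above; the test value is arbitrary, and zfs inherit clears it again afterwards):

ssh uslog00nas03 zfs set zrep-offsite:sent=test backup-pool/usvansd/virt@zrep-offsite_000000
ssh uslog00nas03 zfs get -H -o value zrep-offsite:sent backup-pool/usvansd/virt@zrep-offsite_000000
ssh uslog00nas03 zfs inherit zrep-offsite:sent backup-pool/usvansd/virt@zrep-offsite_000000
# if the set reports success but the get still prints '-', the failure really is silent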


@ppbrown
Member

ppbrown commented Oct 1, 2015

Funny you should ask.

I just finished writing an update. I haven't officially published it yet, but you can grab it from git:

https://github.com/bolthole/zrep/raw/master/zrep

and then use zrep -t blah sentsync blahblah@blah_000000 to mark it as good.
I think.
Let me know how it goes.


@ppbrown ppbrown self-assigned this Oct 6, 2015
@darkpixel
Author

Tried running sentsync:

root@usvansdnas01:~# zfs list -rt snapshot tank/virt | grep zrep
tank/virt@zrep-local_000000                       1.49G      -   132G  -
tank/virt@zrep-offsite_000000                     1.28G      -   133G  -
tank/virt@zrep-local_000001                       1.84G      -   133G  -
tank/virt@zrep-local_000002                       1.82G      -   133G  -
tank/virt@zrep-local_000003                       89.7M      -   133G  -
tank/virt@zrep-local_000004                       5.38M      -   133G  -
root@usvansdnas01:~# zrep -t offsite  sentsync tank/virt@zrep-offsite_000000
Error: tank/virt@zrep-offsite_000000 does not follow zrep naming standards. Cannot continue
root@usvansdnas01:~# 

@ppbrown
Member

ppbrown commented Oct 22, 2015

Well, yeah... like it said, you didn't follow the naming standards :(
It needs to be zrep_000000 -- or rather, since you made the tag "offsite", it should be @offsite_000000.
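In other words, the tag passed with -t has to match the snapshot's prefix, so for the tag this filesystem was actually initialized with, the call would presumably be:

zrep -t zrep-offsite sentsync tank/virt@zrep-offsite_000000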


@darkpixel
Author

Aah. I thought it was looking for the snapshot name including 'zrep-'.

@darkpixel
Author

Ended up manually re-syncing; didn't have time to test.
