
Having trouble with syncoid error on line 787 #143

Closed
ledoktre opened this issue Sep 6, 2017 · 12 comments

@ledoktre

ledoktre commented Sep 6, 2017

I have two servers in this little lab. I run KVM on both, with raid-z (3 x SSD) and a mirror (2 x 4TB) on each, and each is configured with sanoid to create snapshots as well.

My intended use case is to syncoid all of rpool0 on vserver1 to rpool1/backup/vserver1 on both boxes, and likewise all of rpool0 on vserver2 to rpool1/backup/vserver2 on both boxes.

Two issues with my little plan thus far.

  1. When I run syncoid recursively, I end up with errors on subsequent runs:

could not find any snapshots to destroy; check snapshot names.

CRITICAL ERROR: Target rpool1/backup/vserver1/pool0 exists but has no snapshots matching with rpool0/pool0!
Replication to target would require destroying existing
target. Cowardly refusing to destroy your existing target.

I figured out that this is because rpool0/pool0 itself is not included in sanoid, but its child datasets are. This is where I create folders for the virtual machines and store the qcow2 files. The same behaviour also happens on /var/tmp, which I am not including in sanoid either. For now, I just wrote a little bash script to syncoid only the datasets that do have snapshots (see the rough sketch below). The error went away, but ideally I would like to just use recursive.

  2. I am now getting a different error, and it appears to happen when using syncoid to transfer to the opposite box, though I am not sure whether that part is pertinent. Error:

WARNING: /usr/bin/ssh -c chacha20-poly1305@openssh.com,arcfour -p 11022 -S /tmp/syncoid-root-root@10.4.8.12-1504656237 root@10.4.8.12 " /sbin/zfs destroy rpool1/backup/vserver1/var@syncoid_vserver1_2017-09-05:19:03:42; /sbin/zfs destroy rpool1/backup/vserver1/var@syncoid_vserver1_2017-09-05:18:57:44" failed: 256 at /opt/apps/sanoid/syncoid line 787.

I probably get 6 of them every time I run the script. I have no idea what is causing that.
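
For reference, the workaround script for issue 1 is roughly along these lines (a minimal sketch; the dataset names in the list are hypothetical placeholders, not my actual layout):

#!/bin/bash
# Replicate only the child datasets that sanoid actually snapshots,
# skipping parents like rpool0/pool0 and rpool0/var/tmp that have no snapshots.
# NOTE: the dataset names below are placeholders.
for ds in pool0/vm1 pool0/vm2 var var/spool; do
    /opt/apps/sanoid/syncoid "rpool0/$ds" "rpool1/backup/vserver1/$ds"
done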

Any help would be most appreciated.

Thanks,

@ledoktre
Author

ledoktre commented Sep 6, 2017

It appears that issue 2 above is not limited to SSH. Here is another error, but local:

could not find any snapshots to destroy; check snapshot names.
could not find any snapshots to destroy; check snapshot names.
WARNING: /sbin/zfs destroy rpool1/backup/vserver1/var/spool@syncoid_vserver1_2017-09-05:19:04:03; /sbin/zfs destroy rpool1/backup/vserver1/var/spool@syncoid_vserver1_2017-09-05:19:03:47 failed: 256 at /opt/apps/sanoid/syncoid line 787.

@jimsalterjrs
Owner

First things first: if you're getting an error 256, I believe you're using an older version of Syncoid. Update that first, and then let's try again, please.

Also, when pasting in errors, please paste the actual command being run as well as the error message.
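
If you installed from a git clone at /opt/apps/sanoid (which your paths suggest), updating is just a pull, and the VERSION file will confirm what you are on afterwards:

cd /opt/apps/sanoid   # install path taken from the error messages above
git pull
cat VERSION           # confirm the version after pulling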

@ledoktre
Author

ledoktre commented Sep 7, 2017

For the error 256: I cloned maybe a week ago, perhaps a tad longer. According to VERSION on both boxes, both are 1.4.16. I'll check to see if there is a more recent update.

For the other issue, the command I am using is one of these two:

/opt/apps/sanoid/syncoid --recursive rpool0 rpool1/backup/vserver2
/opt/apps/sanoid/syncoid --recursive rpool0 -sshport=11022 root@10.24.28.11:rpool1/backup/vserver2

This is run from vserver2; the first line is intended to sync to a second pool locally, and the second to back up the same datasets to a secondary pool on the opposite box.

Thanks,

UPDATE: I just updated both; the version is still the same, but the sanoid.spec and README.md files were updated on both boxes. FWIW.

@yilmazn

yilmazn commented Aug 30, 2018

I have a single script that first takes snapshots with sanoid --cron and then sends the taken snapshots to a remote host with syncoid --no-sync-snap, but I'm getting this error message:

CRIT: --no-sync-snap is set, and getnewestsnapshot() could not find any snapshots on source!

#!/bin/bash

DATE=$(date +%Y_%m_%d)
LOG_FILE=/opt/zfs_scripts/logs/zfs_replicate_$DATE.txt

echo "$DATE - Creating Snapshots on rdsski01 > $LOG_FILE
/usr/local/sbin/sanoid -r --cron >> $LOG_FILE 2>&1
echo "" >> $LOG_FILE 2>&1

echo "*** Replicating ski dataset ***" >> $LOG_FILE 2>&1
/usr/local/sbin/syncoid --no-sync-snap pool/ski root@rdsski02:pool/ski >> $LOG_FILE 2>&1
echo "" >> $LOG_FILE 2>&1

cat $LOG_FILE | mail -s "Replication:INFO messages at rdsski01" root

@phreaker0
Collaborator

@yilmazn please post the output of "sanoid -r --cron --debug"

@yilmazn

yilmazn commented Sep 3, 2018

@phreaker0 @jimsalterjrs

[root@rdsski01 ~]# sanoid -r --cron --debug
DEBUG: initializing $config{pool/ski} with default values from /etc/sanoid/sanoid.defaults.conf.
DEBUG: overriding hourly on pool/ski with value from user-defined template template_production.
DEBUG: overriding daily on pool/ski with value from user-defined template template_production.
DEBUG: overriding monthly on pool/ski with value from user-defined template template_production.
DEBUG: overriding yearly on pool/ski with value from user-defined template template_production.
DEBUG: overriding autosnap on pool/ski with value from user-defined template template_production.
DEBUG: overriding autoprune on pool/ski with value from user-defined template template_production.
****** CONFIGS ******
$VAR1 = {
'pool/ski' => {
'autoprune' => 1,
'autosnap' => 1,
'capacity_crit' => '95',
'capacity_warn' => '80',
'daily' => '7',
'daily_crit' => '32',
'daily_hour' => '23',
'daily_min' => '59',
'daily_warn' => '28',
'hourly' => '5',
'hourly_crit' => '360',
'hourly_min' => '0',
'hourly_warn' => '90',
'min_percent_free' => '10',
'monitor' => 1,
'monitor_dont_crit' => 0,
'monitor_dont_warn' => 0,
'monthly' => '0',
'monthly_crit' => '35',
'monthly_hour' => '0',
'monthly_mday' => '1',
'monthly_min' => '0',
'monthly_warn' => '32',
'path' => 'pool/ski',
'yearly' => '0',
'yearly_crit' => '0',
'yearly_hour' => '0',
'yearly_mday' => '1',
'yearly_min' => '0',
'yearly_mon' => '1',
'yearly_warn' => '0'
}
};

Filesystem pool/ski has:
Use of uninitialized value in division (/) at /usr/local/sbin/sanoid line 496.
0 total snapshots (newest: 0.0 hours old)

INFO: taking snapshots...
taking snapshot pool/ski@autosnap_2018-09-03_17:17:04_daily
taking snapshot pool/ski@autosnap_2018-09-03_17:17:04_hourly
INFO: cache expired - updating from zfs list.
INFO: pruning snapshots...
[root@rdsski01 ~]#
[root@rdsski01 ~]#
[root@rdsski01 ~]#
[root@rdsski01 ~]# /usr/local/sbin/syncoid --no-sync-snap pool/ski root@rdsski02:pool/ski


Warning!
This is a private system. Unauthorized access to or use of this system is
strictly prohibited. By continuing, you acknowledge your awareness of and
concurrence with the Acceptable Use Policy of Case Western Reserve University.
Unauthorized users may be subject to criminal prosecution under the law and
are subject to disciplinary action under University policies.


NEWEST SNAPSHOT: syncoid_rdsski01.cwru.edu_2018-09-03:13:27:30
INFO: no snapshots on source newer than syncoid_rdsski01.cwru.edu_2018-09-03:13:27:30 on target. Nothing to do, not syncing.
[root@rdsski01 ~]#

@phreaker0
Collaborator

Interesting, so apparently the snapshot taking is not working. Can you manually run "zfs snapshot pool/ski@autosnap_2018-09-03_17:17:04_daily" and verify it was taken with "zfs list -t snapshot pool/ski"?

@yilmazn

yilmazn commented Sep 4, 2018

@phreaker0
[root@rdsski01 zfs_scripts]# zfs snapshot pool/ski@autosnap_2018-09-03_17:17:04_daily
[root@rdsski01 zfs_scripts]#

[root@rdsski01 zfs_scripts]# zfs list -t snapshot pool/ski
cannot open 'pool/ski': missing '@' delimiter in snapshot name
[root@rdsski01 zfs_scripts]#

[root@rdsski01 zfs_scripts]# zfs list -t snapshot pool/ski@autosnap_2018-09-03_17:17:04_daily
NAME USED AVAIL REFER MOUNTPOINT
pool/ski@autosnap_2018-09-03_17:17:04_daily 0B - 205K -
[root@rdsski01 zfs_scripts]#

@phreaker0
Collaborator

Sorry, the last command was missing the "-r" flag, but you figured it out. This is weird: sanoid doesn't report any errors when taking the snapshots, yet they aren't there? Which sanoid version are you using?
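
That is, the command should have been:

zfs list -r -t snapshot pool/ski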

@yilmazn

yilmazn commented Sep 4, 2018

@phreaker0 I'm using the latest version

@phreaker0
Collaborator

:-D I just noticed you used the "-r" option for sanoid, which probably translates to "--readonly", which explains the behaviour. Remove the flag and it will take snapshots :-)
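
In other words, the snapshot step in your script becomes:

/usr/local/sbin/sanoid --cron >> $LOG_FILE 2>&1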

@phreaker0
Collaborator

I guess this is resolved? If not, feel free to reopen.
