
zrep init doesn't work in FreeBSD 12.1 if ZREP_R=-R is used #153

Closed
maurizio-emmex opened this issue Mar 14, 2020 · 10 comments

Comments

@maurizio-emmex

maurizio-emmex commented Mar 14, 2020

# zfs create zdata/test
# zfs create zdata/test/test
# ZREP_R=-R  ZREP_OUTFILTER="lz4 -c" ZREP_INFILTER="lz4 -d" zrep init zdata/test  clover-nas2 pool2tb/test
Setting zrep properties on zdata/test
Warning: zfs recv lacking -o readonly
Creating readonly destination filesystem as separate step
Creating snapshot zdata/test@zrep_000000
Sending initial replication stream to clover-nas2:pool2tb/test
cannot mount 'pool2tb/test': mountpoint or dataset is busy
Destroying any zrep-related snapshots from zdata/test
Removing zrep-related properties from zdata/test
Error: Error transferring zdata/test@zrep_000000 to clover-nas2:pool2tb/test. Resetting

Regards
Maurizio

@ppbrown
Member

ppbrown commented Mar 14, 2020

Hi Maurizio,
I will need more info.
For example, can you show that a manual "zfs send -R" and "zfs recv" actually work on those systems?

@maurizio-emmex
Author

maurizio-emmex commented Mar 14, 2020

zrep works without problems on already-initialized filesystems on these systems. This is the VirtualBox VM filesystem that I have just synchronized:

# zrep list -v zdata/vbox
zdata/vbox:
compression     on
atime   off
aclmode discard
aclinherit      restricted
dedup   off
sync    disabled
zrep:src-host   clover-nas4
zrep:dest-host  clover-nas2
zrep:dest-fs    pool2tb/vbox
zrep:savecount  14
zrep:src-fs     zdata/vbox
zrep:master     yes
last snapshot synced: zdata/vbox@zrep_00017f

@ppbrown
Member

ppbrown commented Mar 14, 2020

I need to know how the commands zrep uses differ from what you are using successfully at the low level. From init only.

After initialization, it doesn't matter.

@maurizio-emmex
Author

maurizio-emmex commented Mar 15, 2020

I have written a little script for testing zfs send/recv:

#!/bin/sh
set -x
ssh root@clover-nas2 'zfs destroy -r pool2tb/test'
zfs destroy -r zdata/test
zfs create zdata/test
zfs create zdata/test/test
zfs snapshot -r zdata/test@snap
zfs send -R zdata/test@snap | ssh clover-nas2 'zfs recv -F pool2tb/test'
set +x

It works:

# ssh root@clover-nas2 'zfs list pool2tb/test'
NAME           USED  AVAIL  REFER  MOUNTPOINT
pool2tb/test   234K  1.71T   117K  /mnt/pool2tb/test

@ppbrown
Member

ppbrown commented Mar 15, 2020

Thank you for providing that information, and showing that the simple case works.

I'd certainly like to update zrep with what works for your system.
To do that, however, I'm going to need to know specifically what ISN'T working for your system :)

So you'll need to set some debugging in the zrep script to find out what specific arguments it's trying to use.
I see you are familiar with using "set -x" already, so hopefully you'll be able to track this down.
Please let me know if you get stuck.

Keep in mind that you will need to add "set -x" in the appropriate function.
Just setting it at the top level doesn't carry down into functions.
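The tracing advice can be illustrated with a tiny standalone sketch. This is hypothetical code, not zrep's actual functions; it just shows that "set -x" placed inside the function you are debugging prints each command (with its expanded arguments) to stderr:

```shell
#!/bin/sh
# Hypothetical illustration (not zrep's actual code): putting "set -x"
# inside the function being debugged traces every command it runs, so
# you can see the exact arguments that would be passed to zfs/ssh.
debug_send() {
    set -x                    # start tracing inside this function
    snap="$1"; host="$2"
    echo "would run: zfs send -R $snap | ssh $host zfs recv"
    set +x                    # stop tracing
}
debug_send zdata/test@zrep_000000 clover-nas2
```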

@jbreitman

jbreitman commented Jun 11, 2020

I resolved the issue by commenting out the line that creates the readonly filesystem on the destination server, and now I have to change the destination filesystem to readonly after the sync has completed. I only used the "special" version of the script when running the initial recursive sync.

# diff -u /tmp/zrep-1.8.0 /tmp/zrep-1.8.0-special
--- /tmp/zrep-1.8.0 2020-04-22 14:32:19.206398000 -0400
+++ /tmp/zrep-1.8.0-special 2020-04-22 15:12:14.389547000 -0400
@@ -1231,7 +1231,7 @@
 READONLYPROP=""
 print Warning: zfs recv lacking -o readonly
 print Creating readonly destination filesystem as separate step
- zrep_ssh $desthost zfs create $ZREP_CREATE_FLAGS -o readonly=on $vflags $destfs || zrep_errquit "Cannot create $desthost:$destfs"
+ # zrep_ssh $desthost zfs create $ZREP_CREATE_FLAGS -o readonly=on $vflags $destfs || zrep_errquit "Cannot create $desthost:$destfs"
 fi
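For reference, the workaround flow described above might look roughly like this (dataset and host names are the ones from the original report, and this needs a live ZFS setup, so it's a sketch rather than a verified transcript; `zfs set readonly=on` is the standard ZFS way to flip the flag afterwards):

```shell
# Sketch of the workaround, assuming the patched "special" script above.
# With the zfs-create line commented out, `zfs recv` creates the
# destination itself, so readonly must be set manually afterwards:
ZREP_R=-R ZREP_OUTFILTER="lz4 -c" ZREP_INFILTER="lz4 -d" \
    zrep init zdata/test clover-nas2 pool2tb/test
ssh root@clover-nas2 'zfs set readonly=on pool2tb/test'
```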

@Peter2121

Peter2121 commented Oct 26, 2020

I hit exactly the same problem. Disabling the line recommended by @jbreitman solved the issue: the initial sync is OK. I checked the 'readonly' attribute of all the datasets synced; it is already 'on', so I don't need to set them to 'readonly' manually.
I use zrep from Git on FreeBSD 12.1.

@ppbrown
Member

ppbrown commented Oct 26, 2020

Hmm...
Well, that's a bit disturbing.

...oh.
Right. That actually makes sense, I guess. Kinda.
The manual hack to create a filesystem isn't -R aware, so it makes the top-level filesystem.
Then the recv tries to "create" a sub-filesystem,
but can't, because "it's read-only".
Sigh.
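At the raw zfs level, the failure sequence described here would look roughly like this (dataset names from the original report; a sketch requiring a live ZFS setup, not a verified transcript):

```shell
# Pre-create only the top-level destination as readonly (the fallback
# path taken when `zfs recv` lacks -o readonly), then send recursively:
ssh clover-nas2 'zfs create -o readonly=on pool2tb/test'
zfs snapshot -r zdata/test@zrep_000000
zfs send -R zdata/test@zrep_000000 | ssh clover-nas2 'zfs recv -F pool2tb/test'
# The recv must now create the child pool2tb/test/test under a readonly
# parent, which is what trips it up.
```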

Thanks for the feedback guys. I've updated the github code to hopefully handle it automatically now.
Please let me know if there are any problems.

@Peter2121

Peter2121 commented Nov 2, 2020

So... I tested it by configuring another pair of servers for replication.
The initial replication passed correctly, and I could send some updates too.
There is something strange in this config though ;)
I configured the recursive replication as follows:
zrep -i zroot/jais backup-server zdata/usbwd2/flytrace-host/jails
zdata/usbwd2/flytrace-host/jails did not exist before the replication.
On the backup-server I received the dataset correctly with the following options:

NAME                              PROPERTY    VALUE                        SOURCE
...
zdata/usbwd2/flytrace-host/jails  mountpoint  /usbwd2/flytrace-host/jails  inherited from zdata/usbwd2
...

I have some sub-datasets, for example zroot/jais/mail.
This dataset was also received on the backup-server, but the mountpoint seems strange; it is the original one:

NAME                                   PROPERTY    VALUE                             SOURCE
zdata/usbwd2/flytrace-host/jails/mail  mountpoint  /jails/cbsd/jails-data/mail-data  received

Is this normal? I expected to have /usbwd2/flytrace-host/jails/cbsd/jails-data/mail-data here, so that if I want to mount this dataset on the destination host, it goes into a subfolder of /usbwd2/flytrace-host/jails where the root synced dataset is mounted. Maybe there could be an option to set during the initial sync, something like 'adjust the children's mountpoints' ;)

@ppbrown
Member

ppbrown commented Nov 2, 2020

Yeah, that's normal.
There might be an option; I vaguely recall someone asking for something similar, and I think I put something in.
But you can change things like properties on the destination filesystem, and it won't affect the sync. So not really a big deal :)

Thanks for the followup, I appreciate it.
Closing this as
fixed in 64d36ab
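As a concrete example of changing properties on the destination (dataset names from the setup above; the target mountpoint is only an illustrative choice, and this needs a live ZFS setup to run):

```shell
# Override the received mountpoint on the backup server so the child
# mounts under the parent's tree. A locally set property takes
# precedence over the received one and is not clobbered by later syncs.
ssh backup-server 'zfs set mountpoint=/usbwd2/flytrace-host/jails/mail \
    zdata/usbwd2/flytrace-host/jails/mail'
```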

@ppbrown ppbrown closed this as completed Nov 2, 2020