"Backup server" mode; init when data already exists #53
Top level should work now according to #48 ... I'm using latest zrep directly from git. |
On Sun, Jun 25, 2017 at 11:20 AM, Joachim Tingvold ***@***.***> wrote:
$ZREP_PATH init backup/client1/rpool client1.foo.bar rpool
This results in zrep trying to create the remote dataset, which fails
(since the remote dataset/pool already is present, and has lots of data);
This right here is the issue.
If you have an existing dataset, then you can't use zrep init from the other
side; you have to push it from where the data is.
So, one way or another, you need to take a full zfs send stream from the
client and get it over to the backup server.
If you can't temporarily grant ssh privs from client to server, then you
will have to do it manually, then follow the webpage on zrep for how to
deal with converting an existing dataset to zrep
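A minimal sketch of that manual seeding step (hedged: the commands are an assumption based on the thread, and the snapshot name `seed` is a placeholder, not zrep's actual naming scheme):

```shell
# On client1: snapshot the source pool recursively, then ship a full
# replication stream to the backup server over ssh.
zfs snapshot -r rpool@seed
zfs send -R rpool@seed | \
    ssh backup1.foo.bar zfs recv -F backup/client1/rpool
```

The `-F` on the receive side forces a rollback of the (empty) destination so the full stream can land; whether that is appropriate depends on the state of `backup/client1/rpool`.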
|
Okay, so if I manually send the entire rpool tree from client1.foo.bar into the backup/client1/rpool tree on backup1.foo.bar, then what? It's not really clear to me what zrep commands need to be issued on client1 and backup1, and in what order, for this to work. I guess the inverted behavior of the "backup server" mode sets me a bit off here (but then again the instructions in the documentation aren't that straightforward regarding this topic (-: ). |
… On Sun, Jun 25, 2017 at 1:53 PM, Joachim Tingvold ***@***.***> wrote:
Okay, so if I manually send the entire rpool tree from client1.foo.bar
into the backup/client1/rpool tree on backup1.foo.bar, then what? It's
not really clear to me what zrep commands needs to be issued on client1
and backup1, and in what order, for this to work.
I guess the inverted behavior of the "backup server" mode sets me a bit
off here (but then again the instructions in the documentation isn't
*that* straight forward regarding this topic (-: ).
|
But that section needs to be reversed due to how "backup server" mode works, yes? Then it would be something like this;
1. Copy the ZFS tree manually from client1 to backup1
2. Make the first snapshot (matching the zrep naming scheme) manually, on client1 or backup1?
3. Then we need to set the configuration, in "reverse". On client1 (aka "srchost"?): zrep changeconfig -f rpool backup1.foo.bar backup/client1/rpool. On backup1 (aka "dsthost"?): zrep changeconfig -f -d backup/client1/rpool client1.foo.bar rpool. Yes? No?
4. Then we do zrep sentsync ***@***.***_snapnamehere on client1?
5. Then we do zrep failover backup/client1/rpool on backup1 to "failover" to client1?
6. Then we do zrep refresh backup/client1/rpool on backup1 every time we want to take a new backup?
I'm sorry if I'm asking dumb questions here. Even if I have a somewhat OK understanding of how ZFS works, I feel that it wasn't really "straight forward" how to do this just by reading the documentation for zrep (the combination of "srchost", "dsthost", and the "reverse behaviour", with no real examples for pre-existing file systems in "backup server" mode). I'd be happy to make a PR explaining the steps in detail once I hammer this down, but until then; please bear with me (-: |
meta step 0: make sure you have FULL ssh root trust enabled from backup
server to client.
meta step 1: do a manual (zfs send) from client to backup server.
THIS IS NOT A "COPY"!!
THIS INVOLVES A SNAPSHOT, so your "step 2" is WRONG!!!
you also need to change the snapshot name on both sides. I guess I need to
clarify the docs a bit.
other than that, the rest of your steps look correct to me
…On Mon, Jun 26, 2017 at 9:06 AM, Joachim Tingvold ***@***.***> wrote:
But that section needs to be reversed due to how "backup server" mode
works, yes?
Then it would be something like this;
1. Copy the ZFS tree manually from client1 to backup1
2. Make first snapshot (matching the zrep naming scheme) manually on
client1 or backup1?
3. Then we need to set the configuration, in "reverse". On client1
(aka "srchost"?): zrep changeconfig -f rpool backup1.foo.bar
backup/client1/rpool. On backup1 (aka "dsthost"?): zrep changeconfig
-f -d backup/client1/rpool client1.foo.bar rpool. Yes? No?
4. Then we do zrep sentsync ***@***.***_snapnamehere on client1?
5. Then we do zrep failover backup/client1/rpool on backup1 to
"failover" to client1?
6. Then we do zrep refresh backup/client1/rpool on backup1 every time
we want to take a new backup?
I'm sorry if I'm asking dumb questions here. Even if I have a somewhat OK
understanding on how ZFS works, I feel that it wasn't really "straight
forward" how to do this by just reading the documentation for zrep (the
combination of "srchost", "dsthost", and the "reverse behaviour", and with
no real examples when having pre-existing file systems when in "backup
server" mode). I'd be happy to make a PR explaining the steps in detail
once I hammer this down, but until then; please bear with me (-:
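The quoted steps could be sketched roughly as follows (hedged: the command forms are taken from this thread, the placement of each command on client1 vs backup1 follows the numbered list above, and the snapshot name is a placeholder):

```shell
# Step 3: point each side at the other, in "reverse" (backup server mode).
# On client1 (srchost):
zrep changeconfig -f rpool backup1.foo.bar backup/client1/rpool
# On backup1 (dsthost):
zrep changeconfig -f -d backup/client1/rpool client1.foo.bar rpool

# Step 4: on client1, mark the manually-sent snapshot as synced
# (snapshot name is a placeholder for one matching zrep's scheme):
zrep sentsync rpool@zrep_000000

# Steps 5/6: on backup1, fail over once, then refresh per backup run:
zrep failover backup/client1/rpool
zrep refresh backup/client1/rpool
```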
|
So, there's still something funky with the order of dsthost/srchost. This is what I've done so far;
At this point, if I try to failover, the following happens;
It already thinks it's in read-only mode, and that
Now it suddenly says it's master mode, while trying to failover it says it's not master? Sigh. edit: fixed typo (extra |
This is the status after doing the above commands... looks correct to me, except that it's saying both
|
oops.
I didn't nitpick your prior list of commands.
I thought you had "takeover", but instead you had "failover".
that's your problem.
you have to run the takeover subcommand from the backup side, to take the
master role.
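That is, as a one-line sketch (dataset name taken from this thread):

```shell
# On backup1: claim the zrep master role for the replicated dataset.
zrep takeover backup/client1/rpool
```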
|
But the backup1 is already master? Or does the takeover command remove the readonly on flag? And on that topic, what does the readonly flag "mean"? Is readonly on == "we should never do zfs recv on this host"? |
doh... no, when you set it up you were supposed to let the actual master
sender have the master role, just like it says in the docs.
failover takes care of *everything*, if the system was properly set up
before that.
the zfs readonly property makes it so that even if you are naughty enough to
mount it, you can't write to it with normal methods.
Which is important, because if you DO write to it, you screw up incremental
zfs replication.
But interestingly, you can still zfs recv to it.
Note that administrative actions such as taking snapshots on the
destination side may ALSO screw up replication, depending on your
implementation of zfs.
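A minimal sketch of that readonly behaviour (hedged: dataset and snapshot names are placeholders from this thread; this requires a live ZFS system):

```shell
# Mark the replication target read-only.
zfs set readonly=on backup/client1/rpool

# Ordinary writes are now refused even if the dataset is mounted,
# e.g. touch on a file inside it fails with "Read-only file system".

# ...but an incremental receive still works, because zfs recv
# operates below the POSIX layer (run on client1; snapshot names
# are placeholders):
zfs send -i rpool@zrep_000001 rpool@zrep_000002 | \
    ssh backup1.foo.bar zfs recv backup/client1/rpool
```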
…On Tue, Jun 27, 2017 at 3:13 PM, Joachim Tingvold ***@***.***> wrote:
But the backup1 is already master? Or does the takover command remove the readonly
on flag? And on that topic, what does the readonly flag "mean"? Is readonly
on == "we should never do zfs recv on this host"?
|
But there is still something wrong here? You say that Your code clearly states the following;
... and as such, my previous steps should be correct (skipping the "copy the ZFS tree manually beforehand", as that is now solved);
This results in the following;
Which seems correct to me? Source is At this point, according to the documentation you linked to, we need to do a At this point;
I feel I have tested every possible combination now, but it still isn't clear to me how this is supposed to work. Instead of all this back-and-forth, can't you just give the exact commands you believe are needed to set this up from scratch, given the test values I've provided? (see below). Should be 3 or 4 commands, based on what you've said so far (two
|
Due to the lack of an answer, and not getting this to work (I tried other things not mentioned in this issue), I gave up. I tried to figure out where the logic might be broken in the code, but never got that far. Ended up using znapzend. |
Huh.
Sorry I dropped the ball on this one.
Reading the notes you wrote in June:
The properties for src and dest seem fine.
The steps after that, that you would need, I believe are:
- on backup1, run sentsync.
This sets the master flag on backup1, but this is only temporary.
- now on backup1, run failover.
This makes the client the "master".
And now you should be clear to run zrep refresh on backup1
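Sketched as commands (hedged: a best-effort reading of the steps above, with dataset and host names from this thread and a placeholder snapshot name):

```shell
# On backup1: mark the seeded snapshot as sent/synced, which
# temporarily makes backup1 the zrep master for the dataset
# (snapshot name is a placeholder):
zrep sentsync backup/client1/rpool@zrep_000000

# Still on backup1: fail over, handing the master role to client1.
zrep failover backup/client1/rpool

# Thereafter, pull a fresh backup from client1 whenever needed:
zrep refresh backup/client1/rpool
```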
|
Hi, You clearly didn't read my latest post (Jun 28th), which basically covers every single combination (as far as I can see). Specifically this part;
Again, as I asked in my latest update, if you believe this should work, can you please provide the commands needed, given the following;
|
i've updated the docs with a full step-by-step walkthrough for your original
situation, in theory
…On Wed, Sep 13, 2017 at 4:44 PM, Joachim Tingvold ***@***.***> wrote:
Hi,
You clearly didn't read my latest post (Jun 28th), specifically this part;
- If I set zrep sentsync on backup1, then both zrep failover and zrep
takeover fails;
***@***.***:~# $ZREP_PATH takeover backup/client1/rpool
Error: backup/client1/rpool is already master. Cannot takeover
***@***.***:~# $ZREP_PATH failover backup/client1/rpool
Setting readonly on local backup/client1/rpool, then syncing
ERROR: we are not master host for backup/client1/rpool
print master is client1.foo.bar, we are backup1
Error: zrep_sync could not create new snapshot for backup/client1/rpool
|
After updating the code with a new flag, yes. I won't be surprised if it works now, tbh (-: |
Hi,
I'm trying to use the "backup server" mode of zrep, but I cannot get it to work.
Setup as follows;
Goal; to use "backup server" mode to take a recursive backup of the existing pool rpool (which has existing data) on client1, to the existing, but empty, dataset/destination backup/client1/rpool on the backup server backup1.
Tried the following on backup1;
$ZREP_PATH init backup/client1/rpool client1.foo.bar rpool
This results in zrep trying to create the remote dataset rpool, which fails (since the remote dataset/pool already is present, and has lots of data);
Trying different variations of $ZREP_PATH changeconfig doesn't seem to do the trick. One of the errors was as follows;
Error: backup/client1/rpool not master. Cannot fail over
export ZREP_R=-R is set (on both sides, if relevant).
I'm probably brainfarting hard here, but...