
Repl auto resync #387

Open
wants to merge 1 commit

3 participants

@jzawodn
jzawodn commented Mar 14, 2012

This is described in this redis-db mailing list thread:

http://groups.google.com/group/redis-db/browse_thread/thread/6badf6abf8f44eb0

It adds a config option, "repl_auto_resync", which defaults to "yes" and controls whether or not a slave will automatically try to resync with the master when the connection between them fails.
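
With this patch applied, a slave that should stay detached after losing its master could be configured like this (the option name and its yes/no values come from the description above; the exact line is a sketch, not necessarily the final syntax):

    repl_auto_resync no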

The test suite passes with this change, which is based on 2.4, and the code also runs for me in a live environment where I tested it.

Jeremy Zawodny add repl_auto_resync option
this controls whether or not a slave will try to automatically resync
with the master when it loses the connection
be22df6
@hirose31

Do you plan to merge this patch into the head of 2.4 and 2.6?

I think "repl_auto_resync off" is absolutely required for master/slave replication where the servers have no persistence configured (save "" and appendonly no).

When the master process crashes and comes back up shortly afterwards, the slave reconnects and resyncs with it. But since the freshly restarted master has no data, the slave's data also vanishes once the resync finishes.

That is not good for durability, I think.

So I need a way to prevent the auto-resync feature.
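
To make the scenario concrete, here is a sketch of the kind of setup where this matters; the repl_auto_resync name comes from this patch, the master address is just an example, and the rest is standard redis.conf syntax:

    # master: no persistence at all
    save ""
    appendonly no

    # slave: follow the master, but do not resync automatically after a disconnect
    slaveof 10.0.0.1 6379
    repl_auto_resync no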

@antirez
Owner
antirez commented Jan 25, 2013

Hello all, I think this is a good feature. I'm waiting to finish and merge PSYNC (partial resynchronization) before adding this, as I want to add it along these lines:

slave-resync-with-master always | psync | never

So basically this will serve users who want to say: if it's the same master and we can sync with just a partial resync instead of a full resync, then I want the partial resynchronization; otherwise I do not want the slave to auto-resync at all. That is what happens when this configuration option is set to psync, and I think it is what most users will want.
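
As a sketch of how this could look in redis.conf once PSYNC lands (the directive name is taken from the comment above; the value shown and the comments are assumptions, not a committed design):

    # always: resync with the master in any case, even via a full resync
    # psync:  resync only if a partial resynchronization is possible
    # never:  stay detached once the link to the master is lost
    slave-resync-with-master psync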

I'm opening a new issue linking to this PR to document the new semantics. Thanks!
