A bit lost in the documentation #4

Closed
zhenghao1 opened this issue Sep 13, 2013 · 3 comments

@zhenghao1

When you specify the command line options of the monitor, which Redis server is the master and which are the slaves? Does this setup only allow one master?

What if I have 4 masters, each having 2 slaves? How would I set up the monitor options as well as the RedisFailover Python initialization?

python start_monitor.py -p /redis/cluster -z localhost:2181,localhost:2182,localhost:2183 -r localhost:6379,localhost:6389,localhost:6399 -s 30

Is 6379 the master and 6389 & 6399 the slaves?

@mkjsix
Collaborator

mkjsix commented Sep 13, 2013

Hello,

Redis replication by design allows only one master; AFAIK you cannot have multiple masters, only multiple slaves of a single master.
The start_monitor.py script only discovers your existing Redis master-slave configuration; it is not intended to set up that configuration by itself.

The high level algorithm works like this:

  1. You configure your Redis replication with one master and one or more slaves, as you normally would (http://redis.io/topics/replication).
  2. You pass start_monitor.py a list of ZooKeeper nodes and a list of Redis nodes.
  3. The monitor self-discovers which node is the master and which are the slaves, then starts monitoring the replication configuration. The added benefit is that the monitoring infrastructure can detect a failing master and promote a slave to be the new master, while also automatically removing a dead slave from the replica set or reinserting a resurrected node (either a previously failed master or slave); see the sketch after this list.
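
To make step 3 concrete, role discovery essentially amounts to asking each Redis node for its own replication role. The following is only a minimal illustrative sketch (not the actual start_monitor.py code), assuming the redis-py client is installed:

```python
# Illustrative sketch of master/slave self-discovery, NOT the actual
# monitor implementation. Assumes the redis-py package is installed.
import redis

def discover_roles(nodes):
    """Split a list of 'host:port' strings into (master, slaves) based on
    each node's own INFO output."""
    master, slaves = None, []
    for node in nodes:
        host, port = node.split(":")
        info = redis.StrictRedis(host=host, port=int(port)).info()
        if info.get("role") == "master":
            master = node
        else:
            slaves.append(node)
    return master, slaves

print(discover_roles(["localhost:6379", "localhost:6389", "localhost:6399"]))
```

So with the command line from the original question, whichever node reports role:master in its INFO output is treated as the master and the others as slaves, regardless of the order in which they were passed on the command line.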

This is much like how the new Redis Sentinel works (http://redis.io/topics/sentinel). We independently implemented an automatic failover mechanism, but in our case we rely on an external tool, Apache ZooKeeper, to manage distributed configuration and locks. ZooKeeper is very reliable and well tested.

Anyway, as mentioned, you have to configure the master-slave replica set yourself before you start the monitor. We wanted to avoid creating any dependency between Redis and our tool, so that it is very easy to add our monitoring infrastructure to any existing Redis configuration, even one running old Redis versions. Our tool will discover your configuration by itself, but the configuration must be consistent, of course.
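
For example, the replication itself is plain Redis: each slave either has a `slaveof <master-host> <master-port>` line in its redis.conf or is pointed at the master at runtime. A small hedged sketch using redis-py, with hosts and ports as placeholders matching the command line from the question:

```python
# Illustrative only: point two slaves at a single master at runtime.
# Equivalent to adding "slaveof localhost 6379" to each slave's redis.conf.
import redis

MASTER_HOST, MASTER_PORT = "localhost", 6379   # placeholder master
SLAVE_PORTS = [6389, 6399]                     # placeholder slaves

for port in SLAVE_PORTS:
    slave = redis.StrictRedis(host="localhost", port=port)
    slave.slaveof(MASTER_HOST, MASTER_PORT)    # redis-py wrapper for SLAVEOF
```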

On the other hand, you have to explicitly pass the list of ZooKeeper and Redis nodes to both our client and monitor, but this is how any high-availability configuration would work (except, maybe, for some more exotic distributed systems that use multicast to auto-discover configuration as well, e.g. Jini).
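
In practice that just means both sides receive the same two lists. Below is a hypothetical sketch; the monitor flags are the ones shown earlier in this thread, while the RedisFailover constructor arguments are an assumption for illustration only (check the project README for the real signature):

```python
# Both the monitor and the client are handed the node lists explicitly;
# nothing is auto-discovered over the network.
ZK_NODES = ["localhost:2181", "localhost:2182", "localhost:2183"]
REDIS_NODES = ["localhost:6379", "localhost:6389", "localhost:6399"]

# Monitor invocation (flags taken from the command line in the question):
monitor_cmd = (
    "python start_monitor.py -p /redis/cluster "
    "-z {0} -r {1} -s 30".format(",".join(ZK_NODES), ",".join(REDIS_NODES))
)
print(monitor_cmd)

# Client side: the constructor arguments below are assumptions for
# illustration only; check the project README for the actual signature.
# client = RedisFailover(zk_hosts=",".join(ZK_NODES), zk_path="/redis/cluster")
```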

Hope this clarifies your doubts a bit more.

Regards,
Maurizio

@zhenghao1
Author

Thank you very much for the very detailed explanation.

Is it safe to say that this method was created before Redis came up with the Sentinel feature? Because these two seem to do (almost) exactly the same thing.

@mkjsix
Collaborator

mkjsix commented Sep 13, 2013

Yes, exactly.
At the time we started the project, Redis Sentinel was not much more than a specification on paper and some experimental software, and we couldn't wait for a stable implementation. On the other hand, we decided to use Apache ZooKeeper, while Sentinel tries to be self-contained. As ZooKeeper is very mature and robust, I'd say this specific aspect of our implementation is especially reliable, at the cost of some added infrastructural complexity.

@mkjsix mkjsix closed this as completed Sep 15, 2013