

Insert the "root_znode" path before "master_redis_node_manager_lock" and expose via accessor #52

Merged 1 commit into ryanlecompte:master

3 participants


We were having issues while attempting to use the node-manager across two clusters. The first cluster would start up, properly pick a master, and correctly handle failover; however, for the second cluster a master was never promoted, and though failover seemed to work, the cluster status was not being properly tracked by the node-manager.

This fix has been applied to our staging environment and appears to be doing the right thing now. In attempting to test this, I found that a lot would be involved, so I wanted feedback on what, if anything, would make sense to test at this point.
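For context, the root cause is that the lock znode path was not namespaced per cluster, so both clusters' node managers contended on the same ZooKeeper lock and only one cluster could ever elect a master. The sketch below is illustrative only (the class name `LockPathExample` is invented for this example, not part of redis_failover); it contrasts the shared path with the `root_znode`-prefixed path this PR introduces:

```ruby
# Illustrative sketch: why a non-namespaced lock path breaks multi-cluster
# setups. LockPathExample is a hypothetical stand-in, not node_manager.rb.
class LockPathExample
  def initialize(root_znode)
    @root_znode = root_znode
  end

  # Before the fix: every cluster used the same bare lock path.
  def shared_lock_path
    'master_redis_node_manager_lock'
  end

  # After the fix: the path is namespaced under the cluster's root znode.
  def current_lock_path
    "#{@root_znode}/master_redis_node_manager_lock"
  end
end

a = LockPathExample.new('/redis_failover_cluster_a')
b = LockPathExample.new('/redis_failover_cluster_b')
a.shared_lock_path == b.shared_lock_path    # => true (clusters collide)
a.current_lock_path == b.current_lock_path  # => false (clusters isolated)
```

With the prefix, each cluster's node managers race only among themselves for their own lock, so both clusters can promote a master independently.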


This looks great! Thanks for the fix.

@ryanlecompte ryanlecompte merged commit 2e51021 into ryanlecompte:master

1 check passed: the Travis build passed.

Glad to help.

@jzaleski jzaleski deleted the unknown repository branch

+1. Any chance of a gem release for this?


Just released redis_failover 1.0.2 with these changes. Thanks!

Commits on Feb 13, 2013
  1. Insert the "root_znode" path before "master_redis_node_manager_lock" and expose via accessor (Jonathan W. Zaleski)
Showing 1 changed file with 6 additions and 1 deletion: lib/redis_failover/node_manager.rb
@@ -459,6 +459,11 @@ def redis_nodes_path
+    # @return [String] root path for current node manager lock
+    def current_lock_path
+      "#{@root_znode}/master_redis_node_manager_lock"
+    end
+
     # @return [String] the znode path used for performing manual failovers
     def manual_failover_path
@@ -631,7 +636,7 @@ def node_from(node_string)
     # Executes a block wrapped in a ZK exclusive lock.
     def with_lock
-      @zk_lock ||= @zk.locker('master_redis_node_manager_lock')
+      @zk_lock ||= @zk.locker(current_lock_path)
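To show how the accessor and the memoized lock fit together, here is a minimal self-contained sketch. `FakeZK` and `FakeLocker` are invented stand-ins for the real ZK client (redis_failover uses the ZK gem), and `NodeManagerSketch` is a hypothetical simplification of the node manager, not its actual code:

```ruby
# Hypothetical stand-in for a ZK client's exclusive locker.
class FakeLocker
  attr_reader :path, :locked

  def initialize(path)
    @path = path
    @locked = false
  end

  # Run the block while "holding" the lock, releasing it afterwards.
  def lock
    @locked = true
    yield
  ensure
    @locked = false
  end
end

# Hypothetical stand-in for the ZK client itself.
class FakeZK
  def locker(path)
    FakeLocker.new(path)
  end
end

# Simplified sketch of the node manager's lock handling after this PR.
class NodeManagerSketch
  def initialize(zk, root_znode)
    @zk = zk
    @root_znode = root_znode
  end

  # The accessor introduced by this PR: lock path namespaced per cluster.
  def current_lock_path
    "#{@root_znode}/master_redis_node_manager_lock"
  end

  # Memoize a single exclusive lock per manager, keyed by the namespaced path.
  def with_lock(&block)
    @zk_lock ||= @zk.locker(current_lock_path)
    @zk_lock.lock(&block)
  end
end

mgr = NodeManagerSketch.new(FakeZK.new, '/cluster_a')
mgr.with_lock { puts 'critical section under /cluster_a lock' }
```

Exposing the path via an accessor also keeps the two call sites (lock creation and any diagnostics) from drifting out of sync, since the string is built in exactly one place.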