2012-10-03 16:13:45,653 - INFO [ProcessThread:-1:PrepRequestProcessor@419] - Got user-level KeeperException when processing sessionid:0x13a273564e2000f type:delete cxid:0x506c65f9 zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/_zklocking/__redis_failover_nodes_lock Error:KeeperErrorCode = Directory not empty for /_zklocking/__redis_failover_nodes_lock
It doesn't seem to affect functionality, but it fills up the logs pretty fast. Any ideas?
Hello Max! Which version of redis_failover are you using? Do you have more than one redis_node_manager daemon running?
Hey! 0.9.7, and I'm running 3 instances of redis_node_manager, one per server.
Thanks. It looks like the log message is occurring because we use an ephemeral lock to ensure only a single redis_node_manager is the 'primary' at any given moment. @slyphon, would you expect this message to be logged every second or so in the ZooKeeper logs? Is that normal given the way the ephemeral lock is implemented in ZK 1.6?
@maxjustus, I tracked this down with @slyphon. Each redis_node_manager that's not the current primary attempts to acquire the exclusive lock every couple of seconds. Each time it attempts this and fails, an entry is written to the ZK logs. I'll need to restructure how we acquire the lock in Redis::Failover::NodeManager#with_lock so that this doesn't happen.
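To illustrate the difference between the two approaches, here's a minimal, self-contained Ruby sketch (the class and method names are hypothetical, not the actual redis_failover or ZK gem code): a poll-based acquire records one failed attempt per retry interval, which is what was spamming the ZooKeeper logs, while a wait-based acquire blocks until the current holder releases the lock and generates no repeated failures.

```ruby
require 'thread'

# Hypothetical illustration of poll-based vs wait-based lock acquisition.
# Not the real redis_failover implementation.
class PrimaryLock
  attr_reader :failed_attempts

  def initialize
    @mutex = Mutex.new
    @cond  = ConditionVariable.new
    @held  = false
    @failed_attempts = 0
  end

  # Poll-based: each call while the lock is held counts as a failed
  # attempt (analogous to the retry loop that wrote a ZK log entry
  # every couple of seconds on each non-primary node manager).
  def try_acquire
    @mutex.synchronize do
      if @held
        @failed_attempts += 1
        false
      else
        @held = true
        true
      end
    end
  end

  # Wait-based: block on a condition variable until the lock is
  # released, so no repeated failed attempts are recorded.
  def acquire
    @mutex.synchronize do
      @cond.wait(@mutex) while @held
      @held = true
    end
  end

  def release
    @mutex.synchronize do
      @held = false
      @cond.signal
    end
  end
end
```

With ZooKeeper the same idea is typically achieved by watching the lock node for deletion instead of retrying on a timer, which is what the lock recipe in the ZooKeeper docs describes.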
Ah okay. Thanks for figuring it out!
For the time being I just piped the zookeeper logging output through grep before shipping it to papertrail.
( /usr/local/bin/zkServer start-foreground & echo $! >/var/run/zookeeper.pid ) | grep -v 'Directory not empty for /_zklocking/__redis_failover_nodes_lock' | /usr/bin/logger -t zookeeper &
Stop repeated attempts to acquire exclusive lock in Node Manager (#36)
I just checked in a fix that should address this issue in master. @maxjustus, feel free to give latest master a shot and see if the errors go away for you. I'll put out a release later tonight if all goes well.
Awesome! That fix did the trick. Thanks!
Sweet! I'll release a new version soon.
Just released 0.9.7.1 with this fix.