A connection will not be re-established if Zookeeper loses its data #42

diranged opened this Issue Dec 14, 2012 · 20 comments



diranged commented Dec 14, 2012

If you lose your Zookeeper database entirely (let's say you have one server, and you shut it down, delete the data directory, and start the service back up), the Kazoo client will never re-establish its connection. It sits in a retry loop forever using the old session ID.

Steps to reproduce:

  1. Start Zookeeper with a fresh DB
  2. Connect to Zookeeper with Kazoo
  3. Stop Zookeeper... watch Kazoo start retrying.
  4. Start Zookeeper... watch Kazoo work
  5. Stop Zookeeper... delete db... start Zookeeper ...
  6. Kazoo loops forever. Do not collect $200, go directly to jail.

hannosch commented Jan 2, 2013

This has tests now (currently skipped via raise SkipTest('Patch missing')) - but still lacks a patch.


hannosch commented Jan 2, 2013

And the issue is even a bit worse: it also happens if you simply shut down all of the ZK server nodes. I'm guessing the session information is held purely in memory (verified only for a standalone ZK server). So currently a client can survive a session moving to different ZK nodes, but doesn't recover after a total ZK ensemble loss.


hannosch commented Jan 3, 2013

IIRC this isn't completely straight-forward to fix. The client sees a SessionExpired exception in two cases: One for a real "server side session is dead and gone", but also for a "ZK thinks this client session is dead, but the server side session lives on". See https://cwiki.apache.org/confluence/display/ZOOKEEPER/FAQ#FAQ-HowshouldIhandleSESSIONEXPIRED? for more details.

If we want to preserve the session (and all its associated ephemeral nodes and watches), we shouldn't just forget the session id after getting the first SessionExpired exception.

I don't think this is a bug. It's practically impossible for operators to shut down Zookeeper entirely in a production environment, so it's too strict to ask Kazoo to deal with this problem. It's a similar situation to a MySQL cluster going completely down while clients are still requesting database records.


diranged commented Jan 8, 2013

I can see how different people might want different behaviors in this situation. Perhaps it would be best to create some way to register callbacks for a "total catastrophic failure" of the connection? Any time Kazoo completely bails on the connection and just stops trying, it could invoke these callbacks, so the user can define their own behavior.
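A minimal, self-contained sketch of that suggestion (all names here are hypothetical and not part of kazoo's API): a retry loop that fires user-registered callbacks once it gives up on the connection for good.

```python
class RetryLoop(object):
    """Toy retry loop sketching the suggestion above: user-registered
    callbacks fire only when the client abandons the connection entirely.
    Hypothetical names, not kazoo code."""

    def __init__(self, max_retries):
        self.max_retries = max_retries  # None would mean retry forever
        self.failure_callbacks = []

    def on_total_failure(self, cb):
        self.failure_callbacks.append(cb)

    def run(self, attempt_connect):
        attempts = 0
        while self.max_retries is None or attempts < self.max_retries:
            if attempt_connect():
                return True
            attempts += 1
        # Connection abandoned: let the user decide what happens next.
        for cb in self.failure_callbacks:
            cb()
        return False
```

A user who wants the process to die could register `sys.exit` here; another user could log and re-create the client instead.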

I wrap every line of Zookeeper communication code in a "try" statement to handle exceptions. It's too hard to define "total catastrophic failure"; it's a business-domain notion.


bbangert commented Jan 8, 2013

Actually, I think this is a bug in the client. Retrying is fine, since the session may still be alive somewhere. However, once it has retried for longer than the session expiration period negotiated with ZK, it should automatically ditch any old session IDs. That way it won't keep trying to use the old session longer than it could possibly be valid, which should alleviate this behavior while still attempting to properly restore the session should a server have it.
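A self-contained sketch of that rule, using a hypothetical SessionTracker (this is not kazoo code): the old session id is reused only while it could still be valid server-side, and is dropped once the negotiated session timeout has elapsed since the connection was lost.

```python
import time


class SessionTracker(object):
    """Sketch of the proposal above (hypothetical, not kazoo code):
    reuse the old session id for at most the negotiated session
    timeout after the connection drops."""

    def __init__(self, session_timeout):
        self.session_timeout = session_timeout
        self.session_id = None
        self.lost_at = None

    def connection_lost(self, now=None):
        self.lost_at = time.time() if now is None else now

    def session_id_for_reconnect(self, now=None):
        now = time.time() if now is None else now
        if self.lost_at is not None and now - self.lost_at > self.session_timeout:
            # The session cannot possibly still be alive server-side:
            # drop it so reconnects negotiate a fresh session instead
            # of looping on the stale id.
            self.session_id = None
        return self.session_id
```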

Sorry, I'm wrong. It's a bug! I reproduced this problem in my virtual cluster and found out why your code gets trapped in an infinite loop. The Kazoo version is 0.9.
If a max_retries parameter is passed, it throws an exception; if max_retries is None, it leads to an infinite loop.

Traceback (most recent call last):
  File "main.py", line 37, in <module>
    zk.retry(queue.put, "b", 100)
  File "/usr/lib/python2.7/site-packages/kazoo/retry.py", line 113, in __call__
  File "/usr/lib/python2.7/site-packages/kazoo/retry.py", line 54, in increment
    raise Exception("Too many retry attempts")
Exception: Too many retry attempts

"max_retries=None", is it correct?

class KazooClient(object):
    """An Apache Zookeeper Python client supporting alternate callback
    handlers and high-level functionality.

    Watch functions registered with this class will not get session
    events, unlike the default Zookeeper watches. They will also be
    called with a single argument, a
    :class:`~kazoo.protocol.states.WatchedEvent` instance.
    """

    def __init__(self, hosts='',
                 timeout=10.0, client_id=None, max_retries=None,
                 retry_delay=0.1, retry_backoff=2, retry_jitter=0.8,
                 retry_max_delay=3600, handler=None, default_acl=None,
                 auth_data=None, read_only=None, randomize_hosts=True):

My test code (the original was missing zk.start(), the try: line, and the listener registration; cleaned up here):

import sys
import logging
import time
import signal
import traceback

from kazoo.client import KazooClient
from kazoo.client import KazooState

zk = None

def signal_handler(signal, frame):
    print 'You pressed Ctrl+C!'
signal.signal(signal.SIGINT, signal_handler)

def my_listener(state):
    if state == KazooState.LOST:
        print state
    elif state == KazooState.SUSPENDED:
        print state

if __name__ == "__main__":
    zk = KazooClient(hosts="master2,node1,node2",
                     max_retries=2)   # The default value of max_retries is None!
    zk.add_listener(my_listener)
    zk.start()
    queue = zk.Queue("/test/queue")

    def watch(data):
        print len(data)

    while True:
        try:
            zk.retry(queue.put, "b", 100)
        except Exception, e:
            print traceback.format_exc()
        print time.time()

bbangert commented Jan 9, 2013

Sorry, I'm not quite clear on where the bug is with the retries. Yes, if you have no retry limit, it retries forever... exactly what it's supposed to do. The real bug is that it should stop using a session ID after the session timeout period it negotiated.

There is an ambiguity in max_retries. Maybe we should add a line to the comment: "None means retry forever."
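That semantic can be shown with a toy retry loop (a hypothetical sketch, not kazoo's actual implementation): an integer bound raises once exhausted, while None keeps going until the call succeeds.

```python
def retry(func, max_retries):
    # Toy version of the retry semantics discussed above (not kazoo code):
    # max_retries=None means retry forever; an integer bound gives up
    # with an exception once exhausted.
    attempts = 0
    while True:
        try:
            return func()
        except IOError:
            attempts += 1
            if max_retries is not None and attempts > max_retries:
                raise Exception("Too many retry attempts")
```

With max_retries=None this loops until func succeeds, which matches the infinite loop observed when the ensemble never comes back.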

How to handle session expire really depends on the code context...
See https://cwiki.apache.org/confluence/display/ZOOKEEPER/FAQ#FAQ-HowshouldIhandleSESSIONEXPIRED

Library writers should be conscious of the severity of the expired state and not try to recover from it. Instead libraries should return a fatal error. Even if the library is simply reading from ZooKeeper, the user of the library may also be doing other things with ZooKeeper that requires more complex recovery.


bbangert commented Jan 9, 2013

Yea, a lot of people weren't fans of kazoo making their app shut down. :)

If you want that behavior though, register a listener for session expired that calls sys.exit.

nekto0n commented Jan 9, 2013

What if I want read/write requests to fail after max_retries, but without that stopping the connection loop, which should keep trying to re-establish the connection?
Right now I have to manually watch whether the writer has stopped (gevent version):


bbangert commented Jan 9, 2013

The best way would be to use a KazooRetry object, which is technically what client.retry is, but client.retry defaults to the same retry parameters as the connection uses. Sigh, and I see that I totally forgot to document the retry module and KazooRetry class, argh. I'll file a separate bug for that; you can use a KazooRetry directly with the options that work best for you.

Here's an example of using it for the retry policy for a command:

from kazoo.retry import KazooRetry

kr = KazooRetry(max_tries=3, ignore_expire=False)
result = kr(client.get, "/some/path")

Note that you can choose whether or not you want to ignore session expiration with ignore_expire. This is the intended method of wrapping client calls to catch the connection drops, and I'll fix the documentation to reflect this.

Also, I think the KazooClient instantiation should probably let you pass in a KazooRetry instance if you'd like a custom one for client.retry, instead of reusing the same options that handle the connection itself. I've noticed that 13 arguments to the client instantiation is a bit nutty, so it's likely that instead of all those args, you'll pass in a host connection policy and a command retry policy.
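A self-contained sketch of that split (names hypothetical, not the eventual kazoo API): an unbounded policy drives reconnects, while a bounded one fails individual commands quickly instead of hanging the caller.

```python
class RetryPolicy(object):
    """Minimal stand-in for a KazooRetry-style callable (hypothetical)."""

    def __init__(self, max_tries):
        self.max_tries = max_tries  # None means retry forever

    def __call__(self, func, *args):
        tries = 0
        while True:
            try:
                return func(*args)
            except IOError:
                tries += 1
                if self.max_tries is not None and tries >= self.max_tries:
                    raise

# The connection loop keeps reconnecting indefinitely, while a command
# that cannot complete fails the caller after a few tries.
connection_retry = RetryPolicy(max_tries=None)
command_retry = RetryPolicy(max_tries=3)
```

This is what lets a request fail fast during an outage without stopping the background reconnect attempts.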


bbangert commented Jan 9, 2013

I have filed issue #48 and #49 to cover the two retry issues I noted. Note that this doesn't address the underlying issue in this report which is that the client should stop trying to use a session id after it has expired during reconnects.

nekto0n commented Jan 9, 2013

Thanks a lot! #49 is exactly what I need.


diranged commented May 20, 2013

So any thoughts on whether this will ever be taken care of? If not, perhaps it should just be documented and closed?


bbangert commented Jul 19, 2013

Does this still happen? When I follow the directions.... it reconnects and gets a new session ID. Even if I stop, delete zookeeper db, and reconnect. It says session expired on reconnect, drops, then connects with a new session.


bbangert commented Jul 22, 2013

I'm unable to reproduce this with the latest kazoo. Upon deleting the zookeeper data and restarting it, Kazoo sees that it no longer has a valid session, expires it, and connects fresh. If this is still a valid issue, please re-open with updated instructions on how to reproduce it, thanks!

bbangert closed this Jul 22, 2013


hannosch commented Jul 23, 2013

@bbangert I wrote tests for this bug and they are still failing. Could you have a look and see why the tests might be wrong? They are in test_client.TestSessions and currently skipped, so you need to comment out the "raise SkipTest('Patch missing')" lines.


bbangert commented Jul 23, 2013

Yea, I didn't quite understand what they were testing. When I follow the steps as raised in the issue (delete the dir, etc.), it still reconnects fine. Can you replicate the steps as well?
