Connection pooling #21

chase-seibert opened this Issue Apr 17, 2013 · 12 comments




Connection pooling should use a separate pool API, not be completely embedded inside the happybase.Connection class.


When using happybase in the context of a web application, it would be useful to re-use connections between page requests. A connection pooling solution should take a MIN, MAX and IDLE count as parameters, and open connections as needed by the application.



import happybase

pool = happybase.ConnectionPool(min=2, max=10, idle=5,
                                host='localhost', port=9090)

# block == wait until a connection is available,
# versus raising an exception immediately
connection = pool.get_connection(block=True)

The pool could be instantiated manually per-process in the setup flow of a web server framework. For example, in Django, this could be done with AUTOCONNECT=False so that connections are not established until the first call to get_connection().


If a connection cannot be established, or is terminated (e.g. by a timeout), the pool would attempt to re-establish it after RETRY_MS milliseconds.
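A minimal sketch of that retry behaviour (the `factory` callable, the `attempts` parameter, and the `RETRY_MS` default here are hypothetical, not part of happybase):

```python
import time

RETRY_MS = 500  # hypothetical default


def connect_with_retry(factory, attempts=3, retry_ms=RETRY_MS):
    """Call factory() until it returns a connection, sleeping
    retry_ms milliseconds between failed attempts."""
    last_exc = None
    for _ in range(attempts):
        try:
            return factory()
        except IOError as exc:  # e.g. a Thrift socket error
            last_exc = exc
            time.sleep(retry_ms / 1000.0)
    # all attempts failed; surface the last error to the caller
    raise last_exc
```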


ConnectionPool could throw an error right away if it can't establish MIN connections immediately. Otherwise, a call to pool.get_connection will raise various exceptions for things like pool exhaustion (if BLOCK=False), inability to reach the Thrift endpoint, etc.
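That blocking/exhaustion behaviour could be sketched with a simple queue-backed pool (all names here are illustrative, not the actual happybase API):

```python
import queue


class PoolExhaustedError(Exception):
    """Raised when block=False and no connection is free."""


class SimplePool:
    def __init__(self, size, factory):
        # eagerly create `size` connections; a real pool might raise
        # here if it cannot establish MIN connections immediately
        self._free = queue.Queue()
        for _ in range(size):
            self._free.put(factory())

    def get_connection(self, block=True, timeout=None):
        try:
            # block=True waits for a connection to be returned;
            # block=False raises immediately on exhaustion
            return self._free.get(block=block, timeout=timeout)
        except queue.Empty:
            raise PoolExhaustedError('no connections available')

    def return_connection(self, conn):
        self._free.put(conn)
```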

Other thoughts:

I can't see how we could support connection pooling between multiple Python processes except by implementing a separate process to connect through, similar to pgpool.


Tests would be implemented by mocking out the Connection class so that no actual sockets need to be opened.


I have a prototype solution working. I'm going to battle-test it in production for a week before I come back with an official patch.

import time
import random
import contextlib
import happybase
from socketpool import ConnectionPool
from socketpool.conn import TcpConnector

class HappybaseConnectionPool(object):
    ''' singleton to share a connection pool per process '''

    pool = None
    _instance = None

    def __new__(cls, *args, **kwargs):
        if not cls._instance:
            cls._instance = super(HappybaseConnectionPool, cls).__new__(cls)
        return cls._instance

    def __init__(self, host, **options):
        if not self.pool:
            options['host'] = host
            options.setdefault('port', 9090)  # default HBase Thrift port
            self.pool = ConnectionPool(
                factory=HappybaseConnector,
                max_size=options.get('max_size', 10),
                options=options)

    def connection(self, **options):
        return self.pool.connection(**options)

    @contextlib.contextmanager
    def table(self, table_name):
        with self.pool.connection() as connector:
            yield connector.table(table_name)

class HappybaseConnector(TcpConnector):

    def __init__(self, host, port, pool=None, **kwargs): = host
        self.port = port
        self.connection = happybase.Connection(, self.port)
        self._connected = True
        # use a 'jiggle' value to make sure there is some
        # randomization to expiry, to avoid many conns expiring very
        # closely together.
        self._life = time.time() - random.randint(0, 10)
        self._pool = pool
        self.logging = kwargs.get('logging')

    def is_connected(self):
        if self._connected and self.connection.transport.isOpen():
                # isOpen is unreliable, actually try to do something
                return True
        return False

    def handle_exception(self, exception):
        if self.logging:
            print exception

    def invalidate(self):
        self._connected = False
        self._life = -1

    def open(self):
        # the happybase connection is opened in __init__
        pass

    def close(self):
        self.connection.close()
        self._connected = False

    def __getattr__(self, name):
        if name in ('table', 'tables', 'create_table', 'delete_table',
                'enable_table', 'disable_table', 'is_table_enabled',
                'compact_table'):
            return getattr(self.connection, name)
        raise AttributeError(name)

You use it like this:

pool = HappybaseConnectionPool('localhost', 9090)
with pool.connection() as connection:
    print connection.tables()

Shouldn't this support multiple Thrift servers? pycassa has support for that.


I'm hitting a bunch of Thrift instances behind a load balancer, which I think makes sense to run externally. If we did load balancing in process, it would mean implementing options like round-robin, least connection, etc. Not sure how you would deal with least-connection between various python processes; they would all be keeping their own connection counts, exclusive of each other.

I think it's better left to an external load balancer.


Well, long term you could make it aware of regionserver splits for performance. Netflix has a Cassandra client that does this.


I think I agree with Chase. Connection pooling is hard, and it adds quite a bit of complexity. Other solutions like load balancers are actually designed to handle this problem on a network level (instead of a process level).

wbolster commented May 2, 2013

I actually had a go at this since it also seems the way to go for multi-threading support. I've pushed my current code to a feature branch, which can be seen here:

Copy/paste from the (w-i-p) docs:

Thread-safe connection pool.

A connection pool allows multiple threads to share connections. The
`size` parameter specifies how many connections this pool manages.
The pool is lazy; it opens new connections when requested.

To ensure that connections are actually returned to the pool after
use, connections can only be obtained using Python's context manager
protocol, i.e. the ``with`` statement. Example::

    pool = ConnectionPool(size=3, host='...')
    with pool.connection() as connection:
        print(connection.tables())

When a thread asks for a connection using
:py:meth:`ConnectionPool.connection`, it is granted a lease, during
which the thread has exclusive access to the obtained connection. To
avoid starvation, connections should be returned as quickly as
possible. In practice this means that the amount of code included
inside the ``with`` block should be kept to an absolute minimum.

The connection pool is designed so that any thread can hold at most
one connection at a time. This does not require any coordination
from the application: when a thread holds a connection and asks for
a connection for a second time (e.g. because a called function also
wants to use a connection), the same connection instance it already
holds is returned. Ultimately, once the outer ``with`` block (which
may be in a function up in the call stack) terminates, the
connection is returned to the pool.

Additional keyword arguments are passed unmodified to the
:py:class:`happybase.Connection` constructor, with the exception of
the `autoconnect` argument, since maintaining connections is the
task of the pool.

:param int size: the maximum number of concurrently open connections
:param kwargs: keyword arguments passed to
    :py:class:`happybase.Connection`
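The per-thread reentrant lease described above could be sketched like this (a simplified illustration, not the actual happybase implementation):

```python
import queue
import threading
from contextlib import contextmanager


class ReentrantPoolSketch:
    """Each thread holds at most one connection; nested connection()
    calls in the same thread reuse the connection already held."""

    def __init__(self, size, factory):
        self._free = queue.Queue()
        for _ in range(size):
            self._free.put(factory())
        self._local = threading.local()

    @contextmanager
    def connection(self):
        held = getattr(self._local, 'current', None)
        if held is not None:
            # nested use: hand out the connection this thread holds
            yield held
            return
        conn = self._free.get()  # blocks until a connection is free
        self._local.current = conn
        try:
            yield conn
        finally:
            # only the outermost `with` returns it to the pool
            self._local.current = None
            self._free.put(conn)
```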

What do you think? I'd appreciate comments/flames/feedback!


Looks good to me. Probably makes more sense than including a dependency. It would be cool if there was a way of getting a single pool object w/o passing it around everywhere. That's what I'm using a singleton for; but I suppose you could always layer that on top of what you have.

wbolster added a commit that referenced this issue May 20, 2013:
Add thread-safe connection pool
See issue #21.

Okay, I have landed a Connection Pool implementation in the master branch. Please try it out. Comments on the design and API are most welcome.

See the API docs for more information and example usage.

I'm leaving this ticket open since I need to refactor the tutorial/user guide to incorporate some information on the connection pool.


Fwiw, the feature branch is gone now that this feature has landed on master. I'll need to expand the docs (working on it already) before I consider this issue closed.

I'll also cook up a 0.5 release soonish with this feature and some other unreleased enhancements from the master branch.


Oh, I forgot to mention that I have (privately) received positive test reports about the connection pool, so I have confidence the current implementation is ready for public release. :-)

@wbolster wbolster closed this May 24, 2013