ConnectionPool doesn't reap timeout'ed connections #306

Open
rslinckx opened this issue Dec 6, 2012 · 6 comments

@rslinckx

commented Dec 6, 2012

The current ConnectionPool implementation doesn't disconnect redis connections until they are used again and found dead.

This means that if I have a huge peak of concurrent connections, the Redis server will close them for inactivity a few seconds later, but the TCP connections will still live within the process and end up in the CLOSE_WAIT state, which can cause a "too many open files" error later on.

Maybe the release() method could check if there are any old connections and close them if appropriate (by storing a last_used timestamp along with each connection for example).

Or any other solution that prevents a lot of CLOSE_WAIT connections from piling up until I restart my process. Any ideas?
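
A rough sketch of that timestamp idea, written as a subclass of redis.ConnectionPool: the class name IdleReapingConnectionPool and the max_idle_seconds parameter are made up for illustration, and the code leans on the pool's private attributes, so treat it as a sketch rather than a tested fix.

import time
import redis

class IdleReapingConnectionPool(redis.ConnectionPool):
    """Illustrative pool that reaps connections idle longer than max_idle_seconds."""

    def __init__(self, max_idle_seconds=60, **kwargs):
        self.max_idle_seconds = max_idle_seconds
        super(IdleReapingConnectionPool, self).__init__(**kwargs)

    def release(self, connection):
        # Stamp the connection so we know how long it sits idle in the pool.
        connection.last_used = time.time()
        super(IdleReapingConnectionPool, self).release(connection)
        self._reap_idle_connections()

    def _reap_idle_connections(self):
        now = time.time()
        fresh = []
        for conn in self._available_connections:
            if now - getattr(conn, 'last_used', now) > self.max_idle_seconds:
                # Probably already closed server-side; disconnect so the socket
                # doesn't sit in CLOSE_WAIT, and forget it was ever created so
                # the max_connections accounting stays correct.
                conn.disconnect()
                self._created_connections -= 1
            else:
                fresh.append(conn)
        self._available_connections = fresh

Reaping on release() only runs cleanup while the pool is actively used, which matches the suggestion above but will not help a process that goes completely idle.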

@LotharSee

commented Jan 8, 2015

I'm facing the same issue and I see it was already reported.

A simpler solution would be to provide a connection_timeout parameter (which should match the timeout setting in the Redis configuration) and then keep track of the last time each connection in the pool was used. If one is too old, we just close it and drop it from the pool without raising an error.

Another alternative would be to catch and hide these connection timeouts (again, behind a parameter such as drop_timeout_connections or whatever).

In both cases, one difficulty is being able to catch exactly this kind of exception (a timeout of a connection from the pool) and not mix it up with other ConnectionErrors.
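
A rough sketch of the connection_timeout variant, with the same caveats as above: the parameter name and the use of private pool attributes are assumptions, not existing redis-py API.

import time
import redis

class TimeoutAwareConnectionPool(redis.ConnectionPool):
    """Illustrative pool that silently drops connections idle longer than connection_timeout."""

    def __init__(self, connection_timeout=300, **kwargs):
        # Should match (or sit slightly below) the server-side `timeout` setting.
        self.connection_timeout = connection_timeout
        super(TimeoutAwareConnectionPool, self).__init__(**kwargs)

    def get_connection(self, command_name, *keys, **options):
        while True:
            connection = super(TimeoutAwareConnectionPool, self).get_connection(
                command_name, *keys, **options)
            idle = time.time() - getattr(connection, 'last_used', time.time())
            if idle < self.connection_timeout:
                return connection
            # The server has most likely already closed this connection; drop it
            # quietly instead of letting the next command raise a ConnectionError.
            self._in_use_connections.discard(connection)
            connection.disconnect()
            self._created_connections -= 1

    def release(self, connection):
        connection.last_used = time.time()
        super(TimeoutAwareConnectionPool, self).release(connection)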

@harshadptl

commented Feb 2, 2015

Same issue here.

I think the get_connection method should check and weed out old connections.

@pranjal5215

commented Feb 2, 2015

import redis

class CustomConnectionPool(redis.ConnectionPool):
    """ 
    Custom overridden ConnectionPool to handle server 
    timeouts. Check in get_connection whether connection 
    is timed out from server, if so make a new one.
    """
    def __init__(self, **kwargs):
        super(CustomConnectionPool, self).__init__(**kwargs)

    def get_connection(self, command_name, *keys, **options):
        "Get a connection from the pool and check for a timeout."
        self._checkpid()
        try:
            connection = self._available_connections.pop()
        except IndexError:
            connection = self.make_connection()
        try:
            connection.send_command("PING")
            connection.read_response()
        except redis.ConnectionError:
            connection.disconnect()
            connection = self.make_connection()
        self._in_use_connections.add(connection)
        return connection

We tried to override the get_connection() method of redis.ConnectionPool. If the server has timed out the connection, the PING raises a ConnectionError; in that event we create a new connection and add it to the pool instead.

@scosgrave

commented May 9, 2015

Same issue here. I think the solution from @pranjal5215 looks pretty good, except for a few things.

  1. I think you can move the send_command and read_response into the first try and have two excepts.
  2. You should decrement _created_connections (self._created_connections -= 1) right after the disconnect() in the except, otherwise you will hit the maximum number of allowed connections. A sketch with both changes folded in is below.
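
Here is a sketch of the pool above with both adjustments applied; it still reaches into private attributes such as _available_connections and _created_connections, so it may not survive redis-py upgrades.

import redis

class CustomConnectionPool(redis.ConnectionPool):
    """pranjal5215's pool with the two changes suggested above."""

    def get_connection(self, command_name, *keys, **options):
        "Get a connection from the pool, replacing it if the server timed it out."
        self._checkpid()
        try:
            connection = self._available_connections.pop()
            # Only a connection that sat in the pool needs the health check;
            # a brand new one from make_connection() below does not.
            connection.send_command('PING')
            connection.read_response()
        except IndexError:
            # Pool was empty; create a fresh connection.
            connection = self.make_connection()
        except redis.ConnectionError:
            # The server closed this connection (e.g. its idle timeout fired).
            # Drop it and decrement the counter so max_connections stays accurate.
            connection.disconnect()
            self._created_connections -= 1
            connection = self.make_connection()
        self._in_use_connections.add(connection)
        return connection
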
@ngo

commented May 25, 2016

We also faced this issue recently. Is there any timeline on merging the pull request? It's been a year.

@taylor-cedar

commented Nov 12, 2017

Running into this issue as well. Looks like #886 fixes the issue. What is the process for getting it merged?
