
ConnectionPool doesn't reap timeout'ed connections #306

rslinckx opened this issue Dec 6, 2012 · 8 comments

rslinckx commented Dec 6, 2012

The current ConnectionPool implementation doesn't disconnect redis connections until they are used again and found dead.

This means that if I have a huge peak of concurrent connections, the Redis server will close them for inactivity a few seconds later, but the TCP connections will still live within the process and end up in CLOSE_WAIT state, which can later cause a "too many open files" error.

Maybe the release() method could check if there are any old connections and close them if appropriate (by storing a last_used timestamp along with each connection for example).

Or any other solution preventing a lot of CLOSE_WAIT connections from piling up until I restart my process. Any ideas?
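The release()-time reaping idea above can be sketched with a toy pool and a stub connection class (all names here are illustrative, not redis-py API; a real fix would live inside redis.ConnectionPool):

```python
import time

class StubConnection:
    """Stand-in for a redis connection; only tracks disconnects."""
    def __init__(self):
        self.closed = False

    def disconnect(self):
        self.closed = True

class ReapingPool:
    """Toy pool that stores a last_used timestamp with each idle
    connection and drops connections idle longer than idle_timeout
    whenever a connection is released back to the pool."""
    def __init__(self, idle_timeout=60.0):
        self.idle_timeout = idle_timeout
        self._available = []  # list of (connection, last_used) pairs

    def get_connection(self):
        if self._available:
            conn, _ = self._available.pop()
            return conn
        return StubConnection()

    def release(self, connection):
        now = time.monotonic()
        # Reap connections that sat unused past the timeout so they
        # do not linger in CLOSE_WAIT after the server closes them.
        fresh = []
        for conn, last_used in self._available:
            if now - last_used > self.idle_timeout:
                conn.disconnect()
            else:
                fresh.append((conn, last_used))
        self._available = fresh
        self._available.append((connection, now))
```

With a short timeout, a connection that was released and then left idle gets disconnected the next time release() runs, instead of lingering until it is handed out again.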


I'm facing the same issue and I see it was already reported.

A simpler solution would be to provide a connection_timeout parameter (which should match the timeout setting in the Redis configuration); we would then keep track of the last time each connection in the pool was used. If one is too old, we just close/drop it from the pool without raising an error.

Another alternative would be to catch and hide these connection timeouts (again, behind a parameter such as drop_timeout_connections or whatever).

In both cases, one difficulty is being able to catch exactly this kind of exception (timeout of a connection from the pool) and not mix it up with other ConnectionErrors.


Same issue with me.

I think the get_connection method should check and weed out old connections.


import redis

class CustomConnectionPool(redis.ConnectionPool):
    """Custom overridden ConnectionPool to handle server
    timeouts. Check in get_connection whether the connection
    has been timed out by the server; if so, make a new one."""
    def __init__(self, **kwargs):
        super(CustomConnectionPool, self).__init__(**kwargs)

    def get_connection(self, command_name, *keys, **options):
        "Get a connection from the pool and check for a timeout."
        try:
            connection = self._available_connections.pop()
            # A PING round-trip raises ConnectionError if the server
            # already closed this connection for inactivity.
            connection.send_command('PING')
            connection.read_response()
        except IndexError:
            connection = self.make_connection()
        except redis.ConnectionError:
            connection.disconnect()
            connection = self.make_connection()
        return connection

We tried overriding the get_connection() method of redis.ConnectionPool. If the server has timed out the connection, the PING raises ConnectionError; in that event we create a new connection and hand it out in place of the dead one.


Same issue here. I think the solution from @pranjal5215 looks pretty good, except for a few things.

  1. I think you can move the send_command and read_response into the first try and have two excepts.
  2. You should decrement _created_connections (i.e. self._created_connections -= 1) right after the disconnect() in the except, otherwise you will eventually hit the maximum number of allowed connections.
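The two suggestions above can be sketched against stub classes that mimic redis-py's internal _available_connections list and _created_connections counter (the stubs are illustrative; real code would subclass redis.ConnectionPool):

```python
class Connection:
    """Stub connection; simulates a PING round-trip that fails
    once the server has closed the connection."""
    def __init__(self, alive=True):
        self.alive = alive

    def send_command(self, *args):
        if not self.alive:
            raise ConnectionError("server closed the connection")

    def read_response(self):
        if not self.alive:
            raise ConnectionError("server closed the connection")

    def disconnect(self):
        self.alive = False

class StubPool:
    """Mimics the redis-py pool internals used by the override."""
    def __init__(self):
        self._available_connections = []
        self._created_connections = 0

    def make_connection(self):
        self._created_connections += 1
        return Connection()

class CheckedPool(StubPool):
    def get_connection(self, command_name='PING'):
        try:
            connection = self._available_connections.pop()
            # Suggestion 1: PING inside the same try, two excepts.
            connection.send_command('PING')
            connection.read_response()
        except IndexError:
            connection = self.make_connection()
        except ConnectionError:
            # Suggestion 2: drop the dead connection and decrement
            # the counter so max_connections is not exhausted.
            connection.disconnect()
            self._created_connections -= 1
            connection = self.make_connection()
        return connection
```

Handing back a dead connection now replaces it with a fresh one while keeping _created_connections accurate.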


ngo commented May 25, 2016

We also faced this issue recently. Is there any timeline on merging the pull request? It's been a year.


taylor-cedar commented Nov 12, 2017

Running into this issue as well. Looks like #886 fixes the issue. What is the process for getting it merged?


Looks like the health_check_interval parameter introduced in 3.3 may help resolve this?


github-actions bot commented Dec 3, 2020

This issue is marked stale. It will be closed in 30 days if it is not updated.

@github-actions github-actions bot added the Stale label Dec 3, 2020
@github-actions github-actions bot closed this as completed Jan 2, 2021