
ConnectionPool doesn't reap timeout'ed connections #306

Closed
rslinckx opened this issue Dec 6, 2012 · 8 comments

@rslinckx commented Dec 6, 2012

The current ConnectionPool implementation doesn't disconnect redis connections until they are used again and found dead.

This means that if I have a huge peak of concurrent connections, the Redis server will close them for inactivity a few seconds later, but the TCP connections will still live within the process and end up in CLOSE_WAIT state, which can eventually cause a "too many open files" error.

Maybe the release() method could check whether any connections are too old and close them if appropriate (for example, by storing a last_used timestamp along with each connection).

Or any other solution that prevents a lot of CLOSE_WAIT connections from piling up until I restart my process. Any ideas?
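Something along these lines is what I have in mind. It's only a rough sketch against the private ConnectionPool internals (_available_connections, _in_use_connections, _created_connections); the idle_timeout parameter and _last_used bookkeeping are made-up names, not part of the library:

import time

import redis


class ReapingConnectionPool(redis.ConnectionPool):
    "Connection pool that drops connections left idle longer than idle_timeout."

    def __init__(self, idle_timeout=300, **kwargs):
        self.idle_timeout = idle_timeout
        self._last_used = {}  # connection -> timestamp of last release()
        super(ReapingConnectionPool, self).__init__(**kwargs)

    def release(self, connection):
        self._last_used[connection] = time.time()
        super(ReapingConnectionPool, self).release(connection)
        # Reap idle connections so they don't linger in CLOSE_WAIT after the
        # server has closed its side.
        cutoff = time.time() - self.idle_timeout
        for conn in list(self._available_connections):
            if self._last_used.get(conn, cutoff) < cutoff:
                self._available_connections.remove(conn)
                conn.disconnect()
                # Keep the pool's accounting in sync with reality.
                self._created_connections -= 1
                self._last_used.pop(conn, None)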

@LotharSee commented Jan 8, 2015

I'm facing the same issue and I see it was already reported.

A simpler solution would be to provide a connection_timeout parameter (which should match the timeout setting in the Redis configuration); we would then keep track of the last time each connection in the pool was used. If one is too old, we just close/drop it from the pool without raising an error.

Another alternative would be to catch and hide these connection timeouts (again, behind a parameter such as drop_timeout_connections or whatever).

In both cases, one difficulty is being able to catch exactly this kind of exception (a timeout of a connection from the pool) without mixing it up with other ConnectionError cases.
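To make the connection_timeout idea concrete, the pool-side value could be read from the server so the two stay in sync. Rough sketch only; the variable names are made up:

import redis

r = redis.Redis()
# Redis closes idle clients after `timeout` seconds (0 means never).
server_idle_timeout = int(r.config_get("timeout")["timeout"])
# A pool-side reaper would then drop any pooled connection that has been
# idle for longer than server_idle_timeout.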

@harshadptl commented Feb 2, 2015

Same issue for me.

I think the get_connection method should check and weed out old connections.

@pranjal5215 commented Feb 2, 2015

import redis


class CustomConnectionPool(redis.ConnectionPool):
    """
    Custom overridden ConnectionPool to handle server
    timeouts. Check in get_connection whether the connection
    has been timed out by the server; if so, make a new one.
    """
    def __init__(self, **kwargs):
        super(CustomConnectionPool, self).__init__(**kwargs)

    def get_connection(self, command_name, *keys, **options):
        "Get a connection from the pool and check for a timeout."
        self._checkpid()
        try:
            connection = self._available_connections.pop()
        except IndexError:
            connection = self.make_connection()
        try:
            # Probe the connection; a server-closed socket raises ConnectionError.
            connection.send_command("PING")
            connection.read_response()
        except redis.ConnectionError:
            connection.disconnect()
            connection = self.make_connection()
        self._in_use_connections.add(connection)
        return connection

We tried overriding the get_connection() method of redis.ConnectionPool. If the server has timed out the connection, the PING raises ConnectionError; in that case we discard it, create a new connection, and hand that out from the pool instead.

@scosgrave commented May 9, 2015

Same issue here. I think the solution from @pranjal5215 looks pretty good, except for a few things:

  1. I think you can move the send_command and read_response into the first try and have two excepts.
  2. You should decrement _created_connections (self._created_connections -= 1) right after the disconnect() in the except; otherwise you will eventually hit the maximum number of allowed connections. (Both changes are sketched below.)
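Roughly, that gives something like this. It still relies on the same private pool internals as the snippet above, so treat it as an illustration rather than a drop-in fix:

import redis


class CustomConnectionPool(redis.ConnectionPool):
    def get_connection(self, command_name, *keys, **options):
        "Get a connection from the pool, replacing dead ones."
        self._checkpid()
        try:
            connection = self._available_connections.pop()
            # Probe the pooled connection; if the server already closed it,
            # this raises ConnectionError.
            connection.send_command("PING")
            connection.read_response()
        except IndexError:
            # Pool was empty; make a brand-new connection.
            connection = self.make_connection()
        except redis.ConnectionError:
            # Dead connection: drop it and fix the accounting so
            # make_connection() doesn't run into max_connections.
            connection.disconnect()
            self._created_connections -= 1
            connection = self.make_connection()
        self._in_use_connections.add(connection)
        return connection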

@ngo commented May 25, 2016

We also faced this issue recently. Is there any timeline on merging the pull request? It's been a year.

@taylor-cedar commented Nov 12, 2017

Running into this issue as well. Looks like #886 fixes the issue. What is the process for getting it merged?

@phillbaker commented Dec 3, 2019

Looks like the health_check_interval parameter introduced in 3.3 may help resolve this?
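Something like this, I believe (redis-py >= 3.3); connections that have been idle longer than the interval get a PING before the next command and are re-established if the server closed them:

import redis

# health_check_interval is in seconds; it is passed through to each Connection.
pool = redis.ConnectionPool(host="localhost", port=6379, health_check_interval=30)
r = redis.Redis(connection_pool=pool)
r.ping()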

@github-actions bot commented Dec 3, 2020

This issue is marked stale. It will be closed in 30 days if it is not updated.

@github-actions github-actions bot added the Stale label Dec 3, 2020
@github-actions github-actions bot closed this Jan 2, 2021