

Redis leaks connections #6

loki42 opened this Issue · 2 comments



If I do not specify a connection pool, connections are leaked continuously until the process hits the max open files limit, at which point redis uses 100% of the CPU and fails silently unless debug is set. The latest version of redis has fixed the silent nature of the failure.
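For context, the configuration that leaks is the same middleware setup without a connection_pool argument; a rough sketch, using the same names as the pooled version further down (the session key is just a placeholder):

from beaker.middleware import SessionMiddleware

def setup_redis_session(app):
    # No connection_pool passed, so each request ends up opening its own
    # connection to Redis and nothing ever returns it to a pool.
    return SessionMiddleware(app, type="redis", url="localhost:6379", key="musicfilmcomedy")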

Testing with ab -n 1000 -c 10, the connection count climbs and doesn't fall until the connections time out.

netstat -tn|grep 6379|wc -l
netstat -tn|grep 6379|wc -l

If I specify a pool:

import redis
from beaker.middleware import SessionMiddleware

def setup_redis_session(app):
    redis_pool = redis.ConnectionPool(host='localhost', port=6379, db=0)
    return SessionMiddleware(app, type="redis", url="localhost:6379", key="musicfilmcomedy", connection_pool=redis_pool)

After many requests:
netstat -tn|grep 6379|wc -l
netstat -tn|grep 6379|wc -l
netstat -tn|grep 6379|wc -l

I tested with uWSGI and CherryPy. I'm using the latest git code.


@loki42 Which version of redis-py are you using?

A ConnectionPool should already be created by default during Redis.__init__ (which you mentioned).

The beaker_extension itself doesn't have a separate redis-py requirement; it uses whatever version you have installed.
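For reference, redis-py builds that default pool roughly like this in Redis.__init__ (a paraphrased sketch, not the actual source), which is why you should get pooling even without passing connection_pool yourself:

# Paraphrased sketch of redis-py's Redis.__init__ behaviour, not the real source.
class Redis(object):
    def __init__(self, host='localhost', port=6379, db=0,
                 connection_pool=None, **kwargs):
        if connection_pool is None:
            # A default pool is built from the connection arguments.
            connection_pool = ConnectionPool(host=host, port=port, db=db, **kwargs)
        self.connection_pool = connection_pool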


My redis-py is 2.4.5, and I just noticed this in the redis-py changes file:

* Fixed a bug where some connections were not getting released back to the
  connection pool after pipeline execution.

I assume this is the bug I'm seeing. I'll upgrade redis-py and see if that fixes it.
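If that changelog entry is the culprit, the relevant cycle is a connection being checked out of the pool and released back after use; roughly, as an illustration only, using ConnectionPool's public get_connection/release methods:

import redis

pool = redis.ConnectionPool(host='localhost', port=6379, db=0)
conn = pool.get_connection('PING')  # checked out of the pool
try:
    pass  # ...commands run on conn here...
finally:
    # Per the changelog, the bug was that connections were not released
    # back to the pool after pipeline execution, so they piled up.
    pool.release(conn)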


I've just run ab a few times with CherryPy and uWSGI, using version 2.4.11 of redis-py, and I'm getting the same thing.

netstat -tn|grep 6379|wc -l

Then I ran it with the pool:
netstat -tn|grep 6379|wc -l


I think my pull request #17 addresses this issue.
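For anyone who needs a workaround in the meantime, the general shape of a fix is to keep one ConnectionPool per Redis server and reuse it for every client, rather than letting each session create fresh connections. A hypothetical sketch (names are illustrative; this is not the content of the pull request):

import redis

_pools = {}  # hypothetical module-level cache: one pool per (host, port, db)

def _get_pool(host, port, db=0):
    key = (host, port, db)
    if key not in _pools:
        _pools[key] = redis.ConnectionPool(host=host, port=port, db=db)
    return _pools[key]

# A backend would then create its client with the shared pool:
# client = redis.Redis(connection_pool=_get_pool('localhost', 6379))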

@didip closed this