We seem to keep redis connections open? #165

Open
groovecoder opened this issue Dec 31, 2015 · 4 comments

@groovecoder
Owner

When I got back to my https://groovecord2.herokuapp.com instance, I got these errors:

2015-12-31T17:29:39.652484+00:00 app[worker.1]: events.js:85
2015-12-31T17:29:39.652502+00:00 app[worker.1]:       throw er; // Unhandled 'error' event
2015-12-31T17:29:39.652503+00:00 app[worker.1]:             ^
2015-12-31T17:29:39.652504+00:00 app[worker.1]: Error: ERR max number of clients reached

Seems like something in the code is leaving redis connections open.

I was able to fix my instance by killing some idle redis connections:

lcrouch:discord lcrouch$ heroku redis:cli
 ▸    WARNING: Insecure action.
 ▸    All data, including the Redis password, will not be encrypted.
 ▸    To proceed, type groovecord2 or re-run this command with --confirm groovecord2

> groovecord2
Connecting to laughing-vastly-5158 (REDIS_URL):
ec2-50-16-205-207.compute-1.amazonaws.com:12019> CLIENT LIST
id=545223 addr=10.93.192.136:56203 fd=6 name=observatory age=6616 idle=1 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=ping
id=545250 addr=10.137.151.89:44719 fd=7 name= age=5962 idle=1 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=zrangebyscore
id=545251 addr=10.137.151.89:44720 fd=8 name= age=5962 idle=3337 flags=N db=0 sub=1 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=subscribe
id=545252 addr=10.137.151.89:44721 fd=9 name= age=5962 idle=3337 flags=b db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=blpop
id=545253 addr=10.137.151.89:44722 fd=10 name= age=5962 idle=1 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=evalsha
id=545671 addr=10.137.151.89:52797 fd=11 name= age=43 idle=43 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=select
id=545674 addr=68.0.123.163:33572 fd=12 name= age=9 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=0 omem=0 events=r cmd=client

ec2-50-16-205-207.compute-1.amazonaws.com:12019> CLIENT KILL 10.137.151.89:44720
OK
ec2-50-16-205-207.compute-1.amazonaws.com:12019> CLIENT KILL 10.137.151.89:44721
OK

And (hopefully) preventing the problem from happening again by setting the timeout to 60 seconds:

heroku redis:timeout --seconds 60
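
(For reference, I believe this just sets Redis's server-side idle timeout, which should be roughly the same as running the following against the instance directly:

CONFIG SET timeout 60

i.e. drop any client connection that has been idle for more than 60 seconds.)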

But, it seems like something in the code should be closing these connections?

@openjck openjck added the bug label Dec 31, 2015
@Faeranne
Contributor

Faeranne commented Jan 7, 2016

Maybe this is also related to the redis servers becoming inaccessible on Heroku's end? In the last couple of weeks I've received several emails from Heroku saying the servers are non-responsive.

@openjck
Contributor

openjck commented Jan 20, 2016

Looking into this. I'm tempted to say it's related to us using createQueue inside a loop, but the Kue documentation says that createQueue returns a singleton:

Note that unlike what the name createQueue suggests, it currently returns a singleton Queue instance. So you can configure and use only a single Queue object within your node.js process.
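
For what it's worth, hoisting the createQueue call out of the loop would make that singleton usage explicit either way. A rough sketch of what I mean (the job type, data, and loop here are placeholders, not our actual code):

var kue = require('kue');

// Create the (singleton) queue once, outside the loop, and re-use it.
var queue = kue.createQueue();

items.forEach(function (item) {
  // Same queue object on every iteration, so no extra redis connections.
  queue.create('example-job', { item: item }).save();
});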

I'll keep digging...

@openjck
Contributor

openjck commented Jan 20, 2016

Another possibility is that we need to do a graceful shutdown in situations when the connections would otherwise be left open, for example when the application crashes.
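
If that's the case, Kue documents a graceful shutdown we could wire up to the SIGTERM Heroku sends when it cycles a dyno. A rough sketch (the 5-second timeout is arbitrary):

var kue = require('kue');
var queue = kue.createQueue();

// Heroku sends SIGTERM before stopping a dyno; give Kue a few seconds to
// finish active jobs and close its redis connections before exiting.
process.once('SIGTERM', function () {
  queue.shutdown(5000, function (err) {
    console.log('Kue shutdown:', err || '');
    process.exit(0);
  });
});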

There's other work that's more important right now, so I'm going to hop off this for a while.

@groovecoder
Owner Author

We may be able to fix this by adding the heroku-buildpack-redis buildpack. It will create a stunnel proxy connection for the web and worker processes to connect to redis, which should effectively give them a connection pool ... ?
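
If we go that route, I think adding it is just (assuming the buildpack still lives under the heroku org on GitHub):

heroku buildpacks:add https://github.com/heroku/heroku-buildpack-redis -a groovecord2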
