CPU 100% with Redis Store #862

mindon opened this Issue Apr 28, 2012 · 14 comments



When I use cluster & the Redis store, CPU usage easily reaches 100%.

With strace, I see masses of these:

write(2358, "HTTP/1.1 200 OK\r\nContent-Type: t"..., 179) = 179
write(7, "*3\r\n$7\r\npublish\r\n$9\r\nhandshake\r\n"..., 397) = 397
read(2359, "GET /socket.io/1/?t=133560705283"..., 65536) = 124

What's wrong? (A few slowlog entries also show up in redis-cli.)

When running in a single process without the Redis store, everything works fine.

Is there a recommended Redis configuration that works best with socket.io? Thanks.
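For context, the store is wired up roughly like the snippet from the socket.io wiki of the time. This is a sketch, not my exact config: the worker count, port, and the `require` paths are placeholders and may differ from your install.

```javascript
// Sketch of a cluster + RedisStore setup (socket.io 0.9-era API).
// Worker count and port are placeholders.
var http = require('http')
  , cluster = require('cluster')
  , sio = require('socket.io')
  , RedisStore = require('socket.io/lib/stores/redis')
  , redis = require('socket.io/node_modules/redis');

if (cluster.isMaster) {
  for (var i = 0; i < 4; i++) cluster.fork();
} else {
  var server = http.createServer().listen(3000);
  var io = sio.listen(server);

  // Three separate Redis connections: publisher, subscriber, and a
  // general-purpose client, as the store expects.
  io.set('store', new RedisStore({
      redisPub: redis.createClient()
    , redisSub: redis.createClient()
    , redisClient: redis.createClient()
  }));
}
```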


That is happening to me too. Any suggestions?


I'm having the same issue... a single process works fine, but if I use cluster & the Redis store, all 8 CPUs go up to 100%.

My clients connect and disconnect from socket.io several times per minute (every time they visit a new page on my site, in fact), so maybe there is too much overhead in the Redis pub/sub.



When you use Redis with socket.io, the server-side socket event "disconnect" isn't triggered.

You can inspect the io.connected collection on the server and watch connections accumulate as a single client connects and disconnects again and again.

It's possible that this generates a memory leak.

I wrote to the socket.io author about this topic.



Experiencing the same issue here; it reaches 100% CPU on all processes across 15 machines. I have to restart the system every 1-3 hours.


I've also experienced this very consistently. Investigating.


Probably related: #686


Having lots of trouble with my scaled-up socket.io app, and this looks like a likely culprit. I'm having to restart my servers constantly whenever they're under any significant load.

Screenshot of my profiler: http://i.imgur.com/AUHYX.jpg


After deploying socket.io with RedisStore in production, it became clear that it actually makes all the app servers crash rather than scale. One app process was barely able to serve a few hundred sockets without going into a death spiral, and the more app processes I added, the worse it got.

After migrating everything to engine.io and using oil for rooms/broadcasting/reconnection and amino for multi-server support, I was able to handle 15k sockets easily with 5 servers. The difference was insane! I would definitely not recommend using RedisStore at all, at least in its current state.


@carlos8f yep, I'm drastically changing RedisStore in 1.0


I've experienced the same problem. I think it's caused by a Redis subscription leak:
every connected socket adds 3 Redis subscriptions that are never unsubscribed.
I've made changes that seemingly didn't break anything for me, but they obviously remove some functionality from the library.

My problem is solved for the time being, but I'm afraid that dropping the Redis subscriptions isn't the proper way to fix it. Looking forward to an official fix.


I'm also seeing this with a few hundred clients using RedisStore and cluster. The CPU goes to 100 percent and the server becomes unusable.


@daeq's hack worked for me, so it does look like the pub/sub subscriptions are leaking.

@shapeshed shapeshed added a commit to shapeshed/socket.io that referenced this issue Nov 12, 2012
@shapeshed shapeshed fix leaking message:id, disconnect:id #1081 #1064 #862 6256f56


This problem did not occur until we went live and the number of connections passed 300. Luckily, removing Redis as the store solved the problem.


Can anyone reproduce this in a development environment? I've had similar problems in production but even with considerable effort can't seem to reproduce it in a development environment, even with setting up multiple servers with RedisStore.

@rhoot rhoot pushed a commit to Pingdom/socket.io that referenced this issue Jul 25, 2013
@shapeshed shapeshed fix leaking message:id, disconnect:id #1081 #1064 #862 1b2c601
@dannymidnight dannymidnight pushed a commit to dannymidnight/node-push-server that referenced this issue Aug 6, 2015
@sinamt sinamt Remove usage of RedisStore, too buggy..
See socketio/socket.io#862, 100% cpu usage
This issue was closed.