
CPU utilization #249

tomerpeled opened this Issue Feb 12, 2013 · 5 comments




Great work :)

I'm having an issue while running some stress tests on StatsD.
The CPU hits 100% because of Node.js, and the Carbon agent receives only a handful of metrics instead of thousands.
If I use a different implementation of StatsD, https://github.com/armon/statsite
(which is written in C), with the same test, the results improve significantly:
instead of 300 metrics per 10 seconds, Carbon handles 17,000 metrics per second, and CPU usage stays below 40%.

Is there some special configuration that I need to set?
Please note that I'm running my StatsD services on VirtualBox; does that matter?
Is there any known issue with Node.js on virtual machines?



mrtazz commented Feb 12, 2013

This is most likely because once StatsD gets flooded with packets and saturates the CPU, it stops sending anything to Graphite (or any other backend). Our instance of StatsD runs on a physical box and handles > 20k packets/s, sending > 15k metrics to Graphite. The threshold is probably much lower when running inside VirtualBox.

I understand; I guess this is due to a limitation of Node.js, since it isn't multi-threaded.
We could, however, add support for Node.js cluster (http://nodejs.org/api/cluster.html),
which would probably solve the performance issue.
But a rate of 15,000 metrics is good enough, I believe...


draco2003 commented Mar 5, 2013

Hey Tomer,

Thanks for the feedback. I've recently been working on reducing CPU utilization and optimizing the current single-threaded version. Please also see the related issue and response about adding cluster/multi-threaded/multi-process functionality in #250.

If you don't mind sharing your stress-testing scripts/process, that would also be helpful in reproducing what you are seeing.


timbunce commented Mar 17, 2013

For the record, I see there's a Perl version of StatsD that can apparently handle 20-70% more traffic. The note in the blog post about tuning net.core.rmem_max is interesting.
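For anyone trying that tuning: a sketch of the kernel knob in question, with illustrative values (the specific buffer size is an assumption, not from the blog post). Raising `net.core.rmem_max` lets a process request a larger UDP receive buffer, so bursts of packets get queued instead of dropped.

```shell
# Show the current maximum socket receive buffer (bytes).
sysctl net.core.rmem_max

# Raise it for the running kernel (value is illustrative, ~25 MB).
sudo sysctl -w net.core.rmem_max=26214400

# Persist the setting across reboots.
echo 'net.core.rmem_max = 26214400' | sudo tee -a /etc/sysctl.conf
```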


mrtazz commented May 12, 2013

Closing this since I think the question is answered. If there is still any confusion or you don't feel this is answered, feel free to reopen the issue.

@mrtazz mrtazz closed this May 12, 2013
