Concurrency fault - issue. #449

kr1zmo opened this Issue · 13 comments

3 participants


I was testing concurrency against my server and was able to lock up the DocPad Node.js process; it hung at 100% CPU usage. Using Apache Bench I pushed over 100,000 connections locally at a concurrency level of 1000 requests per second. The server eventually stopped responding to all connections, yet the process kept running, even after I stopped all testing and dropped the concurrency to zero. I will post a dump below. You will notice that an "accept EMFILE" log line appears once the concurrency level becomes too high. From the little research I have done, this error usually means too many open files/sockets. I know it is possible to raise the socket limit with ulimit, but is there any way to keep the waiting requests in the event pool, finish the prior requests, and then let the waiting requests open sockets? This may not be an issue with DocPad. One odd thing is the process hanging at around 100% CPU usage. Ideas, people? I also ran the same Apache Bench test against my Geddy.js app and the problem did not occur; Express.js is not at the heart of Geddy's core.

I am not sure if this is an Express.js issue, since DocPad uses it.

info: Welcome to DocPad v6.21.10
info: Plugins: coffeekup, eco, marked
info: Environment: production
info: DocPad listening to http://localhost:3002/ on directory /var/local/www/project/site/out
info: Generating...
  Rendering the extension "eco" to "html" on "" didn't do anything.
  Explanation here:
  Rendering the extension "eco" to "html" on "__pages/" didn't do anything.
  Explanation here:
info: Generated all 5 files in 0.105 seconds
info: Watching setup starting...
info: Watching setup
info: The action completed successfully
error: An error occured:
Error: accept EMFILE
    at errnoException (net.js:770:11)
    at TCP.onconnection (net.js:1018:24)

Probably solved; solution found. It is odd to me that another app I have running doesn't produce this error under the same testing. Maybe someone can shed light on that.


Perhaps it is the issue that hyperquest is having a rant about?


Probably. I increased the socket limit and it seemed to resolve it, but with a lower limit the server locked up and I couldn't get the process to stop hanging at 100% CPU. I did not try very hard.


Perhaps we can handle the pooling ourselves, via something like a maxConnections config option. As it is, we try to respond to every single request, but we could add special handling such as our own connection pool, or simply drop connections beyond the allowed maximum. That said, this seems like something that should be handled by Node rather than by us.


I completely agree. Check out the stability of hyperquest and reach out to its developer to see whether it is worth implementing. In the future the HTTP module may support these features natively; I haven't checked the documentation in a long time, so it may already.


I wonder if this was fixed in newer Node versions, as it is a very low-level issue?


Not sure, any idea who we can ping for more help?


I'm not 100% sure this is the right issue to report this on, but I keep receiving this error all the time:

Error: EMFILE, too many open files

Seems like this is a common issue with Node.js, and I found some solutions:


A Linux account has a limit (ulimit) which defines per-user resource limits, including the maximum number of open file descriptors. Node is complaining that it cannot open any more files because of that restriction, and errors out.

I'm not exactly sure how it could be solved without raising the ulimit to unlimited, which creates some stability worries. The other work-around is to keep requests in a pool while ensuring that the number of active requests never exceeds the set ulimit.

If I've misunderstood anything, I hope someone with low-level systems knowledge will correct us.

Related issue:


@balupton I was able to locate how Isaacs overcomes this error with his enhancements to the file-system module.

If you want to handle ulimit issues directly in DocPad, this may be the route to go: DocPad reads many files, and larger src directories may suffer from these issues.


bevry/safefs should also overcome the EMFILE issue, as it limits the number of files open at once. To be extremely pedantic, we could layer safefs's limiting on top of graceful-fs, I guess.


I'm sure safefs is good; what worries me is the error on HTTP requests and DocPad hanging. It seems to occur when the ulimit is low.


This was probably solved recently: we implemented actions for file models, and safefs now uses graceful-fs under the hood.

balupton closed this