Memory Leak #42
memwatch heapDiff after one minute and after idling shows two concerning subjects:
No smoking guns in your code above -- seems like it's in the library. There are buffers (pass-through streams) between channels and the socket, but they ought to clear once you stop publishing. Thanks for the report! I'll take a look in the morning.
OK, no smoking guns, but some suspicious characters. Firstly, the consumer code is acknowledging messages with low throughput (up to 100/s), and you're publishing up to a thousand messages a second. So messages will be backed up in RabbitMQ -- although, if you only leave it running for a minute, it's unlikely to cause RabbitMQ any difficulty. Needless to say, these examples are really not the way to send or receive a lot of messages quickly!

But more importantly, your consumer process won't have done very much work after a minute, and is probably still receiving messages for some time after that. So it's not surprising if the heap grows. What do you mean by "idling" (how can you tell?) and how long after starting to idle did you measure the heap?

After inserting memwatch code in both scripts, and using a timeout of five minutes, I found that the heap topped out at about 5MB, then ran level at a bit under 4.5MB afterwards. I left the processes running for ten minutes after all the messages had been drained from RabbitMQ, and the heaps didn't increase (or decrease) after that either. This suggested to me that around 4MB is basically the overhead of a warm VM. I wrote an HTTP server and client to test this -- and yes, they both level off at about 4MB.
Let me back up for a second and start by saying why I wrote this example in the first place: Our app, in practice, has a few hundred messages being routed to each consumer per second at maximum. If left running indefinitely, the consumer will eat up all available memory (gigs) and crash over the course of an hour or so. Consumers acknowledge messages within a few milliseconds of arrival, and our RabbitMQ queue flushes quickly. The example was not meant to acknowledge messages quickly.

What I mean by "idling" is that all messages have been acknowledged and the queue has been empty for some time. When I generate a heap dump during this time on a consumer, I see native Buffers of 1024 bytes apiece lying around as the "heaviest" retainers that never seem to get GC'd.
You know what, don't sweat this until I make 100% sure that another library I'm using isn't leaking against an eventemitter. |
Well, I've learned my lesson about using 'webkit-devtools-agent' to remotely generate heap snapshots. It always shows a retained write buffer against the websocket it's using to communicate with the browser, which is a total red herring. After generating a heap dump in-code, it was abundantly clear what was going on. An eventemitter leak, as per usual (not in the example I pasted). Thanks for your time and concern. Consider this closed.
Ok, thanks Gabriel. |
NOTE: Second comment contains a memwatch heapDiff
We've noticed that memory is being retained indefinitely when consuming from a channel. I'm hoping that I'm making some poor assumptions or doing something terribly wrong.
The following is a quick example of how to reproduce this. In this example, I'm continuously publishing to a queue every millisecond until one minute has passed. After one minute, I'm clearing the interval. The consumer's memory grows continuously over the course of that minute, but stays consistently high after it stops receiving messages. I've run this same test using the current #master branch, seeing that you've committed a fix to some global leaks. This hasn't fixed the issue.
consumer.js
producer.js
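The original consumer.js and producer.js snippets weren't preserved in this scrape. A hedged reconstruction from the description above (publish every 1 ms, stop after 60 s, ack on arrival) might look like the following; the URL, queue name, and function names are assumptions, and it presumes amqplib's promise API and a local RabbitMQ broker:

```javascript
// Hypothetical reconstruction of the repro scripts described above.
// Requires a RabbitMQ broker on localhost; amqplib is loaded lazily so
// this file parses without it installed.

// producer.js (sketch): publish every millisecond, stop after one minute
async function produce(queue = 'leak-test') {
  const amqp = require('amqplib');
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue(queue, { durable: false });

  const timer = setInterval(() => {
    ch.sendToQueue(queue, Buffer.from('payload'));
  }, 1);

  // After one minute, stop publishing but keep the process alive so
  // memory can be observed while "idling".
  setTimeout(() => clearInterval(timer), 60 * 1000);
}

// consumer.js (sketch): ack each message promptly on arrival
async function consume(queue = 'leak-test') {
  const amqp = require('amqplib');
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue(queue, { durable: false });

  await ch.consume(queue, (msg) => {
    if (msg !== null) ch.ack(msg); // memory should level off once the queue drains
  });
}

module.exports = { produce, consume };
```

With both processes left running after the interval clears, the report above is that the consumer's heap stays high instead of returning to the warm-VM baseline.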