Memory Leak #42

Closed

objectundefined opened this issue Feb 13, 2014 · 7 comments

@objectundefined

NOTE: Second comment contains a memwatch heapDiff

We've noticed that memory is being retained indefinitely when consuming from a channel. I'm hoping that I'm making some poor assumptions or doing something terribly wrong.

The following is a quick example of how to reproduce this. In this example, I'm continuously publishing to a queue every millisecond until one minute has passed, then clearing the interval. The consumer's memory grows continuously over the course of that minute, but stays consistently high after it stops receiving messages. I've also run the same test against the current master branch, since you've committed a fix for some global leaks, but that hasn't fixed the issue.

consumer.js

var amqplib = require('amqplib');
var when = require('when');
var connPromise = amqplib.connect();
var queueName = 'bench_queue';

// Open a channel, set a prefetch of 10, and assert the queue before consuming.
function getChannelFixture (qn) {
    return connPromise.then(function(conn){
        return conn.createChannel().then(function(ch){
            return when.all([
                ch.prefetch(10),
                ch.assertQueue(qn, {durable: true})
            ]).then(function(){
                return when(ch);
            });
        });
    });
}

getChannelFixture(queueName).then(function(ch){

    // Log each message and ack it 100ms after it arrives.
    ch.consume(queueName, function(m){
        console.log('\tgot message %s', m.content.toString('utf8'));
        setTimeout(function(){
            ch.ack(m);
        }, 100);
    });

}).then(null, function(err){
    console.warn(err);
});

producer.js:

var amqplib = require('amqplib');
var when = require('when');
var connPromise = amqplib.connect();
var queueName = 'bench_queue';

// Open a channel and assert the queue before publishing.
function getChannelFixture (qn) {
    return connPromise.then(function(conn){
        return conn.createChannel().then(function(ch){
            return ch.assertQueue(qn, {durable: true}).then(function(){
                return when(ch);
            });
        });
    });
}

getChannelFixture(queueName).then(function(ch){
    var ct = 0;
    // Publish a small JSON message every millisecond...
    var pubInterval = setInterval(function(){
        var msg = { ct: ++ct, foo: 1, bar: 2, baz: 3, time: Date.now() };
        console.log('publishing message %s', msg.ct);
        ch.sendToQueue(queueName, new Buffer(JSON.stringify(msg)), {
            deliveryMode: true,
            contentType: 'application/json'
        });
    }, 1);

    // ...and stop after one minute.
    setTimeout(function(){
        console.log('stopping publish interval');
        clearInterval(pubInterval);
    }, 60000);

}).then(null, function(err){
    console.warn(err);
});
@objectundefined
Author

A memwatch heapDiff taken after the minute of publishing and some further idling shows two concerning growth areas:

{
    "before": {
        "nodes": 11611,
        "time": "2014-02-13T21:55:10.000Z",
        "size_bytes": 1479496,
        "size": "1.41 mb"
    },
    "after": {
        "nodes": 33442,
        "time": "2014-02-13T21:56:10.000Z",
        "size_bytes": 6267856,
        "size": "5.98 mb"
    },
    "change": {
        "size_bytes": 4788360,
        "size": "4.57 mb",
        "freed_nodes": 369,
        "allocated_nodes": 22200,
        "details": [
            {
                "what": "Array",
                "size_bytes": 1141920,
                "size": "1.09 mb",
                "+": 6779,
                "-": 175
            },
            {
                "what": "String",
                "size_bytes": 987728,
                "size": "964.58 kb",
                "+": 4194,
                "-": 16
            },...
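
For reference, a diff like the one above can be captured with roughly the following instrumentation (a minimal sketch; the exact timings and where it sits in the consumer script are assumptions, not the original code):

var memwatch = require('memwatch');

// Take the "before" snapshot at startup...
var hd = new memwatch.HeapDiff();

// ...and the "after" snapshot once publishing has stopped and the process
// has idled for a while, then print the resulting diff.
setTimeout(function(){
    var diff = hd.end();
    console.log(JSON.stringify(diff, null, 4));
}, 120000);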

@squaremo squaremo added the bug label Feb 14, 2014
@squaremo
Collaborator

No smoking guns in your code above -- seems like it's in the library. There are buffers (pass-through streams) in-between channels and the socket, but they ought to clear once you're not publishing.

Thanks for the report! I'll take a look in the morning.

@squaremo
Collaborator

OK, no smoking guns, but some suspicious characters.

Firstly, the consumer code acknowledges messages at low throughput: with a prefetch of 10 and a 100ms delay before each ack, it can ack at most about 100 messages a second, while you're publishing up to a thousand messages a second. So messages will back up in RabbitMQ -- although, if you only leave it running for a minute, that's unlikely to cause RabbitMQ any difficulty. Needless to say, these examples are really not the way to send or receive a lot of messages quickly!

But more importantly, your consumer process won't have done very much work after a minute, and is probably still receiving messages for some time after that. So it's not surprising if the heap grows. What do you mean by "idling" (how can you tell?) and how long after starting to idle did you measure the heap?

After inserting memwatch code in both scripts, and using a timeout of five minutes, I found that the heap topped out at about 5MB, then ran level at a bit under 4.5MB afterwards. I left the processes running for ten minutes after all the messages had been drained from RabbitMQ, and the heaps didn't increase (or decrease) after that either.

This suggested to me that around 4MB is basically the overhead of a warm VM. I wrote an HTTP server and client to test this -- and yes, they both level off at about 4MB.
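
Something along these lines is enough to see it (a rough sketch of the kind of baseline test described; not the actual scripts used, and the port is arbitrary):

var http = require('http');

http.createServer(function(req, res){
    res.end('ok');
}).listen(8080, function(){
    // Issue simple requests at roughly the same rate as the AMQP test.
    setInterval(function(){
        http.get({ port: 8080, path: '/' }, function(res){
            res.resume(); // drain the response so the socket can be reused
        });
    }, 1);

    // Report heap usage once a minute to watch it level off.
    setInterval(function(){
        console.log('heapUsed: %d bytes', process.memoryUsage().heapUsed);
    }, 60000);
});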

@objectundefined
Author

Let me back up for a second and start by saying why I wrote this example in the first place:

Our app, in practice, has a few hundred messages being routed to each consumer per second at maximum. If left running indefinitely, the consumer will eat up all available memory (gigs) and crash over the course of an hour or so.

Consumers acknowledge messages within a few milliseconds of arrival, and our RabbitMQ queue flushes quickly. The example was not meant to acknowledge messages quickly.

What I mean by "idling" is that all messages have been acknowledged and the queue has been empty for some time. When I generate a heap dump during this time on a consumer, I see native Buffers of 1024 bytes apiece lying around as the "heaviest" retainers, and they never seem to get GC'd.

@objectundefined
Author

You know what, don't sweat this until I make 100% sure that another library I'm using isn't leaking against an EventEmitter.

@objectundefined
Author

Well, I've learned my lesson about using 'webkit-devtools-agent' to remotely generate heap snapshots. It always shows a retained write buffer against the websocket it's using to communicate with the browser, which is a total red herring.

After generating a heap dump in-code, it was abundantly clear what was going on: an EventEmitter leak, as per usual (not in the example I pasted). Thanks for your time and concern. Consider this closed.
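
(For anyone who lands here with the same symptom: the usual shape of that kind of leak is a listener added per message or per tick and never removed, something like the illustrative sketch below -- not the actual library or app code involved here.)

var EventEmitter = require('events').EventEmitter;
var bus = new EventEmitter();

setInterval(function(){
    // Each tick adds another closure that the emitter retains for its whole
    // lifetime; the listeners (and anything they capture) are never released.
    bus.on('data', function onData (chunk) {
        /* handle chunk */
    });
}, 1);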

@squaremo squaremo removed the bug label Feb 15, 2014
@squaremo
Collaborator

Ok, thanks Gabriel.
