
Possible memory leak #56

Open
jirihybek opened this issue Aug 28, 2015 · 8 comments

@jirihybek

Hi, I've tried the LSTM module via Architect with the following configuration:

var LSTM = new synaptic.Architect.LSTM(100, 100, 20);

My algorithm then makes about 450-500 iterations, and at each iteration it activates the network and then propagates the correct output values.

var output = LSTM.activate(HistoricalFrame);
//some code
LSTM.propagate(0.5, PredictionFrame);

After 400-500 iterations it suddenly stalls and starts taking about 14 GB of memory. Before this single step it uses about 1 GB.

I don't know whether it is caused by traces or by optimization, but the network is then unusable.

I've tested the rest of the code for memory leaks and found that the problem is caused by the activate function.

This is sample code that demonstrates the issue. On my MacBook Pro with an Intel Core i5 and 8 GB RAM it stops at step 512 and starts eating memory, up to 14 GB.

var synaptic = require("synaptic");

//Define frames
var HistoricalFrame = [];
var PredictionFrame = [];

var HistoricalFrameSize = 100;
var PredictionFrameSize = 20;

var FrameCount = 25000;

//Create LSTM
console.log("Initializing LSTM...");
var LSTM = new synaptic.Architect.LSTM(HistoricalFrameSize, HistoricalFrameSize, PredictionFrameSize);

console.log("Optimizing LSTM...");
LSTM.optimize();

console.log("Starting prediction...");

//Make predictions
for(var FrameIndex = 0; FrameIndex < FrameCount; FrameIndex++){

    console.log(FrameIndex);

    //Add value to frame(s)
    PredictionFrame.push(Math.random());

    //Move first value from prediction frame to historical frame
    if(PredictionFrame.length > PredictionFrameSize){
        HistoricalFrame.push( PredictionFrame.shift() );
    }

    //Throw away first value from historical frame to keep the max size
    if(HistoricalFrame.length > HistoricalFrameSize)
        HistoricalFrame.shift();

    //Activate LSTM when frames are filled
    if(HistoricalFrame.length == HistoricalFrameSize){

        var output = LSTM.activate(HistoricalFrame);
        LSTM.propagate(0.5, PredictionFrame);

    }

}

Any ideas where the problem could be?

@nikcheerla

I have the same error! Using the parameters --max_old_space_size and --max_new_space and setting them to large values doesn't change anything. Making the network smaller (only 20 neurons in the hidden layer) seems to work, but that isn't exactly what I want. Did you manage to fix this somehow?
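For reference, raising the old-space heap limit is typically done when launching Node, roughly like this (the 8192 MB value is only an illustrative number, not one taken from this thread):

node --max_old_space_size=8192 my_script.js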

@jirihybek
Author

Not yet. Maybe resetting the network via the reset method would help, but it clears all traces, so I don't think it would work well.
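A minimal sketch of that idea, assuming synaptic's Network exposes a clear()/reset-style method as mentioned above (the method name and the interval are assumptions, not something verified in this thread):

//Sketch only: periodically reset the LSTM context to drop accumulated state
//(this also wipes the traces, so prediction quality would likely suffer)
if(FrameIndex % 1000 === 0){
    LSTM.clear(); //assumed reset/clear method, as referred to in the comment above
}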

@bobalazek
Contributor

I have the same issue after around 105 iterations. It suddenly jumps from 1 GB to 10 GB of RAM usage.

I have a perceptron with around 700 inputs, 32 hidden neurons and around 160 outputs.
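For context, a network with roughly that shape would be constructed like this (the sizes are only approximations of the numbers above):

var synaptic = require("synaptic");

//Roughly the topology described: ~700 inputs, 32 hidden neurons, ~160 outputs
var perceptron = new synaptic.Architect.Perceptron(700, 32, 160);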

@ghost

ghost commented Sep 10, 2016

What's the status of this issue?

@Jabher Jabher added the bug label Sep 12, 2016
@jirihybek
Author

jirihybek commented Nov 18, 2016

I've done some debugging and traced the propagation function calls. It all seems to be OK. I found that when the issue occurs, the 'propagate' function is not even called. When I changed Float64Array to a standard array '[]', the problem occurred later. So I guess this is some bug in the V8 engine, maybe in memory management or the garbage collector.

But when network optimization is turned off using 'net.setOptimize(false)', this problem does not happen. In another project I've found that V8 has problems with large arrays and objects; nesting helps a lot.

Additionally, my code that uses synaptic works without issues on OS X (Node 5.0, 1M iterations), but when executed on a Linux server (Node 6.9, CentOS, i7, 16 GB RAM) the issue happens after 5000 iterations. So it may depend on OS / hardware.
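Applied to the repro script above, that workaround looks roughly like this (a sketch only; it avoids the memory blow-up at the cost of slower, unoptimized activation):

var LSTM = new synaptic.Architect.LSTM(HistoricalFrameSize, HistoricalFrameSize, PredictionFrameSize);

//Disable the optimized (hardcoded) network so the runaway allocation does not trigger
LSTM.setOptimize(false);

//Note: do not call LSTM.optimize() afterwards; the activate/propagate loop stays the same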

@jirihybek
Author

jirihybek commented Nov 18, 2016

OK, it really does seem to be a V8 bug. But I've found an awful and strange workaround.

1) Float64Array needs to be changed to a standard array in src/network.js

From:

    var hardcode = "";
    hardcode += "var F = Float64Array ? new Float64Array(" + optimized.memory +
      ") : []; ";

to

    var hardcode = "";
    hardcode += "var F = []; ";

2) The Node.js process must be executed with the --inspect flag (so a remote debugger can be attached)

node --inspect my_script.js

It's strange, but when --inspect is provided the issue does not happen anymore.

Update:
When running multiple node processes (separately), the second one suddenly stalls again after approximately 5000 iterations, but without consuming more memory. Maybe another issue?

@Jabher
Collaborator

Jabher commented Nov 18, 2016

Can you please check whether this is also happening with Float32Array?

@jirihybek
Author

jirihybek commented Nov 21, 2016

With Float32Array it rapidly starts eating memory.
With a standard array it just gets stuck with the CPU at 100%, without the memory leak.
