Possible memory leak #56
Comments
I have the same error! Using the parameters --max_old_space_size and --max_new_space and setting them to large values doesn't change anything. Making the network smaller (only 20 neurons in the hidden layer) seems to work, but that isn't exactly what I want. Did you manage to fix this somehow?
Not yet. Maybe resetting the network via the reset method would help, but it clears all traces, so I don't think it would work well.
I have the same issue after around 105 iterations. It suddenly jumps from 1 GB to 10 GB of RAM usage. I have a perceptron with around 700 inputs, 32 hidden neurons, and around 160 output neurons.
What's the status of this issue?
I've done some debugging and traced the propagation function calls, and they all seem to be OK. I found that when the issue occurs, the 'propagate' function is not even called. When I changed Float64Array to a standard array '[]', the problem occurred later. So I guess this is some bug in the V8 engine, maybe in memory management or the garbage collector. But when network optimization is turned off with 'net.setOptimize(false)', this problem doesn't happen.

In another project I found that V8 has problems with large arrays and objects; nesting helps a lot. Additionally, my code using synaptic runs without issues on OS X (Node 5.0, 1M iterations), but when executed on a Linux server (Node 6.9, CentOS, i7, 16GB RAM) the issue happens after 5000 iterations. So it may depend on OS / hardware.
Ok, it really seems to be a V8 bug. But I've found an awful and strange workaround:

1) Float64Array needs to be changed to a standard array. From:

```js
var hardcode = "";
hardcode += "var F = Float64Array ? new Float64Array(" + optimized.memory +
  ") : []; ";
```

to:

```js
var hardcode = "";
hardcode += "var F = []; ";
```

2) The Node.js process must be executed with the --inspect flag (so a remote debugger can be attached).

It's strange but when

Update:
Can you please check whether this is also happening with Float32Array?
With Float32Array it rapidly starts eating memory.
Hi, I've tried the LSTM module via Architect with the following configuration:
My algorithm then runs about 450-500 iterations, and at each iteration it activates the network and then propagates the correct output values.
After 400-500 iterations it suddenly stalls and starts taking about 14 GB of memory. Before this single step it uses about 1 GB.
I don't know if it is caused by traces or by optimization, but the network is then unusable.
I've tested the rest of the code for memory leaks and found that the problem is caused by the
activate
function. This is sample code which demonstrates the issue. On my MacBook Pro with an Intel Core i5 and 8 GB RAM it stops at step 512 and starts eating memory, up to 14 GB.
Any ideas where the problem can be?