Memory leakage when converting to tensor #31419
Comments
Was able to reproduce the error in Google Colab.
@benoitkoenig You are overloading the graph with several variables. In order to clear the graph, I added a line to the end of your code.
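Judging from the follow-up reply below, the added line was presumably `tf.reset_default_graph()`. A minimal sketch of that workaround (assuming TF 1.x graph semantics; written against the `compat.v1` shim so it also runs under TF 2, where on 1.x it would simply be `import tensorflow as tf`):

```python
import numpy as np
import tensorflow.compat.v1 as tf  # TF1-style API; plain `import tensorflow as tf` on 1.x

for i in range(50):  # scaled down from the 5000 iterations in the original repro
    array = np.random.random((256, 256))
    # In TF1 graph mode, each call adds a new, permanent constant to the default graph.
    tf.convert_to_tensor(array, dtype=tf.float32)

# Discard the whole default graph so the accumulated constants can be reclaimed.
tf.reset_default_graph()
```

Note this throws away the entire graph, which is exactly why it does not help when some tensors must survive (the situation described in the next comment).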
@jvishnuvardhan thank you for your answer
The problem I am facing is equivalent to keeping the outside_tensor in the graph in order to print it at the end. In that case, resetting the whole graph won't do. Is there any way to clear one specific tensor from the graph? Here is my specific code:
Calling tf.reset_default_graph, or not using graph.as_default inside the generator, both result in an "Invalid input graph." error. Thanks,
@benoitkoenig Sorry for the delay in my response. There were a lot of improvements between TF 1.14 and TF 1.15. Can you please check with TF 1.15.0, which was released recently, and let us know how it progresses. Thanks!
Hi! Sorry for not getting back here; I'm having issues installing TensorFlow 1.15 on my machine, and I will get back to you as soon as that is done. Just so you know, the test simply consists of running the following code:
and making sure no memory leakage is observed (a leak does occur with TensorFlow 1.14). Benoît
@benoitkoenig Is this still an issue? I suspect there are no more updates to TF 1.x unless there are security-related issues. Can you please try TF 2.x and let us know how it progresses. I used a recent version to check. Please close the issue if this was already resolved for you. Thanks!
@benoitkoenig Can you please check my last response? Thanks!
Hi @jvishnuvardhan,
I added the following to the snippet:

```python
print('\n\n\n\n')
outside_tensor = tf.convert_to_tensor(2)
```

I then uninstalled TensorFlow and ran the test again. Regarding your gist: when I execute it, the "RAM" and "Disk" fields seem stable, but execution stops at the 57th iteration. Let me know if I can help you any further,
I suspect the leak comes from how TF 1 builds graphs. In TF 1, whenever you call convert_to_tensor, a new constant is created in the graph. These constants are permanent: removing them is not easy, and in general you want to avoid creating too many anyway. In TF 2 this is not a problem, because the execution model has radically changed and is more intuitive. But in TF 1, you should consider using a single placeholder and feeding it different values instead.
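The single reused node suggested above is presumably the classic TF1 placeholder/feed_dict pattern: build one graph node once, then stream the NumPy arrays through it at run time so the graph stops growing. A rough sketch, assuming TF 1.x semantics (via the `compat.v1` API so it also runs under TF 2; `doubled` is a made-up stand-in computation, not from the thread):

```python
import numpy as np
import tensorflow.compat.v1 as tf  # TF1-style API; plain `import tensorflow as tf` on 1.x

tf.disable_eager_execution()  # run in graph mode, as TF 1 did

# Build the graph ONCE: a single placeholder node is reused on every iteration,
# so no new constants accumulate and memory stays flat.
x = tf.placeholder(tf.float32, shape=(1024, 1024))
doubled = x * 2.0  # stand-in for whatever computation consumes the tensor

with tf.Session() as sess:
    for i in range(3):  # the original repro looped 5000 times
        array = np.random.random((1024, 1024))
        # The data flows in via feed_dict instead of becoming a graph constant.
        result = sess.run(doubled, feed_dict={x: array})
```

The key design point: `convert_to_tensor` bakes each array into the graph as a constant, while `feed_dict` only passes it through at execution time.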
Can confirm that there isn't any memory increase with TF2 (as per Dan's last comment). Closing the issue for now. Please let me know if you run into problems.
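For completeness, the same repro under TF 2's eager execution, where the confirmation above holds because there is no global graph for constants to pile up in. A sketch assuming TF 2.x (iteration count and array size scaled down from the original):

```python
import numpy as np
import tensorflow as tf  # TF 2.x, eager execution by default

for i in range(5):  # the original issue looped 5000 times over 1024x1024 arrays
    array = np.random.random((256, 256))
    t = tf.convert_to_tensor(array, dtype=tf.float32)
    # In eager mode `t` is an ordinary value, not a node added to a global graph,
    # so its memory is released as soon as it is no longer referenced.
```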
```python
import numpy as np
import tensorflow as tf

for i in range(5000):
    print(i)
    array = np.random.random((1024, 1024))
    tf.convert_to_tensor(array, dtype=tf.float32)
```
TensorFlow version is 1.14.0, NumPy version is 1.17.0, Python version is 3.6.8.
The process is killed around i = 2400 on my machine.
The command `watch -d free -m` shows that free memory decreases over time until it gets close to zero, then the process crashes.
I did not find a way to free the memory from the unreferenced tensors
Best,
Benoît