
Memory leak when continuously running assign op #4151

Closed

zhangcx93 opened this issue Sep 1, 2016 · 5 comments

Comments

@zhangcx93

Environment info

Operating System: Ubuntu 16.04
Installed version of CUDA and cuDNN: 8.0 RC, 5.1 (on GTX 1080)
Installed from source:
HEAD: a8512e2
Build target: bazel-out/local-fastbuild/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Thu Jan 01 00:00:00 1970 (0)
Build timestamp: Thu Jan 01 00:00:00 1970 (0)
Build timestamp as int: 0

Problem:

In my application, I need to change the value of some variables and run the minimize op in a loop, so I have to run an assign op repeatedly. Each call adds a new op to the graph, so the program becomes very slow and memory usage explodes.

import time

import numpy as np
import tensorflow as tf

sess = tf.Session()
a = tf.Variable(np.ones((5, 10000, 10000, 3)))
sess.run(tf.initialize_all_variables())

t0 = time.time()
for i in range(10000):
    # Each call to tf.assign() adds new nodes to the graph.
    sess.run(tf.assign(a, np.ones((5, 10000, 10000, 3))))
    t1 = time.time()
    print(t1 - t0)
    t0 = t1
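
To see the growth directly (a diagnostic sketch of my own, not from the original report; it reuses the session and variable a from the snippet above), you can count the nodes in the default graph before and after a call:

num_ops_before = len(tf.get_default_graph().get_operations())
sess.run(tf.assign(a, np.ones((5, 10000, 10000, 3))))
num_ops_after = len(tf.get_default_graph().get_operations())
# num_ops_after > num_ops_before: the call added an Assign node and a Const node.
print(num_ops_before, num_ops_after)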

So is there a way to change the value of a Variable without adding an op to the graph? Or a way to remove the op afterwards? (I have a big graph defined before the loop; I don't want to reset all of it and define it again after reset_default_graph.)
Also, since the value assigned to the variable is different every time, I cannot define the op before the for loop.

deepali-c commented Sep 1, 2016

Could you please try something like:

import time

import numpy as np
import tensorflow as tf

sess = tf.Session()
a = tf.Variable(np.ones((5, 100, 100, 3)))
sess.run(tf.initialize_all_variables())
update_a = tf.assign(a, np.ones((5, 100, 100, 3)))  # Define the update op once, outside the loop.

t0 = time.time()
for i in range(10):
    sess.run(update_a)
    t1 = time.time()
    print(t1 - t0)
    t0 = t1

Please also refer to this SO thread.

@zhangcx93 (Author)

The data assigned to the variable is different in every iteration, so I cannot define the op before the loop.

@zhangcx93 (Author)

For example, in neural art I want to initialize the generated image with the content image: in each iteration I read a new content image and assign it to the variable.

mrry (Contributor) commented Sep 1, 2016

This memory leak is caused by adding new nodes (a tf.assign() node and an implicitly created tf.constant() node) on each iteration of the training loop. This documentation has a guide to tracking down leaks like this.
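
A related way to catch this class of leak early (my own suggestion, not necessarily what the linked guide describes; it assumes the imports from the snippets above and uses a smaller shape for speed) is to freeze the graph with tf.Graph.finalize() once construction is done. Any later attempt to add a node then fails loudly instead of silently leaking:

sess = tf.Session()
a = tf.Variable(np.ones((5, 100, 100, 3)))
sess.run(tf.initialize_all_variables())
sess.graph.finalize()  # Mark the graph read-only.

try:
    # tf.assign() tries to add new nodes, so this now raises an error.
    sess.run(tf.assign(a, np.ones((5, 100, 100, 3))))
except RuntimeError as e:
    print(e)  # "Graph is finalized and cannot be modified."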

The solution for your particular problem is to define a single assign op that takes its input from a tf.placeholder(), and feed different values into that placeholder on each iteration:

import time

import numpy as np
import tensorflow as tf

sess = tf.Session()
a = tf.Variable(np.ones((5, 10000, 10000, 3)))
update_placeholder = tf.placeholder(a.dtype, shape=a.get_shape())
update_op = a.assign(update_placeholder)

sess.run(tf.initialize_all_variables())

t0 = time.time()
for i in range(10000):
    # Obviously, you'd change the value being assigned in each step in a real program.
    sess.run(update_op, {update_placeholder: np.ones((5, 10000, 10000, 3))})
    t1 = time.time()
    print(t1 - t0)
    t0 = t1
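
As a usage sketch tying this pattern back to the neural-art example above: load_content_image is a hypothetical helper standing in for however the content images are actually read, and the shapes are shrunk for illustration.

import numpy as np
import tensorflow as tf

def load_content_image(i):
    # Hypothetical stand-in for reading the i-th content image from disk.
    return np.random.rand(5, 100, 100, 3)

sess = tf.Session()
generated = tf.Variable(np.zeros((5, 100, 100, 3)))
image_ph = tf.placeholder(generated.dtype, shape=generated.get_shape())
init_generated = generated.assign(image_ph)

sess.run(tf.initialize_all_variables())

for i in range(10):
    # A different image is fed each iteration; the graph never grows.
    sess.run(init_generated, {image_ph: load_content_image(i)})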

@Fangyh09

@mrry Hello, the documentation link is no longer available. Could you update it?
