
Memory leak when using tf.layers #11273

Closed
drasmuss opened this issue Jul 4, 2017 · 1 comment
drasmuss commented Jul 4, 2017

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): 1.2.1
  • Python version: 3.5
  • Exact command to reproduce:
import os

import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import psutil


def memory():
    """Return this process's resident set size in GiB."""
    pid = os.getpid()
    py = psutil.Process(pid)
    memory_use = py.memory_info()[0] / 2. ** 30  # bytes -> GiB
    return memory_use


memory_usage = []
for i in range(1000):
    memory_usage.append(memory())
    print("iter", i, memory_usage[-1])

    with tf.Graph().as_default():
        x = tf.constant(np.ones((100, 1000), dtype=np.float32))

        # memory leak
        x = tf.layers.dense(x, units=1000)

        # no memory leak
        # with tf.variable_scope("layer", reuse=False):
        #     x = tf.matmul(x, tf.get_variable(
        #         "w", shape=(1000, 1000), dtype=tf.float32,
        #         initializer=tf.ones_initializer()))

plt.figure()
plt.plot(memory_usage)
plt.xlabel("iterations")
plt.ylabel("memory usage")
plt.show()

Describe the problem

There is some kind of memory leak when repeatedly building graphs that contain tf.layers elements. The example above compares the memory usage of two implementations that I think should be roughly equivalent: one using tf.layers.dense and the other using a manually created kernel variable and matmul op. With tf.layers.dense the memory usage increases continually, whereas with the manual approach memory is periodically reclaimed by garbage collection. My guess is that some internal reference to the tf.layers elements is preventing the graphs from being garbage collected.

[Plot: memory usage vs. iterations without tf.layers.dense — memory is periodically reclaimed]

[Plot: memory usage vs. iterations with tf.layers.dense — memory grows continually]
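One way to test this hypothesis directly is to hold only a weak reference to a Graph and check whether it survives garbage collection. This is a minimal sketch assuming TF 1.x and the same graph-building code as above; if the graph is still reachable from somewhere, the weak reference resolves to it rather than to None:

import gc
import weakref

import numpy as np
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    x = tf.constant(np.ones((100, 1000), dtype=np.float32))
    x = tf.layers.dense(x, units=1000)

# Hold only a weak reference, then drop all strong references.
graph_ref = weakref.ref(graph)
del graph, x
gc.collect()

# Prints a Graph object (rather than None) if something global is still
# holding on to the graph, which is what the rising memory curve suggests.
print(graph_ref())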

drasmuss commented Jul 4, 2017

I believe this is the cause: it keeps a global mapping keyed by every Graph that is created, preventing the graphs from being garbage collected:

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/layers/base.py#L697
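
For reference, here is a minimal sketch of the kind of fix the commits below apply: key the per-graph name-UID registry with weak references so that the registry itself does not keep Graph objects alive. The helper name get_layer_uid is hypothetical and the real code in base.py is organized differently, but the idea is the same:

import collections
import weakref

# Hypothetical stand-in for PER_GRAPH_LAYER_NAME_UIDS: a WeakKeyDictionary
# drops its entry automatically when the Graph key is garbage collected,
# instead of pinning the Graph forever the way a regular dict would.
PER_GRAPH_LAYER_NAME_UIDS = weakref.WeakKeyDictionary()


def get_layer_uid(graph, layer_name):
    # Return a unique per-graph integer suffix for layers named `layer_name`.
    name_uids = PER_GRAPH_LAYER_NAME_UIDS.setdefault(
        graph, collections.defaultdict(int))
    name_uids[layer_name] += 1
    return name_uids[layer_name]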

drasmuss added a commit to drasmuss/tensorflow that referenced this issue Jul 4, 2017
Uses `weakref` so that PER_GRAPH_LAYER_NAME_UIDS doesn't prevent Graphs from being garbage collected.

Fixes tensorflow#11273
drasmuss added a commit to nengo/nengo-dl that referenced this issue Jul 4, 2017
@andydavis1 andydavis1 added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Jul 5, 2017
allenlavoie pushed a commit to allenlavoie/tensorflow that referenced this issue Jul 15, 2017
Uses `weakref` so that PER_GRAPH_LAYER_NAME_UIDS doesn't prevent Graphs from being garbage collected.

Fixes tensorflow#11273
Labels
stat:awaiting tensorflower Status - Awaiting response from tensorflower
3 participants