Memory leak when using tf.layers
#11273
Labels: stat:awaiting tensorflower (Status - Awaiting response from tensorflower)

Comments
I believe this is the cause, as it keeps a global mapping to all the Graphs that are created, preventing them from being garbage collected: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/layers/base.py#L697
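For illustration, here is a minimal pure-Python sketch of that failure mode (a stand-in, not the actual TensorFlow code): a module-level dict keyed on graph objects keeps a strong reference to every graph ever seen, while keying on a `weakref.WeakKeyDictionary` lets graphs be collected once nothing else references them. The `Graph` class and function names are hypothetical.

```python
import gc
import weakref
from collections import defaultdict

class Graph:
    """Stand-in for tf.Graph."""

# A plain module-level dict holds a strong reference to every Graph
# ever used as a key, so those graphs can never be garbage collected.
PER_GRAPH_LAYER_NAME_UIDS = defaultdict(dict)

def leaky_unique_name(graph, name):
    uids = PER_GRAPH_LAYER_NAME_UIDS[graph]
    uids[name] = uids.get(name, 0) + 1
    return "%s_%d" % (name, uids[name])

# A WeakKeyDictionary drops its entry when the Graph is collected.
WEAK_UIDS = weakref.WeakKeyDictionary()

def fixed_unique_name(graph, name):
    uids = WEAK_UIDS.setdefault(graph, {})
    uids[name] = uids.get(name, 0) + 1
    return "%s_%d" % (name, uids[name])

def build_and_discard(namer):
    """Build a graph, name a layer in it, then drop all local refs."""
    g = Graph()
    namer(g, "dense")
    ref = weakref.ref(g)
    del g
    gc.collect()
    return ref() is None  # True if the graph was collected

print(build_and_discard(leaky_unique_name))  # False: the dict pins the graph
print(build_and_discard(fixed_unique_name))  # True: the weak key lets it go
```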
drasmuss added a commit to drasmuss/tensorflow that referenced this issue on Jul 4, 2017:
Uses `weakref` so that PER_GRAPH_LAYER_NAME_UIDS doesn't prevent Graphs from being garbage collected. Fixes tensorflow#11273
drasmuss added a commit to nengo/nengo-dl that referenced this issue on Jul 4, 2017
andydavis1 added the stat:awaiting tensorflower label on Jul 5, 2017
allenlavoie pushed a commit to allenlavoie/tensorflow that referenced this issue on Jul 15, 2017:
Uses `weakref` so that PER_GRAPH_LAYER_NAME_UIDS doesn't prevent Graphs from being garbage collected. Fixes tensorflow#11273
System information
Describe the problem
There is some kind of memory leak when repeatedly building graphs containing `tf.layers` elements. The example above shows the memory usage of two implementations that I think should be roughly equivalent, one using `tf.layers.dense` and the other using manually created kernels/matmul ops. When using `tf.layers.dense` the memory usage continually increases, whereas with the manual approach memory is periodically cleaned up by garbage collection. So my guess would be that there is some internal reference to the `tf.layers` elements that is preventing them from being garbage collected.

not using `tf.layers.dense`: [memory usage plot]

using `tf.layers.dense`: [memory usage plot]
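The observation can be checked directly with the `gc` and `weakref` modules: build objects in a loop, keep only weak references to them, and count how many survive a full collection. This sketch uses a hypothetical `Graph` stand-in and `REGISTRY` dict (running the actual report requires TensorFlow); a non-empty survivor count corresponds to the continually increasing memory seen with `tf.layers.dense`.

```python
import gc
import weakref

class Graph:
    """Stand-in for tf.Graph; any weakref-able object works."""

# Hypothetical global registry mimicking an internal strong-reference cache.
REGISTRY = {}

def build_graph(register):
    """Build a graph, optionally registering it globally, and return a weak ref."""
    g = Graph()
    if register:
        REGISTRY[g] = {}
    return weakref.ref(g)

def surviving(register, n=100):
    """Build n graphs, drop all strong local refs, and count survivors after GC."""
    refs = [build_graph(register) for _ in range(n)]
    gc.collect()
    return sum(r() is not None for r in refs)

print(surviving(register=False))  # 0: all graphs collected
print(surviving(register=True))   # 100: the global registry keeps them alive
```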