
Memory leak when training simple LSTM Network #33139

johmicrot opened this issue Oct 8, 2019 · 2 comments


@johmicrot johmicrot commented Oct 8, 2019

Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No custom code written

== check python ===================================================
python version: 3.7.4
python build version: ('default', 'Aug 13 2019 20:35:49')
python compiler version: GCC 7.3.0
python implementation: CPython
== check os platform ===============================================
os: Linux
os kernel version: #31 18.04.1-Ubuntu SMP Thu Sep 12 18:29:21 UTC 2019
os release version: 5.0.0-29-generic
os platform: Linux-5.0.0-29-generic-x86_64-with-debian-buster-sid
linux distribution: ('debian', 'buster/sid', '')
linux os distribution: ('debian', 'buster/sid', '')
architecture: ('64bit', '')
machine: x86_64
== are we in docker =============================================
== compiler =====================================================
c++ (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
== check pips ===================================================
numpy 1.16.4
protobuf 3.8.0
tensorflow 1.14.0
tensorflow-estimator 1.14.0
== check for virtualenv =========================================
== tensorflow import ============================================
tf.version.VERSION = 1.14.0
tf.version.GIT_VERSION = unknown
tf.version.COMPILER_VERSION = 5.4.0

Describe the current behavior
When I run the code below, memory usage increases every epoch until my system becomes unresponsive.

Code to reproduce the issue

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, TimeDistributed, Dropout
import numpy as np

num_features = 205
time_length = 12
num_of_instances = 5000
trip_sets = np.random.rand(num_of_instances, time_length, num_features)
print('num features: ', num_features)
data_len = len(trip_sets)
test_split = np.arange(data_len)
new_dataset = np.array(trip_sets)
targets = np.random.rand(num_of_instances, time_length, 1)

test_data = new_dataset[test_split[:int(data_len*0.2)]]
y_test_data = targets[test_split[:int(data_len*0.2)]]
train_data = new_dataset[test_split[int(data_len*0.2):]]
y_train_data = targets[test_split[int(data_len*0.2):]]

model = Sequential()
model.add(LSTM(75, return_sequences=True, input_shape=(None, num_features)))
model.add(LSTM(75, return_sequences=True))
# The memory leak also occurs if I use model.add(Dense(1)) below instead of TimeDistributed
model.add(TimeDistributed(Dense(1)))

adam = tf.keras.optimizers.Adam(lr=0.001)
model.compile(loss='mse', optimizer=adam)
history =, y=y_train_data, epochs=100, validation_data=(test_data, y_test_data))
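A quick way to quantify the growth described above is to sample resident memory once per epoch. The sketch below is not part of the original report: it uses only the standard library (`resource`, with Linux semantics matching the environment above) and a dummy loop standing in for `model.fit`; in the real repro you would call `rss_mb()` from a Keras callback's `on_epoch_end`.

```python
# Stdlib-only sketch (assumes Linux, where ru_maxrss is reported in KiB):
# sample resident memory once per "epoch" to confirm steady growth.
import resource

def rss_mb():
    # Peak resident set size of this process, converted from KiB to MiB.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0

samples = []
for epoch in range(3):  # stand-in for model.fit(..., epochs=3)
    samples.append(rss_mb())
    print('epoch %d: %.1f MiB' % (epoch, samples[-1]))

# A leak shows up as samples that climb monotonically across epochs.
```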

@johmicrot johmicrot commented Oct 8, 2019

I upgraded to TF2.0 and the issue went away.


@oanush oanush commented Oct 9, 2019

@johmicrot,
Hi, the code works fine in TF 2.0, so please try using that version. Please confirm whether this issue can be closed. Thanks!
