Used RAM gradually increases, killing the process #42

Closed
ardianumam opened this issue Feb 11, 2019 · 1 comment

Comments

@ardianumam

Hi,

When I try to train, the used RAM gradually increases until the process is killed for running out of memory. I tested with TF 1.12 and 1.10. Thanks.

@ardianumam
Author

Good news:
I want to share the cause of the memory increase in the train.py code (note that this code has since been deleted). The root cause is this part:

# Runs every training step; each call builds three new tf.assign ops, so the graph keeps growing.
_, _, _, summary = sess.run([tf.assign(rec_tensor, rec),
                             tf.assign(prec_tensor, prec),
                             tf.assign(mAP_tensor, mAP), write_op],
                            feed_dict={is_training: True})

Calling tf.assign inside the training loop adds new nodes to the graph on every iteration. So I replaced those three tf.assign calls with placeholders and feed the rec, prec and mAP values to them via feed_dict. Training is also faster afterward.
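
For reference, a minimal sketch of that kind of fix, assuming rec_tensor/prec_tensor/mAP_tensor existed only to feed summary ops and that write_op is a merged-summary op; the placeholder names are illustrative, not the original train.py ones:

import tensorflow as tf

# Graph construction: build everything once, before the training loop.
rec_ph  = tf.placeholder(tf.float32, shape=[], name="recall_input")
prec_ph = tf.placeholder(tf.float32, shape=[], name="precision_input")
mAP_ph  = tf.placeholder(tf.float32, shape=[], name="mAP_input")
tf.summary.scalar("recall", rec_ph)
tf.summary.scalar("precision", prec_ph)
tf.summary.scalar("mAP", mAP_ph)
write_op = tf.summary.merge_all()

# Inside the training loop: only pre-built ops are run, so the graph stays fixed.
summary = sess.run(write_op,
                   feed_dict={rec_ph: rec, prec_ph: prec, mAP_ph: mAP,
                              is_training: True})

The general rule is that any tf.* call that creates ops (tf.assign, tf.summary.*, etc.) belongs in graph construction, not inside the step loop; sess.run should only execute ops that already exist.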
