Training the transition model is too resource intensive, uses too much memory. Possible bug #27
Comments
It is super resource intensive, yes. I have seen elsewhere that Keras leaks a lot of memory. I used to have a TensorFlow-only implementation that seemed lighter, but it was less convenient, which is why I opted for Keras for the release.
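Before attributing the growth to Keras itself, it can help to confirm where the memory is actually going. Python's standard-library `tracemalloc` can localize net allocation growth per source line; the sketch below uses a hypothetical `leaky_step` stand-in for a training step that retains references (none of these names come from this repository):

```python
import tracemalloc

def leaky_step(store):
    # Hypothetical stand-in for one training step that keeps
    # references alive (e.g. accumulating history/arrays).
    store.append([0] * 100_000)

tracemalloc.start()
store = []
before = tracemalloc.take_snapshot()
for _ in range(5):
    leaky_step(store)
after = tracemalloc.take_snapshot()

# Entries are sorted by net size difference; the leaking
# allocation site should appear at the top with positive growth.
top = after.compare_to(before, "lineno")[0]
print(top.size_diff > 0)
```

If the top entries point into the model-building code rather than the data pipeline, that is consistent with the graph/session accumulating state across epochs rather than the batches themselves being too large.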
@kamal94: Were you able to resolve that issue? I am having the same problem: my training sometimes fails on epoch 1/200 or 2/200 and never goes beyond that. Any suggestions?
How do you train the train_generative_model.py autoencoder successfully? I ran into some difficulty. Do I have to change something in the code?
Have you solved this issue? I am having the same problem: my training sometimes fails on epoch 10/200 or 40/200 and never goes beyond that. Any suggestions?
After training the autoencoder, I tried to train the transition model as described in the same document,
using
and
on two different tmux sessions.
Soon (about a minute) after running the training command, the process is killed because my memory and swap (16 + 10 GB) are used up, and I'm still on epoch one.
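When the whole training set does not fit in RAM, a common mitigation is to stream batches lazily from disk with a Python generator (in the Keras 1/2 era this would be passed to `fit_generator`) so that only one batch is resident at a time. The batching logic itself can be sketched in pure Python; `load_sample` and the sample ids below are illustrative, not from this repository:

```python
def batch_stream(sample_ids, batch_size, load_sample):
    """Yield batches of loaded samples, batch_size at a time,
    so only one batch lives in memory at once."""
    for start in range(0, len(sample_ids), batch_size):
        chunk = sample_ids[start:start + batch_size]
        yield [load_sample(i) for i in chunk]

# Demo with a trivial loader standing in for disk I/O:
batches = list(batch_stream(list(range(10)), batch_size=4,
                            load_sample=lambda i: i * i))
print(batches)  # [[0, 1, 4, 9], [16, 25, 36, 49], [64, 81]]
```

This does not fix a backend-level leak (memory that grows across epochs even with a generator), but it rules out "dataset too large" as the cause of an early out-of-memory kill.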
Here is a dump: