Hello, when I train the example seq2seq model on the Ubuntu dataset in ParlAI, the GPU runs out of memory after training on a few thousand examples. Do you know how to solve this problem? I saw similar issues posted before; I think it is related to PyTorch.
Hello, @ZixuanLiang
I am sorry, but the example implementation is currently only guaranteed to work with the bAbI tasks.
On other tasks, if the vocabulary is too large, the embedding and softmax matrices become huge, causing the out-of-memory error. The vocabulary needs to be reduced by converting low-frequency words to an unknown token. Unfortunately, ParlAI does not implement this feature yet. I plan to implement a dictionary agent (e.g. dict-minfreq, subword, SentencePiece) to solve this problem soon.
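As a stopgap until such a dictionary agent exists, the frequency cutoff can be applied when building the vocabulary yourself. This is a minimal sketch, not ParlAI code; the function names, the `min_freq` value, and the `__unk__` token are illustrative assumptions:

```python
from collections import Counter

def build_vocab(sentences, min_freq=5, unk="__unk__"):
    """Build a word-to-index vocabulary, dropping words seen fewer
    than min_freq times (hypothetical helper, not a ParlAI API).
    Rare words share a single unknown-token index, so the embedding
    and softmax matrices stay small."""
    counts = Counter(w for s in sentences for w in s.split())
    vocab = {unk: 0}
    for word, n in counts.items():
        if n >= min_freq:
            vocab[word] = len(vocab)
    return vocab

def encode(sentence, vocab, unk="__unk__"):
    # Any word pruned from the vocabulary maps to the unk index.
    return [vocab.get(w, vocab[unk]) for w in sentence.split()]
```

Shrinking the vocabulary this way directly shrinks the two largest parameter matrices (embedding lookup and output softmax), which is usually the dominant memory cost on open-vocabulary corpora like Ubuntu.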
Also, in the case of seq2seq, if the input sentence is too long, the number of LSTM states that must be kept in memory grows with sequence length, which also causes out-of-memory errors. You may be able to avoid this simply by reducing the hidden size.
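Both mitigations are easy to apply before batching. A minimal sketch (the `max_len` value is an illustrative assumption, and the exact hidden-size flag for the ParlAI seq2seq example may differ by version):

```python
def truncate(token_ids, max_len=100):
    """Keep only the last max_len tokens of an example
    (hypothetical preprocessing helper). The unrolled LSTM then
    stores activations for at most max_len time steps, bounding
    per-example memory. Reducing the hidden size shrinks each of
    those stored states as well."""
    return token_ids[-max_len:]
```

Truncating from the front keeps the most recent context, which is usually what matters for predicting the next reply in a dialogue task.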
Thank you!