When training your model, I ran into an out-of-memory error. I am using a Tesla V100S 32 GB GPU, and the problem persists even after reducing the batch size to 1. Is there any way to reduce memory consumption during training?
Hi, thanks for your interest! We ran all experiments on a single RTX 8000 GPU with 48 GB of memory. Lowering the number of extracted snippets will help, but it may hurt performance, since the generator sees less information.
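As a general workaround (not specific to this repo's code), gradient accumulation can emulate a larger effective batch while only holding one micro-batch in memory at a time, which often helps on a 32 GB card. A minimal, dependency-free sketch of the idea using a toy linear model; all names and numbers here are illustrative, not from the repo:

```python
# Toy demonstration of gradient accumulation: the averaged gradient over
# four micro-batches of size 2 equals the gradient over one batch of size 8,
# so a large batch can be simulated without holding it all in memory.

def grad_mse(w, xs, ys):
    """Gradient of mean squared error for y_hat = w * x, averaged over the batch."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
ys = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0]
w = 0.5

# Full-batch gradient (needs all 8 examples resident at once).
full_grad = grad_mse(w, xs, ys)

# Accumulated gradient: process 4 micro-batches of 2, keeping only a scalar.
accum, steps = 0.0, 4
for i in range(steps):
    mb_x, mb_y = xs[2 * i:2 * i + 2], ys[2 * i:2 * i + 2]
    accum += grad_mse(w, mb_x, mb_y) / steps  # scale so the sum is an average

print(abs(full_grad - accum) < 1e-9)  # prints True: the two gradients match
```

In a real training loop the same trick means calling `backward()` on each micro-batch and stepping the optimizer only every few iterations, trading compute time for peak memory without changing the effective batch size.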