Hi,
Thanks for your great work!
When fine-tuning on VQA, I ran into the following problem:
As training progresses across epochs, GPU memory usage keeps growing, until it eventually exceeds the GPU's capacity and training stops with an out-of-memory error.
Also, when training on multiple GPUs, GPU0 uses noticeably more memory than the others.
This problem has been bothering me for a long time. Do you know what might be causing it?
Thanks for your reply~ :)
@clytze0216 Thanks for your interest in our work. I have not encountered this problem before. When you turn off "adversarial training", i.e., perform standard training, do you still see the same behavior? This can help us identify where the problem is.
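One common cause of GPU memory growing epoch over epoch (independent of this repo's code, which I have not inspected for this) is accumulating per-step loss tensors in a Python list or running sum: each stored tensor keeps its whole autograd graph alive. A minimal, hypothetical PyTorch loop illustrating the fix — this is a generic sketch, not the actual training code:

```python
import torch

# Toy model and optimizer standing in for the real fine-tuning setup.
model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

losses = []
for step in range(3):
    x = torch.randn(8, 4)
    loss = model(x).pow(2).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()

    # BAD:  losses.append(loss)        # retains the autograd graph every step
    # GOOD: detach to a plain Python float so the graph can be freed
    losses.append(loss.item())
```

If memory still grows with this pattern, it is worth checking for other tensors cached across steps (e.g. logged predictions or attention maps kept without `.detach()`). The GPU0 imbalance you describe is typical of `torch.nn.DataParallel`, which gathers outputs and computes the loss on the first device.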