Thank you for your excellent code.
I encountered a problem when training with this code. As written in 'main.py':
while not t.terminate():
    t.train()
    t.test()
the test phase begins immediately after the train phase. However, the GPU memory used for training is not released, and the test phase only runs on a single GPU even though training runs on 4 GPUs, so an out-of-memory error occurs.
Actually, I can run this code successfully on 4 GTX 1080Ti GPUs, even though the test phase only runs on a single GPU. Recently my work environment changed, and I now train these networks on 4 Titan Xp GPUs. Even though each GPU has more memory, the out-of-memory error still occurs.
I wonder whether the model could be tested on multiple GPUs, just like in the train phase. By the way, setting --chop_forward doesn't work for me either.
Thank you!
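For reference, the usual idea behind a chop/tiled forward pass is to split the input into overlapping patches so each forward pass fits in GPU memory, then stitch the outputs back together. Below is a minimal sketch of just the coordinate logic, not the repository's actual implementation; the function name `chop_coords` and the `shave` overlap parameter are illustrative assumptions.

```python
# Minimal sketch of the "chop forward" idea: split an H x W input into four
# overlapping quadrants so each forward pass fits in GPU memory. `shave` is
# the overlap margin kept around each quadrant so border artifacts from the
# network's receptive field can be discarded when stitching the outputs.

def chop_coords(h, w, shave=10):
    """Return (top, bottom, left, right) bounds for four overlapping patches."""
    h_half, w_half = h // 2, w // 2
    h_size, w_size = h_half + shave, w_half + shave
    return [
        (0, h_size, 0, w_size),          # top-left
        (0, h_size, w - w_size, w),      # top-right
        (h - h_size, h, 0, w_size),      # bottom-left
        (h - h_size, h, w - w_size, w),  # bottom-right
    ]

# Each patch would be run through the model separately (recursing if a patch
# is still too large), and only the non-overlapping quarter of each output is
# kept, so together the four quarters cover the full image exactly once.
```

Because the four patches overlap by `shave` pixels on each interior edge, they always cover the whole image, which is what makes the stitching step lossless.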