I've trained a model from scratch with batch size: 8 and window size: 500 on 4xa10 GPUs. Entering the second phase of training I'm getting the following error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 22.00 MiB. GPU
Is there any way I can salvage the model trained in the first stage?
You could try CPU memory fallback. On Windows it is a setting in the control panel and allows for a total of 24 GB VRAM plus 24 GB of shared system memory. If you are on Linux, you will have to find your environment's way of turning it on.
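If the stage-1 weights were checkpointed, they should be salvageable: reload them and restart stage 2 with a smaller per-GPU batch, recovering the original effective batch size through gradient accumulation. A minimal sketch of the arithmetic, assuming the stage-1 batch size of 8 from the report (the `PYTORCH_CUDA_ALLOC_CONF` setting is a real PyTorch allocator knob that often helps with small, fragmentation-induced allocations like the 22 MiB one here; the 128 MiB value is just an example):

```python
import os

# Mitigate allocator fragmentation; must be set before the first CUDA
# allocation (i.e. before importing torch and touching the GPU).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

def accumulation_steps(target_batch: int, micro_batch: int) -> int:
    """Gradient-accumulation steps needed to keep the effective batch
    size when the per-GPU batch is reduced to fit in memory."""
    if target_batch % micro_batch:
        raise ValueError("target batch must be a multiple of the micro batch")
    return target_batch // micro_batch

# Original run used batch size 8; halving it to 4 needs 2 accumulation steps.
print(accumulation_steps(8, 4))
```

In the training loop this means calling `loss.backward()` on every micro-batch but `optimizer.step()` only once per `accumulation_steps(...)` iterations. The stage-1 weights themselves are restored with the standard `model.load_state_dict(torch.load(ckpt_path))` pattern, assuming a checkpoint file was written during the first phase.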