Hi,
I tried running the model on my Quadro M1200 (4 GB), and I ran into a memory issue. Does the model need a lot of VRAM, or is there something I can tweak?
Here's the output after running the main script:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 34.00 MiB (GPU 0; 3.95 GiB total capacity; 3.29 GiB already allocated; 11.38 MiB free; 3.37 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I've tried changing max_split_size_mb, but it didn't help. Thank you in advance for any feedback you might give ;)
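For reference, this is roughly how I set it (the value 64 is just an example I picked; the setting has to be applied before torch is first imported, since the caching allocator reads it once at startup):

```python
import os

# PYTORCH_CUDA_ALLOC_CONF must be set before the first `import torch`,
# otherwise the allocator never sees it.
# 64 MiB is an arbitrary example value, not a recommendation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:64"

import torch  # allocator picks up the setting here
```

Alternatively I exported the same variable in the shell before launching the script, with the same result.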