What is the minimum amount of GPU video memory (VRAM) needed to run Latte video generation effectively, for both training and inference?
Hi, thanks for your interest. Inferencing one video on an A100 requires 20916 MiB of GPU memory under fp16 precision. As for the GPU memory required for training, it depends on your batch size.
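To put the 20916 MiB figure in perspective, a quick conversion (the memory number is from the reply above; the card examples are just for illustration):

```python
# Convert the reported peak inference memory (20916 MiB) to GiB and GB.
mib = 20916
gib = mib / 1024           # binary gibibytes
gb = mib * 1024**2 / 1e9   # decimal gigabytes, as marketed on GPU spec sheets
print(f"{gib:.1f} GiB ({gb:.1f} GB)")  # → 20.4 GiB (21.9 GB)
```

So fp16 inference should fit on a 24 GB consumer card (e.g. an RTX 3090 or 4090), but not on 16 GB ones.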
@maxin-cn
@maxin-cn May I set the local batch size to 1 for training Latte on my own dataset? I've heard that a sufficiently large batch size is key to training diffusion models.
Hi, you can set the batch size to 1, but I'm not sure whether this will hurt performance. You can try it first. Looking forward to your feedback later~