What is #30

Open
olliacc opened this issue Feb 24, 2024 · 2 comments
Labels: question (Further information is requested)

Comments

olliacc commented Feb 24, 2024

What is the minimum amount of GPU video memory (VRAM) needed to run Latte video generation effectively, for both training and inference?

@maxin-cn
Collaborator

Hi, thanks for your interest. Inferring one video on an A100 requires 20916 MiB of GPU memory under fp16 precision. As for the GPU memory required for training, it depends on your batch size.
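
For reference, here is a minimal sketch of measuring that fp16 inference footprint yourself. It assumes the `LattePipeline` integration in Hugging Face diffusers and the `maxin-cn/Latte-1` checkpoint id (both assumptions; the repo's own sample scripts may differ):

```python
import torch
from diffusers import LattePipeline  # assumes the diffusers Latte integration is installed

# Load the checkpoint in fp16, matching the precision behind the ~20916 MiB figure.
# "maxin-cn/Latte-1" is an assumed checkpoint id; substitute your own if it differs.
pipe = LattePipeline.from_pretrained(
    "maxin-cn/Latte-1", torch_dtype=torch.float16
).to("cuda")

torch.cuda.reset_peak_memory_stats()
video = pipe(prompt="a dog wagging its tail", num_inference_steps=50).frames

# Peak VRAM allocated by this process during sampling, in MiB.
# Note this is PyTorch's allocated memory; nvidia-smi reports reserved
# memory and will show a somewhat higher number.
print(f"peak VRAM: {torch.cuda.max_memory_allocated() / 2**20:.0f} MiB")
```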

XGGNet commented Feb 26, 2024

@maxin-cn
May I set the local batch size to 1 for training Latte on my own dataset? I've heard that a sufficiently large batch size is key to training diffusion models.

@maxin-cn

Hi, you can set the batch size to 1, but I'm not sure whether it will hurt performance. You can try it first. Looking forward to your feedback later~
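
If a local batch size of 1 does turn out to hurt convergence, gradient accumulation is a common workaround: run several size-1 micro-batches before each optimizer step, so the optimizer sees a larger effective batch at the same peak VRAM. A generic PyTorch sketch, not part of the Latte codebase (the model, data, and `accum_steps` value here are stand-ins):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
accum_steps = 16  # hypothetical target effective batch size

model = torch.nn.Linear(128, 128).to(device)  # stand-in for the diffusion model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = torch.nn.MSELoss()

optimizer.zero_grad()
for step in range(1024):
    x = torch.randn(1, 128, device=device)       # micro-batch of size 1
    target = torch.randn(1, 128, device=device)  # stand-in training target
    # Scale the loss so accumulated gradients average over the effective batch.
    loss = loss_fn(model(x), target) / accum_steps
    loss.backward()  # gradients accumulate across micro-batches
    if (step + 1) % accum_steps == 0:
        optimizer.step()       # one update per effective batch of accum_steps
        optimizer.zero_grad()
```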

maxin-cn added the question (Further information is requested) label on Jul 20, 2024
3 participants