
May I inquire whether the 4090 can participate in this project? For instance, in inference tasks? #104

Closed
wenter opened this issue Mar 18, 2024 · 8 comments

Comments

@wenter

wenter commented Mar 18, 2024

No description provided.

@HioZx

HioZx commented Mar 18, 2024

I have the same problem

@min-star

me too

@zdyshine

Me too. Can it run inference on a 4090?

@tanghaom

tanghaom commented Mar 19, 2024

I'm using a single 3090; it can run both 16x256x256 and 16x512x512 with the batch size set to 1.

@min-star

min-star commented Mar 19, 2024 via email

@aguang1201

@tanghaom 16x512x512 doesn't seem to work, does it? Setting num_frames to 4 does run, but the results are dreadful.

> I'm using a single 3090; it can run both 16x256x256 and 16x512x512 with the batch size set to 1.

@tanghaom

> @tanghaom 16x512x512 doesn't seem to work, does it? Setting num_frames to 4 does run, but the results are dreadful.
>
> > I'm using a single 3090; it can run both 16x256x256 and 16x512x512 with the batch size set to 1.

Just run the VAE step separately; 24 GB of VRAM is enough for the generation part that comes before it.
You can split the run, e.g. put the earlier model on one GPU and the VAE on another; or run the VAE on the CPU; or release the generation model from memory and run the VAE in a separate pass.
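The split described above can be sketched in PyTorch. Note this is an illustrative outline, not Open-Sora's actual API: `Backbone` and `VAEDecoder` are stand-in modules, and the tensor shapes are only placeholders.

```python
import torch

# Stand-ins for the diffusion backbone and the VAE decoder; in a real run
# these would be the pretrained Open-Sora modules.
class Backbone(torch.nn.Module):
    def forward(self, noise):
        return noise * 0.5  # pretend this is the denoised latent video

class VAEDecoder(torch.nn.Module):
    def forward(self, latents):
        # Pretend decode: upsample spatial dims 8x, as a VAE decoder would.
        return latents.repeat_interleave(8, -1).repeat_interleave(8, -2)

gen_device = "cuda:0" if torch.cuda.is_available() else "cpu"
vae_device = "cpu"  # or "cuda:1" if a second GPU is available

backbone = Backbone().to(gen_device)
vae = VAEDecoder().to(vae_device)

with torch.no_grad():
    # Generation step on the GPU.
    latents = backbone(torch.randn(1, 4, 16, 32, 32, device=gen_device))
    # Free the backbone before decoding so its VRAM is released.
    del backbone
    if gen_device.startswith("cuda"):
        torch.cuda.empty_cache()
    # VAE decode on the other device.
    frames = vae(latents.to(vae_device))

print(frames.shape)  # torch.Size([1, 4, 16, 256, 256])
```

The point is only that the two stages never need to be resident on the same device at the same time, so the VAE's large decode activations don't compete with the backbone for the 24 GB.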

@chaojie

chaojie commented Mar 20, 2024

https://github.com/chaojie/ComfyUI-Open-Sora
You can use my project above; a 4090 can run 16x512x512.

FrankLeeeee pushed a commit that referenced this issue Jun 17, 2024
Co-authored-by: Shen-Chenhui <shen_chenhui@u.nus.edu>
odb9402 pushed a commit to odb9402/Open-Sora that referenced this issue Jul 18, 2024
⭐ [Feature] Support deepspeed for videogpt training.

Former-commit-id: dc639030e3544ca54e9c579624375bdea486a0ef

8 participants