Computation Cost #2
Thanks for open-sourcing this wonderful work.
I'm curious about the computation cost of training your LRM on the Objaverse dataset. For reference, the original LRM paper reports 128 A100-40G GPUs for 3 days to complete training on Objaverse + MVImgNet.

Comments

Thanks for your interest!

How about the computation cost for the base model? (I guess "64 A100-80G GPUs for 5~6 days" is for the large model.) Thanks!

Hi @chenguolin, apologies for the late reply. The updated v1.0 models are trained as follows: for the small model, 32 A100-80G GPUs for 1.5 days.
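Purely as a back-of-the-envelope sketch, the GPU budgets quoted in this thread can be put on a common GPU-day scale; the dollar rate below is a made-up placeholder for illustration, not a figure from the authors:

```python
# Rough GPU-day comparison of the training runs quoted in this thread.
# The cost rate is an assumed, illustrative value only.

runs = {
    "LRM paper (Objaverse + MVImgNet)": (128, 3.0),  # 128 A100-40G for 3 days
    "Large model (quoted above)":       (64, 5.5),   # midpoint of the quoted "5~6 days"
    "v1.0 small model":                 (32, 1.5),   # 32 A100-80G for 1.5 days
}

ASSUMED_USD_PER_GPU_HOUR = 2.0  # hypothetical cloud rate, for scale only

for name, (gpus, days) in runs.items():
    gpu_days = gpus * days
    cost = gpu_days * 24 * ASSUMED_USD_PER_GPU_HOUR
    print(f"{name}: {gpu_days:.0f} GPU-days (~${cost:,.0f} at ${ASSUMED_USD_PER_GPU_HOUR}/GPU-hr)")
```

On this scale the v1.0 small model's budget (48 GPU-days) is roughly an eighth of the original paper's 384 GPU-days.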