inference memory with torch.set_grad_enabled(True) #99

Closed
pwwwyyy opened this issue Jul 22, 2024 · 2 comments
Labels: automatic-closing, automatic-stale, question (Further information is requested)

pwwwyyy commented Jul 22, 2024

Thank you for your great work! For certain reasons, I need to run inference with torch.set_grad_enabled(True), but 80 GB of CUDA memory is not enough. I would like to know whether this is normal, and what configuration you used when training the transformer of Latte-1.
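
For context: with gradients enabled, PyTorch keeps every intermediate activation alive for a potential backward pass, so peak memory grows with network depth instead of staying roughly flat as it does under torch.no_grad(). A minimal sketch of the difference follows; the stack of linear layers is a hypothetical stand-in, not Latte's actual transformer:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model; the real Latte transformer is far larger.
model = nn.Sequential(*[nn.Linear(4096, 4096) for _ in range(24)]).cuda()
x = torch.randn(64, 4096, device="cuda")

# Grad-enabled forward: every layer's output is cached for a potential
# backward pass, so peak memory scales with depth.
torch.cuda.reset_peak_memory_stats()
with torch.set_grad_enabled(True):
    y = model(x)
print(f"grad enabled:  {torch.cuda.max_memory_allocated() / 2**20:.0f} MiB")

# Plain inference: each activation is freed as soon as the next layer runs.
del y
torch.cuda.reset_peak_memory_stats()
with torch.no_grad():
    y = model(x)
print(f"grad disabled: {torch.cuda.max_memory_allocated() / 2**20:.0f} MiB")
```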

maxin-cn (Collaborator) commented Jul 22, 2024

You can enable gradient checkpointing to save the GPU memory.
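
Gradient checkpointing discards intermediate activations during the forward pass and recomputes them on backward, trading extra compute for memory. A minimal sketch using torch.utils.checkpoint is below; the module layout is illustrative, not Latte's actual architecture:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class CheckpointedStack(nn.Module):
    """Illustrative transformer-style stack with per-block checkpointing."""

    def __init__(self, depth: int = 24, dim: int = 4096):
        super().__init__()
        self.blocks = nn.ModuleList(nn.Linear(dim, dim) for _ in range(depth))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            # Drop this block's activations after the forward pass and
            # recompute them during backward: peak memory no longer scales
            # with depth, at the cost of one extra forward per block.
            x = checkpoint(block, x, use_reentrant=False)
        return x

model = CheckpointedStack().cuda()
x = torch.randn(64, 4096, device="cuda", requires_grad=True)
with torch.set_grad_enabled(True):
    out = model(x)
out.sum().backward()  # gradients still flow through the checkpointed blocks
```

If Latte's transformer follows the diffusers ModelMixin convention, calling model.enable_gradient_checkpointing() before the forward pass may achieve the same effect without code changes; whether this repo's model class exposes that helper is an assumption worth checking first.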

maxin-cn added the question label on Jul 22, 2024

github-actions bot commented Aug 6, 2024

Hi there! 👋

This issue has been marked as stale after 14 days of inactivity.

We would like to know whether you are still experiencing this problem or whether it has been resolved.

If you need further assistance, please feel free to respond to this comment within the next 7 days. Otherwise, the issue will be automatically closed.

We appreciate your understanding and would like to express our gratitude for your contribution to Latte. Thank you for your support. 🙏

github-actions bot closed this as not planned (stale) on Aug 13, 2024