Could you please implement an Adafactor optimizer? :) #1256
Comments
What didn't work for you with the fairseq implementation? It seems pretty self-contained: https://github.com/pytorch/fairseq/blob/master/fairseq/optim/adafactor.py#L65-L213
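For reference, here is a minimal sketch of how one might wire that standalone class into an ordinary PyTorch training loop with a transformers model. It assumes fairseq and transformers are both installed, that the `Adafactor` class in that file can be used like any `torch.optim.Optimizer`, and that `dataloader` (a placeholder name) yields batches of tensors that include labels:

```python
# Minimal sketch (not an official recipe): using fairseq's standalone
# Adafactor class as a drop-in replacement for Adam in a plain training loop.
from fairseq.optim.adafactor import Adafactor
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

# With relative_step=True, Adafactor derives its own step-size schedule,
# so lr is left as None (combining a manual lr with relative_step raises an error).
optimizer = Adafactor(model.parameters(), lr=None, relative_step=True, warmup_init=True)

model.train()
for batch in dataloader:          # `dataloader` is an assumed placeholder
    optimizer.zero_grad()
    outputs = model(**batch)      # loss is the first output when labels are passed
    loss = outputs[0]
    loss.backward()
    optimizer.step()
```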
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
FYI @sshleifer -- I was wrong -- I was able to train T5-large even with batch size 1 in FP32, no gradient checkpointing, and Adam. Given that the T5 team strongly recommends Adafactor, I'm giving it a try; the other pieces are perhaps more difficult...
🚀 Feature
Could you please implement an Adafactor optimizer? :)
( https://arxiv.org/abs/1804.04235 )
Motivation
In contrast to Adam, it requires much less GPU memory.
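To make the memory claim concrete, here is a rough back-of-the-envelope illustration (my own numbers, not from the paper): for a 2-D weight matrix, Adam keeps two full-size moment tensors, while Adafactor by default keeps only a row accumulator and a column accumulator for the factored second moment, and no first moment.

```python
# Rough illustration of optimizer-state size for one hypothetical
# 1024 x 4096 weight matrix (counts of extra floats stored per optimizer).
n, m = 1024, 4096
adam_state      = 2 * n * m   # exp_avg + exp_avg_sq: 8,388,608 values
adafactor_state = n + m       # factored second moment (row + col): 5,120 values
print(adam_state / adafactor_state)  # ~1638x fewer optimizer-state values
```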
I tried to use the FairSeq implementation with pytorch-transformers, but I'm no expert and I couldn't get it working.
Could you please do that? :)
Additional context