
Did you try to fine-tune transformers LM with Ranger? #13

Open
avostryakov opened this issue Sep 17, 2019 · 4 comments

Comments

@avostryakov

Recent transformer architectures are very famous in NLP: BERT, GPT-2, RoBERTa, XLNet. Did you try to fine-tune them on some NLP task? If so, what were the best Ranger hyperparameters and learning rate scheduler?

@LifeIsStrange

Testing on XLNet should be prioritized, as it is the current state of the art.
ERNIE 2.0 would be interesting too.

@JohnGiorgi

JohnGiorgi commented Oct 11, 2019

@avostryakov I tried fine-tuning a BERT-based model for joint NER and relation classification. It performs about 1.5% worse on my tasks than the AdamW implementation in Transformers:

AdamW

  • lr: 3e-5
  • betas: (0.9, 0.999)
  • eps: 1e-6
  • weight_decay: 0.1
  • correct_bias: True

Ranger

  • lr: 3e-4
  • betas: (0.95, 0.999)
  • eps: 1e-5
  • weight_decay: 0.1

It is possible that with more tuning I might be able to close the gap. If anyone else has any tips for fine-tuning BERT with Ranger, please let me know!
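For reference, a minimal sketch of how the two configurations above might be constructed, assuming the older `AdamW` class from `transformers` (which exposed `correct_bias`) and the `Ranger` class from this repo; the `model` here is just a placeholder:

```python
import torch
from transformers import AdamW  # transformers' AdamW; older releases expose correct_bias
from ranger import Ranger       # Ranger optimizer from this repo

model = torch.nn.Linear(768, 2)  # placeholder standing in for a BERT-based model

# AdamW configuration reported above
adamw = AdamW(
    model.parameters(),
    lr=3e-5,
    betas=(0.9, 0.999),
    eps=1e-6,
    weight_decay=0.1,
    correct_bias=True,
)

# Ranger configuration reported above
ranger = Ranger(
    model.parameters(),
    lr=3e-4,
    betas=(0.95, 0.999),
    eps=1e-5,
    weight_decay=0.1,
)
```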

@lessw2020
Owner

I'm working with DETR, which does object detection with a transformer internally, and will test it out there soon.
Note that Ranger now has GC (gradient centralization); it will be interesting to see if that helps for transformers.
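For context, gradient centralization removes the mean from each multi-dimensional gradient before the optimizer update. A rough sketch of the idea (my own illustration, not the exact code in this repo):

```python
import torch

def centralize_gradient(grad: torch.Tensor) -> torch.Tensor:
    # Gradient Centralization (Yong et al., 2020): for parameters with more
    # than one dimension (e.g. linear or conv weights), subtract the mean
    # taken over all dims except the first, so each output-channel slice of
    # the gradient has zero mean. 1-D params (biases, norms) are left alone.
    if grad.dim() > 1:
        return grad - grad.mean(dim=tuple(range(1, grad.dim())), keepdim=True)
    return grad
```

In Ranger the same idea is applied to each parameter's gradient inside the update step, before the RAdam/Lookahead machinery runs.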

@hiyyg

hiyyg commented Sep 25, 2023

How does Ranger perform for DETR?
