Transformer training needs more improvements to catch up to SOTA #28

Open
eelcovdw opened this issue Feb 12, 2021 · 0 comments
Transformer training still needs some improvements:

  • The Noam training schedule does not work well here. Fairseq seems to have a schedule that works better (inverse sqrt); a sketch follows this list.
  • Batching by number of tokens instead of number of sentences would let us use larger batches on average; see the second sketch below.
  • Possibly an optimizer other than Adam.
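For reference, a minimal sketch of what an inverse-sqrt schedule could look like on top of a plain PyTorch `LambdaLR` (the base LR of 5e-4, the betas, and the 4000 warmup steps are assumptions for illustration, not values from this repo; Fairseq's own `inverse_sqrt` scheduler additionally supports a configurable warmup-init LR):

```python
import math
import torch

def inverse_sqrt_schedule(warmup_steps: int):
    """Inverse-sqrt schedule: linear warmup to the base LR,
    then decay proportional to 1/sqrt(step)."""
    def lr_lambda(step: int) -> float:
        step = max(step, 1)
        if step < warmup_steps:
            return step / warmup_steps                    # linear warmup
        return math.sqrt(warmup_steps) / math.sqrt(step)  # inverse-sqrt decay
    return lr_lambda

# Hypothetical usage: `model` is a stand-in module; LR and warmup are assumptions.
model = torch.nn.Linear(512, 512)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, betas=(0.9, 0.98), eps=1e-9)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=inverse_sqrt_schedule(4000))

for step in range(10):  # placeholder for the real training loop
    optimizer.step()
    scheduler.step()
```

Compared to the Noam schedule, the post-warmup decay is the same 1/sqrt(step) shape, but the peak LR is set directly instead of being tied to the model dimension, which tends to be easier to tune.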
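And a rough sketch of token-based batching under the same idea as Fairseq's `--max-tokens`: greedily group length-sorted sentences so the padded size of each batch stays under a token budget. The function name and the greedy grouping strategy are assumptions, not existing code in this repo.

```python
from typing import List, Sequence

def batch_by_tokens(examples: Sequence[Sequence[int]],
                    max_tokens: int) -> List[List[int]]:
    """Group example indices into batches whose padded token count
    (batch size * longest sentence in the batch) stays under max_tokens."""
    # Sort by length so padding waste inside a batch stays small.
    order = sorted(range(len(examples)), key=lambda i: len(examples[i]))
    batches: List[List[int]] = []
    current: List[int] = []
    current_max_len = 0
    for idx in order:
        length = len(examples[idx])
        new_max = max(current_max_len, length)
        # Flush if adding this example would exceed the token budget.
        if current and new_max * (len(current) + 1) > max_tokens:
            batches.append(current)
            current, new_max = [], length
        current.append(idx)
        current_max_len = new_max
    if current:
        batches.append(current)
    return batches

# Toy example: six "sentences" of increasing length, 64-token budget.
sentences = [[1] * n for n in (5, 7, 12, 30, 31, 64)]
print(batch_by_tokens(sentences, max_tokens=64))  # [[0, 1, 2], [3, 4], [5]]
```

This keeps the number of tokens per batch roughly constant, so short-sentence batches hold many sentences and long-sentence batches hold few, instead of fixing the sentence count and letting the token count swing.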