
Training ConvBERT on multi GPU #5

Closed
PhilipMay opened this issue Apr 22, 2021 · 2 comments

Comments


PhilipMay commented Apr 22, 2021

Hi @stefan-it ,

I saw that you trained multiple ConvBERT models at BASE size. The ConvBERT authors say they have tested the code only on a single GPU: yitu-opensource/ConvBert#16 (comment)

I saw that you used TPUs for training.

  • Did you have to make any modifications to use the TPUs?
  • Do you have any experience running ConvBERT on multiple GPUs?

This is also connected to my question here: yitu-opensource/ConvBert#18

Thanks
Philip

PhilipMay (Author) commented

Well, the authors already said that the language model training only works on TPUs.

PhilipMay (Author) commented

Closing this issue again.
