
Regarding computational power and training time #2

Closed
mradul2 opened this issue Nov 2, 2021 · 1 comment

mradul2 commented Nov 2, 2021

Hello! Thank you for your code implementation.

It would be very helpful if you could mention approximately how much time was required to train the model per experiment, and which computational devices were used during the experiments (the number and type of GPUs/TPUs).

Looking forward to hearing from you soon.

jianlong-yuan (Owner) commented

We measured the training time: it was about 4 hours per run. We used 8 GPUs; each is a Tesla V100. If you want to go even faster, please use fp16 for acceleration.
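
For reference, here is a minimal sketch of what fp16 (mixed-precision) training looks like with PyTorch's native AMP utilities. The model, data, and optimizer below are synthetic stand-ins, not the ones from this repository; the relevant pieces are `autocast` and `GradScaler`:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in data; a real run would use the repo's own dataset/loader.
data = TensorDataset(torch.randn(64, 512), torch.randint(0, 19, (64,)))
loader = DataLoader(data, batch_size=16)

model = torch.nn.Linear(512, 19).cuda()          # placeholder network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = torch.nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()             # rescales the loss so fp16 gradients don't underflow

for images, labels in loader:
    images, labels = images.cuda(), labels.cuda()
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():              # forward pass runs in fp16 where it is numerically safe
        loss = criterion(model(images), labels)
    scaler.scale(loss).backward()                # backprop through the scaled loss
    scaler.step(optimizer)                       # unscales gradients, then steps the optimizer
    scaler.update()                              # adjusts the loss-scale factor for the next step
```

On Volta-class GPUs such as the V100, this typically speeds up training and reduces memory use, since their tensor cores are optimized for fp16 math.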
