Time to train #18

Closed
JoseMoFi opened this issue Jun 9, 2022 · 2 comments

Comments


JoseMoFi commented Jun 9, 2022

Hello, great work with this paper and repo!
I would like to ask how much time you spent training the model on the Phoenix14 dataset and what kind of GPU you used. I am trying to replicate your results on a different dataset (specifically Phoenix14-T), and in my first test it took around 14 hours to train 10 epochs, using a Titan Xp with 12 GB of memory and a batch size of 1.

Thank you again for your work, and congratulations on this repo.

ycmin95 (Collaborator) commented Jun 9, 2022

Thanks for your attention. It takes about half an hour to train one epoch on Phoenix14 with batch size 2 on a single 3090, and the total training process (40 epochs) takes about 20 hours. Using multiple GPUs can speed up this process.
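For anyone planning a run on their own hardware, the reported numbers can be turned into a rough estimate. This is a minimal sketch (the `estimate_hours` helper is hypothetical, and the inverse scaling with GPU count is an idealization that ignores communication overhead and per-GPU batch-size effects):

```python
def estimate_hours(minutes_per_epoch, epochs, num_gpus=1):
    """Back-of-envelope training-time estimate.

    Assumes time scales linearly with epoch count and roughly
    inversely with the number of GPUs (an optimistic simplification).
    """
    return minutes_per_epoch * epochs / 60.0 / num_gpus


# Single 3090, batch size 2: ~30 min/epoch over 40 epochs
print(estimate_hours(30, 40))  # 20.0 hours, matching the reported total

# Hypothetical 4-GPU run under the same (idealized) assumption
print(estimate_hours(30, 40, num_gpus=4))  # 5.0 hours
```

Comparing against the question above: 14 hours for 10 epochs on a Titan Xp works out to roughly 84 minutes per epoch, so the slower GPU and smaller batch size plausibly account for the gap.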


JoseMoFi commented Jun 9, 2022

Great! Thank you for the quick reply.

@JoseMoFi JoseMoFi closed this as completed Jun 9, 2022