Hello, great work with this paper and repo!
I would like to ask how much time you spent training the model (on the Phoenix14 dataset) and what kind of GPU you used for training. I am trying to replicate it with another dataset (specifically Phoenix14-T), and in my first test it took around 14 h to train 10 epochs. I used a TitanXP with 12 GB for training and a batch size of 1.
Thank you again for your work, and congratulations on this repo.
Thanks for your attention. It takes about half an hour to train one epoch on Phoenix14 with batch size = 2 on a single 3090, so the total training process (40 epochs) costs about 20 hours. Using multiple GPUs can speed up this process.
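For anyone comparing setups, the back-of-the-envelope arithmetic behind the figures in this thread can be sketched as follows (hypothetical helper, assuming training time scales roughly linearly with the number of epochs):

```python
# Hypothetical helper: estimate total training time from per-epoch time.
# Assumption: time per epoch stays roughly constant across the run.
def total_hours(hours_per_epoch: float, epochs: int) -> float:
    return hours_per_epoch * epochs

# Single RTX 3090, batch size 2: ~0.5 h/epoch, 40 epochs -> ~20 h total
print(total_hours(0.5, 40))

# TitanXP, batch size 1: 14 h for 10 epochs -> 1.4 h/epoch, 40 epochs -> ~56 h
print(total_hours(14 / 10, 40))
```

So the reported TitanXP numbers are consistent with the 3090 figures once the smaller batch size and older GPU are accounted for.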