
About the difference between the number of training iters in the paper and this Repo #3

Closed
superPangpang opened this issue May 31, 2021 · 3 comments


@superPangpang

Thanks for your great work and source code! Table 2 of the paper lists 300 training epochs, but the source code uses 300,000 iterations. Since the data augmentations in the code are very thorough, I think a longer training schedule would be necessary. Which setting did you use in your experiments? Have you run experiments showing roughly how many iterations are needed before performance stabilizes under your strong augmentation setting?
I look forward to your reply!

@roatienza
Owner

300K is the correct one. Table 2 was meant to say 300K iterations. 300K is the CLOVA AI training protocol for their STR benchmark. In my experience, training beyond 300K yields little improvement.
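The protocol described above counts optimizer steps directly rather than epochs. A minimal sketch of such an iteration-based loop is below; the names `sample_batch` and `train_step` are illustrative placeholders, not functions from this repo:

```python
NUM_ITERATIONS = 300_000  # CLOVA AI STR benchmark protocol, per the discussion above

def sample_batch(it):
    # Placeholder: a real loop would draw a (heavily augmented) batch
    # from the training dataset here.
    return [it]

def train_step(batch):
    # Placeholder: a real loop would run the forward/backward pass
    # and an optimizer step, returning the loss.
    return 0.0

def train(num_iterations=NUM_ITERATIONS, log_every=50_000):
    """Train for a fixed number of iterations, independent of dataset size."""
    for it in range(1, num_iterations + 1):
        loss = train_step(sample_batch(it))
        if it % log_every == 0:
            print(f"iter {it}: loss={loss}")
    return it  # total optimizer steps taken
```

Fixing the iteration count (rather than epochs) makes runs comparable across datasets of different sizes, which is why the STR benchmark specifies 300K steps.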

@superPangpang
Author

Thanks for your reply!

@roatienza
Owner

Thanks. Closing...
