
Could you supply optimal configurations for 1B benchmark? #32

Closed

alexandres opened this issue Sep 6, 2016 · 2 comments

@alexandres
I'm trying to use faster-rnnlm on a 3B-word dataset and would like to use the optimal hyperparameters you obtained on the One Billion Word benchmark.

In particular, the end of this section (https://github.com/yandex/faster-rnnlm#one-billion-word-benchmark) contains the sentence:

Note. We took the best performing models from the previous and added maxent layer of size 1000 and order 3.

Is there any way you could provide those hyperparameters for each of the three models in the graph?

Thanks!

@akhti
Contributor

akhti commented Sep 9, 2016

Hi!
Check out the results file:
https://github.com/yandex/faster-rnnlm/blob/master/doc/RESULTS.md
It contains the command-line arguments for the benchmark.

akhti closed this as completed Sep 9, 2016
@alexandres
Author

Thank you. This line in the README should be corrected:

Note. We took the best performing models from the previous and added maxent layer of size 1000 and order 3.

In RESULTS.md, the best-performing models use --direct-order 4 --direct 1000, not order 3.
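
For future readers, here is a minimal sketch of what a training invocation with that maxent configuration could look like. Only --direct 1000 and --direct-order 4 are taken from this thread / RESULTS.md; the binary path, corpus file names, hidden-layer size and type, and NCE sample count below are illustrative assumptions, not the benchmark's actual settings.

```sh
# Hypothetical faster-rnnlm training command.
# Assumed: paths, --hidden, --hidden-type, --nce values.
# From RESULTS.md (per this thread): --direct 1000 --direct-order 4
./rnnlm --rnnlm model.bin \
        --train train.txt --valid valid.txt \
        --hidden 256 --hidden-type sigmoid \
        --nce 22 \
        --direct 1000 --direct-order 4
```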
