Reproduce the drQA performance #109

Closed
hwaranlee opened this Issue May 29, 2017 · 2 comments

hwaranlee commented May 29, 2017

Using GloVe pre-trained word vectors and the default hyper-parameters, I got EM = 52.87 and F1 = 64.47, which is far lower than the performance reported in the paper (EM = 69.5, F1 = 78.8; http://arxiv.org/abs/1704.00051).
Were you able to reproduce the reported performance?

ajfisch (Contributor) commented May 29, 2017

Perhaps something is incorrect in your setup.

Running in ParlAI, we get EM = 66.4 and F1 = 76.5. This was after 78k iterations with batch size 32, using the default parameters with glove.840.300d vectors and tune_partial set to 1000.
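For reference, a training invocation with those settings might look roughly like the sketch below. The script path and flag spellings are assumptions based on ParlAI conventions, not confirmed by this thread; only the batch size (32), the glove.840.300d vectors, and tune_partial = 1000 come from the comment above.

```shell
# Hypothetical sketch of a ParlAI DrQA training run with the settings
# described above. The script path, model/task names, and flag spellings
# are assumptions; only -bs 32, the glove.840.300d vectors, and
# tune_partial=1000 are taken from the thread itself.
python examples/train_model.py \
    -m drqa \
    -t squad \
    -bs 32 \
    --embedding_file glove.840B.300d.txt \
    --tune_partial 1000
```

Note that if `-bs` is omitted, the batch size falls back to its default of 1, which is what caused the gap reported in this issue.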


hwaranlee commented May 31, 2017

Thanks!
I didn't pass the -bs option, so the batch size defaulted to 1.

hwaranlee closed this May 31, 2017
