Thanks for sharing this repo! Have you tried running experiments on the DSTC7 dataset as in the original paper? I parsed the corpus the same way as the Ubuntu dataset you used: a 1:1 positive-to-negative ratio for training and 1:100 for testing. However, the accuracy (R1@100) is always 0. If I reduce the test candidates to 1:10, the performance becomes 0.4. Any suggestions?
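For reference, here is a minimal sketch of how I build the candidate sets. The function name and the `response_pool` sampling are my own illustration, not code from this repo:

```python
import random

def build_candidates(context, true_response, response_pool, n_negatives, rng=random):
    """Pair a context with its true response plus sampled negative responses.

    n_negatives=1 gives the 1:1 train ratio; n_negatives=99 gives
    1 positive among 100 test candidates (and n_negatives=9 gives 1:10).
    """
    # Sample negatives from the pool, excluding the true response.
    negatives = rng.sample(
        [r for r in response_pool if r != true_response], n_negatives
    )
    candidates = [true_response] + negatives
    labels = [1] + [0] * n_negatives
    return context, candidates, labels
```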
Thanks a lot for your experiment!
I haven't tried this dataset yet, but what you found is really interesting. I will run some experiments on it if time permits (probably in 2-3 weeks).
It turns out there was a bug in my data processing code. Using the default parameters, the performance is around 0.18 R1@100, 0.28 R2@100, and 0.48 R10@100. The ParlAI folks say they use augmented data for training; I am trying this to see whether performance can be boosted.
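For anyone comparing numbers, this is roughly how I compute the Rk@N metrics (a minimal sketch; the function name is my own, not from this repo):

```python
def recall_at_k(scores, label_index, k):
    """Rk@N: 1.0 if the true candidate ranks in the top k of the N scored candidates."""
    # Rank candidate indices by model score, highest first.
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return 1.0 if label_index in ranked[:k] else 0.0

# Averaged over all test examples with k = 1, 2, 10 over N = 100 candidates,
# this yields the R1@100, R2@100, and R10@100 numbers above.
```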