I went to the layer-norm repo, downloaded the lngru_may13_1700000.npz files, and added the layer-norm function to Kiros's skip-thoughts repo, as explained in https://github.com/ryankiros/layer-norm.
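Concretely, the transform I added is essentially standard layer normalization (a minimal NumPy sketch under my reading of the layer-norm repo; the variable names and epsilon value are illustrative, not necessarily the repo's exact code):

```python
import numpy as np

def layer_norm(x, gain, bias, eps=1e-5):
    # Normalize each sample across its feature dimension,
    # then rescale with the learned gain and bias parameters
    # (Ba et al., 2016).
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return gain * (x - mean) / (std + eps) + bias
```

In the LN-GRU this is applied to the pre-activations of the gates at every timestep, with a separate gain/bias pair per application.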
Then I tried Step 4 of https://github.com/ryankiros/skip-thoughts/tree/master/training, and I can now encode sentences with his model. I used a 20,000-word vocabulary from my own skip-thought implementation (I can't find the 20,000-word vocabulary Kiros used for his lngru_may13_1700000.npz model).
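For reference, I built my own vocabulary roughly like this (a sketch; reserving index 0 for end-of-sentence and 1 for unknown words follows my reading of the training code's convention, and may not match the vocabulary Kiros actually used):

```python
from collections import Counter

def build_vocab(sentences, size=20000):
    # Count word frequencies over the corpus and keep the most
    # frequent words, leaving indices 0 and 1 free for the
    # <eos> and <unk> tokens (assumed convention).
    counts = Counter(w for s in sentences for w in s.split())
    words = [w for w, _ in counts.most_common(size - 2)]
    return {w: i + 2 for i, w in enumerate(words)}
```

Since the pretrained lngru_may13_1700000.npz weights were trained against Kiros's own word-to-index mapping, encoding with a mismatched vocabulary like this one could plausibly explain score differences.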
In SentEval, instead of `import skipthoughts`, I imported `tools` in your SentEval/examples/skipthought.py file. When I run the experiments, I get noticeably different results from yours (sometimes worse, sometimes a little better on certain benchmarks).
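My SentEval hookup looks roughly like this (a sketch of the batcher callback; `params['encode']` and `params['model']` are how I stashed my wrapper around the training repo's `tools`, not necessarily the names used in your example script):

```python
import numpy as np

def batcher(params, batch):
    # SentEval passes sentences as lists of tokens; join them back
    # into strings and encode the whole batch at once. Empty
    # sentences are replaced by '.' so the encoder gets valid input.
    sentences = [' '.join(tokens) if tokens else '.' for tokens in batch]
    # Assumed: params['encode'] behaves like tools.encode and
    # returns an (n_sentences, dim) embedding matrix.
    return params['encode'](params['model'], sentences)
```

The returned matrix is then fed to SentEval's logistic-regression probes, so any vocabulary or preprocessing mismatch upstream shows up directly in the benchmark scores.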
Could you explain how you obtained these scores? Which vocabulary did you use? Is there any special trick I am not aware of and didn't mention above?
Thanks a lot, it would be a big help :)
For context, I am trying to reproduce your results for the ST-LN model (https://arxiv.org/pdf/1707.06320.pdf, first row of Table 2).