Can't obtain your scores with ST-LN :/ #67

Open
pbordes opened this issue Mar 25, 2019 · 0 comments
pbordes commented Mar 25, 2019

Hello, I am trying to reproduce your results for the ST-LN model (https://arxiv.org/pdf/1707.06320.pdf: first row of Table 2).

  1. I went to the layer-norm repo, downloaded the lngru_may13_1700000.npz files, and added the layer-norm function to Kyros's skip-thoughts repo, as explained in https://github.com/ryankiros/layer-norm.
  2. Then I followed Step 4 of https://github.com/ryankiros/skip-thoughts/tree/master/training, so I can now encode sentences with his model. I used a 20,000-word vocabulary from my own skip-thought implementation (I can't find the 20,000-word vocabulary Kyros used for his lngru_may13_1700000.npz model).
  3. In SentEval, instead of 'import skipthoughts', I imported tools in your SentEval/examples/skipthought.py file. When I run the experiments, I get quite different results from yours (sometimes worse, sometimes slightly better on certain benchmarks).
    Could you explain how you obtained these scores? Which vocabulary did you use? Is there any special trick I am not aware of, that I didn't mention above?
    Thanks a lot, it would be a big help :)
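To be concrete, this is roughly what I mean by "the layer norm function" in step 1. A minimal NumPy sketch; the function name, argument names, and eps value are my own choices, and the actual Theano implementation in ryankiros/layer-norm differs in its details:

```python
import numpy as np

def ln(x, gain, bias, eps=1e-5):
    # Layer normalization: normalize each sample across its hidden
    # units (last axis), then rescale with a learned gain and shift
    # with a learned bias.
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return gain * (x - mean) / (std + eps) + bias

# Example: a batch of 4 hidden states of dimension 8.
h = np.random.randn(4, 8)
out = ln(h, gain=np.ones(8), bias=np.zeros(8))
```

After this transform each row of `out` has (approximately) zero mean and unit standard deviation, which is what I inserted into the GRU gate pre-activations in the skip-thoughts encoder.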