No such file Error: ../data/MSRvid2012 #2
It's because the function sim_evaluate_all will evaluate over all textual similarity datasets, but I only put online a few example datasets. You can:
|
It is my fault; I should have checked the comments in the source code before opening a new issue.
I run
I will change the |
Yes, data_io.getIDFWeight reads all the data files and computes the IDF weights. I forgot to change it to read only the example file; just changed it. That said, I recommend using more files for computing the IDF weights: using a single file will likely give a less accurate estimate.
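For reference, IDF weighting of this kind can be sketched as below. This is a hypothetical illustration, not the actual data_io.getIDFWeight implementation; the function name idf_weights and its interface are assumptions.

```python
import math
from collections import Counter

def idf_weights(documents):
    """Compute IDF weights over tokenized documents (hypothetical sketch).

    idf(w) = log(N / df(w)), where N is the number of documents and
    df(w) counts how many documents contain the word w. Using more
    documents gives a more stable df(w), hence better estimates.
    """
    n_docs = len(documents)
    df = Counter()
    for doc in documents:
        df.update(set(doc))  # count each word once per document
    return {word: math.log(n_docs / count) for word, count in df.items()}
```

With two documents, a word appearing in both gets weight log(2/2) = 0, while a word appearing in only one gets log(2), matching the intuition that rarer words carry more weight.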
Sorry to bother you again.
How much memory do I need to run |
Are you running it with the word vector file glove.840B.300d.txt? This file contains a very large vocabulary (it is about 6 GB), and loading the whole set of word vectors probably causes the memory issue. You can try using only the first 50,000 words (i.e., keep only the first 50,000 lines of the file), which barely affects the experiments.
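Keeping only the first 50,000 lines can be done with a short script like the following; the helper name truncate_word_vectors is hypothetical.

```python
def truncate_word_vectors(src, dst, n_lines=50000):
    """Copy only the first n_lines of a word vector file (one word per line).

    Streams the input line by line, so the full 6 GB file is never
    loaded into memory.
    """
    with open(src, encoding="utf-8") as fin, \
         open(dst, "w", encoding="utf-8") as fout:
        for i, line in enumerate(fin):
            if i >= n_lines:
                break
            fout.write(line)
```

Since GloVe files are typically sorted by word frequency, truncating keeps the most common words, which is why the results are barely affected.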
Yes, I hadn't noticed that...
No problem!
@huache In my case it is not an OOM problem, but it takes too long. How did you cap it to the first 50K words?
@loretoparisi As YingyuLiang said, modify the file |
@huache ok so I just cat the first 50K rows of the text file. |
When I run demo.sh in the examples directory, this error occurred:

Would you please tell me:

Thanks a lot!