The evaluation of the index and model doesn't match the results reported in the paper #1
Comments
Hi, I'm very happy to help! Best,
Hi Buruo, I can get the reported results using the code in this repo, so I'm not sure why you couldn't; if you can provide more details, we can work it out. I have open-sourced the ranking results per your request, and I also provide several useful scripts to help you reproduce our work.
Could you please follow the guidelines in the README and check whether run_retrieve.sh gives you the expected numbers? Jingtao
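As an aside, one way to check the numbers produced by run_retrieve.sh is to score its TREC-format run file against the TREC 2019 Deep Learning passage judgments. The sketch below uses ir_measures and ir_datasets, which are not part of this repository, and the run file name is only a placeholder:

```python
# Sketch: compute NDCG@10 for a TREC-format run file on TREC 2019 DL passage.
# ir_measures / ir_datasets are assumed to be installed; the run path is a placeholder.
import ir_datasets
import ir_measures
from ir_measures import nDCG

# Relevance judgments for the judged TREC 2019 DL passage queries
qrels = ir_datasets.load("msmarco-passage/trec-dl-2019/judged").qrels_iter()

# Run file produced by run_retrieve.sh (standard 6-column TREC run format)
run = ir_measures.read_trec_run("jpq_trec2019_passage.run")

print(ir_measures.calc_aggregate([nDCG @ 10], qrels, run))
```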
Hi Jingtao, thanks for the improved repository and the results file. We can confirm that the reported results can be reproduced using your code. Buruo
Great!
Hi all,
Thanks for this interesting work and for making the code available. I am Buruo, an MSc student at the University of Glasgow, studying with @cmacdonald. I'm trying to make a PyTerrier plugin for JPQ.
I am using your pre-built indices, but my evaluation doesn't match the results reported in the paper. My best NDCG@10 is 0.5568 on the TREC 2019 Deep Learning track passage task.
To help us debug our integration, would you be able to provide result files, i.e. the rankings generated by JPQ? These would also be useful for others wishing to compare against JPQ in their own papers.
Buruo
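For context, a rough sketch of the kind of PyTerrier wrapper being attempted might look as follows. The jpq_search helper, the class name, and the cutoff are assumptions for illustration only; none of them come from the JPQ repository, and jpq_search must be replaced with the repo's actual retrieval code:

```python
# Sketch: wrap a JPQ-style dense retriever as a PyTerrier transformer and
# evaluate it on TREC 2019 DL passage.
import pandas as pd
import pyterrier as pt

if not pt.started():
    pt.init()


def jpq_search(query: str, k: int = 100):
    """Placeholder: should return a list of (docno, score) pairs from the JPQ index."""
    raise NotImplementedError("plug in the JPQ retrieval code here")


class JPQRetriever(pt.Transformer):
    """Minimal transformer: maps a topics DataFrame to a ranked results DataFrame."""

    def __init__(self, search_fn, k: int = 100):
        self.search_fn = search_fn
        self.k = k

    def transform(self, topics: pd.DataFrame) -> pd.DataFrame:
        rows = []
        for qid, query in zip(topics["qid"], topics["query"]):
            for rank, (docno, score) in enumerate(self.search_fn(query, self.k)):
                rows.append({"qid": qid, "query": query, "docno": docno,
                             "score": score, "rank": rank})
        return pd.DataFrame(rows)


# Intended usage: compare the wrapper's NDCG@10 against the number reported in the paper.
dataset = pt.get_dataset("irds:msmarco-passage/trec-dl-2019/judged")
results = pt.Experiment(
    [JPQRetriever(jpq_search)],
    dataset.get_topics(),
    dataset.get_qrels(),
    eval_metrics=["ndcg_cut_10"],
)
print(results)
```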