Hi,

I wanted to run an evaluation using the NDCG score, as done in the paper. Where is the RELEVANCE_ANNOTATIONS_CSV_PATH for the 99 queries mentioned in the README, needed to run /src/relevanceeval.py? I just want to test my results.

The annotations file is hidden behind the leaderboard: once you submit, the relevant evaluation is run. We don't release the annotation file yet, to avoid people overfitting on the small test set. You're more than welcome to submit to the leaderboard if you'd like to see your model's results. We do accept multiple submissions (but no more than one submission every two weeks; see #80). For more fine-grained testing, we recommend using the proxy task.
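In case it helps with local testing (e.g. on the proxy task), here is a minimal sketch of how NDCG is commonly computed. This is a generic implementation under assumed inputs, not the repository's relevanceeval.py; the relevance grades below are made-up example values.

```python
import math
from typing import Sequence

def dcg(relevances: Sequence[float], k: int) -> float:
    """Discounted cumulative gain of the top-k results, in ranked order."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg(relevances: Sequence[float], k: int = 10) -> float:
    """NDCG@k: DCG normalized by the DCG of the ideal (sorted) ranking."""
    ideal = dcg(sorted(relevances, reverse=True), k)
    return dcg(relevances, k) / ideal if ideal > 0 else 0.0

# Hypothetical relevance grades of a model's top-5 results for one query.
print(ndcg([3, 2, 3, 0, 1], k=5))  # ≈ 0.97
```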