text2text model evaluation not working #41
Comments
@Khalid-Usman, from the error message, it seems like the label_ids in your prediction and truth files may not be aligned. Can you verify the label_ids in the two files?
I don't think so, but let me double-check... Also, instead of this I used the following command and got precision/recall: …
Is that correct?
If the label_ids match, that should be correct.
@OctoberChang, I verified; there were a few …
@OctoberChang, there is something wrong in the evaluation code. I tried to debug, and for the ground-truth items I found an error in the following line: "…".
I printed each variable and got: …
Moreover, …
@Khalid-Usman, you should print the … and check its values.
@OctoberChang, yes, I did. So I don't think there exists any …. Thanks, please verify that.
@Khalid-Usman, can you try evaluating just the first line of your prediction and truth files? If this is still not working, you can share the first line of those two files as well as the output label file (output-labels.txt).
@OctoberChang, what is the score in the prediction file?
The score is an entry of the prediction score matrix (i.e., the model's predicted relevance score for each label of a given input text).
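For intuition, here is a minimal sketch of what such a score matrix could look like, assuming a scipy.sparse CSR layout (the layout and the label_ids here are illustrative assumptions, not the actual pecos internals):

```python
from scipy.sparse import csr_matrix

# Rows are input texts, columns are label_ids; each stored value is the
# model's predicted relevance score for that (text, label) pair.
rows = [0, 0, 0, 1, 1]
cols = [12, 5, 97, 3, 12]                # hypothetical label_ids
scores = [0.83, 0.42, 0.11, 0.95, 0.20]
pred = csr_matrix((scores, (rows, cols)), shape=(2, 100))

# The top-1 prediction for the first text is the column with the largest score.
row = pred.getrow(0).toarray().ravel()
print(row.argmax(), row.max())           # -> 12 0.83
```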
I found the recall numbers output by that function to be wildly different from what I was expecting, so I did the calculation myself and got much higher numbers. If a query has 4 labels and the top result is in the recall set, do you compute recall@1 as 1.0 or 0.25? It could also be that I am having a similar label misalignment issue.
@simonhughes22, in your example, Recall@1 should be 0.25 while Prec@1 is 1.0. See the definitions of Prec@k and Recall@k at http://manikvarma.org/downloads/XC/XMLRepository.html#metrics
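Following those definitions, a small sketch (not the pecos implementation) that reproduces the numbers above: with 4 true labels and a relevant top hit, Prec@1 = 1/1 and Recall@1 = 1/4:

```python
def prec_at_k(ranked, relevant, k):
    # Prec@k: fraction of the top-k predicted labels that are relevant.
    return sum(label in relevant for label in ranked[:k]) / k

def recall_at_k(ranked, relevant, k):
    # Recall@k: fraction of the relevant labels recovered in the top-k.
    return sum(label in relevant for label in ranked[:k]) / len(relevant)

relevant = {"Out1", "Out2", "Out3", "Out4"}   # a query with 4 true labels
ranked = ["Out1", "Out9", "Out2", "Out8"]     # predictions, best first

print(prec_at_k(ranked, relevant, 1))    # 1.0  (the top hit is relevant)
print(recall_at_k(ranked, relevant, 1))  # 0.25 (1 of 4 true labels found)
```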
Closing this issue. Feel free to reopen if you still have any questions related to the text2text evaluation module.
Description
Model evaluation is not working properly; it does not output the correct precision and recall.
How to Reproduce?
I ran the following command: …
where,
--pred-path is the path of the file produced during model prediction,
--truth-path is the path of the test file, e.g. "Out1, Out2, Out3 \t cheap door",
where Out1, Out2, and Out3 are line numbers in the following output file:
--text-item-path ./output-labels.txt (see the sanity check sketched below).
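To rule out the label misalignment discussed in the comments, a sanity check along these lines verifies that every label id in the truth file points at an existing line of output-labels.txt (the truth file name and 0-based ids are assumptions; this sketch is not part of pecos):

```python
# Count how many labels the text-item file defines.
with open("output-labels.txt") as f:
    num_labels = sum(1 for _ in f)

# Each truth line: comma-separated label ids, a tab, then the input text,
# e.g. "12, 45, 7\tcheap door". Adjust the range check if your ids are 1-based.
with open("test.txt") as f:              # the --truth-path file (name assumed)
    for lineno, line in enumerate(f, start=1):
        for tok in line.split("\t")[0].split(","):
            label_id = int(tok)
            if not 0 <= label_id < num_labels:
                print(f"line {lineno}: label_id {label_id} out of range "
                      f"(output-labels.txt has {num_labels} lines)")
```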
What have you tried to solve it?
Error message or code output
Environment
(Add as much information about your environment as possible, e.g. dependencies versions.)