
Version of sklearn.metrics.average_precision_score Should Be Carefully Considered for mAP #50

Open
huanghoujing opened this issue Jan 6, 2018 · 7 comments


@huanghoujing

Hi, Tong Xiao.

I find that sklearn.metrics.average_precision_score has changed its behavior since version 0.19. Previous versions (I have only tested 0.18.1) generate mAP identical to the code of Market1501, while newer versions (I have only tested 0.19.1) generate higher mAP.

I provide a test case for this, link.

Thank you!

@Cysu
Owner

Cysu commented Feb 3, 2018

@huanghoujing Thanks a lot for this information! That's interesting. I will look into it when I have time.

@huanghoujing
Author

Thanks again. Yeah, it may be necessary to make the mAP evaluation code self-contained in open-reid. Looking forward to your solution.
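A self-contained AP could be sketched along these lines. This is only a sketch of the pre-0.19 behaviour (trapezoidal area under the precision-recall curve, with a prepended recall=0, precision=1 point, mirroring what older sklearn's precision_recall_curve did); the function name `average_precision_trapz` is made up for illustration and is not part of open-reid:

```python
import numpy as np

def average_precision_trapz(y_true, y_score):
    """Trapezoidal area under the precision-recall curve.

    Sketch of a self-contained AP intended to mimic pre-0.19 sklearn
    (linear interpolation between operating points), so the result does
    not depend on the installed scikit-learn version.
    """
    order = np.argsort(-np.asarray(y_score))     # sort by descending score
    y_true = np.asarray(y_true)[order]
    tp = np.cumsum(y_true)                        # true positives at each rank
    precision = tp / np.arange(1, len(y_true) + 1)
    recall = tp / tp[-1]
    # Prepend the (recall=0, precision=1) operating point, then integrate
    # with the trapezoid rule over recall.
    recall = np.concatenate(([0.0], recall))
    precision = np.concatenate(([1.0], precision))
    return float(np.sum(np.diff(recall) * (precision[1:] + precision[:-1]) / 2.0))
```

For a perfect ranking this returns 1.0; for a mixed ranking it falls between 0 and 1, matching the linearly interpolated PR area rather than the step-wise 0.19+ definition.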

@Cysu
Owner

Cysu commented Feb 3, 2018

@huanghoujing you may install a specific version with:

pip uninstall scikit-learn
pip install scikit-learn==0.18.1

@huanghoujing
Author

Yeah, my current workaround is also awkward: downgrade the package and check its version in the code.
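A minimal version guard for that workaround might look like the following sketch. The helper name and the warning text are assumptions for illustration, not code from open-reid:

```python
import warnings


def uses_new_ap_definition(version):
    """Return True if this scikit-learn version uses the 0.19+ AP definition.

    Since 0.19, average_precision_score weights precisions by the change
    in recall instead of linearly interpolating between operating points.
    """
    major, minor = (int(x) for x in version.split('.')[:2])
    return (major, minor) >= (0, 19)


# In the evaluation code, something like:
# import sklearn
# if uses_new_ap_definition(sklearn.__version__):
#     warnings.warn('sklearn >= 0.19 changed average_precision_score; '
#                   'mAP will differ from the Market-1501 reference code.')
```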

@yee-kevin

yee-kevin commented Jul 3, 2018

@huanghoujing
Hi, I am following your mAP calculations and I came across this issue too.
Can you advise me on this:

  1. Should I use sklearn version 0.18.1 or 0.19.1?
    From the documentation: http://scikit-learn.org/dev/modules/generated/sklearn.metrics.average_precision_score.html#sklearn.metrics.average_precision_score
    "Changed in version 0.19: Instead of linearly interpolating between operating points, precisions are weighted by the change in recall since the last operating point."

  2. Running under Python 2 and Python 3 also gives me different mAP results. Which one is correct?

  3. Do you happen to know, for other re-ID datasets such as CUHK, what the standard sklearn version for mAP calculation is?

Thank you

@huanghoujing
Author

@yee-kevin

  1. You should use 0.18.1
  2. If you are using open-reid, which recommends Python 3, then you should use Python 3 to reduce potential inconsistency.
  3. sklearn 0.18.1 gives the standard mAP for re-ID, regardless of which dataset you are using.
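On point 2, one plausible source of the Python 2 vs Python 3 discrepancy (an assumption here, not confirmed in this thread) is integer division: Python 2's `/` truncates when both operands are ints, while Python 3 always performs true division, so a precision term like `tp / rank` can silently become 0 under Python 2:

```python
# Python 2's `/` truncates for int operands; Python 3 performs true
# division. A precision term like tp / rank is a typical divergence point.
tp, rank = 3, 7
precision = tp / rank      # Python 3: 0.42857...; Python 2 would give 0
truncated = tp // rank     # explicit floor division, same in both: 0
```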

@yee-kevin

@huanghoujing
Thank you very much!
