Problem with MAP@k #25

Open
podgorskiy opened this issue Mar 3, 2019 · 0 comments

Here:

https://github.com/thuml/HashNet/blob/master/pytorch/src/test.py#L51

You compute:

for i in range(query_num):
    label = validation_labels[i, :]
    label[label == 0] = -1
    idx = ids[:, i]
    imatch = np.sum(database_labels[idx[0:R], :] == label, axis=1) > 0
    relevant_num = np.sum(imatch)
    Lx = np.cumsum(imatch)
    Px = Lx.astype(float) / np.arange(1, R+1, 1)
    if relevant_num != 0:
        APx.append(np.sum(Px * imatch) / relevant_num)

Here relevant_num is the number of relevant entries in the returned list of length R, not the total number of relevant entries for the query; relevant_num is therefore always less than or equal to R.

Here https://github.com/benhamner/Metrics/blob/master/Python/ml_metrics/average_precision.py#L39 it is computed differently: the divisor is min(total_number_of_relevant, k).
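
For comparison, here is a minimal sketch of the per-query loop above adjusted to that convention. It assumes the same variables as in test.py (query_num, validation_labels, database_labels, ids, R); the name total_relevant is mine, and this is only an illustration of the min(total_number_of_relevant, R) denominator, not the repository's code:

import numpy as np

APx = []
for i in range(query_num):
    label = validation_labels[i, :]
    label[label == 0] = -1
    idx = ids[:, i]
    # relevance of every database entry to this query, not just the top R
    all_match = np.sum(database_labels == label, axis=1) > 0
    total_relevant = np.sum(all_match)
    imatch = all_match[idx[0:R]]
    Lx = np.cumsum(imatch)
    Px = Lx.astype(float) / np.arange(1, R + 1, 1)
    if total_relevant != 0:
        # divide by min(total_relevant, R), as in ml_metrics, instead of the
        # number of relevant entries that happen to appear in the top R
        APx.append(np.sum(Px * imatch) / min(total_relevant, R))
mAP = np.mean(APx)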

Also see discussion here: https://stackoverflow.com/questions/40906671/confusion-about-mean-average-precision

It is not possible to cheat AP by tweaking the size of the returned ranked list. AP is the area under the precision-recall curve, which plots precision as a function of recall, where recall is the number of returned positives relative to the total number of positives in the ground truth, not relative to the number of positives in the returned list. So if you crop the list, all you are doing is cropping the precision-recall curve and leaving out its tail.

And:

Your confusion might be related to the way some popular functions, such as VLFeat's vl_pr, compute precision-recall curves: they assume you have provided the entire ranked list, and therefore compute the total number of positives in the ground truth by looking only at the ranked list instead of at the ground truth itself. So if you used vl_pr naively on cropped lists you could indeed cheat it, but that would be an invalid computation.

Also, here is an explanation of MAP@k: https://www.kaggle.com/c/FacebookRecruiting/discussion/2002

The number you divide by is the number of points possible. This is the lesser of ten (the most you can predict) and the number of actual correct answers that exist.
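
To make the difference concrete, here is a small made-up example (the numbers are invented purely for illustration): a query with 20 relevant items in the database, R = 10, and a returned top-10 list whose only relevant hits are at ranks 1 and 2.

import numpy as np

R = 10
total_relevant = 20                                  # relevant items in the whole database (hypothetical)
imatch = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])    # hits in the returned top-10 list

Px = np.cumsum(imatch).astype(float) / np.arange(1, R + 1)
precision_sum = np.sum(Px * imatch)                  # 1/1 + 2/2 = 2.0

ap_testpy = precision_sum / np.sum(imatch)                   # 2.0 / 2  = 1.0
ap_min_denominator = precision_sum / min(total_relevant, R)  # 2.0 / 10 = 0.2

The test.py formula gives this query a perfect 1.0 even though 18 of its 20 relevant items were never retrieved, while the min(total_number_of_relevant, R) denominator gives 0.2.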

Am I missing something, or is the code incorrect? It is true that many other hashing papers compute MAP the same way.
