How to compute Recall@100 #7

Closed
zhongxiangzju opened this issue Jul 8, 2021 · 4 comments


@zhongxiangzju

Hi, one of the evaluation metrics for zero-shot object detection is Recall@100, but how to compute it is not entirely clear to me.
My understanding is as follows.
First, select the top 100 detections from an image.
Second, mark a predicted bounding box as positive if it has an IoU greater than a threshold (0.5, for example) with a GT box and no other higher-confidence bounding box has been assigned to the same GT box.
Third, compute Recall@100 for this image as number_of_positive_predictions / 100.
Fourth, compute Recall@100 over all images as sum(recall@100 for each image) / number_of_images.

Is this correct? Thanks a lot!

@zhengye1995
Owner

In the third step, it should be: number_of_positive_predictions / number_of_all_gt_boxes * 100, for example: 50 / 100 * 100 = 50%.
The other steps are correct.
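
For reference, here is a minimal sketch of the corrected per-image computation. The names (compute_iou, recall_at_100, detections, gt_boxes) are illustrative, not from this repo, and boxes are assumed to be in (x1, y1, x2, y2) format.

import numpy as np

def compute_iou(box_a, box_b):
    # IoU of two boxes in (x1, y1, x2, y2) format.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def recall_at_100(detections, gt_boxes, iou_thr=0.5):
    # detections: list of (score, box); gt_boxes: list of boxes for one image.
    # Step 1: keep the top-100 detections by confidence.
    detections = sorted(detections, key=lambda d: d[0], reverse=True)[:100]
    matched = [False] * len(gt_boxes)
    num_positive = 0
    # Step 2: greedily match each detection to a still-unmatched GT box.
    for score, box in detections:
        best_iou, best_gt = 0.0, -1
        for i, gt in enumerate(gt_boxes):
            if not matched[i]:
                iou = compute_iou(box, gt)
                if iou > best_iou:
                    best_iou, best_gt = iou, i
        if best_iou >= iou_thr:
            matched[best_gt] = True
            num_positive += 1
    # Step 3 (corrected): divide by the number of GT boxes, not by 100.
    return num_positive / max(len(gt_boxes), 1)

# Step 4: average the per-image recall over all images, e.g.
# mean_recall = sum(recall_at_100(d, g) for d, g in per_image_data) / len(per_image_data)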

@zhongxiangzju
Author

Yes. Thanks!

By the way, how to generate the word embedding vectors seems quite confusing.
From here, I know that gensim is used to generate the embeddings, but my results differ from the author's.
Do you know how to generate the embeddings? Thanks very much.

Here is the script I used.

import numpy as np
import gensim.downloader

# List the pre-trained models available via the gensim downloader.
print(list(gensim.downloader.info()['models'].keys()))
# ['fasttext-wiki-news-subwords-300', 'conceptnet-numberbatch-17-06-300', 'word2vec-ruscorpora-300', 'word2vec-google-news-300', 'glove-wiki-gigaword-50', 'glove-wiki-gigaword-100', 'glove-wiki-gigaword-200', 'glove-wiki-gigaword-300', 'glove-twitter-25', 'glove-twitter-50', 'glove-twitter-100', 'glove-twitter-200', '__testing_word2vec-matrix-synopsis']

# Load the 300-dimensional Google News word2vec model.
word_vectors = gensim.downloader.load('word2vec-google-news-300')

# Look up the vector for a class name and L2-normalize it.
person_embedding = word_vectors['person']
person_embedding = person_embedding / np.linalg.norm(person_embedding)
print(person_embedding)

@zhengye1995
Owner

Hi, I directly used the word embedding vectors from https://github.com/salman-h-khan/PL-ZSD_Release/blob/master/MSCOCO/word_w2v.txt.

I think your normalization operation is correct, and I suspect the pre-trained w2v model is different. (I also do not know which pre-trained w2v model was used in https://github.com/salman-h-khan/PL-ZSD_Release; I just used the same embedding vectors for a fair comparison.)
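
For anyone who wants to load that file directly, here is a hedged sketch. I am assuming word_w2v.txt is a plain-text numeric matrix; that layout is an assumption, so inspect the file in PL-ZSD_Release before relying on this.

import numpy as np

# ASSUMPTION: word_w2v.txt is a whitespace-delimited numeric matrix. If it is
# comma-separated, pass delimiter=',' to np.loadtxt; if rows begin with class
# names, np.loadtxt will fail and the file needs custom parsing.
embeddings = np.loadtxt('word_w2v.txt')
print(embeddings.shape)

# L2-normalize each embedding vector, matching the gensim script above.
embeddings = embeddings / np.linalg.norm(embeddings, axis=-1, keepdims=True)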

@zhongxiangzju
Author

OK, thanks very much for your kind answers.
