
mAP #174

Open
salehnia opened this issue May 5, 2022 · 2 comments

@salehnia commented May 5, 2022

Hi, thanks for sharing your code.
The evaluation metric in your method is mAP, and we know that AP must lie between 0 and 1.
Running your code on COCO in the 10-shot setting gives 9.1. Does this mean the model's real detection mAP is about 0.091?

Thank you
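For anyone hitting the same confusion: COCO-style detection results are conventionally reported on a 0-100 scale (Detectron2's COCOEvaluator, which this codebase builds on, multiplies the raw AP by 100 before printing), so a reported 9.1 corresponds to roughly 0.091 on the 0-1 scale. A minimal sketch of COCO-style 101-point interpolated AP, using a toy precision-recall curve (the numbers are made up for illustration):

```python
import numpy as np

def coco_style_ap(recalls, precisions):
    """COCO-style AP: mean of the best achievable precision at 101
    evenly spaced recall levels. The result is always in [0, 1]."""
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 101):
        mask = recalls >= r
        ap += precisions[mask].max() if mask.any() else 0.0
    return ap / 101

# Toy precision-recall curve (precision decreasing as recall grows).
recalls = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
precisions = np.array([1.0, 0.8, 0.6, 0.4, 0.2])

ap = coco_style_ap(recalls, precisions)
print(f"raw AP:   {ap:.3f}")       # on the 0-1 scale
print(f"reported: {ap * 100:.1f}")  # on the 0-100 scale papers use
```

So the raw AP never exceeds 1; the familiar two-digit numbers in papers are just that value multiplied by 100.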

@frankvp11 commented
I was just wondering about the evaluation metrics in general. It says bAP, but I'm only familiar with mAP. I'm also curious about the 9.1.

@muratbayrktr commented

Please refer to the paper https://arxiv.org/pdf/2003.06957.pdf. bAP stands for "base AP": it measures the two-stage fine-tuning approach (TFA)'s performance on the base classes after the fine-tuning stage. The authors wanted to make sure the base-class performance did not vanish after fine-tuning, and that is why they also report the base classes' AP.
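To make this concrete: the TFA paper reports bAP for the base classes and nAP for the novel classes, each of which is just the per-class AP averaged over that class split. A hedged sketch (toy class names and AP numbers, not the repo's actual evaluation code):

```python
def split_map(per_class_ap, base_classes, novel_classes):
    """Average per-class AP over the base split (bAP) and the
    novel split (nAP); overall mAP would average over all classes."""
    bap = sum(per_class_ap[c] for c in base_classes) / len(base_classes)
    nap = sum(per_class_ap[c] for c in novel_classes) / len(novel_classes)
    return bap, nap

# Hypothetical per-class APs on the 0-1 scale.
per_class_ap = {"dog": 0.5, "cat": 0.25, "zebra": 0.125}
bap, nap = split_map(per_class_ap,
                     base_classes=["dog", "cat"],
                     novel_classes=["zebra"])
print(bap, nap)  # 0.375 0.125
```

A large gap between bAP and nAP would indicate that fine-tuning on the few-shot novel classes is eroding base-class performance, which is exactly what the authors wanted to rule out.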
