
Fix gt priority tiny bug #3208

Merged · 2 commits merged on Jul 20, 2020
Conversation

@simonJJJ (Contributor) commented Jul 5, 2020

The original method that calculates gt priority has a tiny bug: it actually outputs the rank, not the priority.
E.g. areas = tensor([200., 300., 500., 400., 100.]).
The sort_idx would then be tensor([2, 3, 1, 0, 4]), but the priority (based on the gt areas) is actually tensor([3, 2, 0, 1, 4]). So I use torch.argsort() to get it.
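For illustration, a minimal standalone sketch of the difference being described (not the actual mmdetection assigner code; the names `areas`, `sort_idx`, and `priority` follow the example above):

```python
import torch

# Example gt areas from the description above.
areas = torch.tensor([200., 300., 500., 400., 100.])

# sort_idx maps rank -> gt index: the i-th entry says which gt has the
# i-th largest area. This is what the original code returned.
sort_idx = torch.argsort(areas, descending=True)   # tensor([2, 3, 1, 0, 4])

# The priority should map gt index -> rank (the inverse permutation),
# which a second argsort recovers.
priority = torch.argsort(sort_idx)                 # tensor([3, 2, 0, 1, 4])
```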

@CLAassistant commented Jul 5, 2020

CLA assistant check: all committers have signed the CLA.

@hellock requested a review from Johnson-Wang on Jul 5, 2020, 09:54
@Johnson-Wang (Collaborator)

Hi, thanks for the contribution. Have you run the baseline after fixing this?

@simonJJJ (Contributor, Author)

> Hi, thanks for the contribution. Have you run the baseline after fixing this?

Nope. I also want to see the performance gap caused by the fix.

@Johnson-Wang (Collaborator)

I have re-benchmarked the performance with the fix. The AP of ResNet-50 does not change (still 37.4), while that of ResNet-50 with overlaps of 0.2-0.5 treated as ignored decreases a bit (from 37.0 to 36.0). Performance of the other backbones is still pending.

@Johnson-Wang (Collaborator) commented Jul 17, 2020

An update on the performance:

|            | R50 (ignore range 0.2-0.5) | R50  | R101 |
|------------|----------------------------|------|------|
| Before fix | 37.0                       | 37.4 | 39.3 |
| After fix  | 36.0                       | 37.4 | 39.4 |

@Johnson-Wang (Collaborator) left a review comment

Thanks for pointing this bug out. Most configs are quite robust to this modification, while the performance with 0.2-0.5 treated as ignored decreases a bit. I will update the log afterwards.

@hellock merged commit 4c32b07 into open-mmlab:master on Jul 20, 2020
mike112223 pushed a commit to mike112223/mmdetection that referenced this pull request on Aug 25, 2020