@dhananjaisharma10 dhananjaisharma10 commented Nov 10, 2022

What does this PR do?

Fixes #1184

Adds a fix for incorrect mean AP when there are no GT bboxes for a class, but there are predicted boxes. This can happen in the following cases:

  • When all GT boxes are to be ignored.
  • When there are no GT boxes.

Solution: set the precision for that class to 0.
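
For illustration, a hypothetical minimal reproduction (the boxes, scores, and labels below are made up, not the exact tensors from the linked issue): class 0 has a perfectly matching prediction, while class 1 has a prediction but no ground-truth box at all.

```python
# Hypothetical minimal reproduction (illustrative values only, not the
# exact tensors from issue #1184).
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision

preds = [{
    # Class 0: perfect match with the GT box below.
    # Class 1: no GT box exists, so this detection is a false positive.
    "boxes": torch.tensor([[0.0, 0.0, 10.0, 10.0], [20.0, 20.0, 30.0, 30.0]]),
    "scores": torch.tensor([0.9, 0.8]),
    "labels": torch.tensor([0, 1]),
}]
target = [{
    "boxes": torch.tensor([[0.0, 0.0, 10.0, 10.0]]),
    "labels": torch.tensor([0]),
}]

metric = MeanAveragePrecision(class_metrics=True)
metric.update(preds, target)
# Before the fix, class 1 was reported as -1 (ignored) and did not drag the
# mean down; with the fix, its precision counts as 0, halving the mAP here.
print(metric.compute()["map_per_class"])
```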

Note: I have yet to write tests. I tried running the existing ones in tests/unittests/detection/test_map.py but hit a FileNotFoundError because of this missing file: _SAMPLE_DETECTION_SEGMENTATION = os.path.join(_PATH_ROOT, "_data", "detection", "instance_segmentation_inputs.json")

Please see the code in the linked issue and compare the before vs. after outputs below.

Before

{'map': tensor(1.),
 'map_50': tensor(1.),
 'map_75': tensor(-1),
 'map_large': tensor(-1.),
 'map_medium': tensor(-1.),
 'map_per_class': tensor([ 1.,  1., -1., -1.]),
 'map_small': tensor(1.),
 'mar_1': tensor(1.),
 'mar_10': tensor(1.),
 'mar_100': tensor(1.),
 'mar_100_per_class': tensor([ 1.,  1., -1., -1.]),
 'mar_large': tensor(-1.),
 'mar_medium': tensor(-1.),
 'mar_small': tensor(1.)}

After

{'map': tensor(0.5000),
 'map_50': tensor(0.5000),
 'map_75': tensor(0.5000),
 'map_large': tensor(-1.),
 'map_medium': tensor(-1.),
 'map_per_class': tensor([1., 1., 0., 0.]),
 'map_small': tensor(0.5000),
 'mar_1': tensor(1.),
 'mar_10': tensor(1.),
 'mar_100': tensor(1.),
 'mar_100_per_class': tensor([ 1.,  1., -1., -1.]),
 'mar_large': tensor(-1.),
 'mar_medium': tensor(-1.),
 'mar_small': tensor(1.)}

Other changes:

  • Moved the gtIgnore check to the top of the function so it can return early, saving computation (see the sketch below).
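
As a rough illustration of the early-return idea (the function and variable names here are hypothetical, not the actual torchmetrics internals):

```python
from typing import Optional

import torch


def _evaluate_class(num_valid_gt: int, num_dets: int) -> Optional[torch.Tensor]:
    """Hypothetical sketch of per-class evaluation with an early return."""
    if num_valid_gt == 0:
        # No usable ground truth for this class: skip it entirely when there
        # are no detections; otherwise every detection is a false positive,
        # so the class precision is 0. Returning early here avoids the IoU
        # matching and accumulation work below.
        return None if num_dets == 0 else torch.tensor(0.0)
    # ... the full matching / precision-recall computation would run here ...
    return torch.tensor(1.0)  # placeholder for the real result
```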

Before submitting

  • Was this discussed/approved via a GitHub issue? (no need for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure to update the docs?
  • Did you write any new necessary tests?

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

@Borda Borda (Collaborator) left a comment

Can we please add a test to verify the correctness, so we don't break it later again...

@Borda Borda added this to the v0.11 milestone Nov 18, 2022
@SkafteNicki SkafteNicki modified the milestones: v0.11, v0.12 Nov 20, 2022
@dhananjaisharma10 (Author)

Sorry for the late response @Borda

Yes, will do.

@dhananjaisharma10 (Author) commented Nov 27, 2022

@Borda I have added the test. Please check.

A doctest failed. Checking.

@homomorfism

Could you please speed up approving this PR?)

@Borda Borda (Collaborator) commented Dec 19, 2022

> Could you please speed up approving this PR?)

I see it is still marked as a draft, so are you saying it is ready for review?
Also, I am not sure, but some GPU tests were failing, so I am just re-running with an update from master...

@wilderrodrigues (Contributor)

Hi @dhananjaisharma10,

We found this issue in our system and I was going to look at it. Fortunately, I checked the open issues @Borda sent me and saw that you have a PR for it, which fixes it and seems to be green now.

Could you please remove the draft so we all can benefit from the fix? If you need more time/help to adjust anything, just let me know.

Cheers!

@Borda Borda (Collaborator) commented Dec 23, 2022

> Could you please remove the draft so we all can benefit from the fix?

@wilderrodrigues what do you mean by removing the draft so we can all benefit from it? Do you mean merging it?

> If you need more time/help to adjust anything, just let me know.

I am not sure if the fix is ready, @SkafteNicki @twsl

@justusschock justusschock marked this pull request as ready for review January 9, 2023 12:08
@justusschock (Member) commented Jan 9, 2023

@homomorfism we cannot approve this PR as long as tests are failing, as this indicates wrong behaviour.

@wilderrodrigues @dhananjaisharma10 Is either of you familiar with detection metrics, has the bandwidth, and wants to take this to merging? Unfortunately, I have very little knowledge of detection...

@Borda Borda (Collaborator) commented Feb 27, 2023

@homomorfism @wilderrodrigues, could you please help with debugging the last three failing tests?

@dhananjaisharma10 (Author) commented Mar 3, 2023

> @homomorfism @wilderrodrigues, could you please help with debugging the last three failing tests?

Hi @Borda, could you please help me with running the tests locally? I am trying to find out why they are failing. They do not seem directly related to my fix, but I could be wrong.
On another note, apologies for being silent. I did not have enough bandwidth.

@Borda Borda (Collaborator) commented Mar 4, 2023

> Could you please help me with running the tests locally? I am trying to find out why they are failing. They do not seem directly related to my fix, but I could be wrong.

The error states that the expected values differ from what the metric returns.

> On another note, apologies for being silent. I did not have enough bandwidth.

that is fine :)

@wilderrodrigues (Contributor)

> @homomorfism @wilderrodrigues, could you please help with debugging the last three failing tests?

Looking into it now. There is also some discrepancy in the mAP results; it is a bit larger than with my tests.

What is weird is that locally (on a MacBook) and on an RTX 3090 with Ubuntu 20.04, those tests pass.

tm_result = {'map': tensor(0.2347), 'map_50': tensor(0.5017), 'map_75': tensor(0.1683), 'map_small': tensor(-1.), 'map_medium': te...0, 0.3000]), 'mar_100_per_class': tensor([ 0.4000, -1.0000,  0.3000]), 'classes': tensor([2, 3, 4], dtype=torch.int32)}
ref_result = array([0.352], dtype=float32), atol = 0.01, key = 'map'
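
For context, the failing assertion boils down to a tolerance check of the torchmetrics value against the pycocotools reference. A hedged sketch using the values from the output above (not the exact test code):

```python
import numpy as np
import torch

# Values copied from the CI failure above.
tm_map = torch.tensor(0.2347)                  # torchmetrics result on CI
ref_map = np.array([0.352], dtype=np.float32)  # pycocotools reference

# The test requires agreement within atol=0.01, which does not hold here.
print(np.isclose(tm_map.item(), ref_map[0], atol=0.01))  # False
```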

@Borda Borda added the bug / fix Something isn't working label Apr 26, 2023
@Borda Borda self-assigned this May 17, 2023
@SkafteNicki SkafteNicki modified the milestones: v1.0.0, future Jun 3, 2023
@mergify mergify bot added the has conflicts label Jul 3, 2023
@Borda Borda (Collaborator) commented Jul 3, 2023

@dhananjaisharma10, we value your work and thank you for your contribution 💜 Could you please revisit this PR with respect to the recently merged revert to the COCO implementation (#1327) and setting the torch version as an internal option? 🐰

@Borda Borda marked this pull request as draft July 3, 2023 18:14
@Borda Borda (Collaborator) commented Aug 8, 2023

Once again, thank you for all your effort! 💜
Feel free to reopen this PR; we are happy to talk about it any time in the future 🐿️

@Borda Borda closed this Aug 8, 2023

Labels

bug / fix (Something isn't working), has conflicts


Development

Successfully merging this pull request may close these issues.

Wrong Calculation of Mean Average Precision

6 participants