[Segmentation] Add mean IoU #1236
Conversation
Hi, thanks for opening this PR. I have left a few comments:
1. Why didn't you rely on the Jaccard index, as pointed out in the issue you linked?
2. You only implemented the multiclass case. Could you also implement the binary case (see our recent classification refactor) and comment on a multilabel case?
Thanks for the quick review.
I made a quick Colab notebook to showcase what I'd like to achieve: https://colab.research.google.com/drive/1O8KlOdiz7JXAIKLh2TsKs11cwH5B4ZDK?usp=sharing. I implemented the mIoU metric in HF evaluate, but as it's in NumPy it's rather slow. Could we achieve this using the existing implementation? I think the current implementation doesn't take the union of labels present in both maps into account.
@NielsRogge fair enough. Just wanted to get some confirmation that we actually need this and aren't re-implementing something we already have under a different name :)
@NielsRogge why close this? Below, returning 0 rather than NaN results in a clearly wrong result:
Let's reopen this one. I agree that the output is not handled correctly here when classes are absent.
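The effect discussed above can be reproduced with a small sketch (this is an illustrative standalone function, not the PR's actual implementation): classes absent from both the prediction and the target have an undefined IoU (0/0), and counting them as 0 instead of excluding them (NaN) drags the mean down.

```python
import torch

def per_class_iou(preds, target, num_classes):
    """Per-class IoU; classes absent from both maps get NaN (0/0)."""
    ious = []
    for c in range(num_classes):
        pred_c = preds == c
        target_c = target == c
        union = (pred_c | target_c).sum().item()
        if union == 0:
            ious.append(float("nan"))  # class absent from both maps
        else:
            inter = (pred_c & target_c).sum().item()
            ious.append(inter / union)
    return ious

# Only classes 0 and 1 appear, but num_classes is 4.
target = torch.tensor([[0, 0], [1, 1]])
preds = torch.tensor([[0, 1], [1, 1]])

ious = per_class_iou(preds, target, num_classes=4)
present = [x for x in ious if x == x]  # drop NaNs
mean_present = sum(present) / len(present)
mean_with_zeros = sum(0.0 if x != x else x for x in ious) / len(ious)
print(mean_present, mean_with_zeros)
```

Averaging only over present classes gives (0.5 + 2/3) / 2 ≈ 0.583, while treating the two absent classes as 0 roughly halves that to ≈ 0.292, even though the predictions are unchanged.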
Codecov Report

@@            Coverage Diff            @@
##           master    #1236     +/-   ##
=========================================
- Coverage      69%      34%     -35%
=========================================
  Files         311      313       +2
  Lines       17527    17596      +69
=========================================
- Hits        12085     5915    -6170
- Misses       5442    11681    +6239
Co-authored-by: Jirka Borovec <6035284+Borda@users.noreply.github.com>
What does this PR do?
This PR adds the mean Intersection over Union (mIoU) metric, especially useful for semantic segmentation (where the goal is to label each pixel of an image with a certain class).
I first tried to use the existing Jaccard Index metric for this, but it's not ideal: one needs to set average=None, and even then you can't easily calculate the mIoU, as you need to take the union of the labels present in the predicted segmentation map and the ground truth segmentation map. Hence, this PR proposes adding a new "Segmentation" section, to which metrics like mIoU and panoptic quality (PQ) can be added.
Fixes #1124
The implementation is based on the one in mmsegmentation by OpenMMLab.
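mmsegmentation computes per-class intersection and union areas with histogram counts rather than per-class loops. A minimal sketch of that idea follows; the function name and ignore_index default are illustrative assumptions, not necessarily the PR's API.

```python
import torch

def intersect_and_union(pred, label, num_classes, ignore_index=255):
    """Histogram-based per-class intersection/union areas, in the spirit of
    mmsegmentation's intersect_and_union (illustrative, not the PR's API)."""
    mask = label != ignore_index          # drop ignored pixels
    pred, label = pred[mask], label[mask]
    intersect = pred[pred == label]       # correctly classified pixels
    area_intersect = torch.bincount(intersect, minlength=num_classes).float()
    area_pred = torch.bincount(pred, minlength=num_classes).float()
    area_label = torch.bincount(label, minlength=num_classes).float()
    area_union = area_pred + area_label - area_intersect
    return area_intersect, area_union

pred = torch.tensor([0, 0, 1, 1, 2])
label = torch.tensor([0, 1, 1, 1, 255])   # last pixel is ignored
inter, union = intersect_and_union(pred, label, num_classes=3)
iou = inter / union  # per-class IoU; 0/0 yields NaN for absent classes
```

Accumulating the intersection and union areas across batches before dividing is what makes a module (stateful) variant straightforward.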
I just created a functional variant for now; if this gets approved, I can proceed with making a module variant, as well as implementing the tests.
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃