
[Fix] Fix reduce_zero_label in evaluation #2504

Merged
9 commits merged on Jan 30, 2023

Conversation

siddancha
Contributor

@siddancha siddancha commented Jan 20, 2023

Motivation

Through this PR, I (1) fix a bug, (2) perform some associated cleanup, and (3) add a unit test. The bug occurs during evaluation when two options, reduce_zero_label=True and custom classes, are used together: the reduce_zero_label flag is not properly propagated (see details below).

Modification

  1. Bugfix

The bug occurs in the initialization of CustomDataset where the reduce_zero_label flag is not propagated to its member self.gt_seg_map_loader_cfg:

self.gt_seg_map_loader = LoadAnnotations(
) if gt_seg_map_loader_cfg is None else LoadAnnotations(
    **gt_seg_map_loader_cfg)

Because the reduce_zero_label flag was not being propagated, the zero label reduction was being unnecessarily and explicitly duplicated during the evaluation.

As pointed out in a previous PR (#2500), reduce_zero_label must be applied before the label_map. Due to this bug, the order gets reversed when both features are used simultaneously.

This has been fixed to:

self.gt_seg_map_loader = LoadAnnotations(
    reduce_zero_label=reduce_zero_label, **gt_seg_map_loader_cfg)
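To make the ordering concrete, here is a standalone sketch (plain numpy, not mmseg code; the label-map values are hypothetical) showing that applying the label_map before the zero-label reduction gives a different result than the correct reduce-then-map order:

```python
import numpy as np

def reduce_zero_label(seg):
    # Mirrors LoadAnnotations: label 0 becomes the ignore value 255,
    # and all remaining labels shift down by one.
    seg = seg.copy()
    seg[seg == 0] = 255
    seg = seg - 1
    seg[seg == 254] = 255
    return seg

def apply_label_map(seg, label_map):
    # Remap class ids for a custom subset of classes.
    out = seg.copy()
    for old_id, new_id in label_map.items():
        out[seg == old_id] = new_id
    return out

gt = np.array([0, 1, 2, 3])
# Hypothetical map over *reduced* ids: keep 0, drop 1 (-1), remap 2 -> 1.
label_map = {0: 0, 1: -1, 2: 1}

correct = apply_label_map(reduce_zero_label(gt), label_map)  # reduce first
wrong = reduce_zero_label(apply_label_map(gt, label_map))    # reversed order
assert not np.array_equal(correct, wrong)
```

Since the two orders disagree on every non-background pixel here, any metric computed from the reversed order is silently wrong.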
  2. Cleanup

Due to the bug fix, both reduce_zero_label and label_map are now applied in get_gt_seg_map_by_idx() (i.e. LoadAnnotations.__call__()), so the evaluation no longer needs to perform them.

For label_map, a previous issue (#1415) already pointed out that it should not be applied in the evaluation. This was handled by passing an empty dict:

# as the labels has been converted when dataset initialized
# in `get_palette_for_custom_classes ` this `label_map`
# should be `dict()`, see
# https://github.com/open-mmlab/mmsegmentation/issues/1415
# for more details
label_map=dict(),
reduce_zero_label=self.reduce_zero_label))

Similarly, I now also set reduce_zero_label=False, since the reduction is likewise handled by get_gt_seg_map_by_idx() (i.e. LoadAnnotations.__call__()).
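As a rough sketch of what this means at the metric call site (a toy re-implementation, not the actual mmseg intersect_and_union), the dataset can now pass identity transforms because the loader already produced the final ground truth:

```python
import numpy as np

def intersect_and_union(pred, label, num_classes, ignore_index=255,
                        label_map=None, reduce_zero_label=False):
    # Toy version of the metric core. After this PR, CustomDataset calls
    # it with label_map=dict() and reduce_zero_label=False, so the ground
    # truth loaded by LoadAnnotations is used as-is.
    label = label.copy()
    if reduce_zero_label:
        label[label == 0] = 255
        label = label - 1
        label[label == 254] = 255
    if label_map:
        out = label.copy()
        for old, new in label_map.items():
            out[label == old] = new
        label = out
    mask = label != ignore_index
    pred, label = pred[mask], label[mask]
    inter = np.bincount(label[pred == label], minlength=num_classes)
    union = (np.bincount(label, minlength=num_classes)
             + np.bincount(pred, minlength=num_classes) - inter)
    return inter, union

# Ground truth as produced by the loader (zero label reduced, classes mapped):
gt = np.array([255, 0, 1, 1])
pred = np.array([0, 0, 1, 0])
inter, union = intersect_and_union(pred, gt, num_classes=2)
```

Keeping the label_map and reduce_zero_label parameters (even though the dataset passes identity values) preserves the public signature for users who call the metric functions directly.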

  3. Unit test

I've added a unit test that tests the CustomDataset.pre_eval() function when reduce_zero_label=True and custom classes are used. The test fails on the original master branch but passes with this fix.
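In the same spirit (a hypothetical stand-in, not the actual test added by the PR), a regression test only needs to check that the loader output follows the reduce-then-map order:

```python
import numpy as np

def load_gt(raw, reduce_zero_label, label_map):
    # Stand-in for LoadAnnotations.__call__: the zero-label reduction
    # happens first, then the label_map (the order this PR guarantees).
    gt = raw.copy()
    if reduce_zero_label:
        gt[gt == 0] = 255
        gt = gt - 1
        gt[gt == 254] = 255
    out = gt.copy()
    for old, new in label_map.items():
        out[gt == old] = new
    return out

def test_reduce_zero_label_with_custom_classes():
    raw = np.array([0, 1, 2, 3])
    # Hypothetical custom classes: drop reduced id 0 (-1 marks ignore),
    # keep reduced ids 1 and 2 as new ids 0 and 1.
    label_map = {0: -1, 1: 0, 2: 1}
    gt = load_gt(raw, reduce_zero_label=True, label_map=label_map)
    assert gt.tolist() == [255, -1, 0, 1]

test_reduce_zero_label_with_custom_classes()
```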

BC-breaking (Optional)

I do not anticipate this change breaking any backward compatibility.

Checklist

  • Pre-commit or other linting tools are used to fix the potential lint issues.
    • I've fixed all linting/pre-commit errors.
  • The modification is covered by complete unit tests. If not, please add more unit test to ensure the correctness.
    • I've added a test that passes when the fix is introduced, and fails on the original master branch.
  • If the modification has potential influence on downstream projects, this PR should be tested with downstream projects, like MMDet or MMDet3D.
    • I don't think this change affects MMDet or MMDet3D.
  • The documentation has been modified accordingly, like docstring or example tutorials.
    • This change fixes an existing bug and doesn't require modifying any documentation/docstring.

@siddancha siddancha marked this pull request as ready for review January 20, 2023 19:56
@siddancha
Contributor Author

This PR cleans up some unnecessary arguments in evaluation functions, which requires updating documentation/docstrings that I don't have access to.

@MeowZheng
Collaborator

Thanks for your contribution. @MengzhangLI please have a look


@MeowZheng MeowZheng left a comment


My consideration is that the modifications of intersect_and_union and other evaluation functions are bc-breaking for the users who use it alone, but I support the modification of CustomDataset. I suggest keeping input arguments of evaluation functions.

@siddancha
Contributor Author

@MeowZheng sounds good! I've reverted my "cleanup" in metrics.py for backwards compatibility in
ca3a7b2 . Now I keep the label_map and reduce_zero_label arguments in the evaluation functions, but when calling them from CustomDataset, I set reduce_zero_label=False, just like setting label_map=dict().

@siddancha siddancha changed the title [Fix] Fix & cleanup reduce_zero_label in evaluation [Fix] Fix reduce_zero_label in evaluation Jan 28, 2023

@MeowZheng MeowZheng left a comment


Lint failed due to a problem in isort (PyCQA/isort#2077), which has been fixed in PyCQA/isort#2078.

We are working on upgrading isort, but are running into other problems at the moment. :(

mmseg/datasets/custom.py (review thread, outdated; resolved)
@MeowZheng
Collaborator

We have fixed the lint problem on the master branch (#2520).
I suggest rebasing your modification on the latest master.

siddancha and others added 9 commits January 29, 2023 15:48
Eval functions shouldn't reduce zero label or apply label_map since that is already done by get_gt_seg_map_by_idx() i.e. LoadAnnotations.__call__().
This is to maintain backward compatibility.
Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
@siddancha
Contributor Author

siddancha commented Jan 29, 2023

@MeowZheng I rebased this PR on the latest version of master. The linting now passes!

@codecov

codecov bot commented Jan 30, 2023

Codecov Report

Base: 88.33% // Head: 88.30% // Project coverage decreases by 0.03% ⚠️

Coverage data is based on head (db92532) compared to base (ac5d650).
Patch has no changes to coverable lines.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #2504      +/-   ##
==========================================
- Coverage   88.33%   88.30%   -0.03%     
==========================================
  Files         147      147              
  Lines        8844     8844              
  Branches     1490     1490              
==========================================
- Hits         7812     7810       -2     
- Misses        790      793       +3     
+ Partials      242      241       -1     
Flag Coverage Δ
unittests 88.30% <ø> (-0.03%) ⬇️


Impacted Files Coverage Δ
mmseg/datasets/custom.py 95.78% <ø> (+1.05%) ⬆️
mmseg/core/evaluation/metrics.py 89.47% <0.00%> (-4.22%) ⬇️


@MengzhangLI
Contributor

Hi, @siddancha many thanks for your nice PR and detailed description!

@MeowZheng MeowZheng merged commit 190063f into open-mmlab:master Jan 30, 2023
@siddancha siddancha deleted the sid/pr/fix-reduce-zero-label-in-eval branch January 30, 2023 04:49
aravind-h-v pushed a commit to aravind-h-v/mmsegmentation that referenced this pull request Mar 27, 2023
…pen-mmlab#2504)

* [PipelineTesterMixin] Handle non-image outputs for batch/single inference test

* style

---------

Co-authored-by: William Berman <WLBberman@gmail.com>