Add robustness benchmarking toolkit #1021

Merged
merged 9 commits into open-mmlab:master on Aug 2, 2019

Conversation

michaelisc (Contributor)

This pull request contains tools to benchmark the robustness of object detection models against common image corruptions. The corresponding robust detection benchmark was defined in "Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming".

We tried to design the code so that it interferes minimally with the current workflow, while adding the functionality to evaluate any model implemented in mmdetection for robustness. To corrupt and distort the images we use the imagecorruptions package, which we developed separately and distribute through PyPI.
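As a quick illustration of the corruption step (not part of the PR itself), here is a minimal sketch assuming the package's `corrupt(image, severity, corruption_name)` interface as it appears in the diff further down; the image is a random placeholder and `gaussian_noise` at severity 3 is just one example choice:

```python
# Minimal sketch, not taken from the PR: apply one corruption from the
# imagecorruptions package to a single image.
import numpy as np
from imagecorruptions import corrupt

# Placeholder image; the package operates on uint8 RGB arrays of shape (H, W, 3).
img = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

# Severity levels run from 1 (mild) to 5 (severe) in the benchmark.
corrupted = corrupt(img, severity=3, corruption_name='gaussian_noise')
print(corrupted.shape, corrupted.dtype)
```

The command below (from the PR) then evaluates a trained model under the noise corruptions.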


```shell
# noise
python tools/test_c.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}] --corruptions noise
```
Member

test_c.py has been renamed to test_robustness.py?

Contributor Author

Yes

Contributor Author

Changed the name

```diff
@@ -11,6 +11,8 @@
 from .utils import to_tensor, random_scale
 from .extra_aug import ExtraAugmentation
+
+from imagecorruptions import corrupt
```
Member

Third-party imports may be moved before relative imports.

Contributor Author

Done
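For reference, the reordered imports presumably end up looking something like this (a sketch covering only the lines visible in the diff; the actual module contains more imports):

```python
# Third-party import first, then the package-relative imports.
from imagecorruptions import corrupt

from .extra_aug import ExtraAugmentation
from .utils import to_tensor, random_scale
```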

```python
# corruption
if self.corruption is not None:
    img = corrupt(img, severity=self.corruption_severity,
                  corruption_name=self.corruption)
```
Member

These lines need to be formatted by yapf with the specified style configurations.

Contributor Author

Done
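For illustration, one plausible yapf-formatted version of the snippet (the exact line breaks depend on the repository's .style.yapf configuration, so treat this as a sketch rather than the committed code):

```python
# corruption
if self.corruption is not None:
    img = corrupt(
        img,
        severity=self.corruption_severity,
        corruption_name=self.corruption)
```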

```python
for j in range(len(eval_output[distortion][severity]))]
results[i, severity, :] = mAP
# if verbose > 0:
#     print(distortion, severity, mAP)
```
Member

These comments may be removed.

Contributor Author

Done

print("\nmodel: {}".format(osp.basename(filename)))
if metric is None:
if 'P' in prints:
print("Performance on Clean Data [P] ({})"
Member

To be consistent with other files, single quotes are preferred.

Contributor Author

Done
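For illustration, the first print statement rewritten with single quotes (the filename below is a hypothetical placeholder, not a path from the script):

```python
import os.path as osp

filename = '/tmp/example_results.pkl'  # hypothetical placeholder path
print('\nmodel: {}'.format(osp.basename(filename)))
```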

```python
]

with open(filename, "rb") as f:
    eval_output = pickle.load(f)
```
Member

These two lines can be simply replaced with mmcv.load(filename).

Contributor Author

Done
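For reference, the suggested replacement collapses the two lines into a single call; mmcv.load dispatches on the file extension, so a .pkl results file is unpickled automatically (sketch with a placeholder path):

```python
import mmcv

# 'results.pkl' stands in for the pickled evaluation output produced by the test script.
eval_output = mmcv.load('results.pkl')
```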

@hellock (Member) commented on Jul 31, 2019

Hi, there are some conflicts. Could you fix them?

@hellock (Member) commented on Jul 31, 2019

Sorry, I noticed that there are some isort inconsistencies. I will fix that.

@michaelisc (Contributor Author)

Give me a minute. I am currently fixing the yapf errors.

@michaelisc (Contributor Author)

The merge conflicts and yapf errors should be fixed.

@hellock merged commit 4e387a6 into open-mmlab:master on Aug 2, 2019.
JegernOUTT pushed a commit to JegernOUTT/mmdetection that referenced this pull request on Nov 23, 2019:
* Add robust detection benchmark

* Update readmes

* Changed readmes for pull request

* Ensure pep8 conformity

* fixed formatting

* Fix yapf errors

* minor formatting

* fix imports order