Fixed nan loss by filtering out-of-frame gt_bboxes in coco.py #2999
Conversation
or y1 >= img_info['height'])
        and (x2 < 0 or x2 >= img_info['width'] or y2 < 0
             or y2 >= img_info['height'])):
    continue
I have some questions about the condition statement:
- The conditions seem redundant. For example, if `x1 >= img_info['width']`, then `x2` must be `>= img_info['width']` as well, so we only need to check `x2 >= img_info['width']`.
- Why do you use `and` between the first condition and the second? In my opinion, if either of the two conditions is satisfied, then the box is out of frame.
- Yeah, the conditions are redundant, but they are more readable this way: the first condition checks whether the top-left point is out of the image, and the second one checks the bottom-right point.
- I mean to filter boxes that are fully outside the image (frame), but preserve those partially outside the image. In other words, a box is filtered only when it does not intersect with the image.

I think I should check the intersection instead. The current conditions are True when both corner points are out of frame, so a box that contains the entire image would be filtered, even though it intersects with the image and should be preserved.
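The intersection check described above can be sketched as follows (a minimal sketch; the function name and the exact edge-case convention are assumptions, not the PR's final code):

```python
def is_out_of_frame(x1, y1, x2, y2, width, height):
    """Return True when the box [x1, y1, x2, y2] has no overlap with
    an image of size (width, height).

    A box intersects the image iff its x-range overlaps (0, width)
    and its y-range overlaps (0, height); it is out of frame exactly
    when one of the two ranges lies entirely outside the image.
    """
    return x2 <= 0 or x1 >= width or y2 <= 0 or y1 >= height
```

Unlike the corner-point test, a box that fully contains the image (e.g. `x1 < 0` and `x2 > width`) is correctly preserved here, since neither of its coordinate ranges lies entirely outside the image.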
…mlab#2999)
* Fixed nan loss by filtering out-of-frame gt_bboxes
* Discarded lambda expression of is_out_of_frame
* Cleaned trailing whitespace
* Reformatted code
* Checked the intersection between boxes and image
* fix cn docs
Hi @Jokoe66! First of all, we want to express our gratitude for your significant PR in the mmdetection project. Your contribution is highly appreciated, and we are grateful for your efforts in helping improve this open-source project during your personal time. We believe that many developers will benefit from your PR. We would also like to invite you to join our Special Interest Group (SIG) private channel on Discord, where you can share your experiences, ideas, and build connections with like-minded peers. To join the SIG channel, simply message the moderator OpenMMLab on Discord, or briefly share your open-source contributions in the #introductions channel and we will assist you. Look forward to seeing you there! Join us: https://discord.gg/raweFPmdzG If you are Chinese or have WeChat, welcome to join our community on WeChat. You can add our assistant: openmmlabwx. Please add "mmsig + Github ID" as a remark when adding friends :)
There may exist out-of-frame annotations in custom datasets, and these abnormal annotations cause nan loss. For example, such cases exist in the Objects365 dataset, and I found that both Libra R-CNN and Cascade R-CNN diverged due to nan loss. After filtering these abnormal cases, the detectors converged. Even though this should be avoided by dataset annotators, mmdetection can be made more robust by filtering out-of-frame annotations.
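The filtering described here could be sketched as a vectorized pass over the parsed boxes (numpy; the function name is hypothetical, not mmdetection's actual API):

```python
import numpy as np

def filter_out_of_frame(bboxes, width, height):
    """Keep only boxes that intersect the image.

    bboxes: (N, 4) array of [x1, y1, x2, y2] coordinates.
    A box is kept iff its x-range overlaps (0, width) and its
    y-range overlaps (0, height).
    """
    keep = ((bboxes[:, 2] > 0) & (bboxes[:, 0] < width)
            & (bboxes[:, 3] > 0) & (bboxes[:, 1] < height))
    return bboxes[keep]
```

Boxes partially outside the image pass this filter untouched; clipping them to the image bounds, if desired, is a separate step.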