Allow for images to contain zero true detections #1531
Merged
Changes from all 28 commits:
8e2b82d  Allow for images to contain zero true detections  (Erotemic)
6b2d88d  Allow for empty assignment in PointAssigner  (Erotemic)
0a30bb3  Allow ApproxMaxIouAssigner to return an empty result  (Erotemic)
44ba99f  Fix CascadeRNN forward when entire batch has no truth  (Erotemic)
9126b95  Correctly assign boxes to background when there is no truth  (Erotemic)
ff2de2b  Fix assignment tests  (Erotemic)
f46b73e  Make flatten robust  (Erotemic)
f6934e0  Fix bbox loss with empty pred/truth  (Erotemic)
0457ca7  Fix logic error in BBoxHead.loss  (Erotemic)
ab403b5  Add tests for empty truth cases  (Erotemic)
3bde1a4  tests faster rcnn empty forward  (Erotemic)
951cdc4  Skip roipool forward tests if torchvision is not installed  (Erotemic)
9528c8b  Add tests for bbox/anchor heads  (Erotemic)
628c265  Consolidate test_forward and test_forward2  (Erotemic)
7068ed9  Fix assign_results.labels = None when gt_labels is given; Add test fo…  (Erotemic)
f381609  Fix OHEM Sampler with zero truth  (Erotemic)
d0307ae  remove xdev  (Erotemic)
dea8e23  resolve 3 reviews  (Erotemic)
d345369  Fix flake8  (Erotemic)
a9cd7bb  refactoring  (yhcao6)
0aeb47b  fix yaml format  (yhcao6)
61073de  add filter flag  (yhcao6)
82d8097  minor fix  (yhcao6)
4a4bafc  delete redundant code in load anno  (yhcao6)
f31e052  fix flake8 errors  (Erotemic)
b0753e3  quick fix for empty truth with masks  (Erotemic)
14c2436  fix yapf error  (Erotemic)
1651b25  fix mask padding for empty masks  (hellock)
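The common thread in the commits above is making the assignment pipeline tolerate images with zero ground-truth boxes. A minimal pure-Python sketch of that convention (the helper name is hypothetical, not code from this PR): when an image has no truth, every predicted box is assigned to background rather than raising an error.

```python
def assign_empty(num_bboxes):
    """Assign all predicted boxes to background when an image has no truth.

    Mirrors the gt_inds convention used by the assigners:
    -1 = ignore, 0 = background/unassigned, k > 0 = 1-based truth index.
    """
    num_gts = 0
    gt_inds = [0] * num_bboxes         # every box is background
    max_overlaps = [0.0] * num_bboxes  # no truth box to overlap with
    labels = [0] * num_bboxes          # 0 is the background label
    return num_gts, gt_inds, max_overlaps, labels


num_gts, gt_inds, max_overlaps, labels = assign_empty(3)
print(gt_inds)  # [0, 0, 0]
```

Before this PR, several assigners and heads implicitly assumed at least one ground-truth box, which is why empty batches crashed the forward pass.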
@@ -2,6 +2,41 @@

 class AssignResult(object):
     """
     Stores assignments between predicted and truth boxes.

     Attributes:
         num_gts (int): the number of truth boxes considered when computing
             this assignment

         gt_inds (LongTensor): for each predicted box, indicates the 1-based
             index of the assigned truth box. 0 means unassigned and -1 means
             ignore.

         max_overlaps (FloatTensor): the IoU between the predicted box and
             its assigned truth box.

         labels (None | LongTensor): if specified, for each predicted box,
             indicates the category label of the assigned truth box.

     Example:
         >>> # An assign result between 4 predicted boxes and 9 true boxes
         >>> # where only two boxes were assigned.
         >>> num_gts = 9
         >>> max_overlaps = torch.FloatTensor([0, .5, .9, 0])
         >>> gt_inds = torch.LongTensor([-1, 1, 2, 0])
         >>> labels = torch.LongTensor([0, 3, 4, 0])
         >>> self = AssignResult(num_gts, gt_inds, max_overlaps, labels)
         >>> print(str(self))  # xdoctest: +IGNORE_WANT
         <AssignResult(num_gts=9, gt_inds.shape=(4,), max_overlaps.shape=(4,),
                       labels.shape=(4,))>
         >>> # Force addition of gt labels (when adding gt as proposals)
         >>> new_labels = torch.LongTensor([3, 4, 5])
         >>> self.add_gt_(new_labels)
         >>> print(str(self))  # xdoctest: +IGNORE_WANT
         <AssignResult(num_gts=9, gt_inds.shape=(7,), max_overlaps.shape=(7,),
                       labels.shape=(7,))>
     """

     def __init__(self, num_gts, gt_inds, max_overlaps, labels=None):
         self.num_gts = num_gts
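The gt_inds encoding documented above can be exercised without torch. A pure-Python sketch (the helper name is hypothetical) that splits predicted boxes into ignored, background, and assigned groups:

```python
def summarize_assignment(gt_inds):
    """Split predicted-box indices by assignment state.

    gt_inds follows the AssignResult convention: -1 = ignore,
    0 = unassigned/background, k > 0 = assigned to truth box k (1-based).
    """
    ignored = [i for i, g in enumerate(gt_inds) if g == -1]
    background = [i for i, g in enumerate(gt_inds) if g == 0]
    # Map each assigned box to a 0-based truth index.
    assigned = {i: g - 1 for i, g in enumerate(gt_inds) if g > 0}
    return ignored, background, assigned


# Same gt_inds as the docstring example: box 0 is ignored, boxes 1 and 2
# are assigned to truth boxes 1 and 2, box 3 is background.
ignored, background, assigned = summarize_assignment([-1, 1, 2, 0])
print(ignored, background, assigned)  # [0] [3] {1: 0, 2: 1}
```

Note that an empty gt_inds is a valid input here, which is exactly the case this PR is trying to support.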
@@ -13,7 +48,45 @@ def add_gt_(self, gt_labels):
         self_inds = torch.arange(
             1, len(gt_labels) + 1, dtype=torch.long, device=gt_labels.device)
         self.gt_inds = torch.cat([self_inds, self.gt_inds])

         # Was this a bug?
         # self.max_overlaps = torch.cat(
         #     [self.max_overlaps.new_ones(self.num_gts), self.max_overlaps])
         # IIUC, it seems like the correct code should be:
         self.max_overlaps = torch.cat(
-            [self.max_overlaps.new_ones(self.num_gts), self.max_overlaps])
+            [self.max_overlaps.new_ones(len(gt_labels)), self.max_overlaps])

Review comment on the changed line above: "why"

         if self.labels is not None:
             self.labels = torch.cat([gt_labels, self.labels])

     def __nice__(self):
         """
         Create a "nice" summary string describing this assign result.
         """
         parts = []
         parts.append('num_gts={!r}'.format(self.num_gts))
         if self.gt_inds is None:
             parts.append('gt_inds={!r}'.format(self.gt_inds))
         else:
             parts.append('gt_inds.shape={!r}'.format(
                 tuple(self.gt_inds.shape)))
         if self.max_overlaps is None:
             parts.append('max_overlaps={!r}'.format(self.max_overlaps))
         else:
             parts.append('max_overlaps.shape={!r}'.format(
                 tuple(self.max_overlaps.shape)))
         if self.labels is None:
             parts.append('labels={!r}'.format(self.labels))
         else:
             parts.append('labels.shape={!r}'.format(tuple(self.labels.shape)))
         return ', '.join(parts)

     def __repr__(self):
         nice = self.__nice__()
         classname = self.__class__.__name__
         return '<{}({}) at {}>'.format(classname, nice, hex(id(self)))

     def __str__(self):
         classname = self.__class__.__name__
         nice = self.__nice__()
         return '<{}({})>'.format(classname, nice)
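The __nice__/__repr__/__str__ trio above follows a general pattern (similar to the NiceRepr helper in the ubelt library): a class implements only __nice__, and consistent repr/str forms are derived from it. A standalone sketch of the same idea, with a hypothetical Box class for illustration:

```python
class NiceRepr:
    """Mixin: subclasses define __nice__; repr/str are derived from it."""

    def __repr__(self):
        # Includes the object id, useful for telling instances apart in logs.
        return '<{}({}) at {}>'.format(
            self.__class__.__name__, self.__nice__(), hex(id(self)))

    def __str__(self):
        # Same summary without the id, for user-facing printing.
        return '<{}({})>'.format(self.__class__.__name__, self.__nice__())


class Box(NiceRepr):
    def __init__(self, w, h):
        self.w, self.h = w, h

    def __nice__(self):
        return 'w={}, h={}'.format(self.w, self.h)


print(str(Box(3, 4)))  # <Box(w=3, h=4)>
```

The shape-or-value branching in AssignResult.__nice__ exists because any of the tensors may be None, which again is what the empty-truth code paths produce.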
Review comment:
The overlaps initialization is not consistent across the assigners.
In approx_max_iou_assigner:
    overlaps = approxs.new(num_gts, num_squares)
In max_iou_assigner:
    max_overlaps = overlaps.new_zeros((num_bboxes, ))
In point_assigner:
    max_overlaps = None
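The inconsistency the reviewer points out matters most in the empty-truth case: downstream code may receive a preallocated tensor, a zero-filled tensor, or None, and must not assume any one of them. A pure-Python sketch of a normalizing guard (a hypothetical helper, not code from this PR) that coerces all three styles into one shape:

```python
def normalize_max_overlaps(max_overlaps, num_bboxes):
    """Coerce an assigner's max_overlaps into a uniform list of floats.

    Assigners disagree on initialization (preallocated, zero-filled, or
    None), so a consumer normalizes before using the values.
    """
    if max_overlaps is None:
        # point_assigner style: no overlaps computed at all.
        return [0.0] * num_bboxes
    # tensor/list styles: one IoU value per predicted box.
    return [float(v) for v in max_overlaps]


print(normalize_max_overlaps(None, 2))        # [0.0, 0.0]
print(normalize_max_overlaps([0.5, 0.9], 2))  # [0.5, 0.9]
```

Unifying the initialization at the source (as the reviewer suggests) would make a guard like this unnecessary.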