Refactor the AnchorGenerator implementation #3045
Conversation
Adding Python type hints, correcting incorrect types, removing unnecessary vars and simplifying code.
I've highlighted a few parts of the original implementation that I found strange, in case there is a reason they are the way they are.
# (scales, aspect_ratios) are usually an element of zip(self.scales, self.aspect_ratios)
# This method assumes aspect ratio = height / width for an anchor.
def generate_anchors(self, scales, aspect_ratios, dtype=torch.float32, device="cpu"):
    # type: (List[int], List[float], int, Device) -> Tensor  # noqa: F821
Any reason why the dtype was defined as int earlier?
Because TorchScript didn't support torch.dtype before. Maybe it's supported now, so we can switch to using it.
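For illustration only, assuming current TorchScript accepts torch.dtype and torch.device in annotations, the signature could use inline Python type hints instead of the type comment. The body below is a sketch of the usual anchor construction that ends in the base_anchors.round() shown in the excerpt, not the exact code from this diff:

```python
from typing import List

import torch
from torch import Tensor


def generate_anchors(
    self,
    scales: List[int],
    aspect_ratios: List[float],
    dtype: torch.dtype = torch.float32,
    device: torch.device = torch.device("cpu"),
) -> Tensor:
    # aspect_ratio = height / width, matching the comment in the excerpt above
    scales_t = torch.as_tensor(scales, dtype=dtype, device=device)
    ratios_t = torch.as_tensor(aspect_ratios, dtype=dtype, device=device)
    h_ratios = torch.sqrt(ratios_t)
    w_ratios = 1 / h_ratios

    # One (w, h) pair per (aspect_ratio, scale) combination
    ws = (w_ratios[:, None] * scales_t[None, :]).view(-1)
    hs = (h_ratios[:, None] * scales_t[None, :]).view(-1)

    # (x1, y1, x2, y2) boxes centered at the origin
    base_anchors = torch.stack([-ws, -hs, ws, hs], dim=1) / 2
    return base_anchors.round()
```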
return base_anchors.round()
def set_cell_anchors(self, dtype, device):
    # type: (int, Device) -> None  # noqa: F821
Unclear why dtype is declared as int and device as Device instead of torch.device.
See above: those were limitations of TorchScript that might have since been fixed.
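If TorchScript now handles these types, a hypothetical annotated version might look like the sketch below. The body is illustrative only (casting/moving the cached per-level anchors), not the code from this diff:

```python
import torch


def set_cell_anchors(self, dtype: torch.dtype, device: torch.device) -> None:
    # Illustrative body: move the cached per-level anchors to the requested
    # dtype/device; the real implementation lives in torchvision's AnchorGenerator.
    self.cell_anchors = [
        cell_anchor.to(dtype=dtype, device=device) for cell_anchor in self.cell_anchors
    ]
```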
self.set_cell_anchors(dtype, device)
anchors_over_all_feature_maps = self.cached_grid_anchors(grid_sizes, strides)
anchors = torch.jit.annotate(List[List[torch.Tensor]], [])
for i, (image_height, image_width) in enumerate(image_list.image_sizes):
Unused vars.
Thanks! This is due to some last-minute refactoring before the release that left those variables around.
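As a hedged sketch of the simplification (image_list and anchors_over_all_feature_maps come from the excerpt above and are not defined here), the loop could drop the unused index and image sizes and replace torch.jit.annotate with a plain variable annotation, which newer TorchScript versions understand:

```python
from typing import List

import torch

anchors: List[List[torch.Tensor]] = []
for _ in range(len(image_list.image_sizes)):
    # Every image in the batch gets the same per-feature-map anchors
    anchors.append(list(anchors_over_all_feature_maps))
```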
Looks great, thanks!
Adding Python type hints, correcting incorrect types, removing unnecessary vars and simplifying code. (pytorch#3045)