Adds Backbones to Faster RCNN #382
Conversation
Codecov Report
@@ Coverage Diff @@
## master #382 +/- ##
==========================================
- Coverage 80.79% 80.60% -0.20%
==========================================
Files 100 105 +5
Lines 5728 5800 +72
==========================================
+ Hits 4628 4675 +47
- Misses 1100 1125 +25
There is a lot of scope to improve it and make transfer learning better. We can iterate on that further. These are really reusable components. I slightly rearranged the FRCNN codebase, and renamed its class since the old name collides with torchvision.
Please simplify the model selection...
@oke-aditya Thank you for your contribution as always :] I left some comments. Mind having a look?
I shifted these above to a new file to avoid clutter. One more proposal: we could additionally maintain a mapping from model names to weight files. This helps us load pretrained weights given the model name.
Also, FRCNN gets these superpowers with a simple string parameter.
This makes it a bit more powerful and allows people to easily publish their own weights using a simple URL to the file. This overlaps with the thoughts in #200, and I leave this optional here since this PR is meant for object detection. We can discuss this in a new issue too.
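To make the proposal concrete, here is a minimal sketch of such a name-to-URL registry. Everything in it is hypothetical: the dictionary name, model names, and URLs are placeholders, not real release assets, and the real implementation would pass the URL to something like `torch.hub.load_state_dict_from_url`.

```python
# Hypothetical sketch: a registry mapping model names to weight-file URLs.
# Names and URLs below are placeholders, not real release assets.
MODEL_WEIGHTS_URLS = {
    "resnet18": "https://example.com/weights/resnet18.pth",
    "mobilenet_v2": "https://example.com/weights/mobilenet_v2.pth",
}


def get_weights_url(model_name: str) -> str:
    """Look up the weight-file URL registered for ``model_name``."""
    if model_name not in MODEL_WEIGHTS_URLS:
        raise ValueError(f"No pretrained weights registered for {model_name!r}")
    return MODEL_WEIGHTS_URLS[model_name]
```

Publishing custom weights would then reduce to adding one entry to the dictionary (or passing a URL directly), which is the "simple URL to the file" idea above.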
@oke-aditya Thank you for your work. I'm sorry I'm replying late. I left some comments, so would you mind having a look at them?
Also, I think we need some tests for the newly added public functions:
create_torchvision_backbone
create_fasterrcnn_backbone
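To illustrate the kind of tests meant here, a rough sketch: the real tests would import create_torchvision_backbone / create_fasterrcnn_backbone from the package (and likely use pytest); the stand-in factory below is mine and only mirrors the two behaviours worth asserting — a known name returns (backbone, out_channels), and an unknown name raises.

```python
# Stand-in for create_torchvision_backbone, used only to show the test shape.
# The out_channels values are illustrative.
_OUT_CHANNELS = {"resnet18": 512, "vgg16": 512}


def create_backbone_stub(model_name: str, pretrained: bool = True):
    if model_name not in _OUT_CHANNELS:
        raise ValueError(f"Unsupported backbone: {model_name}")
    backbone = object()  # placeholder for the nn.Module the real function builds
    return backbone, _OUT_CHANNELS[model_name]


def test_known_backbone_returns_out_channels():
    backbone, out_channels = create_backbone_stub("resnet18")
    assert backbone is not None
    assert out_channels == 512


def test_unknown_backbone_raises():
    try:
        create_backbone_stub("not_a_model")
    except ValueError:
        return
    raise AssertionError("expected ValueError for an unknown backbone name")
```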
out_channels = 512
model_selected = TORCHVISION_MODEL_ZOO[model_name]
net = model_selected(pretrained=pretrained)
ft_backbone = _create_backbone_features(net, out_channels)
return ft_backbone, out_channels
@oke-aditya How about moving
model_selected = TORCHVISION_MODEL_ZOO[model_name]
net = model_selected(pretrained=pretrained)
out of the if/elif/else? The current implementation looks somewhat repetitive...
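For concreteness, the suggested refactor might look roughly like this. It uses a stand-in constructor instead of real torchvision models, since only the shape of the code matters here; the zoo contents and out_channels values are illustrative.

```python
def _fake_ctor(pretrained=True):
    """Stand-in for a torchvision model constructor."""
    return {"pretrained": pretrained}


TORCHVISION_MODEL_ZOO = {"resnet18": _fake_ctor, "vgg16": _fake_ctor}


def create_torchvision_backbone(model_name: str, pretrained: bool = True):
    # Shared lines hoisted out of the branches: look up and build the model once.
    model_selected = TORCHVISION_MODEL_ZOO[model_name]
    net = model_selected(pretrained=pretrained)

    # Only the family-specific out_channels choice stays inside the branches.
    if model_name.startswith(("resnet", "vgg")):
        out_channels = 512
    else:
        raise ValueError(f"Unsupported backbone: {model_name}")

    # The real code would call _create_backbone_features(net, out_channels) here.
    return net, out_channels
```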
warn_missing_pkg('torchvision')  # pragma: no-cover

def _create_backbone_generic(model: nn.Module, out_channels: int):
Can you also add return types for all functions?
out_channels = 512
model_selected = TORCHVISION_MODEL_ZOO[model_name]
net = model_selected(pretrained=pretrained)
ft_backbone = _create_backbone_features(net, out_channels)
return ft_backbone, out_channels
@oke-aditya Also, I think that, instead of returning in each if/elif/else, having only one return statement in this function will look better.
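A minimal sketch of the single-return shape. The helper name and the out_channels values are mine, purely for illustration, and the branch conditions are simplified from the PR's per-model-name checks.

```python
def backbone_out_channels(model_name: str) -> int:
    """Each branch only assigns out_channels; the function returns once."""
    if model_name.startswith("vgg"):
        out_channels = 512
    elif model_name.startswith("resnet"):
        out_channels = 512  # illustrative; varies per resnet variant in practice
    elif model_name.startswith("densenet"):
        out_channels = 1024
    else:
        raise ValueError(f"Unsupported backbone: {model_name}")
    # Single exit point instead of a `return ft_backbone, out_channels` per branch.
    return out_channels
```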
def create_torchvision_backbone(model_name: str, pretrained: bool = True):
    """
    Creates CNN backbone from Torchvision.
A tiny fix for consistency :]

- Creates CNN backbone from Torchvision.
+ Creates CNN backbone from torchvision.
Args:
    model_name: Name of the model. E.g. resnet18
    pretrained: Pretrained weights dataset "imagenet", etc
Load by default with pretrained=None, which will allow us to use any other possible weights. Support for these weights can then be made available via parameters like pretrained: str = "imagenet" and so on.
For now, let's match the docstring of the pretrained arg with pretrained (bool) in torchvision (docs). This should be further discussed in #200.
def create_fasterrcnn_backbone(backbone: str, fpn: bool = True, pretrained: str = None,
                               trainable_backbone_layers: int = 3, **kwargs) -> nn.Module:
- def create_fasterrcnn_backbone(backbone: str, fpn: bool = True, pretrained: str = None,
-                                trainable_backbone_layers: int = 3, **kwargs) -> nn.Module:
+ def create_fasterrcnn_backbone(backbone: str, fpn: bool = True, pretrained: str = None,
+                                trainable_backbone_layers: int = 3, **kwargs: Any) -> nn.Module:
Also, the name create_fasterrcnn_backbone sounds a bit limiting. The function can be used for other detection algorithms, right?
@oke-aditya Would you resolve the conflicts, too? (Just updated the title of this PR since we might need some more work...)
Yeah sure.
😓 Looks like I smashed the git history. (Too many conflicts, and something went wrong.) I will pick up the changes from the previous commit and add the stuff in a new PR. All I intended to do was merge master into this branch and accept all incoming changes in case of conflicts.
What does this PR do?
Closes #340
Before submitting
PR review
cc @ananyahjha93
Anyone in the community is free to review the PR once the tests have passed.
Did you have fun?
Make sure you had fun coding 🙃 Obviously 😆