refactor: Refactored detection post-processing #724
Conversation
Thanks!
Codecov Report
@@            Coverage Diff             @@
##              main     #724     +/-  ##
==========================================
- Coverage    96.03%   96.03%   -0.01%
==========================================
  Files          125      125
  Lines         4717     4713       -4
==========================================
- Hits          4530     4526       -4
  Misses         187      187
This PR introduces the following modifications:
After comparing latency against the option of switching to the PyTorch & TF implementations of the opening transform, I reverted the changes for one reason: on both CPU & GPU everything was faster, except for TensorFlow on CPU, which takes ~4-6x longer to compute the opening transform (max_pool2d seems to be really slow on TF for CPU 🤷‍♂️)
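For context, the pooling-based variant being discussed relies on the identity that grayscale erosion is a negated max pool of the negated input, and opening is erosion followed by dilation. Below is a minimal NumPy sketch of that idea (the function names, kernel size, and edge padding are illustrative assumptions, not the library's actual implementation):

```python
import numpy as np

def max_pool2d(x: np.ndarray, k: int) -> np.ndarray:
    """Naive 2D max pooling with stride 1 and edge padding (illustrative only)."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + k, j:j + k].max()
    return out

def opening(x: np.ndarray, k: int = 3) -> np.ndarray:
    """Morphological opening: erosion (= -max_pool(-x)) then dilation (= max_pool)."""
    eroded = -max_pool2d(-x, k)
    return max_pool2d(eroded, k)
```

Opening removes components smaller than the structuring element (e.g. an isolated pixel vanishes, while a 3x3 blob survives), which is why it is used here to clean up detection score maps. A framework implementation would express the same thing with `torch.nn.functional.max_pool2d` or `tf.nn.max_pool2d`, where the TF-on-CPU path was the slow one.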
Any feedback is welcome!