benchmark: add some warping benchmarks to create a baseline #3260
Conversation
Hello @hmaarrfk! Thanks for updating the PR. Cheers! There are no PEP8 issues in this Pull Request. 🍻 Comment last updated on August 06, 2018 at 00:07 UTC
In benchmarks/transform_warp.py (outdated):

```python
                                        translation=(0, 4))
        self.order = order

    def same_type(self):
```
I think you'll need to prefix these functions with `time_`, otherwise asv won't find or run them. The signature must also be able to accept all parameters as positional arguments, because asv passes them implicitly to all `setup`, `time_`, and `peakmem_` methods of the class.

Is the traceback shown during installation of asv, or when you try to run it? I managed to install asv using either conda (conda-forge channel) or providing pip with the path to the cloned repository (which is similar to your attempts).
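To illustrate those asv conventions, here is a minimal sketch of a benchmark class; the class name, the `time_warp` body, and the parameter values are illustrative placeholders, not code from this PR:

```python
import numpy as np

class WarpSuite:
    """Sketch of an asv benchmark class (names here are illustrative).

    asv only collects methods whose names start with ``time_`` (wall time)
    or ``peakmem_`` (peak memory), and it passes every entry of ``params``
    positionally to ``setup()`` and to each benchmark method.
    """
    params = ([np.uint8, np.float64], [128, 1024], [0, 1, 3])
    param_names = ['dtype_in', 'N', 'order']

    def setup(self, dtype_in, N, order):
        # All parameters arrive positionally, even if a method ignores some.
        self.image = (np.random.random((N, N)) * 255).astype(dtype_in)
        self.order = order

    def time_warp(self, dtype_in, N, order):
        # Placeholder workload; a real benchmark would call skimage's warp here.
        self.image.astype(np.float64).sum()
```

Any method lacking the `time_`/`peakmem_` prefix, or with a signature that cannot accept the parameters positionally, is silently skipped or errors out when asv runs the suite.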
In benchmarks/transform_warp.py (outdated):

```python
        precision data type."""
        result = warp(self.image, self.tform, order=self.order,
                      preserve_range=True)
        convert(result, dtype=self.image.dtype, copy=False)
```
Right now you are measuring the time it takes to execute `warp` and `convert` together. Is this intended? Wouldn't it be better to keep these measurements separate?
@lagru. I'll try to install again tonight.

I do want to benchmark conversion and warping together. In conversations with @jni, he mentioned that it is crucial to keep everything in a floating point representation during the intermediate processing steps of multi-step algorithms.

That said, the original data that you might try to warp might be in uint data types. I'm wondering if the CPU's implicit casts are faster than converting everything to a given data type upfront before calling warp.
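That upfront-vs-implicit casting question can be probed with plain NumPy, independent of `warp`. This is only an illustrative sketch of the two strategies, not the PR's benchmark code; the operation `* 0.5 + 1.0` is an arbitrary stand-in workload:

```python
import numpy as np
import timeit

img = (np.random.random((1024, 1024)) * 255).astype(np.uint8)

def upfront():
    # Convert once, then operate purely in float64.
    f = img.astype(np.float64)
    return f * 0.5 + 1.0

def implicit():
    # Let NumPy upcast uint8 -> float64 inside each operation.
    return img * 0.5 + 1.0

t_up = timeit.timeit(upfront, number=20)
t_im = timeit.timeit(implicit, number=20)
print(f"upfront: {t_up:.4f}s  implicit: {t_im:.4f}s")
```

Both paths produce identical float64 results; which one wins depends on the workload and hardware, which is exactly what a benchmark like this PR's would establish.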
On Sat, Jul 14, 2018 at 5:55 AM, Lars G. commented on this pull request, in benchmarks/transform_warp.py:

```python
        [0, 1, 3]
    )
    param_names = ['dtype_in', 'N', 'order']

    def setup(self, dtype_in, N, order):
        self.image = convert(np.random.rand(N, N), dtype=dtype_in)
        self.tform = SimilarityTransform(scale=1, rotation=np.pi / 10,
                                         translation=(0, 4))
        self.order = order

    def same_type(self):
        """Test the case where the users wants to preserve their same low
        precision data type."""
        result = warp(self.image, self.tform, order=self.order,
                      preserve_range=True)
        convert(result, dtype=self.image.dtype, copy=False)
```
@hmaarrfk yes, floating point values are critical for precision, but it is still useful to measure the time for each step separately. Therefore I would recommend benchmarks both with and without conversion.
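One way to split the measurement as suggested is to give the suite two timed methods, one covering only the warp-equivalent step and one adding the round-trip conversion. This is a hypothetical sketch (the class name and `_work` stand-in are not from the PR; `_work` only mimics `warp` returning float64):

```python
import numpy as np

class ConversionSuite:
    """Illustrative split: one timer without conversion, one with."""
    params = ([64, 256],)
    param_names = ['N']

    def setup(self, N):
        self.image = (np.random.random((N, N)) * 255).astype(np.uint8)

    def _work(self):
        # Stand-in for warp(); returns a float64 result as warp would.
        return self.image.astype(np.float64) * 1.0

    def time_warp_only(self, N):
        # Measures just the warp-equivalent step.
        self._work()

    def time_warp_and_convert(self, N):
        result = self._work()
        # Round-trip back to the input dtype, as in the benchmarked code.
        result.astype(self.image.dtype)
```

Subtracting the two timings then isolates the cost of the conversion itself.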
Codecov Report

```
@@            Coverage Diff            @@
##           master    #3260     +/-  ##
=========================================
- Coverage   86.81%   86.75%   -0.06%
=========================================
  Files         341      339       -2
  Lines       27406    27356      -50
=========================================
- Hits        23792    23733      -59
- Misses       3614     3623       +9
```

Continue to review the full report at Codecov.
@lagru @jni thank you for your comments. I was treating the convert step as a "nop" in the benchmark. My hypothesis: I had erroneously used …
@hmaarrfk great, LGTM. Only a minor suggestion: I think it is preferred form to use `np.random.random((N, N))` over `np.random.rand(N, N)`, for consistency with the other `random` functions as well as the standard NumPy array creation functions (`zeros`, `ones`, `full`, etc.).

@jni, why can't I find the documentation for numpy.random.random?
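For context on that documentation question: in the legacy NumPy random API, `np.random.random` is documented as an alias of `np.random.random_sample`, which is why its page lives under `random_sample`. The snippet below (illustrative, not from this PR) also shows the calling-convention difference the review comment is about:

```python
import numpy as np

# np.random.random takes the shape as a single tuple argument,
# matching zeros/ones/full; np.random.rand takes each dimension
# as a separate positional argument (a MATLAB-style convenience).
a = np.random.random((3, 3))
b = np.random.rand(3, 3)

# Both return float64 samples from [0.0, 1.0) with the same shape.
assert a.shape == b.shape == (3, 3)
```

So the suggested `np.random.random((N, N))` form keeps the benchmark consistent with the tuple-of-dimensions convention used elsewhere in NumPy.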
@jni done!
I suspected that might be the case as I saw your comments rolling in... =D
@jni, I realize I needed to install …
Force-pushed e2f3c8e to c597859.
Force-pushed c597859 to d49ecc5.
Closing in favour of just including this straight in #3253.
PR #3253 is about speeding up computation when you want to.
I figured I would add some benchmarks to create a baseline.
It considers 2 cases:
I really wish I could test this on my own machine. I tried several ways to install asv, but all of them failed.
Checklist
[It's fine to submit PRs which are a work in progress! But before they are merged, all PRs should provide:]
- Gallery example in ./doc/examples (new features only)
- Benchmark in ./benchmarks, if your changes aren't covered by an existing benchmark
[For detailed information on these and other aspects see scikit-image contribution guidelines]
References
[If this is a bug-fix or enhancement, it closes issue # ]
[If this is a new feature, it implements the following paper: ]
For reviewers
(Don't remove the checklist below.)
- Check that the PR title is short, concise, and will make sense 1 year later.
- Check that new functions are imported in corresponding __init__.py.
- Check that new features, API changes, and deprecations are mentioned in doc/release/release_dev.rst.
- Consider backporting the PR with @meeseeksdev backport to v0.14.x