Fix dynamic max trials of RANSAC #7065
base: main
Conversation
skimage/measure/tests/test_fit.py (outdated)

    assert_equal(_dynamic_max_trials(1, 100, 5, 1), 360436504051)
    # e = 0%, min_samples = 10
    assert_equal(_dynamic_max_trials(1, 100, 10, 0), 0)
    assert_equal(_dynamic_max_trials(1, 100, 10, 1), 162326183972299328)
On main, it returns -np.inf.
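The -np.inf on main can be reproduced in a couple of lines. This is an illustration mirroring the test values above (n_inliers=1, n_samples=100, min_samples=10), not skimage code:

```python
import numpy as np

# inlier_ratio ** min_samples is 1e-20, far below float64 resolution near 1.0,
# so the subtraction rounds back to exactly 1.0.
inlier_ratio = 1 / 100
denom = 1 - inlier_ratio ** 10
print(denom == 1.0)   # True
print(np.log(denom))  # 0.0 — a negative numerator divided by this gives -inf
```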
Can confirm. But why modify the existing test and not add another one precisely for the error case including a comment?
With my initial implementation, I thought the min_samples = 5 case would be redundant because its result changes. But with your suggested change, the result doesn't change, so it makes sense to keep it. I just added it back in my new commit.

The CI seems to be failing due to numerical differences between binaries? As I don't know the differences between the CI setups, I am leaving it as is.
Thanks. The current test failures look unrelated to this PR (imageio, tracked in imageio/imageio#1044). So ignore them for now.
I've left a few minor suggestions, otherwise looks good.
Thanks for the review! I just pushed the changes based on the suggestion.
Looks good, thanks. Feel welcome to tweak my suggested comment further.
@@ -668,7 +668,8 @@ def _dynamic_max_trials(n_inliers, n_samples, min_samples, probability):

            return np.inf
        inlier_ratio = n_inliers / n_samples
        nom = max(_EPSILON, 1 - probability)
    -   denom = max(_EPSILON, 1 - inlier_ratio ** min_samples - _EPSILON)
    +   denom = max(_EPSILON, 1 - inlier_ratio ** min_samples)
    +   denom = min(denom, 1 - _EPSILON)
Suggested change:

    -   denom = min(denom, 1 - _EPSILON)
    +   # Avoid log(1) below turning into -inf
    +   denom = min(denom, 1 - _EPSILON)
Also, move it into its own test, as it isn't part of the hand-calculated values from Multiple View Geometry in Computer Vision in test_ransac_dynamic_max_trials.
@hayatoikoma, I took the liberty to push 8d5aa4a. Please feel welcome to disagree or tweak further. :)
Very nice! Thank you for improving the test and fixing the CI!
LGTM!
Agreed, looks ready to go. :)
I just found that the function was a transplant from scikit-learn, and scikit-learn handles the edge case in a slightly different way. I'm sure it doesn't matter, but just FYI.
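Putting the diff together, the patched computation can be sketched as a standalone function. This is a hedged reconstruction from the hunk above, not the exact skimage source: the n_inliers == 0 guard, the _EPSILON definition, and the final ceil are assumptions about the surrounding code that the diff doesn't show.

```python
import numpy as np

_EPSILON = np.spacing(1)  # distance from 1.0 to the next float64 (assumed definition)

def dynamic_max_trials(n_inliers, n_samples, min_samples, probability):
    """Number of RANSAC trials needed to draw an all-inlier sample with
    the given probability (sketch of the patched formula)."""
    if n_inliers == 0:
        return np.inf  # no inliers yet, keep sampling (assumed guard)
    inlier_ratio = n_inliers / n_samples
    nom = max(_EPSILON, 1 - probability)
    # The fix: clamp denom into [_EPSILON, 1 - _EPSILON] so np.log(denom)
    # is never exactly 0.0 and the ratio never becomes -inf.
    denom = max(_EPSILON, 1 - inlier_ratio ** min_samples)
    denom = min(denom, 1 - _EPSILON)
    return np.ceil(np.log(nom) / np.log(denom))

print(dynamic_max_trials(1, 100, 10, 1))  # finite and positive, not -inf
```

With the test's worst case (1 inlier out of 100, min_samples=10, probability=1), the unclamped denominator rounds to exactly 1.0; the clamp replaces it with 1 - _EPSILON, yielding the large finite trial count the updated test asserts.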
Description

Due to numerical precision, denom sometimes becomes exactly 1.0, so that np.log(denom) becomes 0.0. As the numerator is always negative, the function returns -np.inf. This PR fixes this issue.

Release note
Summarize the introduced changes in the code block below in one or a few sentences. The
summary will be included in the next release notes automatically: