Regarding the fundamental matrix property x'^T F x = 0: this is at least the classical correspondence condition from the Hartley–Zisserman book. A look at the literature is needed to determine what is best to use.
It really uses the same ideas as the OpenCV one, but the metric for flagging outliers is not x'^T F x = 0, which is unreliable in degenerate situations, but rather the distance to the epipolar line, which is the ultimate predictor of how accurately F is computed.
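For concreteness, the epipolar-line distance referred to here can be sketched in NumPy as follows (a minimal illustration, not the actual code from either implementation; the function name and the toy matrix below are made up):

```python
import numpy as np

def epipolar_line_distance(F, p1, p2):
    """Distance from p2 to the epipolar line of p1, i.e. the line F @ p1
    in the second image. p1 and p2 are (x, y) pixel coordinates."""
    x1 = np.array([p1[0], p1[1], 1.0])
    x2 = np.array([p2[0], p2[1], 1.0])
    line = F @ x1                        # epipolar line (a, b, c) in image 2
    return abs(x2 @ line) / np.hypot(line[0], line[1])

# Toy fundamental matrix of an ideal horizontal-baseline pair, where the
# epipolar lines are horizontal; used here only for illustration.
F_toy = np.array([[0.0, 0.0,  0.0],
                  [0.0, 0.0, -1.0],
                  [0.0, 1.0,  0.0]])
```

For a correct match the rows agree and the distance is 0; a vertical mismatch of d pixels gives a distance of d.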
@oleg-alexandrov, have you looked at the new RANSAC framework (USAC) added in OpenCV 4.5.0? You can try USAC_DEFAULT or one of the other USAC_... presets. It should give much better results in terms of performance and stability.
System information (version)
Detailed description
I found out that with certain input interest points, the fundamental matrix output by OpenCV is wrong. I used a set of many hundreds of interest points, with both the RANSAC and LMEDS outlier-removal options and various outlier thresholds, and it consistently gave the wrong results.
Wiggling those interest points by adding random noise made it give the right result, but that is not a good solution.
This appears to be a long-standing problem, described in the documentation at:
https://docs.opencv.org/master/da/de9/tutorial_py_epipolar_geometry.html
and with no good solution there.
The solution which worked for me was based on the following observation. If the fundamental matrix is found correctly from the given interest points, and the two images are rectified using that matrix, for example with stereoRectifyUncalibrated(), then the y component of the disparity between the rectified images must be close to 0. Equivalently, if the rectification matrices are applied to the interest points, each pair of transformed interest points must have a y difference very close to 0.
Based on this, I implemented a RANSAC algorithm for finding the fundamental matrix where the error metric is precisely the above. If the fundamental matrix is F, and the rectification matrices are H1 and H2, then for given interest point matches P1 and P2, the error is abs( (H1P1).y - (H2P2).y ).
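This error metric can be sketched in NumPy as follows (a minimal illustration; the helper name is made up, and H1 and H2 would come from, e.g., stereoRectifyUncalibrated()):

```python
import numpy as np

def rectified_y_error(H1, H2, p1, p2):
    """abs( (H1*P1).y - (H2*P2).y ), with proper dehomogenization
    after applying the rectifying homographies."""
    a = H1 @ np.array([p1[0], p1[1], 1.0])
    b = H2 @ np.array([p2[0], p2[1], 1.0])
    return abs(a[1] / a[2] - b[1] / b[2])
```

Inside the RANSAC loop, a match would count as an inlier when this error falls below the chosen threshold.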
The existing code in the OpenCV repository uses RANSAC too, but I think its error metric is not as good. I found the code here:
opencv/modules/calib3d/src/fundam.cpp, line 796 (commit 68d15fc)
It seems to encode the fundamental matrix property m2^T * F * m1 = 0, where m1 and m2 are the matched interest points in homogeneous coordinates, and I think this is not robust enough.
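One way to see the weakness: the raw algebraic residual m2^T * F * m1 changes with the arbitrary overall scale of F, even though the geometry does not (toy numbers below, made up for illustration):

```python
import numpy as np

# Toy fundamental matrix for an ideal horizontal-baseline pair.
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

x1 = np.array([10.0, 5.0, 1.0])   # homogeneous point in image 1
x2 = np.array([20.0, 7.0, 1.0])   # its match, off by 2 px vertically

r = abs(x2 @ F @ x1)                # algebraic residual: 2.0
r_scaled = abs(x2 @ (10 * F) @ x1)  # same geometry, F rescaled: 20.0
```

A fixed inlier threshold on r therefore behaves differently depending on how F happens to be normalized, whereas a geometric measure such as the point-to-epipolar-line distance is invariant to that scale.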
Any thoughts?