Feature detection on fisheye lenses #15
It might help a little to try different values for the … parameter. Otherwise, one has to dig into the source code.

Generally, the feature detection was not designed for strong fisheye lenses. It assumes that the image locally behaves like a pinhole camera. This assumption is used when fitting homographies to small groups of nearby feature matches in order to predict the positions of additional neighboring features. Unfortunately, the assumption is strongly violated here, so the feature positions are predicted quite wrongly.

I think that a proper fix would be to fit a slightly more complicated model than a homography to the matches, one that is able to account for some of the distortion. It does not need to be perfect; it only needs to account for generic distortion well enough that the predictions become 'good enough' for the feature detection scheme to use them. There are also a few hardcoded outlier rejection thresholds in ….
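To illustrate the local-homography assumption described above: a homography fitted to a few nearby matches is used to extrapolate where neighboring features should appear. The following is a minimal illustrative sketch (the function name is hypothetical, not code from this repository); under strong fisheye distortion the true image position deviates substantially from this prediction, so the predicted location can fall outside the detector's search window.

```python
# Hypothetical sketch: predict a neighboring feature's image position by
# applying a homography H (3x3, row-major nested lists) that was fitted
# to nearby feature matches.
def predict_with_homography(H, x, y):
    """Map pattern coordinates (x, y) to image coordinates via H."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Example: a pure translation by (5, 3). For a pinhole-like local patch
# this prediction is accurate; for a strong fisheye it is not.
H = [[1.0, 0.0, 5.0],
     [0.0, 1.0, 3.0],
     [0.0, 0.0, 1.0]]
print(predict_with_homography(H, 2.0, 2.0))  # -> (7.0, 5.0)
```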
I have tested several parameters to get this to work (with …). Now I try to calibrate the camera with the recorded features. Unfortunately, I can't get good results at the moment.

If I use a small feature set for calibration with a big pattern (e.g. …), … If I use a larger feature set with many features detected with a smaller pattern (e.g. …), …

Am I right that the smaller field of view and the fact that some image corners (the black areas) contain no features break the calibration process? Is there a way to get a good calibration result?

Thanks in advance!
Yes, I think so. The calibration program is unfortunately not designed for this kind of camera. It uses the bounding box of all detected feature locations in image space to define a rectangular image area that it tries to calibrate. This area can be seen in the bottommost visualizations that you posted. If the actual area containing feature detections is circular, then this rectangle includes large parts without any detections (at its corners). Thus there are no, or almost no, constraints on the calibration values in these areas. They might happen to change in a way that breaks the point projection algorithm used by the program. I think that this probably does not depend on the type of pattern used, but more or less on luck regarding how the calibration values are initialized and how they change during the optimization.

I don't have time to work on this right now, but I think it should be comparatively easy to fix, in case you are willing to make a few changes to the source code. First, the image area to be calibrated needs to be defined such that it fits the detected feature points more tightly. One simple way to do this would be to manually draw a mask image. Another way, for example, would be to compute the convex hull of the detected feature points (rather than their bounding box). Then the point projections need to be constrained to this area: each time a point moves outside the calibrated area during the optimization process used for projection, it should be clamped back to the closest point within the area. Also, I think one has to be careful to prevent points from getting stuck in small protrusions of the calibrated area. For this it is probably helpful to have a relatively smooth boundary, for example the polygon defined by the convex hull, rather than a boundary shaped by right-angled pixel edges.
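The convex-hull variant of this suggestion could be sketched roughly as follows (an illustrative standalone sketch, not code from this repository): compute the hull of the detected feature points with Andrew's monotone chain, and clamp any point that leaves it back to the closest point on the hull boundary.

```python
def cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def clamp_to_hull(p, hull):
    """Return p if inside the CCW hull, else the closest boundary point."""
    n = len(hull)
    if all(cross(hull[i], hull[(i + 1) % n], p) >= 0 for i in range(n)):
        return p  # point is still inside the calibrated area
    best, best_d = None, float('inf')
    for i in range(n):
        ax, ay = hull[i]
        bx, by = hull[(i + 1) % n]
        # Project p onto segment a-b, clamping the parameter to [0, 1].
        dx, dy = bx - ax, by - ay
        t = ((p[0] - ax) * dx + (p[1] - ay) * dy) / (dx * dx + dy * dy)
        t = max(0.0, min(1.0, t))
        qx, qy = ax + t * dx, ay + t * dy
        d = (p[0] - qx) ** 2 + (p[1] - qy) ** 2
        if d < best_d:
            best, best_d = (qx, qy), d
    return best
```

In the actual program this clamping would have to run inside the projection optimization loop, which is why a smooth hull boundary matters: a jagged boundary creates local minima where clamped points can get stuck.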
An alternative might be to introduce some kind of regularization on the calibration values that tries to keep them smooth. Then it might be less of a problem that there are almost no data-based constraints on the values in the rectangle corners, since the regularization should keep them at sane values.
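One common form such a regularizer could take (a sketch under assumed details, not what this project implements) is a penalty on second differences of the per-cell calibration values: it is zero for constant or linearly varying grids and grows only where the values bend, so unconstrained corner cells get pulled toward a smooth extrapolation of their neighbors.

```python
def smoothness_penalty(grid):
    """Sum of squared second differences over a 2D grid of calibration
    values. Zero for constant or linearly varying grids; positive where
    the values bend."""
    h, w = len(grid), len(grid[0])
    total = 0.0
    for y in range(h):
        for x in range(w):
            if 0 < x < w - 1:  # horizontal second difference
                d = grid[y][x-1] - 2.0 * grid[y][x] + grid[y][x+1]
                total += d * d
            if 0 < y < h - 1:  # vertical second difference
                d = grid[y-1][x] - 2.0 * grid[y][x] + grid[y+1][x]
                total += d * d
    return total
```

A weighted multiple of this penalty would be added to the reprojection cost during the calibration optimization; the weight trades off smoothness in unconstrained regions against fidelity where there is data.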
For a very quick test of whether the suggested clamping would solve the problem (or not), one could probably simply measure a suitable circle manually for the specific camera and add a clamping step to this circle below this line:

camera_calibration/applications/camera_calibration/src/camera_calibration/models/central_generic.cc, line 478 (commit 2682d58)
(or in an analogous part if using another camera model)
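The quick-test clamping step itself is tiny. A sketch, assuming a manually measured circle center and radius for the specific camera (the function name and parameters are hypothetical; the real change would be a few lines of C++ at the location mentioned above):

```python
import math

def clamp_to_circle(x, y, cx, cy, radius):
    """Clamp (x, y) back onto a manually measured circle if it left it."""
    dx, dy = x - cx, y - cy
    d = math.hypot(dx, dy)
    if d <= radius:
        return (x, y)    # still inside the calibrated area, keep as-is
    s = radius / d       # scale the offset back onto the boundary
    return (cx + dx * s, cy + dy * s)
```

If this already stabilizes the optimization for the fisheye images, the more general convex-hull clamping would likely be worth implementing.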
At the moment it's really hard to detect features on fisheye lenses. I have tested with 187° and 220° lenses on different Allied Vision Alvium cameras with 2000x2000 px resolution. Unfortunately, the closer the test pattern gets to the edges of the camera's field of view, the harder it becomes to get a feature match. I have attached some of my calibration images in which no features were found.
Is there an appropriate way to get more feature matches, or can I configure the application to be less restrictive during the feature detection process?
Thanks in advance!