
Feature detection on fisheye lenses #15

Closed
chris-hagen opened this issue May 8, 2020 · 5 comments

@chris-hagen

At the moment it is really hard to detect features with fisheye lenses. I have tested with 187° and 220° lenses on different Allied Vision Alvium cameras with 2000x2000 px resolution. Unfortunately, the closer the test pattern gets to the edges of the camera's field of view, the harder it becomes to get a feature match. I have attached some of my calibration images in which no features were found.

[Attached calibration images: b0575, b1399, b1603, b1703]

Is there an appropriate way to get more feature matches, or can I configure the application to be less restrictive during the feature detection process?

Thanks in advance!

@puzzlepaint (Owner)

It might help a little to try different values for the --refinement_window_half_extent parameter; with larger values, more features tend to be found. For example, with a value of 25, the application finds some features in two of your four example images.
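
For reference, the parameter can simply be appended to the usual invocation, e.g. (assuming the camera_calibration binary built from this repository, with all other options as in your normal run):

    camera_calibration --refinement_window_half_extent 25 [your usual options]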

Otherwise, one has to dig into the source code. Generally, the feature detection was not designed for strong fisheye lenses. It assumes that the image locally behaves similarly to a pinhole camera. This assumption is used when fitting homographies to small groups of nearby feature matches in order to predict the positions of additional neighboring features. Unfortunately, it is strongly violated here, so the predicted feature positions are far off. I think that a proper fix would be to fit a slightly more complicated model (than only a homography) to the matches instead, one that is able to account for some of the distortion. It does not need to be perfect; it only needs to account for some generic distortion well enough that the predictions become 'good enough' for the feature detection scheme to use them.
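
To illustrate the prediction scheme (a minimal standalone sketch with made-up names, not the project's actual code): a homography fitted to a small group of matches, four here, extrapolates the expected image position of a neighboring pattern corner. The end of PredictFeature is where a model accounting for distortion would have to modify the result.

    // Sketch only: fit a homography H to four pattern<->image correspondences
    // with the direct linear transform (DLT). All names are hypothetical.
    #include <Eigen/Dense>
    #include <array>

    Eigen::Matrix3d FitHomography(const std::array<Eigen::Vector2d, 4>& pattern_pts,
                                  const std::array<Eigen::Vector2d, 4>& image_pts) {
      Eigen::Matrix<double, 8, 9> A;
      for (int i = 0; i < 4; ++i) {
        const double x = pattern_pts[i].x(), y = pattern_pts[i].y();
        const double u = image_pts[i].x(), v = image_pts[i].y();
        A.row(2 * i + 0) << -x, -y, -1,  0,  0,  0, u * x, u * y, u;
        A.row(2 * i + 1) <<  0,  0,  0, -x, -y, -1, v * x, v * y, v;
      }
      // H is the null vector of A: the right singular vector belonging to the
      // smallest singular value.
      Eigen::JacobiSVD<Eigen::Matrix<double, 8, 9>> svd(A, Eigen::ComputeFullV);
      Eigen::Matrix<double, 9, 1> h = svd.matrixV().col(8);
      return Eigen::Map<Eigen::Matrix<double, 3, 3, Eigen::RowMajor>>(h.data());
    }

    // Predict where a neighboring pattern corner should appear in the image.
    // Under a strong fisheye, this pinhole-style prediction lands far off; a
    // distortion term would have to be applied to the result here.
    Eigen::Vector2d PredictFeature(const Eigen::Matrix3d& H,
                                   const Eigen::Vector2d& pattern_pt) {
      Eigen::Vector3d p = H * pattern_pt.homogeneous();
      return p.hnormalized();
    }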

There are also a few hardcoded outlier rejection thresholds in feature_detector_tagged_pattern.cc, but I don't think that relaxing them would help significantly.

@chris-hagen (Author)

I have tested several parameters to get this to work. As you suggested, it works best with --refinement_window_half_extent 25. I used the live feature detection within the tool to check whether features were found in all corners of the lens's field of view. You have to be very careful and patient to get the current algorithm to detect enough features.

Now I am trying to calibrate the camera with the recorded features. Unfortunately, I cannot get good results at the moment. If I use a small feature set for calibration with a big pattern (e.g. pattern_resolution_17x24_segments_16_apriltag_0.yaml), the results are OK:
[Attached: report_camera0_error_directions, report_camera0_error_magnitudes, report_camera0_errors_histogram, report_camera0_observation_directions, features.bin.zip]

If I use a larger feature set with many features, detected with a smaller pattern (e.g. pattern_resolution_25x36_segments_16_apriltag_6.yaml), the results are very bad:
[Attached: report_camera0_error_directions, report_camera0_error_magnitudes, report_camera0_errors_histogram, report_camera0_observation_directions, features_6.bin.zip]

Am I right that the smaller FOV and the fact that some image corners (the black areas) contain no features break the calibration process? Is there a way to get a good calibration result?

Thanks in advance!

@puzzlepaint (Owner)

Am I right that [...] the fact that some image corners (the black areas) contain no features break the calibration process?

Yes, I think so. The calibration program is unfortunately not designed for this kind of camera. It uses the bounding box of all detected feature locations in image space to define the rectangular image area that it tries to calibrate; this area can be seen in the bottommost visualizations that you posted. If the actual area that contains feature detections is circular, then the rectangle includes large parts without any detections (at its corners). Thus, there are no or almost no constraints on the calibration values in these areas, and they might happen to change in a way that breaks the point projection algorithm used by the program. I think the outcome probably does not depend on the type of pattern used, but more or less on luck regarding how the calibration values are initialized and how they change during the optimization.

I don't have time to work on this right now, but I think it should be comparatively easy to fix this problem, in case you are willing to make a few changes to the source code. First, the image area that shall be calibrated needs to be defined such that it fits the detected feature points more tightly. One simple way would be to manually draw a mask image; another would be to compute the convex hull of the detected feature points (rather than their bounding box). Then the point projections need to be constrained to this area: each time a point moves outside the calibrated area during the optimization process used for projection, it should be clamped back to the closest point within the area. Also, one has to be careful to prevent points from getting stuck in small protrusions of the calibrated area. For this, I imagine it helps to have a relatively smooth boundary, for example the polygon defined by the convex hull, rather than a boundary shaped by right-angled pixel boundaries.
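
As a rough standalone sketch of the convex hull variant (assumed types and names, not existing project code; Vec2d stands in for Eigen::Vector2d): Andrew's monotone chain builds the hull, points inside it pass through unchanged, and points outside get clamped to the closest point on the hull boundary.

    #include <Eigen/Dense>
    #include <algorithm>
    #include <limits>
    #include <vector>

    using Vec2d = Eigen::Vector2d;

    // Twice the signed area of the triangle (o, a, b); > 0 means a left turn.
    double Cross(const Vec2d& o, const Vec2d& a, const Vec2d& b) {
      return (a.x() - o.x()) * (b.y() - o.y()) - (a.y() - o.y()) * (b.x() - o.x());
    }

    // Convex hull of the feature points (Andrew's monotone chain).
    // Assumes at least 3 non-collinear points.
    std::vector<Vec2d> ConvexHull(std::vector<Vec2d> pts) {
      std::sort(pts.begin(), pts.end(), [](const Vec2d& a, const Vec2d& b) {
        return a.x() < b.x() || (a.x() == b.x() && a.y() < b.y());
      });
      const int n = static_cast<int>(pts.size());
      std::vector<Vec2d> hull(2 * n);
      int k = 0;
      for (int i = 0; i < n; ++i) {  // Lower hull.
        while (k >= 2 && Cross(hull[k - 2], hull[k - 1], pts[i]) <= 0) --k;
        hull[k++] = pts[i];
      }
      for (int i = n - 2, t = k + 1; i >= 0; --i) {  // Upper hull.
        while (k >= t && Cross(hull[k - 2], hull[k - 1], pts[i]) <= 0) --k;
        hull[k++] = pts[i];
      }
      hull.resize(k - 1);  // The last point equals the first one.
      return hull;
    }

    // Returns p if it lies inside the hull, otherwise the closest point on
    // the hull boundary.
    Vec2d ClampToHull(const std::vector<Vec2d>& hull, const Vec2d& p) {
      bool inside = true;
      for (std::size_t i = 0; i < hull.size(); ++i) {
        if (Cross(hull[i], hull[(i + 1) % hull.size()], p) < 0) { inside = false; break; }
      }
      if (inside) return p;
      Vec2d best = hull[0];
      double best_sq_dist = std::numeric_limits<double>::max();
      for (std::size_t i = 0; i < hull.size(); ++i) {  // Project p onto each edge.
        const Vec2d& a = hull[i];
        const Vec2d ab = hull[(i + 1) % hull.size()] - a;
        const double t = std::max(0.0, std::min(1.0, (p - a).dot(ab) / ab.squaredNorm()));
        const Vec2d q = a + t * ab;
        if ((p - q).squaredNorm() < best_sq_dist) { best_sq_dist = (p - q).squaredNorm(); best = q; }
      }
      return best;
    }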

An alternative might be to introduce some kind of regularization on the calibration values that tries to keep them smooth. Then it would matter less that there are almost no data-based constraints on the values at the rectangle corners, since the regularization should keep them at sane values.
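
Sketched very roughly (hypothetical names throughout; the generic models store their calibration values in a grid, but the residual plumbing here is made up), such a regularization could add a discrete Laplacian penalty per interior grid node:

    // Penalize the deviation of each interior grid value from the average of
    // its 4 neighbors; lambda trades data fit against smoothness.
    for (int y = 1; y < grid_height - 1; ++y) {
      for (int x = 1; x < grid_width - 1; ++x) {
        const Vec3d laplacian = grid(x - 1, y) + grid(x + 1, y) +
                                grid(x, y - 1) + grid(x, y + 1) - 4.0 * grid(x, y);
        AddResidual(lambda * laplacian);  // Hypothetical hook into the optimizer.
      }
    }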

@puzzlepaint (Owner)

For a very quick test of whether the suggested clamping would solve the problem, one could probably simply measure a suitable circle manually for the specific camera and add a clamping step to that circle right below this line:

// Compute the test state (constrained to the calibrated image area).

(or in an analogous part if using another camera model)

@chris-hagen (Author) commented May 25, 2020

I have extended the noncentral model I am using with the following lines of code:

      // Compute the test state (constrained to the calibrated image area).
      Vec2d test_result(
          std::max<double>(m_calibration_min_x, std::min(m_calibration_max_x + 0.999, result->x() - x_0)),
          std::max<double>(m_calibration_min_y, std::min(m_calibration_max_y + 0.999, result->y() - x_1)));

      // FOV circle, measured manually for this camera.
      const double fov_centre_x = 1306;
      const double fov_centre_y = 972;
      const double fov_radius = 1011;

      // Check whether the point lies outside the FOV circle.
      const double dist = std::hypot(fov_centre_x - test_result.x(), fov_centre_y - test_result.y());
      if (dist > fov_radius) {
        // Clamp the point back to the closest point on the circle boundary.
        const double phi = std::atan2(test_result.y() - fov_centre_y, test_result.x() - fov_centre_x);
        const double clamped_x = fov_centre_x + fov_radius * std::cos(phi);
        const double clamped_y = fov_centre_y + fov_radius * std::sin(phi);
        // LOG(INFO) << "Clamping calculated point (" << test_result.x() << ", " << test_result.y()
        //           << ") to FOV (" << clamped_x << ", " << clamped_y << ")";
        test_result = Vec2d(clamped_x, clamped_y);
      }
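
As a side note, the same clamping could be done without the trigonometric calls by rescaling the offset vector (assuming Vec2d supports Eigen-style arithmetic); both variants move the point to the nearest location on the circle:

        const Vec2d centre(fov_centre_x, fov_centre_y);
        test_result = centre + (fov_radius / dist) * (test_result - centre);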

After that the calibration delivered much better results:
[Attached: report_camera0_error_directions, report_camera0_error_magnitudes, report_camera0_errors_histogram, report_camera0_observation_directions]

There are still some small spots to improve, but I think I can handle those by providing more/better features at these points.

I will modify the existing code to accept the three constants as command line parameters, so that I can calibrate this kind of camera with a given circular FOV. It would be very nice if an enhanced version offered a graphical way to select the FOV on the calibration image, but for the moment this does the trick for me.
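
As a sketch of the command line part (purely illustrative: the flag names are made up and the project's actual argument parsing may differ, so this assumes a gflags-style setup):

    #include <gflags/gflags.h>

    DEFINE_double(fov_circle_centre_x, -1.0, "X coordinate of the FOV circle centre, in pixels.");
    DEFINE_double(fov_circle_centre_y, -1.0, "Y coordinate of the FOV circle centre, in pixels.");
    DEFINE_double(fov_circle_radius, -1.0, "Radius of the FOV circle, in pixels; negative disables clamping.");

The clamping step would then read these three values instead of the hardcoded constants and skip the check whenever the radius is negative.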

Thanks a lot!!!
