Questions about undistortion of generic models #53
No, the algorithm given in #22 does not require a parametric calibration result. You can simply choose any focal length and principal point parameters that you want, and you will get the undistortion for these parameters (typically, you would choose these parameters such that they include the whole field-of-view of the distorted camera images, or most of it in case the distortion is too large). This is actually part of the "choosing the intersection plane" step that you propose here. (You could additionally choose an arbitrary rotation matrix to rotate the image plane.) I don't think there is a need for a different algorithm. This step:
in your proposed algorithm is problematic, since it wrongly assumes that a homography describes the distortion within these points. I don't really see a point in discussing undistortion further; the algorithm in #22 apparently works fine for that.
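For concreteness, the undistortion described in the previous comment could be sketched as a plain remap loop. This is only a minimal sketch: `project(x, y, z) -> (u, v)` is a hypothetical callback standing in for the generic calibration's projection, and the sampling is nearest-neighbor for brevity.

```python
def undistort_nearest(distorted, project, fx, fy, cx, cy, out_w, out_h):
    """Nearest-neighbor undistortion remap.

    `distorted` is a row-major list of rows; `project(x, y, z) -> (u, v)`
    is a hypothetical callback mapping a 3D direction to distorted-image
    pixel coordinates via the generic calibration. (fx, fy, cx, cy) are the
    freely chosen pinhole parameters of the undistorted image.
    """
    h, w = len(distorted), len(distorted[0])
    out = [[0] * out_w for _ in range(out_h)]
    for v in range(out_h):
        for u in range(out_w):
            # Unproject the undistorted pixel center with the chosen
            # pinhole model onto the z = 1 plane.
            x = (u + 0.5 - cx) / fx
            y = (v + 0.5 - cy) / fy
            # Project that direction into the distorted image and sample.
            ud, vd = project(x, y, 1.0)
            ui, vi = int(round(ud - 0.5)), int(round(vd - 0.5))
            if 0 <= ui < w and 0 <= vi < h:
                out[v][u] = distorted[vi][ui]
    return out
```

With an identity (distortion-free) `project` and matching intrinsics, the loop reproduces the input image, which is a quick sanity check of the mapping direction.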
To add a bit more detail: This can for example be done by iterating over all boundary pixels in the distorted image and unprojecting them. Normalize these directions to a z value of 1 (such that they are on the z=1 plane), and compute the bounding box of the (x, y) components of these unprojected directions. Then, compute the focal length and principal point parameters for undistortion such that the field-of-view of the resulting undistorted image exactly corresponds to this bounding box of directions on the image plane.
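The bounding-box procedure above might look roughly like this. A minimal sketch, assuming a hypothetical `unproject(u, v) -> (x, y, z)` callback that returns the calibrated direction of a distorted pixel:

```python
def undistortion_intrinsics(unproject, width, height, out_width, out_height):
    """Compute pinhole parameters (fx, fy, cx, cy) whose field of view
    matches the bounding box of the boundary-pixel directions.

    `unproject(u, v) -> (x, y, z)` is a hypothetical callback returning the
    calibrated 3D direction of distorted pixel (u, v).
    """
    # Collect all boundary pixels of the distorted image.
    border = [(u, v) for u in range(width) for v in (0, height - 1)]
    border += [(u, v) for v in range(height) for u in (0, width - 1)]
    xs, ys = [], []
    for u, v in border:
        x, y, z = unproject(u, v)
        # Normalize each direction onto the z = 1 plane.
        xs.append(x / z)
        ys.append(y / z)
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    # Choose fx, fy, cx, cy such that the bounding box maps exactly onto
    # [0, out_width] x [0, out_height] in the undistorted image.
    fx = out_width / (x_max - x_min)
    fy = out_height / (y_max - y_min)
    cx = -fx * x_min
    cy = -fy * y_min
    return fx, fy, cx, cy
```

Note this only samples the image boundary; for cameras whose distortion is not monotonic toward the edges, sampling the full image would be safer.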
Hi @puzzlepaint, I'm impressed by the undistortion algorithm you proposed in #22, which requires both the generic calibration result (pixel directions) and a parametric calibration result (focal length and center offset). I'm wondering if there is another way that uses only the pixel directions, so I came up with this idea:
The reason why distortion occurs at the sensor (green plane) is that the outside rays refract when they pass through the lens (blue arrows become red ones). So if I can use a plane to "intercept" the outside rays before they reach the lens, I can obtain undistorted images on that plane (orange plane). The generic calibration result is exactly the set of directions of the outside rays (blue arrows), so I just use them to find their intersections with the orange plane, then apply a simple perspective transformation to fill in the colors.
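The "intercept the rays with a plane" step I have in mind could be sketched like this (names are illustrative; the camera center is taken as the origin of the ray directions, and the plane is given by dot(normal, p) = d):

```python
def intersect_ray_with_plane(direction, normal, d):
    """Intersect the ray p(t) = t * direction (camera center at the origin)
    with the plane {p : dot(normal, p) = d}. Returns the hit point, or None
    if the ray is parallel to the plane."""
    dx, dy, dz = direction
    nx, ny, nz = normal
    denom = nx * dx + ny * dy + nz * dz
    if abs(denom) < 1e-12:
        return None  # ray parallel to the plane
    t = d / denom
    return (t * dx, t * dy, t * dz)
```

For an axis-aligned plane such as z = 1, this reduces to the usual normalization of a direction by its z component.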
However, in my experiment I still saw distorted images. I'm not sure whether the idea itself is wrong or whether I misused the directions your program calibrated. Here is how I did it: