
Rectification between event and RGB camera #43

Closed
Chohoonhee opened this issue Jul 9, 2022 · 4 comments

Comments


Chohoonhee commented Jul 9, 2022

Thank you very much for your hard work on the dataset!

From what I understand, the DSEC dataset performs rectification between the two RGB cameras and between the two event cameras, and it provides ground-truth disparity maps corresponding to that rectification. I am looking for a rectification between the event and RGB cameras that preserves the ground-truth disparity.

When I checked the related issues, #6 seems to give a similar answer (many thanks to the authors).
First, I tried the method using the OpenCV library that you pointed out (OpenCV's stereoRectify).
However, this produces a new rectification, in which case the provided ground-truth disparity map can no longer be used, right?
(This is because, when a new rectification is performed, everything moves to a new camera coordinate frame.)

So, next, I tried to apply the manual rectification method you suggested in #6, keeping the provided rectified image as it is, without deformation. As you mentioned, I can use standard OpenCV functions to rectify the image according to the new rectification. First of all, I try to match the right distorted event camera to the left rectified RGB camera. For the event camera, the rectification map can be obtained as follows:

import cv2
import numpy as np

# Pixel grid of the distorted event camera as an (N, 2) float32 array.
# Note the transpose: undistortPointsIter expects one point per row.
coords = np.stack(np.meshgrid(np.arange(width), np.arange(height))).reshape((2, -1)).T.astype("float32")
term_criteria = (cv2.TERM_CRITERIA_MAX_ITER | cv2.TERM_CRITERIA_EPS, 100, 0.001)
# Iteratively undistort each pixel, rotate it into the rectified frame,
# and reproject it with the rectified camera matrix.
points = cv2.undistortPointsIter(coords, K, dist_coeffs, R_rect_unrect, K_rect, criteria=term_criteria)
inv_map = points.reshape((height, width, 2))

However, in this case I don't know R_rect_unrect and K_rect: as I understand them, these parameters mean the R and K of the event camera rectified with respect to the rectified RGB camera (not with respect to the event camera itself), and they are not provided in the calibration file. Can they be obtained simply by combining the extrinsics and intrinsics from the calibration?

Or maybe I misunderstood? I would like to know whether the rectified R and K of the event camera relative to the RGB camera can be obtained manually from the calibration file.

Once again, thank you for this work and for answering so many questions!

@magehrig
Contributor

Hi @Chohoonhee

The high-level description of the entries in the calibration files is documented at the bottom of this page: https://dsec.ifi.uzh.ch/data-format/

R_rect_unrect is, for example, R_rect0 for camera 0 (the left event camera), and K_rect is in intrinsics->camRect0->camera_matrix.
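Reading those two entries out of a cam_to_cam.yaml could look roughly like this. The key layout (and camera_matrix stored as [fx, fy, cx, cy]) is assumed from the data-format page above, and `load_rectification` is a hypothetical helper, so double-check the keys against your file:

```python
import numpy as np
import yaml


def load_rectification(calib_path):
    """Load R_rect0 and K_rect for camera 0 from a DSEC-style cam_to_cam.yaml
    (key layout assumed from the data-format page)."""
    with open(calib_path, "r") as f:
        calib = yaml.safe_load(f)
    # Rotation from the unrectified to the rectified frame of camera 0.
    R_rect0 = np.array(calib["extrinsics"]["R_rect0"])
    # camera_matrix is assumed to be stored as a flat [fx, fy, cx, cy] list.
    fx, fy, cx, cy = calib["intrinsics"]["camRect0"]["camera_matrix"]
    K_rect0 = np.array([[fx, 0.0, cx],
                        [0.0, fy, cy],
                        [0.0, 0.0, 1.0]])
    return R_rect0, K_rect0
```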

@Chohoonhee
Author

Thanks for the response!
When I use R_rect0 for R_rect_unrect and camRect0->camera_matrix for K_rect, I can obtain the inv_map. This map has the same values as the rectify map in rectify_map.h5.

However, as can be seen in the figure below, there is a vertical gap between the rectified left images provided in DSEC and the left event data rectified by inv_map.
[screenshot: viz_screenshot_11 07 2022]

I want to perform a rectification between an event camera and an image camera. Although the figure above visualizes the left image and the left events, the horizontal deviation will be the same with the right image.

@magehrig
Contributor

If you want to compute disparity reliably, I would not try to rectify the left images to the left event camera, because the disparity is too small (you can see this here). Instead, you could rectify the left event camera with the right images (or the left images with the right event camera).

Second, "when a new rectification is performed, it is moved to a new camera coordinate" is not necessarily a problem, because you can also move the disparity map based on the rotation (and then make your predictions and move them back to the original ground-truth frame).
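Moving a disparity map under a rotation of the rectified frame could be sketched as follows: back-project every valid pixel to a 3-D point, rotate it into the new rectified frame, re-project it, and recompute disparity from the new depth. This is a minimal sketch, assuming the baseline is unchanged by the rotation and using nearest-pixel splatting with no occlusion handling; `rotate_disparity` and all names are hypothetical, not part of DSEC's tooling:

```python
import numpy as np


def rotate_disparity(disp, K_old, K_new, R_new_old, baseline):
    """Transfer a disparity map into a rotated rectified frame (sketch).

    disp:      (H, W) disparity in the old rectified frame, 0 = invalid
    R_new_old: rotation taking old rectified coordinates to the new ones
    baseline:  stereo baseline in metres (assumed unchanged by the rotation)
    """
    H, W = disp.shape
    ys, xs = np.nonzero(disp > 0)
    d = disp[ys, xs]
    # Back-project valid pixels to 3-D points in the old rectified frame.
    Z = K_old[0, 0] * baseline / d
    X = (xs - K_old[0, 2]) / K_old[0, 0] * Z
    Y = (ys - K_old[1, 2]) / K_old[1, 1] * Z
    pts = R_new_old @ np.stack([X, Y, Z])  # rotate into the new frame
    # Project into the new rectified frame and recompute disparity from depth.
    u = K_new[0, 0] * pts[0] / pts[2] + K_new[0, 2]
    v = K_new[1, 1] * pts[1] / pts[2] + K_new[1, 2]
    d_new = K_new[0, 0] * baseline / pts[2]
    # Splat to the nearest pixel, dropping points that leave the image.
    out = np.zeros_like(disp)
    ui, vi = np.round(u).astype(int), np.round(v).astype(int)
    ok = (ui >= 0) & (ui < W) & (vi >= 0) & (vi < H)
    out[vi[ok], ui[ok]] = d_new[ok]
    return out
```

With the identity rotation and identical intrinsics this reduces to a copy of the valid pixels, which is a quick sanity check before plugging in a real R.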

Regarding your last comment: it does not look like you performed the rectification.

@magehrig
Contributor

magehrig commented Sep 7, 2022

@Chohoonhee please let me know if I can still help you. Closing for now

@magehrig magehrig closed this as completed Sep 7, 2022