
Viewpoint difference between event and RGB camera on the same side #12

Closed
RunqiuBao opened this issue May 9, 2021 · 16 comments
Labels: question (Further information is requested)

Comments

@RunqiuBao

Hello, I noticed that there is quite a large viewpoint difference between the rectified event and RGB images on the same side; see, for example, the following alpha-blended overlay of Cam0_rect and Cam1_rect.

This can be a problem if somebody wants to compare disparity maps between the event camera and the RGB camera.

Personally, I think it can be solved by reprojecting the two cameras' views to the same attitude (so that the views of cam0_rect and cam1_rect are completely aligned, pixel to pixel). But with the extrinsics you provide, I could not achieve that. I wonder if you have tried this. Are the extrinsics between the event and RGB cameras accurate enough?

Thanks a lot!
[image: alpha-blended overlay of Cam0_rect and Cam1_rect]

@RunqiuBao
Author

RunqiuBao commented May 10, 2021

Update:
I am not sure, but I suspect that pixel-to-pixel alignment by reprojection between the event and RGB cameras fails because the two cameras (after rectification) have different fields of view as well as different resolutions.

Pixel-to-pixel alignment should still be possible; we would just need the calibration pattern to do it.

I wonder if you could provide the calibration images (after rectification) for both the event camera (cam0) and the RGB camera (cam1).

Thanks again!

@magehrig
Contributor

Hi @RunqiuBao

First, you need to make use of the intrinsic and extrinsic parameters that are provided in the calibration file of each sequence. In addition, you need depth information to map from images to event cameras, which you can compute with known stereo matching approaches, for example. I answered a related question in another issue: #11 (comment)
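For illustration, a minimal numpy sketch of what such a depth-based mapping involves (the function and variable names are placeholders of mine, not from the dataset tools; K_img/K_ev stand for the rectified camera matrices and T_ev_img for the 4x4 image-to-event transform):

```python
import numpy as np

def image_pixel_to_event_pixel(u, v, depth, K_img, K_ev, T_ev_img):
    """Map one rectified image pixel (u, v) with known depth to the
    corresponding rectified event-camera pixel."""
    # Back-project the pixel into a 3D point in the image camera frame.
    p_img = depth * (np.linalg.inv(K_img) @ np.array([u, v, 1.0]))
    # Move the point into the event camera frame (homogeneous coordinates).
    p_ev = (T_ev_img @ np.append(p_img, 1.0))[:3]
    # Project into the event camera and normalize.
    uv = K_ev @ p_ev
    return uv[:2] / uv[2]
```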

Let me know if you still have questions

@RunqiuBao
Author

RunqiuBao commented May 10, 2021

Thanks for the reply! @magehrig

I totally understand your point. However, existing stereo matching approaches usually give noisy results, and I am afraid such a mapping from image to event camera would not turn out to be useful. My point is, since the left event camera and the left Blackfly are very near to each other (4 cm), we can assume that there is only a pure rotation between their poses. Therefore, we should be able to align them directly with a simple homography; I would only need the calibration pattern to estimate the transformation matrix.
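For reference, this claim rests on the standard plane-induced homography from two-view geometry (my notation, not from the thread):

```latex
% Homography induced by a plane (n, d) between cameras 0 and 1:
H = K_1 \left( R - \frac{t\,n^{\top}}{d} \right) K_0^{-1}
% With a negligible baseline (t \approx 0) or a distant scene (d \to \infty),
% this reduces to a pure-rotation homography:
H \approx K_1 R K_0^{-1}
```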

Sorry I did not notice #11 was a similar issue.

@magehrig
Contributor

The calibration files already contain all the transformations:

extrinsics[T_10]: transforms points from the left distorted event camera coordinate frame to the left distorted standard camera frame.
extrinsics[R_rect0]: rotation that transforms a point in the left distorted event camera coordinate frame into the rectified frame.
extrinsics[R_rect1]: the same, but for the left standard camera frame.

That's all you need.
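For readers following along, a sketch of pulling these entries out of a sequence's calibration file (the file name and exact YAML layout are assumptions based on the key names above):

```python
import numpy as np
import yaml

# Hypothetical path; each sequence ships its own calibration file.
with open('cam_to_cam.yaml', 'r') as f:
    calib = yaml.safe_load(f)

T_10 = np.array(calib['extrinsics']['T_10'])        # 4x4: distorted cam0 -> distorted cam1
R_rect0 = np.array(calib['extrinsics']['R_rect0'])  # 3x3: distorted cam0 -> rectified cam0
R_rect1 = np.array(calib['extrinsics']['R_rect1'])  # 3x3: distorted cam1 -> rectified cam1
```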

magehrig added the "question" label (Further information is requested) on May 10, 2021
@magehrig
Contributor

Let me know if this answers your question, or whether I should improve the documentation if something is missing or ambiguous/unclear.

@RunqiuBao
Author

RunqiuBao commented May 11, 2021

Hi, @magehrig
About the documentation: I haven't found anything unclear yet.

However, about the sensors' FoV alignment: I know the extrinsics are available. I am just afraid that they are not accurate enough (sorry, but I have spent two days on this and am quite sure it is not working). For example, the following figures are from interlaken_00_c, for both the left side and the right side. Obviously, the RGB cameras are looking at a lower angle than the event cameras. But after reprojection with the extrinsics you provided, the RGB cameras are still looking at a lower angle; it is not much improved. I can provide my test script for validation if necessary.
Could you please check whether the extrinsics between cam0 and cam1 (T_10) and between cam2 and cam3 (T_32) are correct?
And if you could provide calibration patterns, aligning the views would be easier with known point correspondences.

Btw, I am using 'interlaken/interlaken_00_c_images_rectified_left/000000.png' and 'interlaken/interlaken_00_c_images_rectified_right/000000.png', as well as the corresponding events of the first 25 ms stacked into frames, for this test. I suppose they are at the same time point, are they?
[image: left cameras, original]
[image: right cameras, original]
[image: left cameras, after reprojection]
[image: right cameras, after reprojection]

@SoikatHasanAhmed

@RunqiuBao I am trying to solve the same issue. Could you please share your code (if possible), so that I can also try to find out the actual issue?
Thank you.

@magehrig
Contributor

magehrig commented May 11, 2021

I need a bit more information about how you computed these results. Can you show me how you obtained the transformations for this warping?

I will give you an example for warping a point from the left image coordinate system to the left event coordinate system. I.e., you want to compute the transformation

T_rect0_rect1, which maps a point x_rect1 in the rectified left image frame to x_rect0 in the rectified left event frame.

Now, we need to relate it to the transformations in the calibration file (treating the 3x3 rectification rotations as 4x4 homogeneous transforms):

T_rect0_rect1 = R_rect0 · T_10^(-1) · R_rect1^(-1)

Now, we can transform a 3D point in the rectified image coordinate system to the rectified event coordinate system:

x_rect0 = T_rect0_rect1 · x_rect1

Is this how you did it?
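In code, that composition might look as follows (a numpy sketch using the entries loaded in the earlier snippet; the function name is mine, and the 3x3 rectification rotations are promoted to 4x4 homogeneous transforms):

```python
import numpy as np

def rect_image_to_rect_event_transform(T_10, R_rect0, R_rect1):
    """Compose T_rect0_rect1 from the calibration entries."""
    def hom(R):
        # Promote a 3x3 rotation to a 4x4 homogeneous transform.
        T = np.eye(4)
        T[:3, :3] = R
        return T
    T_01 = np.linalg.inv(T_10)  # distorted cam1 -> distorted cam0
    # rectified cam1 -> distorted cam1 -> distorted cam0 -> rectified cam0
    return hom(R_rect0) @ T_01 @ np.linalg.inv(hom(R_rect1))

# Usage: x_rect0 = rect_image_to_rect_event_transform(T_10, R_rect0, R_rect1) @ x_rect1
```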

Here is the rosbag that was used for calibration with kalibr for the interlaken_00 sequence: https://download.ifi.uzh.ch/rpg/tmp/interlaken_00_kalibr.bag. I will leave the file up for a few days and then delete it, so download it as soon as possible if you want to use it.

@RunqiuBao
Author

Hi @magehrig, thanks a lot for the calibration patterns!
I am working on them and will update if there is any breakthrough.

About computing the transformations for the warping: yes, I used exactly the same equations as you kindly showed above, and I used the OpenCV function warpPerspective for the warping.
Just for your reference, here is the code I wrote and the results I got (@magehrig @soikat15): https://github.com/RunqiuBao/fov_alignment

Regards.

@SoikatHasanAhmed

@RunqiuBao Thanks for sharing. I will also try, and I'll share if I find any breakthrough. :)

@magehrig
Contributor

@RunqiuBao

a) Did you rectify the events using the provided rectification map? This is not visible in the code.
b) Resizing the image to fit the VGA resolution of the event camera seems problematic to me. Could you explain why this should make sense from a geometric perspective?

I will have a look at your code in more detail as soon as I have time, but these are two sources of error I identified at a quick glance.

@RunqiuBao
Author

Hi, @magehrig

a) I rectified the events: I used the "Sequence" class you provided in the tool scripts to load the stacked event frames, and it rectifies the events by default. This part is not included in the code I uploaded.
b) After the transformation, the event frame and the image frame have different scales and different FoVs, but they should have the same pose. Therefore, they should be similar to each other with reference to the center of the image, I suppose. Please correct me if my understanding is wrong.

Thanks.

@magehrig
Contributor

So what I would do for this sanity check is the following:

  1. Assume that the scene is infinitely far away.
  2. Compute the projection matrix that maps a pixel coordinate of the left event camera to the pixel coordinate in the left global shutter camera. At infinite depth this is the 3x3 homography
     H = K_rect1 · R_rect1 · R_10 · R_rect0^(-1) · K_rect0^(-1),
     where R_10 is the rotation part of T_10 and K_rect0, K_rect1 are the rectified camera matrices.
  3. Compute this mapping for each pixel of the left event camera frame (make sure to normalize the homogeneous vector before retrieving the pixel coordinates).
  4. Use OpenCV's remap function to remap the left image pixels to the left event camera pixels (see the sketch after this list).
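A minimal OpenCV sketch of steps 2-4 (the function and variable names are mine; K_rect0/K_rect1 are the rectified camera matrices, R_10 the rotation part of T_10, and the default output shape is the VGA event resolution mentioned earlier in the thread):

```python
import cv2
import numpy as np

def remap_image_to_event_frame(img, K_rect0, K_rect1, R_rect0, R_rect1, R_10,
                               ev_shape=(480, 640)):
    """Warp the rectified image into the rectified event-camera frame,
    assuming an infinitely distant scene (rotation-only homography)."""
    # Homography taking event pixels to image pixels at infinite depth.
    H = K_rect1 @ R_rect1 @ R_10 @ np.linalg.inv(R_rect0) @ np.linalg.inv(K_rect0)
    h, w = ev_shape
    # Homogeneous coordinates of every event-camera pixel.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    mapped = H @ pix.astype(np.float64)
    mapped /= mapped[2]  # normalize the homogeneous vectors
    map_x = mapped[0].reshape(h, w).astype(np.float32)
    map_y = mapped[1].reshape(h, w).astype(np.float32)
    # For each event pixel, sample the image at the mapped location.
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```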

Another approach is to use the disparity ground truth and map it onto the events to ensure that they overlap. This is what I did for generating the ground truth, and for sanity checking as well. The results were accurate.

@magehrig
Contributor

magehrig commented May 12, 2021

> b) After the transformation, the event frame and the image frame have different scales and different FoVs, but they should have the same pose. Therefore, they should be similar to each other with reference to the center of the image, I suppose. Please correct me if my understanding is wrong.

Regarding b): this is an error; that won't work. Use my previously posted approach instead, which avoids resizing. The reason is that, by resizing the image, you are essentially changing the intrinsics, so the pixel warping is wrong.
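To make that concrete: resizing an image by factors (sx, sy) implicitly rescales its camera matrix, so any warp derived from the original intrinsics no longer matches the resized pixels (illustrative numbers only; the camera matrix below is hypothetical):

```python
import numpy as np

# Hypothetical rectified camera matrix of the higher-resolution image sensor.
K = np.array([[1160.0,    0.0, 720.0],
              [   0.0, 1160.0, 540.0],
              [   0.0,    0.0,   1.0]])

# Resizing 1440x1080 -> 640x480 scales pixel coordinates non-uniformly.
sx, sy = 640 / 1440, 480 / 1080
S = np.diag([sx, sy, 1.0])

# The resized image behaves as if captured with intrinsics S @ K, so a
# homography built from the original K applies the wrong pixel mapping.
K_resized = S @ K
```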

@RunqiuBao
Author

RunqiuBao commented May 13, 2021

> So what I would do for this sanity check is the following: [...] Use OpenCV's remap function to remap the left image pixels to the left event camera pixels.

Hi @magehrig,
I tried the approach you mentioned above, namely direct remapping without resizing, and it looks great. So the extrinsics are fine; sorry for the trouble. Perspective transformation with the calibration patterns also shows similar results.
Thank you!
[image: overlay after direct remapping]

@magehrig
Contributor

No worries, happy to help.
