
Transformation matrix #16

Closed
aditdoshi333 opened this issue Jan 3, 2022 · 57 comments

Comments

@aditdoshi333

Hello,

Thanks for sharing the code. In the config YAML file there is an extrinsic between the camera and the LiDAR. Is this matrix camera-to-LiDAR or LiDAR-to-camera?

And is there any reference for fine calibration between the camera and the LiDAR?

Thank you

@ziv-lin
Member

ziv-lin commented Jan 3, 2022

You can find the details about the relations among the sensors in our paper, which gives the detailed definitions of these sensor frames.

@aditdoshi333
Author

Yes @ziv-lin, I read the details regarding the hardware setup in your paper. But I am not able to find which transformation matrix is being used:

Camera-to-LiDAR or LiDAR-to-camera?
Thank you

@ziv-lin
Member

ziv-lin commented Jan 3, 2022

See R2LIVE Section III-B: our extrinsics denote each sensor frame w.r.t. the IMU frame.
Screenshot from 2022-01-03 17-20-54
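In other words, the extrinsic (R, t) maps points from the sensor frame into the IMU frame. A minimal sketch of the two directions (editor's illustration, not r3live code; the Mat3/Vec3 helpers are hypothetical):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Editor's illustration (not r3live code): a rigid transform stored as a
// row-major 3x3 rotation R plus a translation t.
using Mat3 = std::array<double, 9>;
using Vec3 = std::array<double, 3>;

// If (R, t) is the camera frame expressed w.r.t. the IMU frame, it maps a
// point from camera coordinates into IMU coordinates: p_imu = R * p_cam + t.
Vec3 cam_to_imu(const Mat3 &R, const Vec3 &t, const Vec3 &p_cam)
{
    Vec3 p_imu{};
    for (int r = 0; r < 3; ++r)
        p_imu[r] = R[r * 3 + 0] * p_cam[0] + R[r * 3 + 1] * p_cam[1] +
                   R[r * 3 + 2] * p_cam[2] + t[r];
    return p_imu;
}

// The opposite direction is the inverse transform: p_cam = R^T * (p_imu - t).
Vec3 imu_to_cam(const Mat3 &R, const Vec3 &t, const Vec3 &p_imu)
{
    Vec3 d{p_imu[0] - t[0], p_imu[1] - t[1], p_imu[2] - t[2]};
    Vec3 p_cam{};
    for (int r = 0; r < 3; ++r) // transposed indexing implements R^T
        p_cam[r] = R[0 * 3 + r] * d[0] + R[1 * 3 + r] * d[1] + R[2 * 3 + r] * d[2];
    return p_cam;
}
```

So if a calibration tool outputs the transform in the other direction, it has to be inverted before being placed in the config file.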

@aditdoshi333
Author

Okay thanks for the reply. But I am getting unexpected output.
Screenshot 2022-01-03 at 4 48 47 PM

The yellow color belongs to the walls and pillars, but on the map it shows up on the ceiling. I am using a transformation matrix w.r.t. the IMU frame. Any clue what is happening here?

Thank you

@aditdoshi333
Author

Hello,

I read your paper; every transformation is w.r.t. the IMU frame. I have a calibration matrix between the LiDAR and the RGB camera, but not with the IMU. Are you using some static matrix between the LiDAR and the IMU? There is no such information in the config file. I am using a Livox AVIA with your custom driver.

If you are using such a matrix, I can use it to get the transformation between the RGB camera and the IMU.

Thank you

@aditdoshi333
Author

As suggested, the following is my hardware and software configuration:
Screenshot 2022-01-04 at 10 02 27 AM

@ziv-lin
Member

ziv-lin commented Jan 4, 2022


It seems that the calibration between the camera and the IMU is quite bad. Do you have a good calibration of your camera intrinsics, and of the extrinsic between the IMU and the camera?

@aditdoshi333
Author

Hello @ziv-lin ,

I think the calibration is good enough, because I am using the same matrix in FAST-LIO2 (modified) for coloring the point cloud and it does a decent job. But I think there is some issue with the way I am entering the matrix. Is there any repo or tool you can suggest for calibrating the IMU and the RGB camera?

@ziv-lin
Member

ziv-lin commented Jan 4, 2022

For calibrating the camera intrinsics, you can use the tools provided by OpenCV or Matlab. And to perform the LiDAR-camera calibration, I recommend this repo: https://github.com/hku-mars/livox_camera_calib

@aditdoshi333
Author

@ziv-lin I am using the same repo. But I think it calibrates the LiDAR and the RGB camera, not the IMU and the RGB camera. Can I put the matrix from this repo directly into the config file?

@ziv-lin
Member

ziv-lin commented Jan 4, 2022


How about the color of the point cloud in a static environment (i.e., without moving the sensors)? Is that correct?

@aditdoshi333
Author

@ziv-lin I tried it in a static environment and it looks better, but there is still some offset. The following is the output:
Screenshot 2022-01-04 at 10 43 57 AM

Screenshot 2022-01-04 at 10 43 41 AM

Screenshot 2022-01-04 at 10 43 49 AM

The objects appear painted onto the wall; in reality they are on a table. It looks like there is some static offset.

@ziv-lin
Member

ziv-lin commented Jan 4, 2022


In our work, we treat the LiDAR frame and the IMU frame (the LiDAR's built-in IMU) as the same in our implementation. Because of this, the extrinsic between the camera and the IMU is the same as the camera-LiDAR extrinsic.
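In homogeneous coordinates this identification means T_imu_cam = T_imu_lidar * T_lidar_cam, which collapses to the camera-LiDAR extrinsic when T_imu_lidar is the identity. A minimal sketch of that composition (editor's illustration; helper names are not from the r3live codebase):

```cpp
#include <array>
#include <cassert>

// Editor's illustration (not from the r3live codebase): row-major 4x4
// homogeneous transforms composed by matrix multiplication.
using Mat4 = std::array<double, 16>;

Mat4 identity4()
{
    Mat4 I{};
    for (int i = 0; i < 4; ++i)
        I[i * 4 + i] = 1.0;
    return I;
}

// compose(A, B) applies B first, then A:
// T_imu_cam = compose(T_imu_lidar, T_lidar_cam).
Mat4 compose(const Mat4 &A, const Mat4 &B)
{
    Mat4 C{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            for (int k = 0; k < 4; ++k)
                C[r * 4 + c] += A[r * 4 + k] * B[k * 4 + c];
    return C;
}
```

With T_imu_lidar taken as the identity, compose(identity4(), T_lidar_cam) is just T_lidar_cam, which is why the matrix from livox_camera_calib can go into the config file unchanged.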

@ziv-lin
Member

ziv-lin commented Jan 4, 2022


Is this the output of r3live or others?

@aditdoshi333
Author


It's from r3live.

@aditdoshi333
Author


Okay, great. Then I can use the matrix directly. I am attaching an image from the FAST-LIO coloring just to make sure the calibration matrix is correct.
Screenshot 2021-12-29 at 6 26 31 PM

@aditdoshi333
Author

@ziv-lin Sorry to bother you, but there is one more issue. Sometimes I get the following error (same config file and same bag file):
Screenshot 2022-01-03 at 6 29 19 PM

@ziv-lin
Member

ziv-lin commented Jan 4, 2022


It seems that your configuration is correct. What is the problem with R3LIVE?

@aditdoshi333
Author


The color is not correct. There is an offset; in the above sample it looks as if the objects are pasted onto a wall, but that is not the case. And such coloring only appears when the scene is static; if I move the setup, the color gets fully messed up.

@ziv-lin
Member

ziv-lin commented Jan 4, 2022

Oh, sorry, my mistake... What is your image resolution?

@aditdoshi333
Author

It's 1920 x 1080.

@ziv-lin
Member

ziv-lin commented Jan 4, 2022

I might see the problem you have met with: the default image resolution I used is 1280 x 1024, and I should expose this as a configuration option for you.

@aditdoshi333
Author

Okay thanks a lot @ziv-lin

@ziv-lin
Member

ziv-lin commented Jan 4, 2022

Can you replace all occurrences of 1280 in the code with 1920 to see if that works? I will commit a hotfix tonight or tomorrow.

@aditdoshi333
Author

Okay sure, I will try and update.

@ziv-lin
Member

ziv-lin commented Jan 4, 2022

m_vio_scale_factor = 1280 * m_image_downsample_ratio / temp_img.cols; // 320 * 24

initUndistortRectifyMap( intrinsic, dist_coeffs, cv::Mat(), intrinsic, cv::Size( 1280 / m_vio_scale_factor, 1024 / m_vio_scale_factor ),

m_mvs_recorder.init( g_cam_K, 1280 / m_vio_scale_factor, &m_map_rgb_pts );

cv::resize( temp_img, img_get, cv::Size( 1280 / m_vio_scale_factor, 1024 / m_vio_scale_factor ) );

op_track.set_intrinsic( g_cam_K, g_cam_dist * 0, cv::Size( 1280 / m_vio_scale_factor, 1024 / m_vio_scale_factor ) );
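The hard-coded 1280 in the lines above is what breaks other resolutions. The idea of the eventual fix can be sketched as deriving the scale factor and working image size from the configured resolution (editor's sketch with hypothetical names, not the actual r3live patch):

```cpp
#include <cassert>

// Editor's sketch (hypothetical names, not the actual r3live patch): derive
// the VIO scale factor and working image size from the configured camera
// resolution instead of the hard-coded 1280 x 1024.
struct VioImageSize
{
    double scale_factor;
    int work_width;
    int work_height;
};

VioImageSize compute_vio_size(int cfg_width, int cfg_height,
                              double downsample_ratio, int incoming_cols)
{
    VioImageSize s{};
    // Same formula as the snippet above, with 1280 replaced by cfg_width.
    s.scale_factor = cfg_width * downsample_ratio / incoming_cols;
    s.work_width = static_cast<int>(cfg_width / s.scale_factor);
    s.work_height = static_cast<int>(cfg_height / s.scale_factor);
    return s;
}
```

Every place that currently divides 1280 or 1024 by m_vio_scale_factor would then use work_width / work_height instead.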

@ziv-lin
Member

ziv-lin commented Jan 4, 2022

Thank you for reporting these issues~ This is actually a bug.

@aditdoshi333
Author

Hey @ziv-lin, I updated the image resolution. Thanks a lot for the quick fix. It improved the coloring, but there is still significant bleeding. It looks like a calibration issue, but I am not sure: is it related to the image size or the aspect ratio?

Screenshot 2022-01-04 at 11 36 52 AM

Screenshot 2022-01-04 at 11 37 04 AM

@ziv-lin
Member

ziv-lin commented Jan 4, 2022

Can you try moving the sensors and see how the mapping results change? Our algorithm can calibrate the extrinsic online to make it more accurate.

@ziv-lin
Member

ziv-lin commented Jan 4, 2022

Sorry, my bad; you should also change the image height from 1024 to 1080, e.g., in the following code:

initUndistortRectifyMap( intrinsic, dist_coeffs, cv::Mat(), intrinsic, cv::Size( 1280 / m_vio_scale_factor, 1024 / m_vio_scale_factor ),

cv::resize( temp_img, img_get, cv::Size( 1280 / m_vio_scale_factor, 1024 / m_vio_scale_factor ) );

op_track.set_intrinsic( g_cam_K, g_cam_dist * 0, cv::Size( 1280 / m_vio_scale_factor, 1024 / m_vio_scale_factor ) );

@aditdoshi333
Author

@ziv-lin I changed the height and width as you suggested, but the issue is still the same: the image color bleeds near the edges. And one more thing: I am still sometimes getting the following error.
Screenshot 2022-01-03 at 6 29 19 PM

Whenever I get this error I need to rerun the r3live node.

@ziv-lin
Member

ziv-lin commented Jan 4, 2022

Can you share your data with me, including both your rosbag files and your configurations?

@aditdoshi333
Author

@ziv-lin
Member

ziv-lin commented Jan 4, 2022

I cannot open the folder; is there some problem?

@aditdoshi333
Author

@ziv-lin I am able to open it in incognito mode as well. Should I upload it somewhere other than Google Drive?

@ziv-lin
Member

ziv-lin commented Jan 4, 2022

I can download the config files but can't get the rosbag file. Can you put them together in the same directory?

@aditdoshi333
Author


Done. Please check and let me know.

@ziv-lin
Member

ziv-lin commented Jan 4, 2022


All right, I will try your bag tonight if possible.

@ziv-lin
Member

ziv-lin commented Jan 4, 2022

hi, ziv-lin,

I share a short rosbag for calibration, with high image resolution and a Livox AVIA point cloud.

To try r3live, which config file should I modify?

Link: https://pan.baidu.com/s/1y0xd2kICGSgKEhat-Bt3hQ Password: kroh

height: 2048, width: 3072

camera matrix:
1745.304795 0.000000 1519.690772
0.000000 1749.029333 1067.018842
0.000000 0.000000 1.000000

distortion: -0.102255 0.116367 -0.000031 0.003545 0.000000

extrinsic.txt:
-0.016054, -0.999561, -0.0248822, -0.0664396
-0.00713346, 0.0249993, -0.999662, 0.0192496
0.999846, -0.0158711, -0.00753167, 0.0634879
0, 0, 0, 1

[Screenshot: 2022-01-04 15-07-20]

At a glance, it seems that your result is correct?

@jxx315

jxx315 commented Jan 4, 2022


Sorry, I uploaded the rosbag and the camera parameters (intrinsic and extrinsic) again in one link.
Link: https://pan.baidu.com/s/1IMkYRubNFjIX-z5UOR5mhA Password: deja

I noticed that the resolution in your work is 1280 x 1024. However, my camera in this test is 2048 x 3072. Which config file should I modify in R3LIVE? Should I change all occurrences of 1280 x 1024 to 2048 x 3072?

@ziv-lin
Member

ziv-lin commented Jan 4, 2022

Yes, you should change all 1280 -> 2048 and 1024 -> 3072 (why is your height larger than your width? Is there anything wrong?).
The fix of this bug will be pushed to this repo within these two days.

@jxx315

jxx315 commented Jan 4, 2022


Sorry, my bad; the resolution is actually 2048 x 3072 (height x width).
Thanks very much for your reply. Looking forward to the next update!

@tlin442

tlin442 commented Jan 5, 2022

@ziv-lin I think the issue is that this:

m_vio_scale_factor = 1280 * m_image_downsample_ratio / temp_img.cols; // 320 * 24
// load_vio_parameters();
set_initial_camera_parameter( g_lio_state, m_camera_intrinsic.data(), m_camera_dist_coeffs.data(), m_camera_ext_R.data(),
m_camera_ext_t.data(), m_vio_scale_factor );

Malforms the camera calibration matrix here when the camera width isn't 1280 pixels.
g_cam_K << intrinsic_data[ 0 ] / cam_k_scale, intrinsic_data[ 1 ], intrinsic_data[ 2 ] / cam_k_scale, intrinsic_data[ 3 ],
intrinsic_data[ 4 ] / cam_k_scale, intrinsic_data[ 5 ] / cam_k_scale, intrinsic_data[ 6 ], intrinsic_data[ 7 ], intrinsic_data[ 8 ];

I had the same issue undistorting my input with an 848 x 800 camera.
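For reference, scaling a pinhole intrinsic matrix consistently means dividing fx, fy, cx, and cy by the same resize factor, and that factor must come from the actual image width rather than a hard-coded 1280. A small sketch (editor's illustration, not the r3live function):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Editor's sketch (not the r3live function): K = [fx 0 cx; 0 fy cy; 0 0 1],
// stored row-major. Downscaling the image by `scale` divides fx, fy, cx, cy
// by the same factor; K[8] stays 1.
using Mat3 = std::array<double, 9>;

Mat3 scale_intrinsic(const Mat3 &K, double scale)
{
    Mat3 Ks = K;
    Ks[0] /= scale; // fx
    Ks[2] /= scale; // cx
    Ks[4] /= scale; // fy
    Ks[5] /= scale; // cy
    return Ks;
}
```

A point that projects to pixel (u, v) in the full-resolution image projects to (u/scale, v/scale) after scaling; if the scale is derived from the wrong reference width, every projection is shifted, which shows up as exactly this kind of color bleeding.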

@ziv-lin
Member

ziv-lin commented Jan 5, 2022

I am now fixing and testing this problem; please wait patiently.

@ziv-lin
Member

ziv-lin commented Jan 5, 2022

@aditdoshi333 I ran your data carefully; it seems that your calibration is not accurate enough, which causes the inaccurate color rendering.
WeChat Screenshot_20220105203301
WeChat Screenshot_20220105203450

In addition, the "****** Remove_outlier_using_ransac_pnp error ******" message is caused by delayed incoming image messages (see the following figure). You can ignore this message, since it has little effect once R3LIVE has received enough image frames.
WeChat Screenshot_20220105204201

If you play the bag with "-s 5", this warning disappears:

rosbag play YOUR.bag -s 5

@ziv-lin
Member

ziv-lin commented Jan 5, 2022

Hi~ @aditdoshi333 @jxx315, I have just pushed a commit that fixes this bug; R3LIVE now allows you to set the image resolution correctly. Can you try this version? Please let me know if you find any bugs or problems.

@ziv-lin
Member

ziv-lin commented Jan 5, 2022

You can now set your own image resolution by modifying these two configs:

image_width: 1280

image_height: 1024

@aditdoshi333
Author

@ziv-lin Thanks for the quick fix. I will run the new commit and update you within the next two days, and I will also re-check the calibration on my end. Thanks for all the efforts.

One more question: is the calibration between the camera and the IMU used only for texture mapping, or is it used in mapping as well? I mean, is the precision of the calibration proportional to the accuracy of the mapping?

Thank you

@Camilochiang

Camilochiang commented Jan 6, 2022

Hey @aditdoshi333. In my experience, a more precise calibration will give you a better mapping. See here for example (both pictures are top views):

Not so good calibration:
Screenshot from 2022-01-06 07-06-04
A better calibration:
Screenshot from 2022-01-06 06-56-54

You can clearly see that the better the calibration, the better the alignment of the incoming scans (white points). This is of course because the intrinsic and extrinsic parameters play an important role in determining where the points come from. I still have to do a better calibration in any case; you can see that even my "better calibration" is not perfect.

@jxx315

jxx315 commented Jan 9, 2022

great!

@Tomato1107


Hi @aditdoshi333! May I ask how you add the camera RGB information to the LiDAR points in fast_lio? I am running into some problems.

@aditdoshi333
Author

Hello @ziv-lin,
Sorry for the delay. I have yet to calibrate the camera and test it. I am sick, so I will try it out as soon as possible.

@aditdoshi333
Author


Hello,

You can change the point cloud struct and add RGB fields. I color the incoming point cloud before mapping, using the Livox calibration code to color every frame.
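That per-frame coloring idea can be sketched roughly as follows (editor's illustration with hypothetical names; the real fast_lio / Livox calibration code differs): transform the LiDAR point into the camera frame with the extrinsic, project it with the pinhole intrinsics, and copy the pixel's RGB if it falls inside the image.

```cpp
#include <array>
#include <cassert>
#include <cstdint>
#include <vector>

// Editor's illustration: color one LiDAR point from an RGB image.
struct ColoredPoint
{
    double x, y, z;
    uint8_t r, g, b;
    bool colored;
};

// (R, t): LiDAR-to-camera extrinsic; (fx, fy, cx, cy): pinhole intrinsics;
// image: row-major RGB buffer of size width * height.
ColoredPoint color_point(const std::array<double, 9> &R, const std::array<double, 3> &t,
                         double fx, double fy, double cx, double cy,
                         const std::vector<std::array<uint8_t, 3>> &image,
                         int width, int height, double x, double y, double z)
{
    ColoredPoint p{x, y, z, 0, 0, 0, false};
    // LiDAR frame -> camera frame.
    double pc[3];
    for (int i = 0; i < 3; ++i)
        pc[i] = R[i * 3 + 0] * x + R[i * 3 + 1] * y + R[i * 3 + 2] * z + t[i];
    if (pc[2] <= 0.0)
        return p; // behind the camera
    // Pinhole projection (lens distortion omitted for brevity).
    int u = static_cast<int>(fx * pc[0] / pc[2] + cx);
    int v = static_cast<int>(fy * pc[1] / pc[2] + cy);
    if (u < 0 || u >= width || v < 0 || v >= height)
        return p; // projects outside the image
    const std::array<uint8_t, 3> &px = image[v * width + u];
    p.r = px[0];
    p.g = px[1];
    p.b = px[2];
    p.colored = true;
    return p;
}
```

Points that fall behind the camera or outside the image are simply left uncolored, which is why field-of-view overlap between the LiDAR and camera matters for dense coloring.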

@Tomato1107


Thanks very much! I will try!

@ziv-lin
Member

ziv-lin commented Jan 11, 2022

@aditdoshi333 I am so sorry to hear that. Please take good care of yourself. Hoping everything goes well.

@aditdoshi333
Author

Hello @ziv-lin,

I can confirm that the color offset is because of poor calibration. Sorry for the trouble.
Thank you for all your efforts.

@PengKunPROO


Hi, I did the same work as you. Did you do time synchronization between the LiDAR and the camera? I mean hardware sync, because my result is not good: I don't get accurate colors on the point cloud.
