Question about parameters and sensor displacement #3
Hi, thank you for using my code.
Answer 2.
Answer 3. We calculate the transformation matrix with the LiDAR-camera calibration method. To calibrate the sensors, both must have overlapping fields of view.
Many thanks for the detailed and friendly answer! It really helps me understand your contribution.
1) Parameters: I also use a D435 (RGB only) and a VLP-16. It seems I only need to adopt all of your configuration, including 'y-interpolation' and 'max_ang_FoV, min_ang_FoV, x_resolution, ang_Y_resolution, minlen, maxlen', except for the transformation matrix. Am I right?
2) Transformation: What was the initial rotation (x, y, z) when using the LiDAR-camera calibration? I use an initial rotation of [1.57, -1.57, 0] radians for the LiDAR with respect to the camera, since my sensors only look forward (my configuration has no additional tilting like your camera). The initial rotation of your configuration could be [1.57, -1.57, -something], since your camera looks slightly downward. A small sketch of what I mean is below.
3) y-interpolation: Impressive approach! It could be very helpful for a sparse laser like the VLP-16 when the parameter is properly set. What kind of algorithm do you use for the interpolation? Is there a paper or baseline? I guess this parameter could be beneficial or harmful depending on the running environment of a mobile robot.
Thanks a lot for your help.
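To make question 2) concrete, here is a minimal sketch (not from this repo) of how an initial rotation like [1.57, -1.57, 0] rad can be turned into a rotation matrix with Eigen. The fixed-axis x-then-y-then-z rotation order is my assumption; please check it against your calibration tool's convention.

```cpp
// Sketch: Euler angles [rx, ry, rz] -> rotation matrix.
// ASSUMPTION: fixed-axis rotations applied about x, then y, then z.
#include <Eigen/Geometry>
#include <iostream>

int main() {
  const double rx = 1.57, ry = -1.57, rz = 0.0;  // hypothetical initial rotation
  Eigen::Matrix3d R =
      (Eigen::AngleAxisd(rz, Eigen::Vector3d::UnitZ()) *
       Eigen::AngleAxisd(ry, Eigen::Vector3d::UnitY()) *
       Eigen::AngleAxisd(rx, Eigen::Vector3d::UnitX())).toRotationMatrix();
  // Under this convention, LiDAR axes (x fwd, y left, z up) map approximately
  // onto camera axes (z fwd, x right, y down) for a forward-looking pair.
  std::cout << R << std::endl;
  return 0;
}
```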
You need to calibrate your camera to get the intrinsic parameters. In camera_matrix, write the intrinsic parameters of your camera. If you don't know these parameters, you can obtain them with the ROS camera_calibration tool.
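For reference, camera_matrix encodes a standard pinhole model. A minimal sketch of the projection it performs (the fx/fy/cx/cy values below are placeholders, not a real calibration):

```cpp
// Sketch: pinhole projection of a 3D point in the camera frame to a pixel.
// camera_matrix K = [fx 0 cx; 0 fy cy; 0 0 1]
#include <iostream>

int main() {
  const double fx = 920.0, fy = 920.0, cx = 640.0, cy = 360.0;  // placeholders
  const double X = 1.0, Y = 0.2, Z = 4.0;  // point in camera coords (Z forward)
  const double u = fx * X / Z + cx;        // pixel column
  const double v = fy * Y / Z + cy;        // pixel row
  std::cout << "pixel: (" << u << ", " << v << ")\n";
  return 0;
}
```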
Thanks again for your detailed comments.
1) Transformation: I understood what you did. However, is there any specific reason you set the initial rotation to [-1.57, 1.57, 0]? I guess the convention follows the initial rotation [1.57, -1.57, 0] from the LiDAR axes to the camera axes, as described in Lidar-Camera. That repo sets initial_rot_x (and y, z) to [1.57, -1.57, 0]. The rotation maps the original LiDAR axes (x, y, z) to the camera axes (z, x, y). Did I misunderstand?
2) y-interpolation: Can I turn off this function (the interpolation of the LiDAR)? I set the value to '1.0' and it seems to work, but I don't know for sure whether that properly turns off the function.
Thanks a lot for your help.
Long time no see, bro. I have three questions to complete the calibration.
Below is the problem I ran into. (BTW, I also use a VLP16 and a D435i, and my sensor displacement is similar to yours, without the camera tilt.) I am currently getting a wrong result from the calibration, so I will re-calculate the transformation matrix. I also suspect the camera's intrinsic matrix; that's why I asked the second question. Many thanks for this work. It helps me a lot.
The intrinsic matrix is on line 1 of the cfg_params.yaml file. This matrix was obtained with the ROS camera calibration tool (video tutorial link). My Realsense camera resolution is 1280x720.
I suggest you set the max_ang_FOV and min_ang_FOV limits to 2.7 and 0.5, respectively. The program removes all points outside that angular range to optimize the code. In addition, the program removes points that do not project onto the 1280x720 image. If a very limited FOV range is set, information from the point cloud may be lost.
Based on the image in the link, it could be an error in the intrinsic or extrinsic calibration parameters. I suggest you calibrate your camera with the ROS camera calibration tool. If you still have this problem, you could shift your calibration matrix a few millimeters along the axes that don't match.
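To illustrate the filtering described above, here is a minimal sketch of the assumed logic (not the repository's exact code; the atan2 angle convention is my assumption):

```cpp
// Sketch: discard points outside the angular FOV or the range limits
// before projecting onto the image.
#include <cmath>
#include <vector>

struct Point { double x, y, z; };

std::vector<Point> filterFov(const std::vector<Point>& cloud,
                             double min_ang, double max_ang,  // e.g. 0.5, 2.7 rad
                             double minlen, double maxlen) {
  std::vector<Point> kept;
  for (const Point& p : cloud) {
    const double range = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    // ASSUMED convention: atan2(x, y) is ~pi/2 straight ahead for a LiDAR
    // with x forward / y left, which matches limits like 0.5 and 2.7 rad.
    const double ang = std::atan2(p.x, p.y);
    if (ang >= min_ang && ang <= max_ang && range >= minlen && range <= maxlen)
      kept.push_back(p);
  }
  return kept;
}
```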
Many thanks for the help.
The coordinates in the cfg file are the displacement distances along the axes [x, y, z], starting from the camera axes already oriented to the LiDAR axes, as shown in the image. I have uploaded to the README a preprint of an article where we apply the code of this repository to the depth estimation of objects (Preprint). The article has a section about the LiDAR and camera fusion.
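Putting the rotation and the [x, y, z] displacement together, a minimal sketch of the full extrinsic transform (the rotation order and the displacement values are illustrative assumptions, not the repository's calibration):

```cpp
// Sketch: combine axis-aligning rotation and displacement into a 4x4
// homogeneous transform, then map a LiDAR point into camera coordinates.
#include <Eigen/Geometry>
#include <iostream>

int main() {
  // Rotation as in the earlier sketch (fixed-axis x-then-y order assumed).
  Eigen::Matrix3d R =
      (Eigen::AngleAxisd(-1.57, Eigen::Vector3d::UnitY()) *
       Eigen::AngleAxisd(1.57, Eigen::Vector3d::UnitX())).toRotationMatrix();
  Eigen::Vector3d t(0.0, -0.1, 0.05);  // hypothetical [x, y, z] offset in meters

  Eigen::Matrix4d T = Eigen::Matrix4d::Identity();
  T.topLeftCorner<3, 3>() = R;
  T.topRightCorner<3, 1>() = t;

  Eigen::Vector4d p_lidar(5.0, 0.0, 0.0, 1.0);  // point 5 m ahead of the LiDAR
  Eigen::Vector4d p_cam = T * p_lidar;          // same point in camera coords
  std::cout << p_cam.head<3>().transpose() << std::endl;
  return 0;
}
```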
Thank you for your work. Very interesting! I have read the article and reviewed the code, and I couldn't find where this is implemented in the code. I also wanted to ask: based on the quote from the article, does that mean the LiDAR's point cloud doesn't rely on the image when it builds the interpolated (denser) point cloud? As I understand it, the camera cannot see further than 3 m. I just want to understand how this fusion works in practice. Thank you!
Hi gxnse, the code in this repository is used to fuse the interpolated point cloud of a LiDAR sensor with the RGB channel of a camera. In this case, the VLP16 LiDAR is used together with the Realsense D435 camera, which is an RGBD camera. As seen in the article, the LiDAR-camera fusion is used to estimate the camera-to-object distance: when the object is within the range of 0.3-3 meters, the depth data from the RGBD camera is used, and beyond 3 meters the LiDAR data is used.
To answer your second question: we do not use the image data to interpolate the point cloud. In fact, you can interpolate and obtain a LiDAR with more channels than the original (I am preparing a repository to do that). The interpolation is done by converting the point cloud to a range image and applying a bilinear interpolation with the Armadillo library. I will soon have a repository ready showing the point cloud interpolation; I still have to improve the filtering of the interpolation noise trails.
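As a rough illustration of that range-image interpolation with Armadillo (the image sizes and the 4x upsampling factor below are illustrative, not the repository's actual parameters):

```cpp
// Sketch: densify a LiDAR range image with 2D linear interpolation.
#include <armadillo>

int main() {
  const arma::uword rows = 16, cols = 360;  // e.g. VLP-16: 16 rings x 1-deg bins
  arma::mat range_img(rows, cols, arma::fill::randu);  // stand-in for real ranges

  // Original and upsampled sample coordinates (X -> columns, Y -> rows).
  arma::vec X  = arma::linspace(0, cols - 1, cols);
  arma::vec Y  = arma::linspace(0, rows - 1, rows);
  arma::vec XI = arma::linspace(0, cols - 1, cols);      // keep azimuth resolution
  arma::vec YI = arma::linspace(0, rows - 1, rows * 4);  // 4x more "virtual" rings

  arma::mat dense;
  arma::interp2(X, Y, range_img, XI, YI, dense, "linear");
  // 'dense' is now a 64 x 360 range image that can be re-projected to 3D points.
  return 0;
}
```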
Thank you! I will be using your interpolation approach in my Master's degree project on curb detection for an autonomous vehicle, and I will cite you. :)
Hi, you can now use the point cloud interpolation node for the Velodyne VLP-16 LiDAR.
I would like to ask about the laser intensity: the interpolation code does not seem to interpolate it. Is it possible to do this? How did you do it in the second picture, where the laser intensity appears to be displayed after interpolation?
Hello, the code does not interpolate the intensity channel. The image shows the point cloud colored by the position of each point along the z-axis.
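If you wanted to interpolate intensity yourself, one possible approach (a sketch only, not something the code currently does) is to build a second image holding each point's intensity and interpolate it on the same grid as the range image:

```cpp
// Sketch: interpolate an intensity image alongside the range image so each
// densified point carries both a range and an intensity value.
#include <armadillo>

int main() {
  const arma::uword rows = 16, cols = 360;
  arma::mat range_img(rows, cols, arma::fill::randu);      // stand-in ranges
  arma::mat intensity_img(rows, cols, arma::fill::randu);  // stand-in intensities

  arma::vec X  = arma::linspace(0, cols - 1, cols);
  arma::vec Y  = arma::linspace(0, rows - 1, rows);
  arma::vec YI = arma::linspace(0, rows - 1, rows * 4);

  arma::mat dense_range, dense_intensity;
  arma::interp2(X, Y, range_img, X, YI, dense_range, "linear");
  arma::interp2(X, Y, intensity_img, X, YI, dense_intensity, "linear");
  // The densified cloud could then be colored by intensity instead of z.
  return 0;
}
```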
First of all, thanks for the great work.
I ran into three questions while analyzing your code.
Please help me to figure these out.
1. What is the model of your camera?
2. What is the meaning of some of the parameters in 'cfg_params.yaml'?
3. Can you describe the sensor displacement of your LiDAR and camera when you did the calibration?
Thanks in advance.