Significant drift when using R3Live with Velodyne and a Camera #157

Closed
farhad-dalirani opened this issue Feb 8, 2023 · 8 comments
farhad-dalirani commented Feb 8, 2023

Hi everyone,

I have an instrumented car with sensors mounted on the roof. The sensors' relative positions and orientations differ from the setup the R3LIVE authors designed for. I added a handler for a 16-beam Velodyne LiDAR, based on the FAST-LIO Velodyne handler. I mounted an IMU under the LiDAR and aligned the IMU and LiDAR axes as well as I could. I use ROS-provided time to synchronize the IMU, LiDAR, and camera. The system works perfectly with LiDAR-IMU only (right image). However, when I add a camera (RealSense), significant drift occurs (left image). With LiDAR alone the total drift is under 1 m; with the camera added, the final drift exceeds 150 m. Below I include the rqt_bag output, the launch file, and the config file.

Would you please guide me?
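Since the setup relies on ROS receive time rather than hardware sync, one quick check is to quantify how far apart the header stamps of neighboring topics actually are. A minimal sketch (plain Python; `nearest_offsets` is a hypothetical helper, not part of R3LIVE, and the stamps here are synthetic):

```python
# Hedged sketch: quantify how well two topics' header stamps line up.
# Real stamps could be dumped with e.g.
# `rostopic echo -b your.bag -p <topic>/header/stamp`.

def nearest_offsets(ref_stamps, other_stamps):
    """For each reference stamp, the gap (s) to the closest stamp on the other topic."""
    return [min(abs(t - o) for o in other_stamps) for t in ref_stamps]

# Synthetic example: 10 Hz LiDAR vs 200 Hz IMU with a constant 30 ms lag.
lidar_stamps = [0.1 * i for i in range(10)]
imu_stamps = [0.005 * i + 0.030 for i in range(200)]

offsets = nearest_offsets(lidar_stamps, imu_stamps)
worst = max(offsets)
# A worst-case gap approaching the LiDAR period means the filter is fusing
# stale data; a constant lag can be compensated via r3live_lio/lidar_time_delay.
```

If the gaps are large or jittery, the "ROS time" synchronization is likely too loose for tightly coupled VIO.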

Screenshot from 2023-02-08 14-15-26

Screenshot from 2023-02-08 14-26-43

This is the launch file that I use for R3Live with LiDAR and Camera:

<launch>
    <!-- Subscribed topics -->
    <param name="/LiDAR_pointcloud_topic" type="string" value= "/laser_cloud_flat" />
    <param name="/IMU_topic" type="string" value= "/xsens/imu/data" />
    <param name="/Image_topic" type="string" value= "/camera/color/image_raw" />
    <param name="map_output_dir" type="string" value="$(env HOME)/r3live_output" />
    <rosparam command="load" file="$(find r3live)/../config/r3live_config_velodyne_16_with_camera.yaml" />
    
    <!-- set LiDAR type as Velodyne-16 spinning LiDAR -->
    <param name="/Lidar_front_end/lidar_type" type="int" value= "2" /> 
    <param name="/Lidar_front_end/point_step" type="int" value="1" />
    <param name="r3live_lio/lio_update_point_step" type="int" value="6" />
    <param name="Lidar_front_end/N_SCANS" type="int" value="16" />
    <param name="/Lidar_front_end/feature_enabled" type="bool" value="true" />
        
    <node pkg="r3live" type="r3live_LiDAR_front_end" name="r3live_LiDAR_front_end"  output="screen" required="true"/>
    <node pkg="r3live" type="r3live_mapping" name="r3live_mapping" output="screen" required="true" />
    
    <arg name="rviz" default="1" />
    <group if="$(arg rviz)">
        <node name="rvizvisualisation" pkg="rviz" type="rviz" output="log" args="-d $(find r3live)/../config/rviz/r3live_rviz_config_ouster.rviz" />
    </group>
 </launch>

and this is the config file:


Lidar_front_end:
   lidar_type: 1   # 1 for Livox-avia, 3 for Ouster-OS1-64
   N_SCANS: 6
   using_raw_point: 1
   point_step: 1
   
r3live_common:
   if_dump_log: 0                   # If recording ESIKF update log. [default = 0]
   record_offline_map: 1            # If recording offline map. [default = 1]
   pub_pt_minimum_views: 3          # Publish points that have been rendered at least "pub_pt_minimum_views" times. [default = 3]
   minimum_pts_size: 0.01           # The minimum distance between any two points in the global map (in meters). [default = 0.01] 
   image_downsample_ratio: 1        # The downsample ratio of the input image. [default = 1]
   estimate_i2c_extrinsic: 1        # Whether to estimate the camera-IMU extrinsic online. [default = 1] 
   estimate_intrinsic: 1            # Whether to estimate the camera intrinsics online. [default = 1] 
   maximum_vio_tracked_pts: 600     # The maximum points for tracking. [default = 600]
   append_global_map_point_step: 4  # Point step when appending points to the global map. [default = 4]

r3live_vio:
   image_width: 640
   image_height: 480
   camera_intrinsic:
      [616.44287109375, 0.0, 310.534423828125, 
      0.0, 616.0972900390625, 224.801513671875,
      0.0, 0.0, 1.0] 
   camera_dist_coeffs: [0.0, 0.0, 0.0, 0.0, 0.0]  #k1, k2, p1, p2, k3
   # Fine extrinsic value, from camera-LiDAR calibration.
   camera_ext_R:
         [1.855658548272057684e-04, -9.899595481172648315e-01, 2.986651313928978535e-02,
         2.081077221601334432e-02, -4.606756803228120173e-02, -9.806236191917293565e-01,
         9.997851490640013994e-01, 1.289102727932428397e-03, 1.907638741177997491e-02]
   # camera_ext_t: [0.050166, 0.0474116, -0.0312415] 
   camera_ext_t: [8.440789485113883472e-02, -1.508930501043006700e-01,-1.517150432428751117e-01] 
   # Rough extrinsic value, from the CAD model; not accurate enough, but can be calibrated online on our datasets.
   # camera_ext_R:
   #    [0, 0, 1,
   #     -1, 0, 0,
   #     0, -1, 0]
   # camera_ext_t: [0,0,0] 
   
r3live_lio:        
   lio_update_point_step: 4   # Point step used for LIO update.  
   max_iteration: 2           # Maximum number of LIO ESIKF iterations.
   lidar_time_delay: 0        # User-provided time offset between LiDAR and IMU. 
   filter_size_corner: 0.30   
   filter_size_surf: 0.30
   filter_size_surf_z: 0.30
   filter_size_map: 0.30
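Since a mis-exported `camera_ext_R` silently corrupts the VIO update, a quick sanity check is to verify that the matrix is a proper rotation (orthonormal rows, determinant +1). A minimal sketch, assuming NumPy, with the matrix copied from the config above:

```python
import numpy as np

# camera_ext_R copied verbatim from the config above.
R = np.array([[1.855658548272057684e-04, -9.899595481172648315e-01, 2.986651313928978535e-02],
              [2.081077221601334432e-02, -4.606756803228120173e-02, -9.806236191917293565e-01],
              [9.997851490640013994e-01, 1.289102727932428397e-03, 1.907638741177997491e-02]])

row_norms = np.linalg.norm(R, axis=1)              # each row of a rotation has norm 1
orthonormal = np.allclose(R @ R.T, np.eye(3), atol=1e-3)
det = float(np.linalg.det(R))                      # must be +1 for a proper rotation
```

Running this on the matrix above gives row norms around 0.99 and 0.98 rather than 1, i.e. it is not a valid rotation as written; re-exporting the calibration (or re-orthonormalizing, e.g. via SVD) would be a sensible first step.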


farhad-dalirani commented Feb 15, 2023

When I change the number of scans (N_SCANS) from 16 to 6, many of the drift problems are solved. However, it then only maps the floor, since I am using just 6 of the Velodyne-16's beams. Any idea what could be the cause of the problem?

When I use all 16 beams, the drift occurs while the car is turning.

Screenshot from 2023-02-15 10-47-30

@farhad-dalirani

When I use all 16 beams, the drift occurs while the car is turning. When the car drives in a straight line, no drift occurs even with all 16 Velodyne beams.
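Drift that appears only during turns with a spinning LiDAR is a classic symptom of missing per-point timestamps (no motion undistortion within a sweep). A hedged sketch of how a FAST-LIO-style Velodyne handler can recover a point's relative time from its azimuth when the driver publishes no time field (the 10 Hz rate and the clockwise-sweep convention are assumptions):

```python
import math

SCAN_RATE_HZ = 10.0                 # VLP-16 default rotation rate (assumption)
DEG_PER_SEC = 360.0 * SCAN_RATE_HZ  # angular speed of the sweep

def point_offset_time(x, y, first_yaw_deg):
    """Relative time (s) of a point within one sweep, inferred from azimuth.
    Assumes the sweep rotates clockwise when viewed from above."""
    yaw_deg = math.degrees(math.atan2(y, x))
    swept = (first_yaw_deg - yaw_deg) % 360.0
    return swept / DEG_PER_SEC

# A point 90 deg after the sweep start is a quarter-period into the scan:
t = point_offset_time(1.0, 0.0, 90.0)   # ~0.025 s at 10 Hz
```

If the handler leaves these offsets at zero, every point in a sweep is treated as simultaneous, which is harmless on a straight line but produces exactly the turn-only drift described above.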

Screenshot from 2023-02-15 13-55-00

stale bot commented Mar 4, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@farhad-dalirani

The main problems were the low frame rate, the low image resolution, and poor calibration between the camera and the LiDAR. A highly accurate camera-LiDAR extrinsic calibration is required.
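A camera-LiDAR extrinsic can be verified visually by projecting LiDAR points into the image and checking that they land on the matching structure. A minimal NumPy sketch with the values copied from the config above; the `p_cam = R @ p_lidar + t` convention is an assumption, so flip to `R.T @ (p - t)` if the overlay is obviously wrong:

```python
import numpy as np

# Intrinsics and extrinsics copied from the config in this thread.
K = np.array([[616.44287109375, 0.0, 310.534423828125],
              [0.0, 616.0972900390625, 224.801513671875],
              [0.0, 0.0, 1.0]])
R = np.array([[1.855658548272057684e-04, -9.899595481172648315e-01, 2.986651313928978535e-02],
              [2.081077221601334432e-02, -4.606756803228120173e-02, -9.806236191917293565e-01],
              [9.997851490640013994e-01, 1.289102727932428397e-03, 1.907638741177997491e-02]])
t = np.array([8.440789485113883472e-02, -1.508930501043006700e-01, -1.517150432428751117e-01])

def project_to_image(points_lidar):
    """Project Nx3 LiDAR-frame points to pixel coordinates (u, v),
    keeping only points in front of the camera."""
    p_cam = points_lidar @ R.T + t       # LiDAR frame -> camera frame (assumed convention)
    p_cam = p_cam[p_cam[:, 2] > 0.1]     # discard points behind / too close
    uvw = p_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

uv = project_to_image(np.array([[10.0, 0.0, 0.0]]))  # a point 10 m ahead of the LiDAR
```

Overlaying such projections on the RealSense image (e.g. with `cv2.circle`) for a few dozen range-spread points makes a bad extrinsic obvious immediately.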

@fanshixiong

@farhad-dalirani How did you solve the drift problem? I have a drift problem with a Livox Mid-70 LiDAR + MYNT IMU + MYNT camera.
I explained it in detail in #173; it would be great if you could take a look.

@jingyilon

> (quotes the original post and config above)
Hello, I'm just getting to know R3LIVE. Could you please open-source your Velodyne-based version? Thank you very much. 2727375498@qq.com

@jingyilon

> (quotes the original post and config above)
Hello, I'm just getting to know R3LIVE. Could you please open-source your Velodyne-based version? Thank you very much. 2427375498@qq.com

@maffan-96

> (quotes the original post and config above)
Hello,
I hope you are doing well. I am wondering whether your work is open source, or whether you could share your R3LIVE setup with the Velodyne; I would be really grateful.
Thank you
