
some problems running D^2-SLAM in Airsim #11

Open

Wenminggong opened this issue Jul 12, 2023 · 5 comments

@Wenminggong
Thank you so much for your excellent work!

I tried to run D^2-SLAM in the Microsoft AirSim simulator, but I met some problems:

  1. The estimated path from the ROS topic /d2vins/path is serrated and, even worse, it drifts away. The result is as follows:
[screenshot: path_serrated]
  2. I observe that some points from the ROS topic /d2vins/point_cloud are always displayed around the origin of the view, but I do not know why; I expected historical points to disappear as time goes on. The result is as follows:
[screenshot: point_show]

I would like to seek help from the authors.

Thanks very much.

@Wenminggong (Author)

The config file used is as follows; it is copied from tum_single.yaml and modified for AirSim's camera and IMU (it uses the PINHOLE camera model and modified body_T_cam and IMU measurement parameters):
```yaml
%YAML:1.0

#inputs
imu: 1
is_fisheye: 0
imu_topic: "/drone_0/imu"
image0_topic: "/drone_0/left/image_raw"
image1_topic: "/drone_0/right/image_raw"
is_compressed_images: 0

imu_freq: 200
image_freq: 15
frame_step: 2

#Camera configuration
camera_configuration: 0 #STEREO_PINHOLE = 0, STEREO_FISHEYE = 1, PINHOLE_DEPTH = 2, FOURCORNER_FISHEYE = 3
num_of_cam: 2
cam0_calib: "left_pinhole.yaml"
cam1_calib: "right_pinhole.yaml"
image_width: 640
image_height: 480
body_T_cam0: !!opencv-matrix
  rows: 4
  cols: 4
  dt: d
  data: [0, -0.3420201433256687, 0.9396926207859084, 0.35,
         1, 0, 0, -0.05,
         0, 0.9396926207859084, 0.3420201433256687, -0.1,
         0., 0., 0., 1 ]

body_T_cam1: !!opencv-matrix
  rows: 4
  cols: 4
  dt: d
  data: [0, -0.3420201433256687, 0.9396926207859084, 0.35,
         1, 0, 0, 0.05,
         0, 0.9396926207859084, 0.3420201433256687, -0.1,
         0., 0., 0., 1 ]

#estimation
estimate_extrinsic: 0 # 0: have accurate extrinsic parameters; we trust the following imu^R_cam, imu^T_cam and do not change them.
estimate_td: 0 # online estimation of the time offset between camera and IMU
td: 0.001 # initial value of time offset. unit: s. read image clock + td = real image clock (IMU clock)
estimation_mode: 0 #0:single; 1:solve all; 2: distributed; 3:server
double_counting_common_feature: 1
min_inv_dep: 0.01 #10 meter away.

#optimization parameters
max_solver_time: 0.08 # max solver iteration time (ms), to guarantee real time
max_num_iterations: 8 # max solver iterations, to guarantee real time
consensus_max_steps: 4
timout_wait_sync: 100
rho_landmark: 1.0
rho_frame_T: 100
rho_frame_theta: 100
relaxation_alpha: 0.
consensus_sync_for_averaging: 0
consensus_sync_to_start: 0 # whether to sync at the start of solving
consensus_trigger_time_err_us: 100

#depth fusing
depth_far_thres: 3.0 # The max depth in frontend
depth_near_thres: 0.3
fuse_dep: 0 # whether to fuse depth measurements
max_depth_to_fuse: 5.0
min_depth_to_fuse: 0.3

#Multi-drone
track_remote_netvlad_thres: 0.5

#Initialization
init_method: 0 #0 IMU, 1 PnP
depth_estimate_baseline: 0.05

#sliding window
max_sld_win_size: 11
landmark_estimate_tracks: 4 # when using depth or stereo, 3 is OK.
min_solve_frames: 6

#solver
multiple_thread: 1

#outlier rejection
thres_outlier : 10.0
perform_outlier_rejection_num: 10000
tri_max_err: 0.2

#Marginalization
enable_marginalization: 1
margin_sparse_solver: 1
always_fixed_first_pose: 0
remove_base_when_margin_remote: 2
# When set to 2, all relevant measurements of the frames being removed are used to compute the prior,
# and the base frame (where != removeID) is not removed. This may double count that base frame's measurement, but it is stable.
# When set to 1, the base frame's measurements that are not based on the current frame are removed.
# When set to 0, measurements whose landmark base frame has not been removed are ignored.
margin_enable_fej: 0

#feature tracker parameters
max_cnt: 200 # max feature number in feature tracking
max_superpoint_cnt: 150 # max feature extraction by superpoint
max_solve_cnt: 150
show_track_id: 0
check_essential: 1
enable_lk_optical_flow: 1 # enable LK optical flow feature tracking to enhance ego-motion estimation.
remote_min_match_num: 40
ransacReprojThreshold: 10.0
parallex_thres: 0.022
knn_match_ratio: 0.8 # This applies to SuperPoint feature tracking & loop closure detection.

#CNN
cnn_use_onnx: 1
enable_pca_superpoint: 1
superpoint_pca_dims: 64

#Measurement parameters. The more accurate the parameters you provide, the better the performance
acc_n: 3.3285e-2 # accelerometer measurement noise standard deviation.
gyr_n: 1.2341e-3 # gyroscope measurement noise standard deviation.
acc_w: 8.826e-7 # accelerometer bias random walk noise standard deviation.
gyr_w: 7.0523e-8 # gyroscope bias random walk noise standard deviation.
g_norm: 9.81007 # gravity magnitude 9.805 9.81007

#Loop Closure Detection
loop_detection_netvlad_thres: 0.8
enable_homography_test: 0
accept_loop_max_yaw: 10
accept_loop_max_pos: 1.0
loop_inlier_feature_num: 50
gravity_check_thres: 0.03
pgo_solver_time: 1.0
solver_timer_freq: 1.0
enable_pcm: 1
pcm_thres: 2.8

#PGO
pgo_solver_time: 0.5

#outputs
output_path: "/root/swarm_ws/output/"
debug_print_sldwin: 0
debug_print_states: 0
enable_perf_output: 0
debug_write_margin_matrix: 0
```
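For anyone adapting these extrinsics to a different AirSim camera mount, a quick sanity check of the rotation block can catch typos early. Below is a minimal sketch (assuming OpenCV is available; it is not part of D2SLAM) that verifies the 3x3 rotation part of body_T_cam0 is orthonormal with determinant +1. The 0.3420/0.9397 entries above match sin(20°)/cos(20°), which suggests a camera tilted by about 20° relative to the body frame.

```cpp
// Minimal sanity check for the body_T_cam extrinsics above (an illustrative
// sketch, not part of D2SLAM): the 3x3 rotation block must satisfy
// R * R^T = I and det(R) = +1 to be a valid rotation.
#include <opencv2/core.hpp>
#include <iostream>

int main() {
    // Rotation block of body_T_cam0 as given in the YAML above.
    cv::Matx33d R(0.0, -0.3420201433256687, 0.9396926207859084,
                  1.0,  0.0,                0.0,
                  0.0,  0.9396926207859084, 0.3420201433256687);

    cv::Matx33d RRt = R * R.t();  // should be the identity matrix
    double err = cv::norm(RRt - cv::Matx33d::eye(), cv::NORM_INF);
    std::cout << "||R*R^T - I||_inf = " << err << std::endl;           // ~0
    std::cout << "det(R) = " << cv::determinant(R) << std::endl;       // ~+1
    return 0;
}
```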

@Wenminggong (Author)

And when reading the source code, I found that in d2frontend_params.cpp the parameter focal_length is always set to 460.0. Why don't we need to change the focal_length value when the camera settings change?

@xuhao1 (Collaborator) commented Jul 17, 2023

@Wenminggong 460.0 here is only a scale factor (not a real focal length). It only influences the keyframe decision.
As far as I know, AirSim may have time sync issues, so patience is needed when debugging D2SLAM or any other SLAM system on AirSim. Please refer to my previous post about this: microsoft/AirSim#2369
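To make the role of this scale factor concrete, here is an illustrative sketch. It assumes a VINS-style keyframe rule where the mean pixel parallax between frames is divided by the nominal focal length (460.0) and compared against a threshold such as parallex_thres from the config above; this is an assumption about the rule, not the actual D2SLAM code. Under that assumption, parallex_thres = 0.022 corresponds to roughly 10 pixels of mean parallax.

```cpp
// Illustrative sketch only (not the actual D2SLAM implementation): how a fixed
// nominal focal length acts as a scale factor in a VINS-style keyframe test.
#include <vector>
#include <cmath>

struct PixelDisp { double du, dv; };  // per-feature pixel displacement between frames

bool isKeyframe(const std::vector<PixelDisp>& disps,
                double nominal_focal = 460.0,     // scale factor, not a real focal length
                double parallax_thres = 0.022) {  // parallex_thres from the config above
    if (disps.empty()) return true;               // nothing tracked: force a new keyframe
    double sum = 0.0;
    for (const auto& d : disps) sum += std::sqrt(d.du * d.du + d.dv * d.dv);
    double mean_pixel_parallax = sum / static_cast<double>(disps.size());
    // Dividing by the nominal focal length converts pixels to a rough normalized
    // parallax, so resolution only enters through this single scale factor.
    return mean_pixel_parallax / nominal_focal > parallax_thres;
}
```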

@Wenminggong (Author)

@xuhao1 Thanks.
You are right. I have modified Airsim_ros_wrapper following your advice in microsoft/AirSim#2369. Even so, VINS-Fusion performs better than D^2SLAM's single estimation_mode in the same test environment; its result is shown below:
[screenshot: vins]

Currently I have no idea what causes this phenomenon, and I hope to get some constructive suggestions from you for debugging the system.
Thank you very much.

@Wenminggong (Author)

I found that with parallex_thres = 0.022, d2vins can hardly ever select a new keyframe, and as a result the estimated states are not accurate. After resetting the parallex_thres value, I can get a smooth VIO path. Besides, the images from AirSim are BGR8; if you use STEREO_PINHOLE and need LK features, you should convert the BGR8 images to grayscale so that cv::cuda::createGoodFeaturesToTrackDetector can work on them. A minimal sketch of that conversion is shown below.
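For reference, this is a minimal sketch of the conversion (assuming OpenCV is built with the cudaimgproc module; the wrapper function and parameter values are illustrative, not taken from the D2SLAM frontend):

```cpp
// Minimal sketch (assumes OpenCV built with CUDA / cudaimgproc): convert a
// BGR8 frame from AirSim to grayscale before handing it to the CUDA
// good-features-to-track detector, which expects a CV_8UC1 input.
#include <opencv2/core.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/cudaimgproc.hpp>

void detectCorners(const cv::Mat& bgr8_frame) {
    cv::cuda::GpuMat d_bgr(bgr8_frame);  // upload the BGR8 image to the GPU
    cv::cuda::GpuMat d_gray;
    cv::cuda::cvtColor(d_bgr, d_gray, cv::COLOR_BGR2GRAY);

    auto detector = cv::cuda::createGoodFeaturesToTrackDetector(
        d_gray.type(), /*maxCorners=*/200, /*qualityLevel=*/0.01, /*minDistance=*/10);
    cv::cuda::GpuMat d_corners;
    detector->detect(d_gray, d_corners);  // corners returned as a 1xN CV_32FC2 GpuMat
}
```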
