
Why does the trajectory still drift in the Z direction when returning to the origin, even with plane optimization enabled? #16

Closed
ChenHuang20 opened this issue Nov 2, 2022 · 13 comments

Comments

@ChenHuang20

[Screenshot from 2022-11-02 18-13-05]

@TouchDeeper
Owner

Can you confirm whether plane optimization is actually enabled?

@ChenHuang20
Author

I'm using the original parameters directly, as shown below. Where else do I need to enable it?

imu: 1
wheel: 1
only_initial_with_wheel: 0 # use the wheel only for initialization; do not add it to the factor graph
plane: 1
num_of_cam: 2

@ChenHuang20
Author

I've confirmed that plane optimization is enabled.

@TouchDeeper
Owner

You could try increasing the weight of the plane constraint.
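For reference, this config exposes the plane-constraint weight only indirectly, through the plane noise sigmas. Assuming the plane residual is weighted by the inverse of these values (the smaller suggested stereo values point that way), decreasing them strengthens the constraint. The halved values below are an illustrative starting point, not tuned settings:

#plane noise (smaller sigma = stronger plane constraint; illustrative values)
roll_n: 0.005    # default 0.01
pitch_n: 0.005   # default 0.01
zpw_n: 0.025     # default 0.05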

@ChenHuang20
Author

ChenHuang20 commented Nov 4, 2022 via email

@ChenHuang20
Author

No matter how I adjust the weight of the plane constraint, the trajectory never comes back to the horizontal plane.

@ChenHuang20
Author

I found that the trajectory comes out almost the same whether the plane constraint is on or off. It seems the plane constraint isn't of much use?

@TouchDeeper
Owner

The behavior you describe doesn't match my earlier experimental results. Which configuration file are you using?

@ChenHuang20
Author

ChenHuang20 commented Nov 5, 2022 via email

@ChenHuang20
Author

I'm using this realsense_stereo_imu_config_ridgeback.yaml:

%YAML:1.0

#common parameters
#support: 1 imu 1 cam; 1 imu 2 cam; 2 cam
imu: 1
wheel: 1
only_initial_with_wheel: 0 # use the wheel only for initialization; do not add it to the factor graph
plane: 1
num_of_cam: 2

imu_topic: "/imu/data"
wheel_topic: "/ridgeback_velocity_controller/odom"   #"/ridgeback_velocity_controller/odom", "/odometry/filtered"
#TODO check the distortion
image0_topic: "/camera/infra1/image_rect_raw"
image1_topic: "/camera/infra2/image_rect_raw"
output_path: "/home/td/slam/vins_fusion_ws/src/VINS-Fusion/output"

cam0_calib: "infra1.yaml"
cam1_calib: "infra2.yaml"
image_width: 640
image_height: 480
   

# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 1   # 0  Have accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam; don't change them.
                        # 1  Have an initial guess about the extrinsic parameters. We will optimize around your initial guess.
                        # 2  Don't know anything about the extrinsic parameters. You don't need to give R, T. We will try to calibrate them. Do some rotational movement at the beginning.
#If you choose 0 or 1, you should write down the following matrix.

extrinsic_type: 3 # 0 ALL
                  # 1 Only translation
                  # 2 Only Rotation
                  # 3 no z
                  # 4 no rotation and no z

body_T_cam0: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [ 0.0,   0.0,   1.0,  0.041,
          -1.0,   0.0,   0.0,  0.307,
           0.0,  -1.0,   0.0,  0.544,
           0,     0,     0,    1 ]

body_T_cam1: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [ 0.0,   0.0,   1.0,  0.041,
           -1.0,   0.0,   0.0, 0.258,
           0.0,  -1.0,   0.0,  0.544,
           0,     0,     0,    1 ]


# Extrinsic parameter between IMU and Wheel.
estimate_wheel_extrinsic: 0   # 0  Have accurate extrinsic parameters. We will trust the following imu^R_wheel, imu^T_wheel; don't change them.
                              # 1  Have an initial guess about the extrinsic parameters. We will optimize around your initial guess.
                              # 2  Don't know anything about the extrinsic parameters. You don't need to give R, T. We will try to calibrate them. Do some rotational movement at the beginning.
#If you choose 0 or 1, you should write down the following matrix.

extrinsic_type_wheel: 3 # 0 ALL
                        # 1 Only translation
                        # 2 Only Rotation
                        # 3 no z
                        # 4 no rotation and no z

#wheel to body
body_T_wheel: !!opencv-matrix
  rows: 4
  cols: 4
  dt: d
  data: [1, 0, 0, -0.208,
         0, 1, 0, 0.290,
         0, 0, 1, -0.168,
         0, 0, 0, 1]


#plane noise (suggested values: mono / stereo)
roll_n: 0.01    # mono: 0.01, stereo: 0.005
pitch_n: 0.01   # mono: 0.01, stereo: 0.005
zpw_n: 0.05     # mono: 0.05, stereo: 0.025


#Multiple thread support
multiple_thread: 1

#feature tracker parameters
max_cnt: 150            # max feature number in feature tracking
min_dist: 30            # min distance between two features 
freq: 10                # frequency (Hz) at which the tracking result is published. At least 10 Hz for good estimation. If set to 0, the frequency will be the same as the raw image
F_threshold: 1.0        # ransac threshold (pixel)
show_track: 1           # publish tracking image as topic
flow_back: 1            # perform forward and backward optical flow to improve feature tracking accuracy

#optimization parameters
max_solver_time: 0.04   # max solver iteration time (s), to guarantee real time
max_num_iterations: 8   # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)

#imu parameters: the more accurate the parameters you provide, the better the performance
acc_n: 0.1         # accelerometer measurement noise standard deviation.  #0.2   0.04
gyr_n: 0.05        # gyroscope measurement noise standard deviation.      #0.05  0.004
acc_w: 7.1765713730075628e-04   # accelerometer bias random walk noise standard deviation.  #0.002
gyr_w: 4.0e-05     # gyroscope bias random walk noise standard deviation.  #4.0e-5
g_norm: 9.805         # gravity magnitude

#wheel parameters (suggested values: mono / stereo)
wheel_gyro_noise_sigma: 0.004       # rad/s; mono: 0.004, stereo: 0.002
wheel_velocity_noise_sigma: 0.01    # m/s; mono: 0.01, stereo: 0.006

estimate_wheel_intrinsic: 0
# 0  Have accurate intrinsic parameters. We will trust the following sx, sy, sw; don't change them.
# 1  Have an initial guess about the intrinsic parameters. We will optimize around your initial guess.
# 2  TODO: Don't know anything about the intrinsic parameters. You don't need to give sx, sy, sw. We will try to calibrate them. Do some rotational movement at the beginning.
#If you choose 0 or 1, you should write down the following sx, sy, sw.
# wheel intrinsic
sx: 1.0
sy: 1.0
sw: 1.0


#unsynchronization parameters
estimate_td: 0                      # online estimate of the time offset between camera and IMU
td: 0.00                            # initial value of the time offset. unit: s. read image clock + td = real image clock (IMU clock)
#unsynchronization parameters
estimate_td_wheel: 0                # online estimate of the time offset between camera and wheel
td_wheel: 0.0                       # initial value of the time offset. unit: s. read image clock + td_wheel = real image clock (wheel clock)
#loop closure parameters
load_previous_pose_graph: 0        # load and reuse previous pose graph; load from 'pose_graph_save_path'
pose_graph_save_path: "/home/td/slam/vins_fusion_ws/src/VINS-Fusion/output/pose_graph" # save and load path
save_image: 0                   # save images in the pose graph for visualization purposes; you can disable this function by setting it to 0

@TouchDeeper
Owner

@ChenHuang20 You could try turning the plane constraint off and comparing the results.

@ChenHuang20
Author

There's basically no difference between on and off.

@TouchDeeper
Owner

@ChenHuang20 You could also compare the plane constraint on/off in the monocular case; one way to quantify the difference is sketched below.
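For a quantitative comparison, one could measure the terminal Z drift of each run directly. A minimal sketch, assuming the trajectories are exported in TUM format (one "timestamp tx ty tz qx qy qz qw" line per pose); the file names are hypothetical:

import numpy as np

# Hypothetical exports of the same sequence, run with the plane constraint on and off.
for name in ["vio_plane_on.txt", "vio_plane_off.txt"]:
    traj = np.loadtxt(name)              # rows: timestamp tx ty tz qx qy qz qw
    z_drift = traj[-1, 3] - traj[0, 3]   # Z offset between the last and first poses
    print(f"{name}: Z drift at return to start = {z_drift:.3f} m")

If the plane constraint is active, the run with it enabled should show a noticeably smaller Z drift than the run without it.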
