Expected filter performance #13

Open · NikolausDemmel opened this issue Aug 22, 2013 · 4 comments
@NikolausDemmel

Hi,

I am planning to use this sensor fusion package together with our own monocular visual SLAM framework. The existing pose sensor seems perfectly suited for that.

I started out by following the getting-started tutorial on the ROS wiki. Observing the filter performance for the provided dataset, I noticed the following:

  1. The scale factor is all over the place, rapidly changing between roughly 1 and 2. Is this expected? The tutorial mentions that the scale factor is brittle and prone to initialization errors. Does this also mean that the initialization described for the dataset (start the rosbag about 25 seconds in, then initialize with the dynamic reconfigure GUI) is not appropriate?

  2. The accelerometer biases are high and change quickly. I get values not much below 1.0. I would expect much lower values that change very slowly. Is this expected?

It seems to me that the filter is not behaving properly. One of our goals is to estimate scale accurately, for which these example results are obviously discouraging.

Maybe I have screwed something up while following the tutorial, so I first wanted to ask what the expected behaviour for the tutorial is. Plotting the output position against the ground-truth position from Vicon, they are not too far off; however, I expected a much closer match. Moreover, with an estimated scale of around 2 where the correct scale would be 1, I'm surprised the position estimates work at all.

Independently of the example data, is it reasonable for me to expect the filter to give me an accurate scale estimate with prediction from a 200 Hz Xsens MTi IMU and updates from 5-20 Hz vSLAM?

On a different note, you mention that the filter needs enough excitation in order to converge. I suppose purely planar motion does not cover this. What do you think about motion that is mostly planar but has a little excitation in the other dimensions as well? Might there even be a way to fix the unobservable states (I suppose, e.g., part of the inter-sensor calibration) with artificial measurements as described in the tutorial, if one assumes that the motion is perfectly planar?

Many questions... Maybe someone can share some insight on parts of them. Thank you already for this interesting framework!

Nikolaus

P.S.: From the other issues I assume that this is an acceptable platform to ask this kind of question?

@stephanweiss
Contributor

Hi Nikolaus,

The provided dataset should work fine; the performance you see is not normal at all. Please check all the parameters - most likely there is an issue in the fixed parameters (e.g. measurement_world_sensor needs to be false, etc.).

A 200 Hz IMU should do it. A 5 Hz vSLAM update is very slow for accurate scale estimates, but I would not say it is impossible. In our experience, anything above 20 Hz is perfectly fine.

Concerning the required motion: the system needs acceleration to be able to estimate the scale factor, so constant planar motion would not fulfill the requirements. Whenever the system experiences acceleration, that sequence is used to estimate the scale. Angular rotations mostly serve to determine the inter-sensor calibration states. If you fix those states, you are fine without angular motion.
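For example, to keep the calibration states frozen you can fix them in the dynamic reconfigure GUI and give them zero process noise in the parameter file. A sketch along the lines of the tutorial config (quoted later in this thread); the exact names of the fix flags may differ from the fixed_scale/fixed_bias ones shown there:

noise_qci: 0.0   # no random walk on the camera-IMU rotation
noise_pic: 0.0   # no random walk on the camera-IMU translation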

I hope this helps

Best
Stephan


From: Nikolaus Demmel [notifications@github.com]
Sent: Thursday, August 22, 2013 4:20 AM
To: ethz-asl/ethzasl_sensor_fusion
Subject: [ethzasl_sensor_fusion] Expected filter performance (#13)

Hi,

I am planning to use this sensor fusion package together with our own monocular visual SLAM framework. The existing pose sensor seems perfectly suited for that.

I started out by following the getting-started tutorial on the ROS wiki. Observing the filter performance for provided dataset I noticed the following:

  1. The scale factor is all over the place rapidly changing between roughly 1 and 2. Is this expected? The tutorial mentiones that the scale factor is brittle and prone to initialization errors. Does this also mean that the initialization described for the dataset (start rosbag about 25 seconds in, then initialize with dynamic reconfigure gui) is not appropriate?

  2. The accelerometer biases are high and changing quickly. I get values not much below 1.0. I would expect much lower values that are changing very slowly. Is this expected?

It seems to me that the filter is not behaving properly. One of our goals is to estimate scale accurately, for which these example results are obviously discouraging.

Maybe I have screwed something up while following the tutorial, so I first wanted to ask the expected behaviour for the tutorial. Plotting the output position and ground truth position from vicon, they are not too far off, however I expected a much closer match. Moreover, with an estimated scale of around 2, where the correct scale would be 1, I'm surprised the position estimates work at all.

Independently from the exemplar data, is it reasonable for me to expect the filter to give me an accurate scale estimate with prediction on a 200Hz Xsens MTi IMU and updates from 5 - 20 Hz vSLAM?

On a different note, you mention that the filter needs enough excitement in order to converge. I suppose purely planar motion does not cover this. What do you think about motion that is mostly planar, but has a little excitement in the other dimensions as well? Might there even be a way to fix the unobservable state (I suppose e.g. part of the inter-sensor calibration) with artificial measurements as described in the tutorial, if one assumes that the motion is perfectly planar?

Many questions... Maybe someone can share his insight on parts of it. Thank you already for this interesting framework!

Nikolaus

P.S.: Form the other issues I assume that this platform is ok to ask this kind of question?


Reply to this email directly or view it on GitHubhttps://github.com//issues/13.

@NikolausDemmel
Author

Thanks for taking the time to respond, Stephan! It is very valuable.

measurement_world_sensor is true, as I believe it should be. (I guess it is not even used; at least I don't use it in my current viconpos_sensor.) The other parameters also seem fine.

I experimented a bit more. It turns out that with the second tutorial (the viconpose_sensor) I get better behaviour: the accelerometer bias is below 0.1 for two axes, with one hovering around 0.3 (that still does not seem right, but it is stable). The scale converges to 1 in most cases. With the example dataset it seems you can get close to the real scale (<1%) in reasonable time. However, as you also mentioned, the initial value of the scale is crucial; otherwise convergence might be very slow or not occur at all (within the 80 s of the dataset, anyway).

One of our use cases is estimating the scale of an offline-generated map, so I guess iterated application on a fixed-length sequence could still do the job even if the scale is completely unknown initially.

The position-only sensor still diverges heavily in most tries, as described in the initial post. That is fine for me at the moment, as I'm more interested in the pose sensor anyway.

One odd thing I noticed is that the IMU in the dataset is only 50 Hz, whereas the Vicon is 100 Hz. For the position sensor I still do the suggested resampling, taking only every 5th measurement update (i.e. an effective 20 Hz position update). Could the "slow" IMU be an issue here? The pose sensor seems to work fine with a 50 Hz IMU and 100 Hz pose (although you suggest somewhere that the IMU rate should always be higher than the update rate).
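For reference, by "resampling" I just mean dropping all but every 5th Vicon message before it reaches the filter. A minimal sketch of what I do (the node, topic names and message type here are placeholders, not the tutorial's exact ones):

import rospy
from geometry_msgs.msg import TransformStamped

count = 0

def vicon_cb(msg):
    global count
    count += 1
    if count % 5 != 0:
        return  # drop 4 of every 5 messages: 100 Hz becomes an effective 20 Hz
    pub.publish(msg)

rospy.init_node('vicon_downsampler')
pub = rospy.Publisher('/vicon_downsampled', TransformStamped, queue_size=1)
rospy.Subscriber('/vicon/transform', TransformStamped, vicon_cb)
rospy.spin()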

As for the planar motion case: we will of course have acceleration in 2 DoF, plus yaw angular motion - e.g. a robot moving around an office environment, starting, stopping and changing direction. Since the basic pose sensor is working now, I will simply try this and come back to you if there are issues. Thanks for the hint that angular excitation relates mostly to the inter-sensor calibration; I will experiment with fixing that.

One question about fixing parameters: in the core library you fix parts of the state by simply setting the correction to 0. In theory, is that completely equivalent to removing these parameters from the state?

One question about filter divergence/convergence: what is the typical behaviour if some state is unobservable but not fixed via artificial measurements? If I understood you correctly, this should not affect the rest of the system? Is the issue then purely one of avoiding an indefinitely growing covariance?

Best,
Nikolaus

@stephanweiss
Contributor

Nikolaus,

An acc bias < 0.3 is fine and perfectly possible. Concerning the odd behavior of the position sensor, I would need some more info (e.g. init values, fixed param values, etc.) if this is still a concern of yours.

If the scale is completely unknown initially, then I recommend using a closed-form approach for initialization. A good paper to start with is "Vision-Aided Inertial Navigation: Closed-Form Determination of Absolute Scale, Speed and Attitude" by Martinelli.
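The gist of the closed-form idea, in my loose notation rather than the paper's: over a window $[t_0, t]$, the metric displacement obtained by integrating the IMU must match the scaled vision displacement,

$$\lambda \big( p_v(t) - p_v(t_0) \big) = v(t_0)(t - t_0) + \tfrac{1}{2} g (t - t_0)^2 + \iint_{t_0}^{t} R(\tau)\, a_m(\tau)\, \mathrm{d}\tau^2 ,$$

where $\lambda$ is the scale, $a_m$ the measured specific force and $R$ the IMU attitude. This is linear in the unknowns $\lambda$, $v(t_0)$ and $g$, so a few such windows give a least-squares solution without running the filter.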

Concerning the sensor rates: we should upload a better dataset at some point, but it should work and is tested for the example in the tutorial. The slow IMU (and Vicon) certainly do not improve things, but it should be ok.

Fixing states with a 0-correction: it is theoretically not equivalent to omitting the states, because of cross-couplings in the P matrix. Using 0-corrections may make your covariance over- or under-confident. The same goes for artificial measurements. If you do not apply these artificial measurements, the covariance should simply keep growing and thus remain the correct uncertainty of the state.
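To sketch why, in standard error-state EKF notation (not our exact variable names): each update computes

$$\delta \hat{x} = K r , \qquad P^{+} = (I - K H)\, P ,$$

with residual $r$ and Kalman gain $K$. Zeroing the entries of $\delta \hat{x}$ that belong to a fixed state leaves that state's value untouched, but the covariance update still modifies its rows and columns of $P$ through the cross-covariances, so $P$ no longer matches the correction that was actually applied. Truly removing the state would delete those rows and columns of $P$ altogether.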

Hope this helps
Best
Stephan


@NikolausDemmel
Author

Hi Stephan,

thanks for your reply and hints!

The position sensor is not a direct concern of mine, but I would like to know whether I have screwed something up or misunderstood something to cause this failure. I thought I followed the tutorial pretty much exactly; I have included the config at the end.

I start the filter node, then start the bag file 25 seconds into the sequence as suggested and let it run for around 1 second. I start the reconfigure GUI and hit init filter (no fixed params). I then un-pause the bag file and observe the divergence, in particular of the scale.

Things are similar if I re-initialize with the same settings while the bag file is running; at most initialization times it continues to diverge.

Let me know if I can provide more info to shed light on this. I have already compared my viconpos_* files to the position_* files, but only noticed the expected differences concerning message types etc. (unless I missed something).

Best,
Nikolaus

scale_init: 1
fixed_scale: 0
fixed_bias: 0
noise_acc: 0.083
noise_accbias: 0.0083
noise_gyr: 0.0013
noise_gyrbias: 0.00013
noise_scale: 0.0
noise_qwv: 0.0
noise_qci: 0.0
noise_pic: 0.0
delay: 0.00
meas_noise1: 0.005
meas_noise2: 0.0

data_playback: true

# initialization of camera w.r.t. IMU
init/q_ci/w: 1.0
init/q_ci/x: 0.0    
init/q_ci/y: 0.0
init/q_ci/z: 0.0

init/p_ci/x: 0.0    
init/p_ci/y: 0.0
init/p_ci/z: 0.0

# initialization of world w.r.t. vision
init/q_wv/w: 1.0
init/q_wv/x: 0.0
init/q_wv/y: 0.0
init/q_wv/z: 0.0

use_fixed_covariance: true
measurement_world_sensor: true  # selects if sensor measures its position w.r.t. world (true, e.g. Vicon) or the position of the world coordinate system w.r.t. the sensor (false, e.g. ethzasl_ptam)
