
The problem of the 3 sigma bounds #3

Closed
Xiarain opened this issue May 3, 2019 · 2 comments
Xiarain commented May 3, 2019

Hi @huaizheng
Thanks for the awesome work; it performs well on the EuRoC dataset. However, when I plot the 3-sigma bounds for the R-VIO results, there seem to be some problems on the EuRoC V102 sequence. Here are my results.

[image: image_V102]

[image: image_V102_2]

[image: 3sigma]

Figure 3 shows that the consistency of the system is problematic: the pose covariance entries (P_kk(3,3), P_kk(4,4), P_kk(5,5)) are too small.
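For reference, the error and bounds in these plots were computed roughly as in the sketch below (just an illustration; `est`, `gt`, and `P_diag` stand for the aligned R-VIO position estimates, the EuRoC ground truth, and the exported diagonal covariance entries, not actual R-VIO output names):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_error_vs_3sigma(est, gt, P_diag):
    """est, gt: (N, 3) aligned position estimates / ground truth;
    P_diag: (N, 3) diagonal covariance of the position states,
    e.g. P_kk(3,3), P_kk(4,4), P_kk(5,5) at each step (names illustrative)."""
    err = gt - est                        # per-axis state error
    bound = 3.0 * np.sqrt(P_diag)         # 3-sigma bound from the reported covariance
    fig, axes = plt.subplots(3, 1, sharex=True)
    for i, ax in enumerate(axes):
        ax.plot(err[:, i], label="error")
        ax.plot(bound[:, i], "r--", label="+/- 3 sigma")
        ax.plot(-bound[:, i], "r--")
        ax.legend(loc="upper right")
    plt.show()
```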

Thank you in advance.


huaizheng commented May 3, 2019

Thanks @Xiarain, this is a good experiment, and I can clearly see the accuracy of R-VIO from the colors in the 1st plot. However, it is actually not surprising to get a result such as the 3rd plot on a real dataset. The reasons are as follows:
i) R-VIO implements an EKF-based visual-inertial state estimator, which means it is formulated under the assumption of zero-mean white Gaussian noise, i.e., an approximation of the real-world noise. Typically, to test the consistency of a given estimator, we need to run multi-trial Monte-Carlo simulations on synthetic data generated with zero-mean white Gaussian noise, so that we can obtain statistical results for computing some metric (for example, the NEES) as described in the paper; a rough sketch of such a NEES check is given after this list. The 3-sigma bounds and error plot obtained that way will differ from what you show here using real data, because the assumption of zero-mean white Gaussian noise may not always hold in the real world.
ii) In order to analyze consistency, we need the state error with respect to the true state, x_err = x_true - x_vio. However, in the real world it is very hard to obtain x_true, which is why simulation is essential for validating the consistency of an estimator. Although the EuRoC dataset provides a "ground truth" for each sequence, it is obtained from a maximum likelihood (ML) estimator that fuses the Vicon information, as described in their original paper, and is definitely not the true state. So the error shown in your 3rd plot is actually x_err = x_euroc - x_vio, and theoretically this cannot reflect the consistency of R-VIO.
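As a rough illustration of the NEES check mentioned in i), something along these lines could be used. This is only a sketch, not the exact evaluation code used for the paper, and the function and array names are made up here:

```python
import numpy as np
from scipy.stats import chi2

def average_nees(errors, covariances):
    """Average NEES over M Monte-Carlo runs.
    errors:      (M, N, d) state errors x_true - x_est per run and time step
    covariances: (M, N, d, d) estimator covariances P_k per run and time step"""
    M, N, d = errors.shape
    nees = np.zeros((M, N))
    for m in range(M):
        for k in range(N):
            e = errors[m, k]
            nees[m, k] = e @ np.linalg.solve(covariances[m, k], e)  # e^T P^-1 e
    avg = nees.mean(axis=0)                     # average NEES at each time step
    # Two-sided 95% chi-square bounds for the average of M chi2(d) samples;
    # a consistent estimator keeps the average NEES near d and inside the bounds.
    lo = chi2.ppf(0.025, df=M * d) / M
    hi = chi2.ppf(0.975, df=M * d) / M
    return avg, lo, hi
```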

Thus, the 3rd plot does not show the real consistency performance of R-VIO; theoretically, the best way to test consistency is to run Monte-Carlo simulations with sensor data corrupted by synthetic zero-mean white Gaussian noise (a toy example of such corruption is sketched below). Hope this helps.
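Just to illustrate what "noise-corrupted synthetic data" means, here is a toy sketch of corrupting ideal gyroscope readings with white noise and a bias random walk (the noise densities below are placeholder values, not calibrated parameters):

```python
import numpy as np

def corrupt_gyro(omega_true, dt, sigma_g=1.7e-4, sigma_bg=2.0e-5, seed=0):
    """Corrupt true angular rates with white noise and a slowly drifting bias.
    omega_true: (N, 3) true body angular velocity; dt: sample period [s].
    sigma_g / sigma_bg: noise / bias random-walk densities (illustrative values)."""
    rng = np.random.default_rng(seed)
    N = omega_true.shape[0]
    bias = np.zeros(3)
    meas = np.empty_like(omega_true)
    for k in range(N):
        bias += sigma_bg * np.sqrt(dt) * rng.standard_normal(3)   # bias random walk
        noise = sigma_g / np.sqrt(dt) * rng.standard_normal(3)    # white Gaussian noise
        meas[k] = omega_true[k] + bias + noise
    return meas
```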


Xiarain commented May 6, 2019

@huaizheng Anyway, thank you for your detailed answer.
