GN vs Ceres Optimization #12
Hello Louis, thanks for the feedback! For now GN does indeed perform some approximation on the Jacobian, and it also defines a rigid outlier rejection (based on a distance threshold). This does not really affect precision metrics, but it seriously impacts robustness; on driving datasets it has no effect. There is, however, currently a significant runtime gain with the GN implementation with respect to the single-threaded Ceres implementation (with multiple threads the difference is less significant). Note: for branch
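To make the "GN with rigid, distance-based outlier rejection" idea concrete, here is a minimal sketch of one such Gauss–Newton step. This is illustrative plain Python, not CT-ICP's actual code: it uses 2D point-to-point residuals for brevity (the real system uses 3D point-to-plane), and all names (`gn_step`, `max_dist`, etc.) are made up for the example. Correspondences whose residual exceeds the threshold are simply skipped, which is the "rigid" rejection as opposed to a smooth robust loss.

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

def gn_step(src, tgt, theta, tx, ty, max_dist):
    """One Gauss-Newton step on 2D point-to-point residuals, skipping
    correspondences whose residual exceeds max_dist (rigid outlier rejection)."""
    c, s = math.cos(theta), math.sin(theta)
    H = [[0.0] * 3 for _ in range(3)]  # approximate Hessian J^T J
    g = [0.0, 0.0, 0.0]                # gradient J^T r
    for (px, py), (qx, qy) in zip(src, tgt):
        rx = c * px - s * py + tx - qx  # residual of the warped source point
        ry = s * px + c * py + ty - qy
        if math.hypot(rx, ry) > max_dist:
            continue  # hard distance threshold, no smooth robust loss
        Jx = [-s * px - c * py, 1.0, 0.0]  # d r / d (theta, tx, ty)
        Jy = [c * px - s * py, 0.0, 1.0]
        for i in range(3):
            g[i] += Jx[i] * rx + Jy[i] * ry
            for j in range(3):
                H[i][j] += Jx[i] * Jx[j] + Jy[i] * Jy[j]
    d = solve3(H, [-gi for gi in g])
    return theta + d[0], tx + d[1], ty + d[2]
```

With exact correspondences a handful of such steps converges to the true pose; the hard threshold is what makes the scheme brittle under a poor initialization, since good matches far from their prediction get discarded too.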
Thanks a lot for the answer! What do you mean by robustness? Robustness to outliers, or to large rotation changes?
Both. Basically, ICP-based registrations (like the one in CT-ICP or LOAM) will fail in a number of cases. The robustness of the system (which I don't really know yet how to properly quantify) basically describes how poor an initialization you can tolerate while still obtaining a precise SLAM, and how you handle these complicated environments.
Okay, thanks a lot. This was very helpful!
Hello, thanks for this great work!
I saw that the default optimization (in default_config.yaml) uses Gauss–Newton, while the robust configs use Ceres. The Jacobians for the rotation look to me like an approximation. Is this true, and if so, do you think the approximation error is relevant?
Is Ceres mainly used for the robust loss functions, or also to get better Jacobians via autodiff?
Thanks and best regards,
Louis
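To make the question about the rotation Jacobian concrete, here is a small numeric check (plain Python, not from the repository) of the error incurred by the usual small-angle linearization R ≈ I + [θ]ₓ against the exact exponential map: the discrepancy shrinks quadratically with the angle, which is why the approximation tends to be harmless for small per-iteration updates.

```python
import math

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def skew(v):
    """Skew-symmetric matrix [v]_x such that [v]_x w = v x w."""
    x, y, z = v
    return [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]

def exact_rotation(v):
    """Exact exponential map exp([v]_x) via Rodrigues' formula."""
    a = math.sqrt(sum(c * c for c in v))
    I = [[float(i == j) for j in range(3)] for i in range(3)]
    if a < 1e-12:
        return I
    K = skew(v)
    K2 = matmul(K, K)
    s, t = math.sin(a) / a, (1.0 - math.cos(a)) / (a * a)
    return [[I[i][j] + s * K[i][j] + t * K2[i][j] for j in range(3)]
            for i in range(3)]

def linearized_rotation(v):
    """First-order approximation I + [v]_x used when linearizing rotations."""
    K = skew(v)
    return [[float(i == j) + K[i][j] for j in range(3)] for i in range(3)]

def max_error(v):
    """Largest entry-wise gap between exact and linearized rotation."""
    E, A = exact_rotation(v), linearized_rotation(v)
    return max(abs(E[i][j] - A[i][j]) for i in range(3) for j in range(3))
```

Doubling the rotation vector roughly quadruples `max_error`, i.e. the linearization error is O(‖θ‖²); for per-iteration updates of a few hundredths of a radian the error sits well below typical sensor noise.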