residual format #20

Closed
narutojxl opened this issue Jul 26, 2020 · 11 comments

Comments

@narutojxl commented Jul 26, 2020

Hi Dr. @koide3,
I want to figure out the jacobians of the residual in the code.

  • I see the residual is RCR_inv * d at line 209 of fast_gicp_st_impl.hpp, which is 3-dimensional, not d.transpose() * RCR_inv * d, which is the scalar that matches the paper's cost function. Why is the residual defined this way?
  • How do you calculate the inverse matrix in $r$? Does the inverse exist?
    [screenshot 2020-07-26 22:08:59]

Thanks for your help!
Jiao

@koide3 (Member) commented Jul 27, 2020

Hi @narutojxl,

  • In this work, we used 3D (XYZ) residuals that result in the same objective function as the scalar one.
  • In the paper, the $C^*$ are 3x3 covariance matrices, so their inverses should exist. In the code, we used expanded 4x4 matrices to take advantage of SSE optimization, and we filled the bottom-right corner with 1 before taking the inverse to obtain a reasonable result (see the sketch after this list).
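
A minimal sketch of the 4x4 trick described above, assuming Eigen (illustrative code, not the repository's implementation): padding the 3x3 covariance into a 4x4 matrix with a 1 in the bottom-right corner keeps it invertible, and the top-left 3x3 block of the inverse is exactly the inverse of the original covariance.

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
  // A 3x3 covariance matrix (symmetric positive definite).
  Eigen::Matrix3d C;
  C << 2.0, 0.1, 0.0,
       0.1, 1.5, 0.2,
       0.0, 0.2, 1.0;

  // Embed C into a SIMD-friendly 4x4 matrix; the 1 in the
  // bottom-right corner keeps the 4x4 matrix invertible.
  Eigen::Matrix4d C4 = Eigen::Matrix4d::Zero();
  C4.block<3, 3>(0, 0) = C;
  C4(3, 3) = 1.0;

  // Because C4 is block-diagonal, the top-left 3x3 block of its
  // inverse equals C.inverse(); the padding does not disturb it.
  Eigen::Matrix4d C4_inv = C4.inverse();
  std::cout << (C4_inv.block<3, 3>(0, 0) - C.inverse()).norm()
            << std::endl;  // prints ~0
  return 0;
}
```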

@narutojxl (Author)
Thanks for your help @koide3!
[screenshot 2020-07-27 15:07:07]

Could you please give some advice on how you calculate the jacobian? Thanks very much for your help!

@koide3 (Member) commented Jul 27, 2020

Calculating the jacobian of $(B + RAR^T)^{-1}$ is very complicated. I did it (you can find the code at the links below), but it was very slow and impractical.

In practice, we approximate $RAR^T$ as a constant matrix during each optimization iteration. Then $dr/dR$ is simply given by $(B + RAR^T)^{-1} \cdot dRa/dR$. This approximation doesn't affect the accuracy while making the derivatives simple and fast; a sketch of the linearization follows the links below.

https://github.com/SMRT-AIST/fast_gicp/blob/87cd6288d14bd155e8b7a2144f68bb5246aecc52/include/fast_gicp/gicp/gicp_loss.hpp
https://github.com/SMRT-AIST/fast_gicp/blob/87cd6288d14bd155e8b7a2144f68bb5246aecc52/include/fast_gicp/gicp/gicp_derivatives.hpp
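
A minimal sketch of this fixed-covariance linearization for one correspondence, assuming Eigen; the function and variable names (`linearize_point`, `mean_A`, etc.) are illustrative rather than the repository's exact API, and the signs follow the convention $d = b - (Ra + t)$:

```cpp
#include <Eigen/Dense>

// Skew-symmetric matrix such that skew(a) * b == a.cross(b).
Eigen::Matrix3d skew(const Eigen::Vector3d& a) {
  Eigen::Matrix3d S;
  S <<    0.0, -a.z(),  a.y(),
        a.z(),    0.0, -a.x(),
       -a.y(),  a.x(),    0.0;
  return S;
}

// Residual and 3x6 jacobian for one correspondence. The fused
// precision matrix M = (B + R A R^T)^(-1) is treated as constant;
// it is recomputed only at each new linearization point.
void linearize_point(const Eigen::Matrix3d& M,       // fixed per iteration
                     const Eigen::Vector3d& mean_A,  // source point a
                     const Eigen::Vector3d& mean_B,  // target point b
                     const Eigen::Isometry3d& T,     // current estimate
                     Eigen::Vector3d& r,
                     Eigen::Matrix<double, 3, 6>& J) {
  const Eigen::Vector3d transed_A = T * mean_A;  // R a + t
  const Eigen::Vector3d d = mean_B - transed_A;

  r = M * d;  // 3D residual, with M treated as constant
  // Rotation part: skew of the already-transformed point; see the
  // discussion of this trick later in the thread.
  J.block<3, 3>(0, 0) = M * skew(transed_A);
  // Translation part: derivative of b - (Ra + t) w.r.t. t is -I.
  J.block<3, 3>(0, 3) = -M;
}
```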

@narutojxl (Author)
Thanks very much @koide3 :)
BTW, shouldn't $dRa/dR$ be skew(Ra) according to the left perturbation formula?
I see in the code it is skew(Ra + t):
`Js[count].block<3, 3>(0, 0) = RCR_inv.block<3, 3>(0, 0) * skew(transed_mean_A.head<3>());`

@koide3 (Member) commented Jul 27, 2020

It's a trick for calculating the jacobian of the expmap. While the jacobian of the expmap around r=0 is simply given by the skew-symmetric function, the jacobian at an arbitrary point is not easy to obtain. To avoid the complicated calculation, we compute the jacobian at r=0 with the transformed point (Ra + t) instead of computing the jacobian at r=R with the original point a.
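
A short derivation of this trick under the left-perturbation convention (my notation; the convention is an assumption): perturb the pose as $T' = \exp(\hat{\delta})\,T$ and expand around $\delta = 0$:

$$\exp(\hat{\delta})(Ra + t) \approx (I + \hat{\delta})(Ra + t) = (Ra + t) + \delta \times (Ra + t) = (Ra + t) - (Ra + t) \times \delta,$$

so the jacobian of the transformed point with respect to $\delta$ at $\delta = 0$ is $-[Ra + t]_{\times}$. With the residual $d = b - (Ra + t)$ the sign flips, which yields the `skew(transed_mean_A)` term in the code quoted above.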

@narutojxl (Author)
I was referring to Section 4.3.4 "Perturbation Model" of this book:
[screenshot of the referenced book section]

@plusk01 commented Jan 20, 2022

@narutojxl, see Section 3.3.5 (the subsubsection after the one you referenced) of the same book, or eq. (94) of Eade:

[screenshot of the referenced equation]

@Gatsby23 commented May 6, 2023

> Hi @narutojxl,
>   • In this work, we used 3D (XYZ) residuals that result in the same objective function as the scalar one.
>   • In the paper, the $C^*$ are 3x3 covariance matrices, so their inverses should exist. In the code, we used expanded 4x4 matrices to take advantage of SSE optimization, and we filled the bottom-right corner with 1 before taking the inverse to obtain a reasonable result.

A question about the covariance:
In the linearization process, why does directly using M^{-1} * d_i as the residual work? From my point of view, I think we should apply an LDLT (or Cholesky) decomposition to the M^{-1} matrix and then build the update function from that, as sketched below.
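
For reference, the whitening this question alludes to would look roughly like the following (a sketch assuming Eigen; `whitened_residual` is an illustrative name, not the repository's API). With the precision matrix $M^{-1} = L L^{\top}$ (Cholesky), the whitened residual $r = L^{\top} d$ satisfies $r^{\top} r = d^{\top} M^{-1} d$, the paper's scalar cost:

```cpp
#include <Eigen/Dense>

// Whitened residual: given the precision matrix P = M^{-1} with
// Cholesky factorization P = L * L^T, the vector r = L^T * d
// satisfies r.squaredNorm() == d^T * P * d (the scalar GICP cost).
Eigen::Vector3d whitened_residual(const Eigen::Matrix3d& P,
                                  const Eigen::Vector3d& d) {
  Eigen::LLT<Eigen::Matrix3d> llt(P);
  Eigen::Matrix3d L = llt.matrixL();
  return L.transpose() * d;
}
```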

@YZH-bot commented Jan 15, 2024

Hi Dr. @koide3,
I have a question about the objective function: why can the log term shown in the red box be ignored, given that it also includes the optimized variable $\mathbf{T}$? Could you please give me some advice if you have time? Thanks very much!
[screenshot: objective function with the log term boxed in red]

@koide3 (Member) commented Jan 15, 2024

As explained at #20 (comment), we fix the fused covariance matrix at the linearization point. This approximation makes the log term constant, and thus negligible, during optimization.
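
Spelled out (a paraphrase of this answer, reusing the $B + RAR^T$ notation from earlier in the thread): the per-point cost including the log term is

$$d_i^{\top} \left(B_i + R A_i R^{\top}\right)^{-1} d_i + \log\det\left(B_i + R A_i R^{\top}\right),$$

and once $B_i + R A_i R^{\top}$ is frozen at the linearization point, the $\log\det$ term no longer depends on the pose $\mathbf{T}$ and can be dropped from the objective.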

@YZH-bot commented Jan 15, 2024

Got it (#20 (comment)), thanks for your reply!
