Question regarding Jacobian of inverse action #266
Hello @mnissov (Morten). Regarding our Jacobian: it is correct according to the definition of the right Jacobian. The proof follows from the chain rule and the formulas in the paper, and it is supported by extensive unit testing in manif (which tests ALL Jacobians for exactness using small-perturbation approximations similar to those you use above).
This Jacobian needs to be interpreted as follows: it relates a local (right) perturbation $\tau$ of $\mathcal{X}$ to the resulting first-order change of the output. The second Jacobian that you present is probably different, although you do not provide details on how it was obtained. If this is the case, then the two Jacobians are different. Regarding your test, you should test the first one using right-plus and regular minus,

$$
e_1 = \lVert \left( \mathcal{X} \oplus \tau \right)^{-1}\cdot v - \left( \mathcal{X}^{-1}\cdot v + J_1 \tau \right) \rVert_2 \tag{1}
$$

and the second one using left-plus and regular minus,

$$
e_2 = \lVert \left( \tau \oplus \mathcal{X} \right)^{-1}\cdot v - \left( \mathcal{X}^{-1}\cdot v + J_2 \tau \right) \rVert_2 \tag{2}
$$
Since you are only evaluating with (1), you should find that our Jacobian performs well and the other one does not. However, if you use random $\mathcal{X}$, it may happen that on some occasions $\mathcal{X}$ is close to the identity, in which case both Jacobians are practically the same. It may then happen that the second Jacobian performs better than the first, just by some random effect. The first Jacobian should, however, perform well in all cases using test (1). Does this make sense?
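The two tests above can be sketched numerically in plain Python. This is a hypothetical check, not code from the thread: it assumes SciPy's `Rotation` for the exponential map, and uses $J_{\text{right}} = [\mathcal{X}^{-1}v]_\times$ and $J_{\text{left}} = \mathcal{X}^{-1}[v]_\times$ as the right and left Jacobians (the sign of the left one depends on the attitude-error convention):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def skew(u):
    """Skew-symmetric matrix such that skew(u) @ w == np.cross(u, w)."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

rng = np.random.default_rng(0)
X = Rotation.random(random_state=0).as_matrix()  # random element of SO(3)
v = rng.standard_normal(3)
y = X.T @ v                                      # inverse action X^{-1} * v

J_right = skew(y)        # right Jacobian: local perturbation X * Exp(tau)
J_left = X.T @ skew(v)   # left Jacobian: global perturbation Exp(tau) * X

tau = 1e-4 * rng.standard_normal(3)              # small perturbation
E = Rotation.from_rotvec(tau).as_matrix()        # Exp(tau)

# Test (1): right-plus perturbation vs. its first-order approximation.
e1 = np.linalg.norm((X @ E).T @ v - (y + J_right @ tau))
# Test (2): left-plus perturbation vs. its first-order approximation.
e2 = np.linalg.norm((E @ X).T @ v - (y + J_left @ tau))
# Both residuals are second order in ||tau||.
```

Each Jacobian is exact to first order only under its own perturbation convention; swapping them leaves a first-order residual unless $\mathcal{X}$ is near the identity.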
I realize now I wasn't consistent between the text and the code, in that I introduce the Lie-theory-derived Jacobian first but assign it to the function
I went back to the book to find this, and I think you may be right. Groves defines the attitude error such that, as you say, it corresponds to a global perturbation rather than a local one, I suppose. In hindsight, I think I made a typo in transcribing the Jacobian from his error function. Note I tweaked the plot a bit to run N simulations of length L; otherwise it is the same:
Correct: the first one is Lie, the second one is Groves. So manif uses right Jacobians, and therefore local perturbations, while Groves uses left Jacobians, and therefore global perturbations. It seems it all fits perfectly!
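A quick way to see that the two conventions are consistent (a sketch, using $\mathrm{Ad}_{\mathcal{X}} = \mathcal{X}$ for $SO(3)$): since $\operatorname{Exp}(\tau)\,\mathcal{X} = \mathcal{X}\operatorname{Exp}(\mathcal{X}^{-1}\tau)$, a global perturbation $\tau$ is equivalent to the local perturbation $\mathcal{X}^{-1}\tau$, hence

$$
J_{\text{left}} = J_{\text{right}}\,\mathcal{X}^{-1} = \left[ \mathcal{X}^{-1} v \right]_\times \mathcal{X}^{-1} = \mathcal{X}^{-1}\left[ v \right]_\times,
$$

using the identity $\left[ \mathcal{X}^{-1} v \right]_\times = \mathcal{X}^{-1}\left[ v \right]_\times \mathcal{X}$ (the sign flips under the opposite attitude-error convention).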
Yes! Thanks so much for the help.
Maybe this is a little out of scope for this platform; if so, I understand.
The basic problem is understanding which of several derivations of what amounts to the inverse action of SO(3) is correct.
Looking at the paper and cheatsheet, one would conclude that

$$
J^{\mathcal{X}^{-1}\cdot v}_{\mathcal{X}} = \left[ \mathcal{X}^{-1}\cdot v \right]_\times
$$

for $\mathcal{X}\in SO(3)$ and $v\in \mathbb{R}^{3}$.
However, this is not always the result used or found by other sources with a similar equation. Looking at an alternative source, e.g. the GNSS/INS textbook by Paul Groves: in chapter 16 he discusses Doppler-aided INS systems, which inevitably involve a similar inverse action. He derives the Jacobian of the measurement function, in equation 16.69, to be
Note I've used his notation here and simplified the equation a bit. Here $C_{w}^{b}$ is the rotation from {w} to {b}, $v^{w}$ is the {w}-frame velocity, and $\delta \psi_{b}^{w}$ is the error in the orientation of {b} in {w}, since this is an error-state formulation. This is the inverse action because the rotation directly corresponding to $\delta \psi_{b}^{w}$ would be $C_{b}^{w}$, and we are using its transpose here.
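For concreteness, both candidate Jacobians can be derived in a couple of lines from the first-order approximation $\operatorname{Exp}(\tau) \approx I + [\tau]_\times$ (a sketch; the sign of the global version depends on the attitude-error convention used). For a local (right) perturbation,

$$
\left( \mathcal{X} \oplus \tau \right)^{-1} v = \operatorname{Exp}(-\tau)\,\mathcal{X}^{-1} v \approx \left( I - [\tau]_\times \right)\mathcal{X}^{-1} v = \mathcal{X}^{-1} v + \left[ \mathcal{X}^{-1} v \right]_\times \tau,
$$

giving the right Jacobian $\left[ \mathcal{X}^{-1} v \right]_\times$, while for a global (left) perturbation,

$$
\left( \tau \oplus \mathcal{X} \right)^{-1} v = \mathcal{X}^{-1}\operatorname{Exp}(-\tau)\, v \approx \mathcal{X}^{-1}\left( v - [\tau]_\times v \right) = \mathcal{X}^{-1} v + \mathcal{X}^{-1}\left[ v \right]_\times \tau,
$$

giving the left Jacobian $\mathcal{X}^{-1}\left[ v \right]_\times$ (in Groves' notation, $C_{w}^{b}\left[ v^{w} \right]_\times$, up to the sign of the error definition).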
I tried also to quantify the difference between these two numerically, using a python script to perturb a rotation and calculate the error by
$$
e = \lVert \underbrace{\left( \mathcal{X} \oplus \tau \right)^{-1}\cdot v}_{\text{true}} - \underbrace{\left( \mathcal{X}^{-1}\cdot v + J \tau \right)}_{\text{approximate}} \rVert_2
$$
where $\mathcal{X}\in SO(3)$ and $v, \tau \in \mathbb{R}^3$ are random, and $J$ is each of the two aforementioned Jacobians. Note that I scale the perturbation by a factor $k\in [0, 1)$ to watch the error of this first-order approximation grow.
What is strange is that quite often the Lie-theory-derived Jacobian performs much better, and sometimes not, depending on the specific simulation. I don't quite understand this behavior.
Code for the Python analysis
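The original script is collapsed above and not shown here; a minimal, hypothetical reconstruction of the described analysis (NumPy/SciPy only, sweeping the scale factor $k$ with a right-plus perturbation) might look like:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def skew(u):
    """Skew-symmetric matrix: skew(u) @ w == np.cross(u, w)."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

rng = np.random.default_rng(42)
X = Rotation.random(random_state=42).as_matrix()  # random X in SO(3)
v = rng.standard_normal(3)                        # random vector
tau0 = rng.standard_normal(3)                     # perturbation direction
y = X.T @ v                                       # inverse action X^{-1} * v

J_lie = skew(y)           # Lie-theory (right) Jacobian
J_groves = X.T @ skew(v)  # Groves-style (left) Jacobian, up to sign convention

errors = []
for k in np.linspace(0.0, 0.99, 100):
    tau = k * tau0
    # "True" value under a right-plus perturbation of X.
    true = (X @ Rotation.from_rotvec(tau).as_matrix()).T @ v
    e_lie = np.linalg.norm(true - (y + J_lie @ tau))
    e_groves = np.linalg.norm(true - (y + J_groves @ tau))
    errors.append((k, e_lie, e_groves))
```

Because this loop perturbs $\mathcal{X}$ on the right, only the right Jacobian should remain second-order accurate; repeating it with `Rotation.from_rotvec(tau).as_matrix() @ X` would favor the left Jacobian instead.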