SO3 diff impl use quaternion instead of rotation matrix #1702
Conversation
Thanks @Toefinder, and thanks for contributing to Pinocchio. In Pinocchio-3x, we have largely improved the numerical accuracy of this operation while also making it differentiable for autodiff. Could you wait for the new release of Pinocchio-3x? Best,
I am surprised. When debugging, I find that the matrix multiplication results in an identity matrix, and that identity matrix is then passed into the
Could you provide a tiny example in Python of what you are saying, please?
This issue was encountered when used in hpp-core, but I will give the full Python example below. Note that the default matrix value is

```python
In [1]: import pinocchio as pin

In [2]: import numpy as np

In [7]: quat_x = pin.Quaternion(0.9807293794421349169, 0, 0, -0.1953711450011105244)

In [8]: quat_y = pin.Quaternion(0.98072937944213492244, 0, 0, -0.19537114500111049664)

In [9]: (quat_y.conjugate() * quat_x).matrix()
Out[9]:
array([[ 1.00000000e+00,  5.55111512e-17,  0.00000000e+00],
       [-5.55111512e-17,  1.00000000e+00, -0.00000000e+00],
       [-0.00000000e+00,  0.00000000e+00,  1.00000000e+00]])

In [20]: np.matmul(quat_y.matrix().T, quat_x.matrix())
Out[20]:
array([[1., 0., 0.],
       [0., 1., 0.],
       [0., 0., 1.]])

In [21]: np.matmul(np.array(quat_y.matrix(), np.float128).T, np.array(quat_x.matrix(), np.float128))
Out[21]:
array([[ 1.00000000e+00,  5.12827628e-17,  0.00000000e+00],
       [-5.12827628e-17,  1.00000000e+00,  0.00000000e+00],
       [ 0.00000000e+00,  0.00000000e+00,  1.00000000e+00]],
      dtype=float128)
```
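The effect can also be reproduced without Pinocchio by redoing the same floating-point operations in plain NumPy. Below is a sketch with a hand-rolled Hamilton quaternion product and quaternion-to-matrix conversion (standard textbook formulas, not Pinocchio's or Eigen's code), using the values from the session above:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def quat_to_matrix(q):
    """Rotation matrix of a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

quat_x = np.array([0.9807293794421349169, 0.0, 0.0, -0.1953711450011105244])
quat_y = np.array([0.98072937944213492244, 0.0, 0.0, -0.19537114500111049664])

# Quaternion path: multiply first, convert to a matrix once at the end.
q_rel = quat_mul(quat_conj(quat_y), quat_x)
print(q_rel)  # vector part of order 1e-17: the tiny relative angle can survive

# Matrix path: convert first, then multiply; cancellation can round it away.
m_rel = quat_to_matrix(quat_y).T @ quat_to_matrix(quat_x)
print(m_rel)  # numerically the identity
```

The intuition is that the quaternion product keeps the tiny relative angle in a coordinate that is itself tiny, whereas converting each quaternion to a matrix first pushes that angle into the last bits of entries of order one, where the subsequent matrix product cancels it.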
`unittest/joint-configurations.cpp` (outdated)

```diff
@@ -282,7 +282,7 @@ BOOST_AUTO_TEST_CASE ( integrate_difference_test )
 Eigen::VectorXd q1(randomConfiguration(model, -1 * Eigen::VectorXd::Ones(model.nq), Eigen::VectorXd::Ones(model.nq)));
 Eigen::VectorXd qdot(Eigen::VectorXd::Random(model.nv));

-BOOST_CHECK_MESSAGE(isSameConfiguration(model, integrate(model, q0, difference(model, q0,q1)), q1), "Integrate (difference) - wrong results");
+BOOST_CHECK_MESSAGE(isSameConfiguration(model, integrate(model, q0, difference(model, q0,q1)), q1, 1.5e-12), "Integrate (difference) - wrong results");
```
Why did you change the prec here?
With the new diff_impl, `integrate(q0, difference(q0,q1))` is less similar to the expected result `q1`. I am not totally sure why this is the case, but probably because the current implementation of `integrate` also suffers from floating-point error. The threshold for `integrate_difference_test` needs to be changed slightly to accommodate the new implementation of `difference`.
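For intuition about the round trip being tested, here is a minimal NumPy-only sketch of an SO(3) difference/integrate pair built from the quaternion log and exp maps. This illustrates the underlying math only; it is not Pinocchio's actual implementation, and the helper names (`so3_difference`, `so3_integrate`) are made up for this example:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def quat_log(q):
    """Rotation vector of a unit quaternion (w, x, y, z)."""
    w, v = q[0], q[1:]
    s = np.linalg.norm(v)
    if s < 1e-12:                 # near the identity, atan2(s, w)/s -> 1/w
        return 2.0 * v / w
    return 2.0 * np.arctan2(s, w) * v / s

def quat_exp(omega):
    """Unit quaternion for a rotation vector omega."""
    theta = np.linalg.norm(omega)
    if theta < 1e-12:             # first-order expansion near zero
        return np.array([1.0, *(0.5 * omega)])
    return np.array([np.cos(theta / 2), *(np.sin(theta / 2) * omega / theta)])

def so3_difference(q0, q1):
    """Tangent-space difference: log(q0^{-1} * q1)."""
    return quat_log(quat_mul(quat_conj(q0), q1))

def so3_integrate(q0, omega):
    """Integrate a tangent vector: q0 * exp(omega)."""
    return quat_mul(q0, quat_exp(omega))

rng = np.random.default_rng(0)
for _ in range(100):
    q0, q1 = (v / np.linalg.norm(v) for v in rng.normal(size=(2, 4)))
    q1_rec = so3_integrate(q0, so3_difference(q0, q1))
    # q and -q encode the same rotation, so compare up to sign.
    err = min(np.linalg.norm(q1_rec - q1), np.linalg.norm(q1_rec + q1))
    assert err < 1.5e-12
```

A well-conditioned log/exp round trip like this one is accurate to a few ULPs, so the looser 1.5e-12 tolerance chosen for the unit test leaves room for the extra floating-point error discussed above.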
This is highly surprising and needs additional investigation before this PR can be merged. Could you look into it?
I will need help with this since I am not familiar with the mathematics 😅 For this issue I actually found the alternative implementation thanks to your `// TODO: perform first the Quaternion multiplications and then return a Rotation Matrix` note in special-orthogonal.hpp.
Could you check with @jmirabel? I guess you are already in contact with him.
Now I understand your issue. We approve your fix, although I think it has already been done in Pinocchio-3x.
The current distance function does not give a positive distance for two SO3 values that are very close to each other (differing by about 3e-17).

For the implementation of the difference between two SO3 values, it is more accurate to use quaternion multiplication than the corresponding rotation-matrix form. However, with the new diff_impl, `integrate(q0, difference(q0,q1))` is less similar to the expected result `q1`. This is probably because the current implementation of `integrate` also suffers from floating-point error. The threshold for `integrate_difference_test` needs to be changed slightly to accommodate the new implementation of `difference`.
I've solved the issues concerning accuracy. I've in fact imported the fixes we made with @ManifoldFR on Pinocchio 3.
It seems that using quaternion multiplication (`q0.conjugate()*q1`) and then converting to a matrix is actually more accurate than using matrix multiplication (`q0.matrix().transpose()*q1.matrix()`) when calculating the difference between `q1` and `q0`.

When two SO3 configurations are very similar, with coordinates differing approximately by the smallest floating-point increment for `double` (I checked this using `numpy.spacing(value)` on the `numpy.float64` value of the quaternion coordinate), quaternion multiplication followed by conversion to a matrix gives a non-identity result (non-zero distance), while conversion to matrices followed by matrix multiplication gives an identity matrix.
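The `numpy.spacing` check mentioned above can be reproduced directly. Taking the z coordinates of `quat_x` and `quat_y` from the example earlier in the thread, the two values land on (nearly) adjacent doubles, so their difference is on the order of one ULP:

```python
import numpy as np

# z coordinates of quat_x and quat_y from the example above
z_x = np.float64(-0.1953711450011105244)
z_y = np.float64(-0.19537114500111049664)

ulp = np.spacing(abs(z_y))   # gap between adjacent doubles near 0.195
diff = abs(z_x - z_y)

print(ulp)    # ~2.8e-17
print(diff)   # within a couple of ULPs of z_y
```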