
Test J̇ computation against AD #171

Merged
merged 1 commit into main on Jun 10, 2024
Conversation

diegoferigo
Member

@diegoferigo diegoferigo commented Jun 10, 2024

This PR extends the test included in #169 to compare the computation of $\dot{J}$ with the corresponding derivative computed as:

$$\dot{J}(\mathbf{q},\, \boldsymbol{\nu}) = \frac{\text{d} J(\mathbf{q})}{\text{d} t} = \frac{\partial J}{\partial \mathbf{q}} \frac{\partial \mathbf{q}}{\partial t} = \frac{\partial J}{\partial \mathbf{q}} \dot{\mathbf{q}}$$

As usual, it's worth noting that since $\mathbf{q} = ( {}^W \mathbf{p}_B \,; {}^W \mathtt{Q}_B\,; \mathbf{s} ) \in \mathbb{R}^{7+n}$, we have $\dot{\mathbf{q}} \neq \boldsymbol{\nu}$. However, by taking extra care with the derivative of the base quaternion, we can compute $\dot{\mathbf{q}}$ and use it in the test (which, by the way, is independent of the velocity representation).
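
As a concrete illustration, $\dot{\mathbf{q}}$ can be assembled from $\boldsymbol{\nu}$ along these lines. This is a minimal NumPy sketch, not the jaxsim API: the wxyz component ordering, the world-frame angular velocity convention, and all function names are assumptions made for illustration.

```python
import numpy as np

def quaternion_derivative(quat_wxyz, omega_world):
    """Time derivative of a wxyz quaternion from the world-frame angular velocity.

    Uses Q_dot = 1/2 (0, omega) (x) Q (quaternion product), assuming the
    quaternion maps body to world. This convention is an assumption here.
    """
    w, x, y, z = quat_wxyz
    ox, oy, oz = omega_world
    # (0, omega) (x) Q expanded component-wise: scalar part is -omega . v,
    # vector part is w * omega + omega x v.
    return 0.5 * np.array([
        -ox * x - oy * y - oz * z,
        ox * w + oy * z - oz * y,
        oy * w + oz * x - ox * z,
        oz * w + ox * y - oy * x,
    ])

def qdot_from_nu(position_dot, quat_wxyz, omega_world, joint_velocities):
    # q_dot = (p_dot; Q_dot; s_dot) in R^{7+n}: note Q_dot has 4 components,
    # while the angular velocity in nu has only 3.
    return np.concatenate([
        position_dot,
        quaternion_derivative(quat_wxyz, omega_world),
        joint_velocities,
    ])
```

Note that $\dot{\mathtt{Q}}$ stays orthogonal to $\mathtt{Q}$, so the unit norm of the quaternion is preserved along the flow.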

Triggered by #169 (comment) from @DanielePucci.


📚 Documentation preview 📚: https://jaxsim--171.org.readthedocs.build//171/

@diegoferigo diegoferigo self-assigned this Jun 10, 2024
@diegoferigo
Member Author

Out of curiosity, I've tried to benchmark on CPU the two implementations:

|                 | Analytical $\dot{J}$ | AD $\dot{J}$     |
|-----------------|----------------------|------------------|
| JIT compilation | 2.08 s               | 4.04 s           |
| Runtime         | 248 µs ± 18.5 µs     | 6.65 ms ± 633 µs |

These numbers refer to the computation of $\dot{J}$ for a single link of the ErgoCub robot (57 DoFs, fairly large). It seems clear that the analytical computation should always be preferred.
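
For reference, such timings can be collected along these lines. This is a sketch, not the actual benchmark: `jacobian_dot` is a stand-in function, since the real $\dot{J}$ computation lives in jaxsim.

```python
import timeit

import jax
import jax.numpy as jnp

# Stand-in for the actual J_dot computation; any jittable function works here.
def jacobian_dot(q, nu):
    return jnp.outer(jnp.sin(q), nu)

q, nu = jnp.ones(64), jnp.ones(64)
fn = jax.jit(jacobian_dot)

# The first call triggers JIT compilation (the "JIT compilation" row above).
t0 = timeit.default_timer()
fn(q, nu).block_until_ready()
compile_time = timeit.default_timer() - t0

# Subsequent calls measure pure runtime; block_until_ready is needed because
# JAX dispatches asynchronously, so without it only the dispatch is timed.
runtime = timeit.timeit(lambda: fn(q, nu).block_until_ready(), number=100) / 100
```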

@diegoferigo diegoferigo marked this pull request as ready for review June 10, 2024 08:15
@diegoferigo
Member Author

This could be interesting to all @ami-iit/vertical_control-oriented-learning.

Collaborator

@flferretti flferretti left a comment


This is great! Thanks Diego

Comment on lines +314 to +316
assert jnp.einsum("l6g,g->l6", O_J̇_ad_WL_I, I_ν) == pytest.approx(
    jnp.einsum("l6g,g->l6", O_J̇_WL_I, I_ν)
)
Collaborator


I think I'm missing something, can we just compare the two ${} ^O\dot{J} _{WL, I}$?

Suggested change
assert jnp.einsum("l6g,g->l6", O_J̇_ad_WL_I, I_ν) == pytest.approx(
    jnp.einsum("l6g,g->l6", O_J̇_WL_I, I_ν)
)
assert O_J̇_ad_WL_I == pytest.approx(O_J̇_WL_I)

Member Author


The problem is that the elements of $\dot{J}$ can be really close to $0$. Computing the link bias acceleration, instead, provides larger values since they sum up along kinematic chains.

I think it's more robust to also check the projected values. I fear that comparing Jacobians with very small entries may yield a 0 = 0 match even if the values are wrong (but so small that they fall within pytest's default tolerances). And I don't want to mess with the tolerances: if something related to them is wrong, the last assert will fail.
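
A quick illustration of the false positive described above, with magnitudes chosen purely for demonstration:

```python
import numpy as np
import pytest

# Two "Jacobian derivatives" whose entries differ by 50%, but are all tiny.
J_dot_analytical = np.full((6, 5), 1.0e-13)
J_dot_ad = np.full((6, 5), 1.5e-13)

# pytest.approx defaults to abs=1e-12, so the element-wise comparison passes
# even though the relative error is 50%.
assert J_dot_ad == pytest.approx(J_dot_analytical)

# Once the magnitudes grow (as they do when projected through the kinematics,
# where contributions sum along the chain), the same relative error is caught.
assert not np.allclose(
    1e6 * J_dot_ad, 1e6 * J_dot_analytical, atol=1e-12, rtol=1e-6
)
```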

Collaborator


Great, thanks for the explanation

@diegoferigo diegoferigo merged commit ef02e71 into main Jun 10, 2024
30 checks passed
@diegoferigo diegoferigo deleted the jacobian_dot_test_with_ad branch June 10, 2024 12:10