
[core] Restore constraint lambda in case of integration failure. #725

Closed
duburcqa opened this issue Feb 14, 2024 · 3 comments · Fixed by #726
Labels: core, enhancement (New feature or request), P0 (Highest priority issue)

duburcqa commented Feb 14, 2024

Currently, the lambda multipliers of the constraints are not reset to their original values if the integration step fails. This is an issue because they are used as the initial guess for the constraint solver, so a failed step pollutes the warm start of the retry. This coupling appears to be problematic on the gym_jiminy.envs:atlas environment. Here is a snippet to reproduce the issue:

import gymnasium as gym

# Reproduce the failure on Atlas: one long step followed by tiny steps.
env = gym.make("gym_jiminy.envs:atlas", debug=True)
env.reset()
env.simulator.step(0.73)
for _ in range(10):
    print("-----")
    env.simulator.step(1e-6)

Additional buffers holding the values at the previous iteration should be stored next to contactForcesPrev_, fPrev_ and aPrev_.
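The save/restore pattern above can be sketched as follows. This is a minimal Python illustration, not Jiminy's actual C++ implementation; the class and member names (`ConstraintSolver`, `lambda_`, `lambda_prev_`, `try_step`) are hypothetical, chosen to mirror the `fPrev_`/`aPrev_` naming convention mentioned in the issue:

```python
import numpy as np


class ConstraintSolver:
    """Toy stand-in for the contact solver: its multipliers are kept
    between calls and warm-start the next solve."""

    def __init__(self, num_constraints):
        self.lambda_ = np.zeros(num_constraints)


class Stepper:
    """Snapshot the multipliers before attempting an integration step
    and roll them back if the step fails."""

    def __init__(self, solver):
        self.solver = solver
        self.lambda_prev_ = solver.lambda_.copy()

    def try_step(self, integrate):
        # Save the warm-start state before attempting the step.
        self.lambda_prev_[:] = self.solver.lambda_
        if integrate(self.solver):
            return True
        # Step failed: restore the multipliers so that the retry
        # starts from the same initial guess as the failed attempt.
        self.solver.lambda_[:] = self.lambda_prev_
        return False
```

The key point is that the restore happens only on failure: a successful step keeps the updated multipliers, preserving the warm-start benefit.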

duburcqa added the enhancement (New feature or request), core and P0 (Highest priority issue) labels Feb 14, 2024

duburcqa commented Feb 14, 2024

It appears that the root cause of the exception (Too many successive constraint solving failures.) is not the integration failure itself, but rather numerical instabilities inherent to the contact solver, which is known to have poor convergence properties. Here is a figure of the joint accelerations just before the exception is raised:
(figure: joint accelerations before the exception is raised)

The smoothness of the accelerations is a major concern, both for simulation speed and for learning speed.

duburcqa commented

Two options that may improve convergence:

  • implement SOR (successive over-relaxation with rfactor going from 2.0 to 0.0 linearly)
  • implement SSOR (symmetric SOR) with the relaxation factor fixed to 1.0
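Both options above can be sketched as a projected Gauss-Seidel loop over a complementarity problem `A @ lam = b`, `lam >= 0`. This is a toy stand-in for the contact problem, not Jiminy's actual solver; the function name, the per-iteration `rfactors` schedule, and the non-negativity projection are illustrative assumptions:

```python
import numpy as np


def pgs_relaxed(A, b, lam, rfactors, symmetric=False):
    """Projected Gauss-Seidel with per-iteration relaxation factors.

    Solves A @ lam = b subject to lam >= 0. `rfactors` gives the
    relaxation factor used at each iteration; `symmetric=True` adds a
    backward sweep after each forward sweep (SSOR-style)."""
    n = len(b)
    for rfactor in rfactors:
        sweeps = [range(n)]
        if symmetric:
            sweeps.append(range(n - 1, -1, -1))
        for sweep in sweeps:
            for i in sweep:
                residual = b[i] - A[i] @ lam
                # Relaxed Gauss-Seidel update, projected onto lam >= 0.
                lam[i] = max(0.0, lam[i] + rfactor * residual / A[i, i])
    return lam


# Option 1 (SOR): relaxation factor decreasing linearly from 2.0 to 0.0.
sor_schedule = np.linspace(2.0, 0.0, 50)
# Option 2 (SSOR): factor fixed to 1.0, with symmetric sweeps.
ssor_schedule = np.full(50, 1.0)
```

With `rfactor = 1.0` this reduces to plain projected Gauss-Seidel; the symmetric variant traverses the constraints in both directions, which tends to reduce the order-dependence of the updates.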

duburcqa commented

Before implementing under-relaxation:

(screenshot, 2024-02-17: accelerations before under-relaxation)

After (#726):

(screenshot, 2024-02-17: accelerations after under-relaxation)
