
Create a simple flexibility observer #18

Open
andreadelprete opened this issue May 21, 2017 · 10 comments

@andreadelprete
Collaborator

Given that the computation time of the current flexibility observer is too large for a 1 kHz control loop we could create a simpler version that gives slightly worse results but that runs much faster.

The idea is to perform a simple sensor fusion between the IMU data (which can be used to estimate pitch and roll of the robot base) and the force/torque sensor data (which can be used to estimate the base 6d pose assuming a linear spring model at the ankles).
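The fusion proposed above could be sketched roughly as follows. This is a minimal illustration only, not the estimator actually implemented: the function name, the stiffness parameters, and the quasi-static tilt computation are all assumptions for the sake of the example.

```python
import numpy as np

def base_pose_from_sensors(acc_imu, f_ankle, tau_ankle, K_lin, K_ang):
    """Rough 6D base pose estimate from IMU + F/T data (hypothetical sketch).

    acc_imu   : accelerometer reading in the base/IMU frame (m/s^2)
    f_ankle   : 3D force measured at the ankle (N)
    tau_ankle : 3D torque measured at the ankle (N*m)
    K_lin     : linear stiffness of the ankle flexibility (N/m)
    K_ang     : angular stiffness of the ankle flexibility (N*m/rad)
    """
    # Roll/pitch from the gravity direction: valid only for
    # quasi-static motions, where the accelerometer reads -g.
    ax, ay, az = acc_imu
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.sqrt(ay**2 + az**2))

    # Linear spring model at the ankle: deflection = wrench / stiffness
    # (small-angle approximation for the angular part).
    dpos = np.asarray(f_ankle) / K_lin
    drot = np.asarray(tau_ankle) / K_ang

    # Simple fusion: trust the IMU for roll/pitch,
    # the F/T-based spring model for translation and yaw.
    return np.array([dpos[0], dpos[1], dpos[2], roll, pitch, drot[2]])
```

With the robot upright and a pure vertical load, this returns zero roll/pitch and a vertical deflection proportional to the measured force.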

@mehdi-benallegue
Member

mehdi-benallegue commented May 22, 2017

A simpler state estimator can be obtained by setting the inputs provided by sot-dynamic to constant values instead.
I don't know what computation times you currently have, nor what the target is. I could also help if I knew what kind of motions have to be performed.

@andreadelprete
Collaborator Author

Hi @mehdi-benallegue , I think @amifsud already tried to fix the values coming from sot-dynamics, but the observer diverged.

The target at the moment is balancing on one foot. Then we should try walking.

@mehdi-benallegue
Member

I think we can avoid the system diverging by increasing the covariance of the state dynamics. I think this amounts to the "simpler" state estimator you are referring to.

However, if you change the reference orientation of the IMU, this must be given to the estimator. Otherwise the estimator cannot separate the voluntary rotation from the flexibility. Fortunately, this can be provided without requiring sot-dynamic.

In the longer term, we must take into account that all the data required by the estimator (position, velocity and acceleration of the CoM, angular momentum and its derivative, inertia matrix, etc.) should already be computed by the control loop. There should not be two concurrent computations of these values.

@amifsud

amifsud commented May 22, 2017 via email

@mehdi-benallegue
Member

mehdi-benallegue commented May 22, 2017

@amifsud My guess is the opposite of yours but you understood well the issue.

@andreadelprete
Collaborator Author

Ok, cool! But even if this solution works, @mehdi-benallegue mentioned that we need to feed the estimator with the IMU orientation (in the control frame). Is that a direct input of the estimator?

On Wednesday we're also testing sot-torque-control with sot v3 on the robot for the first time. If that works, we can hope to switch your flexibility observer to v3 soon as well, which should greatly reduce the computation time for the dynamics.

However, since we really need to reduce the computation time to advance with our tests on the robot, I still suggest that @thomasfla finish the development of the simple estimator (hopefully it will be done by tomorrow).

@mehdi-benallegue
Member

Any estimator trying to fuse the IMU and force sensors will require the orientation of the IMU in the control frame, since rotations in this frame do not generate any forces at the flexibility.

@andreadelprete
Collaborator Author

Sure, my question was simply about the format of this information. Is that an input signal in the estimator entity? Is that a quaternion?

@mehdi-benallegue
Member

It is rather a rotation vector (u*theta), but this is just a matter of conversion.
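For reference, converting between the two representations is straightforward. A minimal sketch of the quaternion ↔ rotation-vector (axis-angle, u*theta) conversion, independent of any particular library:

```python
import numpy as np

def quat_to_rotvec(q):
    """Convert a unit quaternion (w, x, y, z) to a rotation vector u*theta."""
    w, v = q[0], np.asarray(q[1:], dtype=float)
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:                      # identity rotation
        return np.zeros(3)
    theta = 2.0 * np.arctan2(norm_v, w)     # rotation angle
    return (v / norm_v) * theta             # axis * angle

def rotvec_to_quat(r):
    """Convert a rotation vector u*theta to a unit quaternion (w, x, y, z)."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    u = np.asarray(r, dtype=float) / theta
    return np.concatenate(([np.cos(theta / 2.0)], np.sin(theta / 2.0) * u))
```

The two functions are exact inverses of each other for angles in (0, pi).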

@andreadelprete
Collaborator Author

Quick update on this issue. @thomasfla has developed a simple base estimation algorithm, which we tested extensively during the summer, using motion capture data as ground truth. At the beginning of September we used the base pose feedback with the balance controller, and it worked well.

The base velocity estimation took a bit more work, because we found out that, by using the numerical derivatives of the F/T sensors, we would "see" oscillations (3–4 Hz) that were actually absorbed by the robot structure and did not propagate up to the base. So we decided to use only the gyroscope and the encoders for estimating the base velocity.
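A gyro+encoder base velocity estimate of this kind is usually a leg-odometry computation. As an illustration only (the actual algorithm is not detailed in this thread), assuming the stance foot is fixed to the ground:

```python
import numpy as np

def base_velocity_leg_odometry(R_wb, omega_gyro, p_foot_b, v_foot_b):
    """Base linear velocity assuming the stance foot does not move.

    R_wb       : base orientation in the world frame (3x3 rotation matrix)
    omega_gyro : base angular velocity from the gyroscope, base frame (rad/s)
    p_foot_b   : stance-foot position in the base frame (forward kinematics)
    v_foot_b   : stance-foot velocity in the base frame (J * qdot from encoders)
    """
    # World-frame foot velocity is zero:
    #   0 = v_base + R_wb @ (omega x p_foot_b + v_foot_b)
    # so the base velocity is minus the foot velocity seen from the base.
    return -R_wb @ (np.cross(omega_gyro, p_foot_b) + v_foot_b)
```

Note that no force/torque signal enters this computation, which is consistent with the observation that F/T derivatives pick up structural oscillations that never reach the base.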

Today we also connected the base velocity feedback, and it seemed to work well: the controller managed to damp large oscillations in X, which made the robot unstable without velocity feedback. We managed to increase the velocity feedback gain of the CoM up to 8 (on the x axis only), which is close to a critically damped system given that the proportional gain was 30. More tests are needed to properly tune the system, because with a velocity gain of 7 (on all 3 axes) the robot became unstable when I physically interacted with it. Another problem we observed is that small oscillations are not well damped, probably because the associated errors don't generate enough torque to overcome stiction.
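For context on the gain values above: if the closed-loop CoM error behaves like a unit-mass double integrator m*x'' = -kp*x - kd*x' (an assumption for illustration, not necessarily the model of the actual controller), the critically damped derivative gain is kd = 2*sqrt(kp*m):

```python
import math

def critical_derivative_gain(kp, mass=1.0):
    """Derivative gain giving critical damping for m*x'' = -kp*x - kd*x'."""
    return 2.0 * math.sqrt(kp * mass)

kd_crit = critical_derivative_gain(30.0)  # kp = 30, as in the test above
print(round(kd_crit, 2))  # ~10.95 under the unit-mass assumption
```

Under this simplified model, a velocity gain of 8 for kp = 30 sits somewhat below the critical value of about 11, i.e. slightly under-damped, consistent with "close to critically damped".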

As a side note, the robot became unstable when Thomas hit it on the chest, but maybe that is due to the effect of the impact on the F/T sensors (to be investigated).
