
New method for friction cone constraints #19

Open

andreadelprete opened this issue May 21, 2020 · 1 comment

@andreadelprete

I was not completely satisfied with the QP-based method for handling friction cones. The reason was already explained in the paper in footnote 2:

This expression is based on the assumption of constant u, but if we assume that p_0 moves at constant velocity, then u is not constant but a linear function of time. I should investigate how this affects the equations.

In the last week I've investigated how the computation changes if we let the affine term in the contact dynamics be a linear function of time, so that the dynamics takes this form:
\begin{align}
\dot{x} = A x + b_0 + t b_1
\end{align}
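To make the footnote's concern concrete, here is a hedged sketch of where the affine-in-time term could come from; the specific form of the anchor-point term is my assumption, not taken from the paper. If the dynamics depends affinely on the anchor point, say $\dot{x} = A x + c + D\, x_0(t)$, and the anchor point moves at constant velocity, $x_0(t) = x_0(0) + t\,\dot{x}_0$, then
\begin{align}
\dot{x} = A x + \underbrace{c + D\, x_0(0)}_{b_0} + t\, \underbrace{D\, \dot{x}_0}_{b_1}
\end{align}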
An extra piece of complexity is that, besides computing the first two integrals of x, we also need their expressions as linear functions of b_0 and b_1, which then appear in the constraints of the QP.

It turns out that, if computation time is not critical, computing these quantities is rather easy: we can simply define an augmented state including b_0, b_1, t*b_1 and the first two integrals of x, and this augmented state has purely linear dynamics. Therefore, by computing the expm of one large, sparse matrix we get everything we need. However, since this matrix is 6 times larger than A and the computational complexity of expm is cubic, we can expect this to be about 6^3 = 216 times more expensive than expm(A). This is unacceptable for our application.
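For reference, a minimal sketch of this brute-force augmented-state approach (names like `n`, `b0`, `b1`, `dt` are illustrative, not from the code base; the exact ordering of the augmented state is my assumption):

```python
import numpy as np
from scipy.linalg import expm

def augmented_expm(A, x0, b0, b1, dt):
    """Brute-force integration of dx/dt = A x + b0 + t*b1 via one big expm."""
    n = A.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    # Augmented state z = [x, int(x), int(int(x)), b0, b1, t*b1], with dz/dt = M z:
    M = np.block([
        [A, Z, Z, I, Z, I],   # dx/dt            = A x + b0 + (t*b1)
        [I, Z, Z, Z, Z, Z],   # d/dt int(x)      = x
        [Z, I, Z, Z, Z, Z],   # d/dt int(int(x)) = int(x)
        [Z, Z, Z, Z, Z, Z],   # b0 is constant
        [Z, Z, Z, Z, Z, Z],   # b1 is constant
        [Z, Z, Z, Z, I, Z],   # d/dt (t*b1)      = b1
    ])
    Phi = expm(M * dt)        # 6n x 6n matrix exponential: the expensive part
    z0 = np.concatenate([x0, np.zeros(2 * n), b0, b1, np.zeros(n)])
    z = Phi @ z0
    x_dt, int_x, int2_x = z[0:n], z[n:2*n], z[2*n:3*n]
    # Phi[n:3n, 3n:5n] gives the two integrals as linear functions of (b0, b1),
    # which is what would enter the QP constraints.
    return x_dt, int_x, int2_x, Phi
```

Since M is 6n x 6n, this single expm is where the ~216x overhead comes from.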

The good news is that this large matrix whose expm we need is very sparse and structured, so I wrote down a few pages of math and figured out a way to compute everything by exploiting the sparsity. In the beginning I thought I could get to the same complexity as expm(A), but it turns out that I cannot: the new approach should be about 3 times more expensive than expm(A). This is a big improvement w.r.t. what I described above, but not as good as I hoped. I've implemented it in Python and it seems to work fine; now @olimexsmart could take care of porting it to C++.

However, I don't know whether following this path is a good idea.

  • The implementation of this computation is not trivial, and would probably take @olimexsmart several days to port to C++.
  • We would lose speed: it would take about 50% more time than what we are currently doing (which is basically computing two expm's of matrices slightly larger than A).
  • This new approach seems theoretically more sound: we account for a linear motion of the anchor point during the time step, instead of assuming the anchor point is fixed during the time step and letting the QP choose where to place it. However, I haven't implemented it in the Python simulator yet, so I have no idea whether it really works better than the QP-based approach currently described in the paper.

What do you think, @hammoudbilal @olimexsmart?
Maybe before investing time in the C++ implementation I should do some tests in the Python simulator?

@andreadelprete

Good news: I found out that we don't need the explicit time dependency to account for a constant-velocity anchor point. If we define the state as the difference between the contact point and the anchor point, $\bar{x} = x - x_0$, then the dynamics takes the same form as usual:
\begin{align}
\dot{\bar{x}} = A \bar{x} + b + u
\end{align}
where $u$ is our QP decision variable, representing the anchor-point velocities (to be more precise, the first half of u must be zero and the second half contains the anchor-point velocities). This is very close to what is currently written in the paper, which is what @hammoudbilal is implementing. In terms of computation time this should be quite fast, but before spending time figuring out how to compute the needed terms efficiently, I'd rather implement it in an inefficient way in the Python simulator, just to check whether it works. We can leave the computational optimizations for the last days, as long as they don't affect the results.
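If it helps to see the structure, here is a minimal sketch of integrating this over one step for a fixed candidate u; the names (`v_anchor`, `xbar0`, the n = 2m split into positions and velocities) are my assumptions for illustration, not taken from the simulator:

```python
import numpy as np
from scipy.linalg import expm

def integrate_xbar(A, b, xbar0, v_anchor, dt):
    """One step of dxbar/dt = A xbar + b + u for a fixed u (inefficient check)."""
    n = A.shape[0]
    m = n // 2
    # QP decision variable: first half zero, second half = anchor-point velocities.
    u = np.concatenate([np.zeros(m), v_anchor])
    # Standard trick for an affine ODE: augment the state with a constant 1,
    # so a single matrix exponential gives the exact solution over the step.
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A
    M[:n, n] = b + u
    z = expm(M * dt) @ np.concatenate([xbar0, [1.0]])
    return z[:n]   # xbar(dt) = contact point minus anchor point after one step
```

In the actual QP, u is a decision variable rather than a fixed vector, so its contribution would be kept as a linear map (via the relevant blocks of the matrix exponential) instead of being evaluated like this; the sketch is only meant for the inefficient Python check described above.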
