
Parallelization of computeIntegral and computeDoubleIntegral #12

Closed
andreadelprete opened this issue May 16, 2020 · 6 comments

@andreadelprete
Owner

Hi @olimexsmart ,

what do you think about parallelizing the computation of the two integrals? Do you think the overhead introduced by parallelization would be higher than the time gained? Of course this depends on the time taken to compute the matrix exponentials, which in turn depends on the size and norm of the matrix A, but do you have any experience with the "price" of parallelization?

@olimexsmart
Collaborator

The only path I see worth trying is threads. One idea could be to launch two threads that wait for input data to process; this way we avoid the overhead of creating and destroying them (see the sketch below).

That said, I have no deep experience in the subject. Since we are working at the microsecond level, I think that interacting with the OS is generally not a good idea.

Clearly some experiments could be done; the problem is quite specific.
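
To make the idea concrete, here is a minimal sketch (my own, not code from this repo; all names are hypothetical) of a persistent worker thread that is created once and reused at every step, so the per-step cost is only synchronization rather than thread creation and destruction:

#include <condition_variable>
#include <functional>
#include <mutex>
#include <thread>

// One persistent worker: submit() hands it a job, wait() blocks until the job is done.
// For simplicity it handles one job at a time: submit() must be followed by wait()
// before the next submit().
class PersistentWorker {
public:
    PersistentWorker() : worker_([this] { loop(); }) {}

    ~PersistentWorker() {
        { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
        cv_.notify_one();
        worker_.join();
    }

    // Hand one job to the worker (e.g. one of the two matrix exponentials).
    void submit(std::function<void()> job) {
        { std::lock_guard<std::mutex> lk(m_); job_ = std::move(job); hasJob_ = true; }
        cv_.notify_one();
    }

    // Block until the submitted job has finished.
    void wait() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !hasJob_; });
    }

private:
    void loop() {
        std::unique_lock<std::mutex> lk(m_);
        while (true) {
            cv_.wait(lk, [this] { return hasJob_ || stop_; });
            if (stop_) return;
            auto job = std::move(job_);
            lk.unlock();
            job();              // run the expensive computation outside the lock
            lk.lock();
            hasJob_ = false;
            cv_.notify_one();   // wake up wait()
        }
    }

    bool stop_ = false;
    bool hasJob_ = false;
    std::function<void()> job_;
    std::mutex m_;
    std::condition_variable cv_;
    std::thread worker_;        // declared last: started after the mutex and cv exist
};

At each step one could submit the computation of one integral to the worker, compute the other integral in the calling thread, and then call wait(). Whether the remaining synchronization overhead is small compared to the expm computations is exactly the open question here.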

@andreadelprete
Owner Author

Maybe just Google it first to see what people say about it. Then it might be worth doing a few tests.

@andreadelprete
Owner Author

I've thought a bit about the computation of these two integrals. Basically we are computing the expm of two matrices that are very similar, because one of them is contained in the other. Since most of the computational effort in expm goes into computing powers of the input matrix, these two computations share a lot of work. Therefore, instead of parallelizing them, it might be more efficient to write a dedicated method that computes both at the same time. If implemented well, this customized computation could be faster than computing the two operations in parallel.
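
As a quick numerical illustration of the overlap (a sketch of mine, not repo code): the augmented matrix used for the double integral is block upper triangular, so every power of it contains the corresponding power of A in its top-left block, and those powers are exactly what dominates the cost of expm.

#include <iostream>
#include <Eigen/Dense>

using namespace Eigen;

int main()
{
    const int n = 6, m = 3;                     // sizes chosen only for illustration
    MatrixXd A = MatrixXd::Random(n, n);        // original matrix
    MatrixXd B = MatrixXd::Random(n, m);        // extra columns of the augmented matrix
    MatrixXd N = MatrixXd::Zero(m, m);          // lower-right block

    MatrixXd C = MatrixXd::Zero(n + m, n + m);  // C = [A B; 0 N], block upper triangular
    C.topLeftCorner(n, n) = A;
    C.topRightCorner(n, m) = B;
    C.bottomRightCorner(m, m) = N;

    // The top-left block of C^3 equals A^3 (the same holds for any power), so the
    // powers computed inside expm of the augmented matrix already contain those of A.
    MatrixXd C3 = C * C * C;
    MatrixXd A3 = A * A * A;
    std::cout << (C3.topLeftCorner(n, n) - A3).norm() << std::endl;  // ~0 up to rounding
}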

Here you can find some notes that I wrote down on the subject:
computing integral of expm.pdf
In the beginning I analyze the computation of the integral of expm, which is needed for the QP-based approach for dealing with friction cones. Then I realized that the same kind of reasoning can be applied to the computation of the two integrals of x.

@andreadelprete
Owner Author

I just found out that computing both the first and the double integral together is extremely simple. In fact, we are already doing it without knowing it! It turns out that the first integral is already inside the matrix exponential that we compute to get the double integral. Here is an example with a 6x6 matrix A, where C is the augmented matrix we use to compute the double integral:

e_TC
 [[4.15 5.29 4.54 4.01 4.82 6.1  3.61 6.6  1.77]
 [2.62 5.17 3.32 3.22 4.16 5.05 2.28 5.21 1.45]
 [3.54 5.72 5.22 4.79 5.27 6.38 3.15 6.78 1.82]
 [2.98 5.63 4.58 5.34 5.37 6.03 3.37 6.61 1.77]
 [1.83 3.85 2.54 3.01 4.25 4.02 1.88 3.64 0.86]
 [2.1  3.54 3.08 2.86 2.96 4.61 2.16 4.2  1.1 ]
 [0.   0.   0.   0.   0.   0.   1.   1.   0.5 ]
 [0.   0.   0.   0.   0.   0.   0.   1.   1.  ]
 [0.   0.   0.   0.   0.   0.   0.   0.   1.  ]]

int_x [6.6  5.21 6.78 6.61 3.64 4.2 ]
e_TA
 [[4.15 5.29 4.54 4.01 4.82 6.1 ]
 [2.62 5.17 3.32 3.22 4.16 5.05]
 [3.54 5.72 5.22 4.79 5.27 6.38]
 [2.98 5.63 4.58 5.34 5.37 6.03]
 [1.83 3.85 2.54 3.01 4.25 4.02]
 [2.1  3.54 3.08 2.86 2.96 4.61]]

The first 6 elements of the last column of e^{T*C} contain the double integral of x. It turns out that the first 6 elements of the second-to-last column contain the first integral of x. Moreover, the upper-left block is the matrix exponential of T*A.
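
To make this concrete, here is a small self-contained sketch using Eigen's unsupported MatrixFunctions module. The augmented-matrix layout below is the standard Van Loan-style construction that is consistent with the printout above; the exact layout built internally by LDSUtility may differ in detail.

#include <iostream>
#include <Eigen/Dense>
#include <unsupported/Eigen/MatrixFunctions>   // provides MatrixBase::exp()

using namespace Eigen;

int main()
{
    const int n = 6;
    const double T = 1.0;                      // horizon (the printout above suggests T = 1)
    MatrixXd A  = MatrixXd::Random(n, n);      // system matrix of dx/dt = A x + b
    VectorXd b  = VectorXd::Random(n);         // constant input
    VectorXd x0 = VectorXd::Random(n);         // initial state

    // Augmented matrix:      [ A  b  x0  0 ]
    //                    C = [ 0  0  1   0 ]
    //                        [ 0  0  0   1 ]
    //                        [ 0  0  0   0 ]
    MatrixXd C = MatrixXd::Zero(n + 3, n + 3);
    C.topLeftCorner(n, n) = A;
    C.col(n).head(n)     = b;
    C.col(n + 1).head(n) = x0;
    C(n, n + 1)     = 1.0;
    C(n + 1, n + 2) = 1.0;

    // A single matrix exponential gives everything:
    MatrixXd eTC = (T * C).exp();
    MatrixXd eTA    = eTC.topLeftCorner(n, n);  // e^{T*A}
    VectorXd int_x  = eTC.col(n + 1).head(n);   // first integral of x over [0, T]
    VectorXd int2_x = eTC.col(n + 2).head(n);   // double integral of x over [0, T]

    std::cout << "int_x  = " << int_x.transpose()  << "\n"
              << "int2_x = " << int2_x.transpose() << std::endl;
}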

@olimexsmart Could you provide us with a method that computes both the first and the double integral based on this new knowledge? Basically you should do exactly the same computation as for the double integral, but also return the second-to-last column, which contains the first integral. Then @hammoudbilal could switch to this new method in consim, which will save us about 50% of the computation time!

@olimexsmart
Collaborator

This is quite unexpected! I should manage to get it done this afternoon; I'm not at home at the moment.

@olimexsmart
Collaborator

olimexsmart commented Jun 2, 2020

#include "LDSUtility.hpp"

// Output vectors for the first and the double integral of x
Matrix<double, M, 1> resultInt;
Matrix<double, M, 1> resultDoubleInt;

// Utility for a linear dynamical system of size M
LDSUtility<double, Dynamic> util(M);

// Computes both integrals over [0, TIMESTEP] for dx/dt = A x + b, x(0) = xInit,
// using a single matrix exponential
util.ComputeIntegrals(A, b, xInit, TIMESTEP, resultInt, resultDoubleInt);

At the moment the function is implemented only in the dynamic version of LDSUtility.
