
Positive Definiteness of the Quadratic Part of Linearized Tracking Error #29

Closed
AmineElhafsi opened this issue Jan 22, 2020 · 4 comments


@AmineElhafsi

Hi,

I'm working on performing the Taylor expansion of the first two terms of equation 13a to get the first two terms of 14a. I'm using an automatic differentiation package to get the c_k and Gamma_k terms in equation 14a; however, I'm finding that the quadratic part is not necessarily positive semi-definite. Did you employ any specific techniques to ensure that the quadratic part was PSD?

Also, a quick question regarding notation: in the first two terms under the summation in equation 14a, should the vector [x_k theta_A,k] be interpreted as [x_k - x_0,k theta_A,k - theta_A0,k], where the 0-subscripted terms represent the points about which we are Taylor expanding? Or am I misunderstanding?

Thanks!

@alexliniger
Owner

I am using the Gauss-Newton Hessian approximation method:

Q_contouring_cost = error_info.d_error.transpose()*ContouringCost*error_info.d_error;

Maybe the approach is easier to understand if you look at the beta cost:
CostMatrix Cost::getBetaCost(const State &x) const

The idea is that for nonlinear least-squares costs, you don't have to compute the Hessian; you can approximate it using the Jacobian. There are two advantages: first, it is computationally cheap; second, it is guaranteed PSD.

14a is correct; the x_0 and theta_0 terms are accounted for in the linear cost as well as in the constant term, which can be neglected in an optimization problem.

Hope that helps,
Alex

@AmineElhafsi
Author

Thanks!

@LoweDavince

Hi @alexliniger
I don't understand: for the SQP approximation, why does the following line subtract the d_contouring_error*stateToVector(x) part?
const double contouring_error_zero = error_info.error(0) - d_contouring_error*stateToVector(x);

Thanks!

@alexliniger
Owner

y = f(x) ≈ f(x0) + df/dx (x - x0) = df/dx x + [f(x0) - df/dx x0]
The contouring error zero term is the term in the [ ] brackets.
