
Fixing LQR stabilization #118

Conversation

@Federico-PizarroBejarano (Contributor)

Fixed a small error in the LQR code and changed the test to be more interesting. Essentially, the problem stems from the fact that we changed what X_EQ is in quadrotor, so I'm unsure whether it would cause errors in other places in the code. In quadrotor, LQR was always stabilizing to (0, 0) instead of the desired goal point, since it expected X_EQ to hold the stabilization goal but X_EQ is now only an array of zeros.

The cartpole code does what I believe was done previously, setting X_EQ to the first point of X_GOAL:

"X_EQ": np.atleast_2d(self.X_GOAL)[0,:].T,

However, in quadrotor this is not done; X_EQ is instead set to an array of zeros.

I think either of these approaches is fine, but we should decide on one, since the inconsistency is causing problems.
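For context, a minimal sketch of the two conventions being compared (the variable names below are illustrative, not the exact attributes in the benchmark code):

```python
import numpy as np

X_GOAL = np.array([1.0, 0.0, 0.0, 0.0])  # e.g. a single stabilization goal

# Cartpole convention: the equilibrium is the first goal point.
X_EQ_cartpole = np.atleast_2d(X_GOAL)[0, :].T

# Quadrotor convention: the equilibrium is always the origin, and the goal
# point has to be handled separately by the controller.
X_EQ_quadrotor = np.zeros_like(X_GOAL)
```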

@Justin-Yuan (Contributor)

@adamhall it seems the MPC controllers also use X_EQ in their linearization, but it is fixed as 0 as defined in the quadrotor task. Can you confirm this is the option we are going with?

@Federico-PizarroBejarano (Contributor, Author)

Decided to make X_EQ all zeros every time.

@Federico-PizarroBejarano (Contributor, Author)

@siqizhou @adamhall I am now always computing the LQR gain about X_EQ and U_EQ, which means there is no need to recalculate the gain for each tracking iteration; it is only calculated once. Somehow this leads to the same behaviour as before (the gain does not change whether we linearize about a goal point or the equilibrium). Not sure why this is the case.
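To make the "compute it once" point concrete, here is a minimal, generic sketch; a toy double integrator stands in for the actual linearization, and the control-law comment mirrors the cartpole convention (both are assumptions, not the repo's compute_lqr_gain code):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Infinite-horizon continuous-time LQR gain K = R^-1 B^T P."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Toy double integrator standing in for the linearization about (X_EQ, U_EQ).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = lqr_gain(A, B, Q=np.eye(2), R=np.eye(1))  # computed once, outside the loop

# Reused at every tracking step (assumed form of the control law):
#   u = U_EQ - K @ (x - X_GOAL[step])
```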

@adamhall (Contributor)

Changes look fine to me but I do have one question:

> @siqizhou @adamhall I am now always computing the LQR gain about X_EQ and U_EQ, which means there is no need to recalculate the gain for each tracking iteration; it is only calculated once. Somehow this leads to the same behaviour as before (the gain does not change whether we linearize about a goal point or the equilibrium). Not sure why this is the case.

Is this true for both stabilization and trajectory tracking, or just stabilization? Previously, it looks like trajectory tracking was linearizing about each point of the trajectory, which should be much better than linearizing just once about U_EQ and X_EQ.

@Federico-PizarroBejarano (Contributor, Author)

@adamhall I thought in our discussion we decided that the linearization should always be about an equilibrium point (i.e. X_EQ). Additionally, I just checked, and the linearization is identical regardless of the equilibrium point passed to compute_lqr_gain, as long as theta and theta_dot are 0 (which they are for the goal points). That is why switching the linearization from X_GOAL[step] to X_EQ changes nothing. Unsure if this is expected or not.

@Federico-PizarroBejarano (Contributor, Author)

Tested everything. Other than the minor LQR stabilization bug, everything else works the same as before but is a lot cleaner and more concise imo.

@adamhall (Contributor)

Sorry! Yes! This makes sense because the linearization is really only a function of theta and the inputs. Since theta is just 0 in our reference, it doesn't make a difference. That checks out. I am satisfied now.
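A quick sanity check of that explanation, using a generic textbook cartpole model (not the repo's symbolic model) and finite-difference Jacobians: two goal points that differ only in the cart position give identical A and B matrices.

```python
import numpy as np

# Generic cartpole parameters: cart mass, pole mass, pole half-length, gravity.
M, m, l, g = 1.0, 0.1, 0.5, 9.8

def f(s, u):
    """Continuous-time cartpole dynamics, state s = [x, x_dot, theta, theta_dot]."""
    x, x_dot, theta, theta_dot = s
    temp = (u + m * l * theta_dot**2 * np.sin(theta)) / (M + m)
    theta_ddot = (g * np.sin(theta) - np.cos(theta) * temp) / \
                 (l * (4.0 / 3.0 - m * np.cos(theta)**2 / (M + m)))
    x_ddot = temp - m * l * theta_ddot * np.cos(theta) / (M + m)
    return np.array([x_dot, x_ddot, theta_dot, theta_ddot])

def jacobians(s, u, eps=1e-6):
    """Finite-difference Jacobians A = df/ds, B = df/du."""
    A = np.column_stack([(f(s + eps * e, u) - f(s - eps * e, u)) / (2 * eps)
                         for e in np.eye(4)])
    B = ((f(s, u + eps) - f(s, u - eps)) / (2 * eps)).reshape(-1, 1)
    return A, B

# Two goal points that differ only in cart position (theta = theta_dot = 0 in both).
A1, B1 = jacobians(np.array([0.0, 0.0, 0.0, 0.0]), 0.0)
A2, B2 = jacobians(np.array([2.0, 0.0, 0.0, 0.0]), 0.0)
print(np.allclose(A1, A2), np.allclose(B1, B2))  # True True
```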

@adamhall (Contributor) left a comment

LGTM. I'm not as familiar with the iLQR stuff.

@Federico-PizarroBejarano merged commit ae51d61 into utiasDSL:dev-experiment-class on Dec 14, 2022.
Labels
bug (Something isn't working)

3 participants