Fixing LQR stabilization #118
Conversation
@adamhall it seems the MPC controllers also use …
Decided to make X_EQ all zeros every time.
@siqizhou @adamhall I am now always computing the LQR gain about X_EQ and U_EQ, which means there is no need to recalculate the gain for each tracking iteration; it only needs to be calculated once. Somehow this leads to the same behaviour as before (the gain does not change whether it is linearized about a goal point or about the equilibrium). Not sure why this is the case.
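For concreteness, computing the gain once about (X_EQ, U_EQ) looks roughly like this. The A, B, Q, R values below are made up for illustration and are not the gym's actual model:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative linearized cart-pole about the upright equilibrium
# (X_EQ = 0, U_EQ = 0); these numbers are made up, not the gym's model.
A = np.array([[0.0, 1.0,  0.0, 0.0],
              [0.0, 0.0, -1.0, 0.0],
              [0.0, 0.0,  0.0, 1.0],
              [0.0, 0.0, 11.0, 0.0]])
B = np.array([[0.0], [1.0], [0.0], [-1.0]])
Q = np.eye(4)
R = np.eye(1)

# Solve the continuous-time algebraic Riccati equation once at (X_EQ, U_EQ);
# the resulting gain K is then reused at every tracking step.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P

# Control law applied each step: u = U_EQ - K @ (x - X_EQ)
```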
Changes look fine to me but I do have one question:
Is this true for both stabilization and trajectory tracking, or just stabilization? Previously, it looks like trajectory tracking was linearizing about each point of the trajectory, which should be much better than linearizing just once about …
@adamhall I thought in our discussion we decided that the linearization should always be about an equilibrium point (i.e. X_EQ). Additionally, I just checked, and the linearization is identical regardless of the equilibrium point passed to …
Tested everything, and other than the minor LQR stabilization bug, everything else works the same as before but is a lot cleaner and more concise imo.
Sorry! Yes! This makes sense, because the linearization is really only a function of theta and the inputs. Since theta is just 0 in our reference, it doesn't make a difference. That checks out. I am satisfied now.
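A quick numerical check of that argument, using a toy cart-pole model (illustrative parameters, not the gym's exact dynamics): the Jacobians come out identical when linearizing at the origin versus at a shifted goal position, because only theta and the input enter them:

```python
import numpy as np

# Toy cart-pole dynamics (illustrative parameters, NOT the gym's exact model):
# state x = [pos, vel, theta, theta_dot], scalar input u = force.
def f(x, u):
    _, v, th, om = x
    g, mc, mp, l = 9.8, 1.0, 0.1, 0.5
    s, c = np.sin(th), np.cos(th)
    tmp = (u + mp * l * om**2 * s) / (mc + mp)
    th_acc = (g * s - c * tmp) / (l * (4.0 / 3.0 - mp * c**2 / (mc + mp)))
    v_acc = tmp - mp * l * th_acc * c / (mc + mp)
    return np.array([v, v_acc, om, th_acc])

def jacobians(x0, u0, eps=1e-6):
    # Central-difference Jacobians A = df/dx, B = df/du at (x0, u0).
    A = np.column_stack([(f(x0 + eps * e, u0) - f(x0 - eps * e, u0)) / (2 * eps)
                         for e in np.eye(4)])
    B = ((f(x0, u0 + eps) - f(x0, u0 - eps)) / (2 * eps)).reshape(4, 1)
    return A, B

# Linearize at the origin and at a shifted cart position, both with theta = 0:
A1, B1 = jacobians(np.zeros(4), 0.0)
A2, B2 = jacobians(np.array([2.0, 0.0, 0.0, 0.0]), 0.0)
# The Jacobians match: the cart position never enters the dynamics,
# only theta and the input do.
```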
LGTM. I'm not as familiar with the iLQR stuff.
Fixed a small error in the LQR code and changed the test to be more interesting. Essentially, the problem stems from the fact that we changed what X_EQ is in quadrotor, so I'm unsure if it would cause errors in other places in the code. In quadrotor, the controller was always stabilizing to (0, 0) instead of the desired goal point, since it expected X_EQ to hold the stabilization goal, but X_EQ is now only an array of zeros.
The code in cartpole does what I believe used to be done, setting X_EQ to the first X_GOAL:
safe-control-gym/safe_control_gym/envs/gym_control/cartpole.py
Line 423 in bfe74b3
However, in quadrotor, this is not done (it is set to 0s instead):
safe-control-gym/safe_control_gym/envs/gym_pybullet_drones/quadrotor.py
Line 562 in bfe74b3
I think either approach is fine, but we should pick one, since the inconsistency is causing problems.
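To illustrate why it matters, here's a minimal double-integrator sketch (made-up gain and dynamics, purely for illustration, not the gym's code): subtracting an all-zeros X_EQ instead of the goal drives the state to the origin rather than to X_GOAL:

```python
import numpy as np

# Double integrator with a made-up stabilizing gain, purely to
# illustrate the reference-point bug.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[2.0, 3.0]])  # places the closed-loop poles at -1 and -2

def simulate(x_ref, steps=2000, dt=0.01):
    # Forward-Euler rollout of x' = A x + B u with u = -K (x - x_ref).
    x = np.zeros(2)
    for _ in range(steps):
        u = -K @ (x - x_ref)
        x = x + dt * (A @ x + B @ u).ravel()
    return x

X_GOAL = np.array([1.0, 0.0])
x_correct = simulate(X_GOAL)     # reference = the goal: settles at X_GOAL
x_buggy = simulate(np.zeros(2))  # reference = all-zeros X_EQ: settles at 0
```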