chitraz/Optimal_Control_via_Dynamic_Programing

Optimal Control via Dynamic Programming

Goal: Solve

						minimize   J(x,u,t) = g(x(0), x(T)) + ∫_0^T f(x,u,t) dt
						s.t.       dx/dt = h(x,u,t)      (dynamics)
						           x_min <= x <= x_max   (bounded state values)
						           u_min <= u <= u_max   (bounded control values)

where x is the state variable and u is the control variable.

Problem formulation

Results

About

C implementation solving the recursive Bellman equation for constrained non-linear optimal control problems with a fixed horizon.
