
Commit

Merge pull request #372 from do-mpc/develop-book-theme-docs
Update docs: New book-themes style
4flixt committed Apr 27, 2023
2 parents 6f800b1 + df544c7 commit 9cfd823
Showing 20 changed files with 482 additions and 306 deletions.
1 change: 1 addition & 0 deletions .conda.yml
@@ -18,6 +18,7 @@ dependencies:
- docutils
- sphinx-copybutton
- sphinx-autodoc-typehints
- sphinx-book-theme
- graphviz
- myst-parser
- packaging
8 changes: 4 additions & 4 deletions do_mpc/controller/_controllersettings.py
@@ -48,9 +48,9 @@ class LQRSettings(ControllerSettings):
Example to change settings:

::

    lqr.settings.n_horizon = 20
Note:
Settings cannot be updated after calling :py:meth:`do_mpc.controller.LQR.setup`.
@@ -67,9 +67,9 @@ class MPCSettings(ControllerSettings):
Example to change settings:

::

    mpc.settings.n_horizon = 20
Note:
Settings cannot be updated after calling :py:meth:`do_mpc.controller.MPC.setup`.
138 changes: 66 additions & 72 deletions do_mpc/controller/_lqr.py
@@ -37,9 +37,9 @@ class LQR(IteratedVariables):
Two types of LQR can be designed:

1. **Finite Horizon** LQR by choosing, e.g. ``n_horizon = 20``.
2. **Infinite Horizon** LQR by choosing ``n_horizon = None``.
The value for ``n_horizon`` is set using :py:meth:`set_param`.
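For illustration, both variants in a minimal sketch (assuming ``lqr`` is an already constructed :py:class:`LQR` instance):

::

    lqr.set_param(n_horizon = 20)    # finite horizon LQR
    lqr.set_param(n_horizon = None)  # infinite horizon LQR (default)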
@@ -57,17 +57,17 @@ class LQR(IteratedVariables):
1. **Standard** mode:

   - Set set-point with :py:meth:`set_setpoint` (default is ``0``).
   - Set ``Q`` and ``R`` values with :py:meth:`set_objective`.

2. **Input Rate Penalization** mode:

   - Setpoint can also be set using :py:meth:`set_setpoint` (default is ``0``).
   - Reformulate objective with :py:meth:`set_rterm` to penalize the input rate by setting the value ``delR``.
   - Set ``Q`` and ``R`` values with :py:meth:`set_objective`.
Note:
Using :py:meth:`set_rterm` is not recommended if the model is converted from a DAE to an ODE system.
@@ -114,22 +114,22 @@ def discrete_gain(self,A:np.ndarray,B:np.ndarray)->np.ndarray:
For finite horizon :py:class:`LQR`, the problem formulation is as follows:

.. math::

    \\pi(N) &= P_f\\\\
    K(k) & = -(B'\\pi(k+1)B+R)^{-1}B'\\pi(k+1)A\\\\
    \\pi(k) & = Q+A'\\pi(k+1)A-A'\\pi(k+1)B(B'\\pi(k+1)B+R)^{-1}B'\\pi(k+1)A

For infinite horizon :py:class:`LQR`, the problem formulation is as follows:

.. math::

    K & = -(B'PB+R)^{-1}B'PA\\\\
    P & = Q+A'PA-A'PB(R+B'PB)^{-1}B'PA

For example:

::

    K = lqr.discrete_gain(A,B)
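For reference, the finite-horizon recursion above can be reproduced with plain NumPy. This is a sketch of the documented equations, not the do-mpc implementation; ``A``, ``B``, ``Q``, ``R``, ``P_f`` and ``n_horizon`` are assumed to be given:

::

    import numpy as np

    def finite_horizon_gain(A, B, Q, R, P_f, n_horizon):
        # backward Riccati recursion, starting from pi(N) = P_f
        pi = P_f
        for _ in range(n_horizon):
            # K(k) = -(B' pi(k+1) B + R)^{-1} B' pi(k+1) A
            K = -np.linalg.solve(B.T @ pi @ B + R, B.T @ pi @ A)
            # pi(k) = Q + A' pi(k+1) A - A' pi(k+1) B (B' pi(k+1) B + R)^{-1} B' pi(k+1) A
            pi = Q + A.T @ pi @ A + A.T @ pi @ B @ K
        return K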
Args:
A : State matrix - constant matrix with no variables
@@ -170,15 +170,16 @@ def set_rterm(self,delR:np.ndarray)->None:
The input rate penalization formulation is given as:

.. math::

    \\begin{aligned}
    x(k+1) &= \\tilde{A} x(k) + \\tilde{B}\\Delta u(k)\\\\
    \\text{where} \\quad
    \\tilde{A} &= \\begin{bmatrix}
    A & B \\\\
    0 & I \\end{bmatrix},\\quad
    \\tilde{B} = \\begin{bmatrix} B \\\\
    I \\end{bmatrix}
    \\end{aligned}
We introduce new states of this system as :math:`\\tilde{x} = [x,u]`
where :math:`x` and :math:`u` are the original states and input of the system.
@@ -188,14 +189,13 @@ def set_rterm(self,delR:np.ndarray)->None:
As the system state matrix and input matrix are altered,
cost matrices are also modified accordingly:
.. math::

    \\tilde{Q} = \\begin{bmatrix}
    Q & 0 \\\\
    0 & R \\end{bmatrix},\\quad
    \\tilde{R} = \\Delta R
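As an aside, the augmented matrices above can be formed directly with NumPy. This is a sketch only, not the internal do-mpc code, and assumes ``A``, ``B``, ``Q``, ``R`` and ``delR`` are already defined:

::

    import numpy as np

    n_x, n_u = B.shape
    A_tilde = np.block([[A, B], [np.zeros((n_u, n_x)), np.eye(n_u)]])
    B_tilde = np.vstack([B, np.eye(n_u)])
    Q_tilde = np.block([[Q, np.zeros((n_x, n_u))], [np.zeros((n_u, n_x)), R]])
    R_tilde = delR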
Args:
delR : Rated input cost matrix - constant matrix with no variables
"""

#Modifying A and B matrix for input rate penalization
@@ -213,9 +213,12 @@ def set_param(self,**kwargs)->None:
Two different kinds of LQR can be designed. To design a finite horizon LQR, ``n_horizon`` should be set to a finite value; to design an infinite horizon LQR, ``n_horizon``
should be set to ``None`` (default value).
Warnings:
    This method will be deprecated in a future version. Please set parameters via :py:class:`do_mpc.controller.LQRSettings`.
Note:
A comprehensive list of all available parameters can be found in :py:class:`do_mpc.controller.LQRSettings`.
@@ -233,17 +236,6 @@ def set_param(self,**kwargs)->None:
lqr.set_param(n_horizon = 20)
It is also possible and convenient to pass a dictionary with multiple parameters simultaneously as shown in the following example:

::

    setup_lqr = {
        'n_horizon': 20,
        't_step': 0.5,
    }
    lqr.set_param(**setup_lqr)

This makes use of the python "unpack" operator. See `more details here`_.
.. _`more details here`: https://codeyarns.github.io/tech/2012-04-25-unpack-operator-in-python.html
@@ -327,35 +319,37 @@ def set_objective(self, Q:np.ndarray, R:np.ndarray, P:np.ndarray = None)->None:
**Finite Horizon**:

For **set-point tracking** mode:

.. math::

    \\begin{aligned}
    J &= \\frac{1}{2}\\sum_{k=0}^{N-1} (x_k - x_{ss})^T Q(x_k-x_{ss})+(u_k-u_{ss})^T R(u_k-u_{ss})\\\\
    &+ (x_N-x_{ss})^T P(x_N-x_{ss})
    \\end{aligned}

For **Input Rate Penalization** mode:

.. math::

    \\begin{aligned}
    J &= \\frac{1}{2}\\sum_{k=0}^{N-1} (\\tilde{x}_k - \\tilde{x}_{ss})^T \\tilde{Q}(\\tilde{x}_k-\\tilde{x}_{ss})+\\Delta u_k^T \\Delta R \\Delta u_k\\\\
    &+ (\\tilde{x}_N-\\tilde{x}_{ss})^T P(\\tilde{x}_N-\\tilde{x}_{ss})
    \\end{aligned}

**Infinite Horizon**:

For **set-point tracking** mode:

.. math::

    J = \\frac{1}{2}\\sum_{k=0}^{\\infty} (x_k - x_{ss})^T Q(x_k-x_{ss})+(u_k-u_{ss})^T R(u_k-u_{ss})

For **Input Rate Penalization** mode:

.. math::

    J = \\frac{1}{2}\\sum_{k=0}^{\\infty} (\\tilde{x}_k - \\tilde{x}_{ss})^T \\tilde{Q}(\\tilde{x}_k-\\tilde{x}_{ss})+ \\Delta u_k^T \\Delta R \\Delta u_k

where :math:`\\tilde{x} = [x,u]^T`.
Note:
For the problem to be solved in ``inputRatePenalization`` mode, ``Q``, ``R`` and ``delR`` should be set.
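A short usage sketch with hypothetical weights for a system with two states and one input:

::

    import numpy as np

    Q = np.diag([10.0, 10.0])   # state weights
    R = np.diag([1.0])          # input weights
    P = np.diag([5.0, 5.0])     # terminal weights (optional, finite horizon)
    lqr.set_objective(Q = Q, R = R, P = P)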
@@ -420,10 +414,10 @@ def set_setpoint(self,xss :np.ndarray= None, uss:np.ndarray = None)->None:
For example:

::

    # For ODE models
    lqr.set_setpoint(xss = np.array([[10],[15]]), uss = np.array([[2],[3]]))
Args:
xss : set point for states of the system (optional)
51 changes: 7 additions & 44 deletions do_mpc/controller/_mpc.py
@@ -36,9 +36,7 @@
class MPC(do_mpc.optimizer.Optimizer, do_mpc.model.IteratedVariables):
"""Model predictive controller.
For general information on model predictive control, please read our `background article <../theory_mpc.html>`_.
The MPC controller extends the :py:class:`do_mpc.optimizer.Optimizer` base class
(which is also used for the :py:class:`do_mpc.estimator.MHE` estimator).
@@ -447,45 +445,17 @@ def set_param(self, **kwargs)->None:
mpc.set_param(n_horizon = 20)
It is also possible and convenient to pass a dictionary with multiple parameters simultaneously as shown in the following example:

::

    setup_mpc = {
        'n_horizon': 20,
        't_step': 0.5,
    }
    mpc.set_param(**setup_mpc)

This method makes use of the python "unpack" operator. See `more details here`_.

.. _`more details here`: https://codeyarns.github.io/tech/2012-04-25-unpack-operator-in-python.html
Note:
The only required parameters are ``n_horizon`` and ``t_step``. All other parameters are optional.
Note:
:py:func:`set_param` can be called multiple times. Previously passed arguments are overwritten by successive calls.
This only works prior to calling :py:func:`setup`.
Note:
We highly suggest changing the linear solver for IPOPT from `mumps` to `MA27`. In many cases this will drastically boost the speed of **do-mpc**.
Any available linear solver can be set using :py:meth:`do_mpc.controller.MPCSettings.set_linear_solver`.
For more details, please check :py:class:`do_mpc.controller.MPCSettings`.
Note:
The output of IPOPT can be suppressed with :py:meth:`do_mpc.controller.MPCSettings.supress_ipopt_output`.
For more details, please check :py:class:`do_mpc.controller.MPCSettings`.
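A minimal sketch of the settings-based alternative (the ``solver_name`` keyword is an assumption; the settings object is reached via ``mpc.settings``):

::

    mpc.settings.set_linear_solver(solver_name = 'MA27')  # solver_name keyword assumed
    mpc.settings.supress_ipopt_output()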
"""
@@ -618,9 +588,7 @@ def get_p_template(self, n_combinations:int)->None:
Low level API method to set user defined scenarios for robust multi-stage MPC by defining an arbitrary number
of combinations for the parameters defined in the model.
For more details on robust multi-stage MPC please read our `background article <../theory_mpc.html#robust-multi-stage-nmpc>`_.
The method returns a structured object which is
initialized with all zeros.
@@ -676,9 +644,7 @@ def set_p_fun(self, p_fun:Callable[[float],Union[castools.structure3.SXStruct,ca
This is the low-level API method to set user defined scenarios for robust multi-stage MPC by defining an arbitrary number
of combinations for the parameters defined in the model.
For more details on robust multi-stage MPC please read our `background article <../theory_mpc.html#robust-multi-stage-nmpc>`_.
The method takes as input a function, which MUST
return a structured object, based on the defined parameters and the number of combinations.
@@ -719,7 +685,7 @@ def p_fun(t_now):
which is determined by the order in the arrays above (first element is nominal).
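For illustration, a minimal sketch of the expected call pattern (the number of combinations and the way the template is filled are assumptions; the returned structure must be the one obtained from :py:func:`get_p_template`):

::

    n_combinations = 3
    p_template = mpc.get_p_template(n_combinations)
    # ... fill p_template with one parameter combination per scenario ...

    def p_fun(t_now):
        # must return the structure obtained from get_p_template
        return p_template

    mpc.set_p_fun(p_fun)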
Args:
p_fun: Function which returns a structure with numerical values. Must be the same structure as obtained from :py:func:`get_p_template`. Function must have a single input (time).
"""
assert self.get_p_template(self.n_combinations).labels() == p_fun(0).labels(), 'Incorrect output of p_fun. Use get_p_template to obtain the required structure.'
self.flags['set_p_fun'] = True
@@ -728,9 +694,7 @@ def set_uncertainty_values(self, **kwargs)->None:
def set_uncertainty_values(self, **kwargs)->None:
"""Define scenarios for the uncertain parameters.
High-level API method to conveniently set all possible scenarios for multistage MPC.
For more details on robust multi-stage MPC please read our `background article <../theory_mpc.html#robust-multi-stage-nmpc>`_.
Pass a number of keyword arguments, where each keyword refers to a user defined parameter name from the model definition.
The value for each parameter must be an array (or list), with an arbitrary number of possible values for this parameter.
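For illustration, a sketch with two hypothetical model parameters ``alpha`` and ``beta`` (the keyword names must match parameters defined in the model; ``numpy`` is assumed to be imported as ``np``):

::

    mpc.set_uncertainty_values(
        alpha = np.array([1.0, 0.9, 1.1]),
        beta  = np.array([1.0, 1.05])
    )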
@@ -857,9 +821,8 @@ def setup(self)->None:
Note that especially for robust multi-stage MPC with a long robust horizon and many
possible combinations of the uncertain parameters, very large problems will arise.
For more details on robust multi-stage MPC please read our `background article <../theory_mpc.html#robust-multi-stage-nmpc>`_.
"""
self.prepare_nlp()
self.create_nlp()
