
DOC: add a tutorial of scipy.optimize.linprog #11847

Merged Apr 20, 2020 (40 commits, all by AtsushiSakai).

Commits:
- b5bc96b (Apr 11, 2020): Doc: add a tutorial of scipy.optimize.linprog
- d08e002 (Apr 12, 2020): improve example based on review.
- 351c5e1 (Apr 14, 2020): Merge branch 'master' of github.com:scipy/scipy into issue_10358_1
- 911c8e0 (Apr 14, 2020): fixed minor issues based on review.
- 0b7c315 (Apr 14, 2020): fixed issues based on review.
- 7fa2581 (Apr 14, 2020): fixed minor issues.
- 4744b2d (Apr 15, 2020): [WIP] fix issues based on review.
- 44a767d (Apr 15, 2020): [WIP] fix issues based on review.
- 1e13abb (Apr 16, 2020): fix issues based on review.
- ae2b01d (Apr 16, 2020): fix issues based on review.
- 485efeb (Apr 16, 2020): fix issues based on review.
- e74e31e (Apr 17, 2020): Update doc/source/tutorial/optimize.rst
- 70597f6 (Apr 17, 2020): Update doc/source/tutorial/optimize.rst
- a4f7654 (Apr 17, 2020): Update doc/source/tutorial/optimize.rst
- 5a21369 (Apr 17, 2020): Update doc/source/tutorial/optimize.rst
- 2edff81 (Apr 17, 2020): Update doc/source/tutorial/optimize.rst
- d80d99d (Apr 17, 2020): Update doc/source/tutorial/optimize.rst
- f005dc2 (Apr 17, 2020): Update doc/source/tutorial/optimize.rst
- 46380d0 (Apr 17, 2020): Update doc/source/tutorial/optimize.rst
- a384480 (Apr 17, 2020): Update doc/source/tutorial/optimize.rst
- 2e55169 (Apr 17, 2020): Update doc/source/tutorial/optimize.rst
- 237523a (Apr 17, 2020): Update doc/source/tutorial/optimize.rst
- 740ecf9 (Apr 17, 2020): [WIP] fix tutorial based on review.
- 697e970 (Apr 17, 2020): Update doc/source/tutorial/optimize.rst
- c8ac02a (Apr 17, 2020): Update doc/source/tutorial/optimize.rst
- efa7b68 (Apr 17, 2020): [WIP] fix tutorial based on review.
- efe89c7 (Apr 17, 2020): Merge branch 'issue_10358_1' of https://github.com/AtsushiSakai/scipy…
- 388e1df (Apr 17, 2020): [WIP] fix tutorial based on review.
- dca299b (Apr 18, 2020): Update doc/source/tutorial/optimize.rst
- ae0c6bb (Apr 18, 2020): update docstring base on review.
- a7ba283 (Apr 19, 2020): Update doc/source/tutorial/optimize.rst
- 28a252f (Apr 19, 2020): fix constraint output.
- 4dc20ff (Apr 19, 2020): Update doc/source/tutorial/optimize.rst
- 6303c52 (Apr 19, 2020): Update doc/source/tutorial/optimize.rst
- b2244b7 (Apr 19, 2020): Update doc/source/tutorial/optimize.rst
- 40e89c9 (Apr 19, 2020): Update doc/source/tutorial/optimize.rst
- b77a274 (Apr 19, 2020): Update doc/source/tutorial/optimize.rst
- c531c1c (Apr 19, 2020): Update doc/source/tutorial/optimize.rst
- 849ba17 (Apr 19, 2020): Update doc/source/tutorial/optimize.rst
- 4886777 (Apr 19, 2020): Update doc/source/tutorial/optimize.rst
237 changes: 216 additions & 21 deletions doc/source/tutorial/optimize.rst
@@ -9,31 +9,12 @@ Optimization (:mod:`scipy.optimize`)

.. currentmodule:: scipy.optimize

.. contents::

The :mod:`scipy.optimize` package provides several commonly used
optimization algorithms. A detailed listing is available:
:mod:`scipy.optimize` (can also be found by ``help(scipy.optimize)``).

The module contains:

1. Unconstrained and constrained minimization of multivariate scalar
functions (:func:`minimize`) using a variety of algorithms (e.g., BFGS,
Nelder-Mead simplex, Newton Conjugate Gradient, COBYLA or SLSQP).

2. Global optimization routines (e.g., :func:`basinhopping`,
:func:`differential_evolution`, :func:`shgo`, :func:`dual_annealing`).

3. Least-squares minimization (:func:`least_squares`) and curve fitting
(:func:`curve_fit`) algorithms.

4. Scalar univariate functions minimizers (:func:`minimize_scalar`) and
root finders (:func:`root_scalar`).

5. Multivariate equation system solvers (:func:`root`) using a variety of
algorithms (e.g., hybrid Powell, Levenberg-Marquardt or large-scale
methods such as Newton-Krylov [KK]_).

Below, several examples demonstrate their basic usage.


Unconstrained minimization of multivariate scalar functions (:func:`minimize`)
------------------------------------------------------------------------------
@@ -1347,6 +1328,220 @@ Preconditioning is an art, science, and industry. Here, we were lucky
in making a simple choice that worked reasonably well, but there is a
lot more depth to this topic than is shown here.

Linear programming (:func:`linprog`)
------------------------------------

The function :func:`linprog` can minimize a linear objective function
subject to linear equality and inequality constraints; this kind of
problem is known as linear programming. :func:`linprog` solves
problems of the following form:

.. math::

\min_x \ & c^T x \\
\mbox{such that} \ & A_{ub} x \leq b_{ub},\\
& A_{eq} x = b_{eq},\\
& l \leq x \leq u ,

where :math:`x` is a vector of decision variables; :math:`c`, :math:`b_{ub}`,
:math:`b_{eq}`, :math:`l`, and :math:`u` are vectors; and :math:`A_{ub}` and
:math:`A_{eq}` are matrices.

In this tutorial, we will try to solve a typical linear programming
problem using :func:`linprog`.
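
Before working through the full example, here is a minimal sketch of how the symbols above map onto the arguments of :func:`linprog`. The toy problem below (minimize :math:`x_0 + 2x_1` subject to :math:`x_0 + x_1 \geq 1`) is our own illustration, not part of the main example; note that the "greater than" constraint must first be negated into "less than" form:

::

>>> from scipy.optimize import linprog
>>> # minimize c^T x = x0 + 2*x1, subject to x0 + x1 >= 1
>>> # (rewritten as -x0 - x1 <= -1), with the default bounds 0 <= x0, x1
>>> res = linprog(c=[1, 2], A_ub=[[-1, -1]], b_ub=[-1])
>>> res.x
array([1., 0.]) # may vary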

Linear programming example
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Consider the following simple linear programming problem:

.. math::
\max_{x_1, x_2, x_3, x_4} \ & 29x_1 + 45x_2 \\
\mbox{such that} \
& x_1 -x_2 -3x_3 \leq 5\\
& 2x_1 -3x_2 -7x_3 + 3x_4 \geq 10\\
& 2x_1 + 8x_2 + x_3 = 60\\
& 4x_1 + 4x_2 + x_4 = 60\\
& 0 \leq x_1\\
& 0 \leq x_2 \leq 5\\
& x_3 \leq 0.5\\
& -3 \leq x_4\\

We need some mathematical manipulations to convert this problem to the form accepted by :func:`linprog`.

First, consider the objective function.
We want to maximize the objective
function, but :func:`linprog` only accepts minimization problems. This is easily remedied by converting the maximization
of :math:`29x_1 + 45x_2` into the minimization of :math:`-29x_1 - 45x_2`. Also, :math:`x_3` and :math:`x_4` do not appear in the objective
function, which means their weights are zero. So, the objective function can be
written as:

.. math::
\min_{x_1, x_2, x_3, x_4} \ -29x_1 -45x_2 + 0x_3 + 0x_4

If we define the vector of decision variables :math:`x = [x_1, x_2, x_3, x_4]^T`, then the objective weight vector :math:`c` for :func:`linprog` in this problem
is

.. math::
c = [-29, -45, 0, 0]^T

Next, let's consider the two inequality constraints. The first one is a "less than" inequality, so it is already in the form accepted by :func:`linprog`.
The second one is a "greater than" inequality, so we need to multiply both sides by :math:`-1` to convert it to a "less than" inequality.
Explicitly showing zero coefficients, we have:

.. math::
x_1 -x_2 -3x_3 + 0x_4 &\leq 5\\
-2x_1 + 3x_2 + 7x_3 - 3x_4 &\leq -10\\

These equations can be converted to matrix form:

.. math::
A_{ub} x \leq b_{ub}\\

where

.. math::
:nowrap:

\begin{equation*} A_{ub} =
\begin{bmatrix} 1 & -1 & -3 & 0 \\
-2 & 3 & 7 & -3
\end{bmatrix}
\end{equation*}

.. math::
:nowrap:

\begin{equation*} b_{ub} =
\begin{bmatrix} 5 \\
-10
\end{bmatrix}
\end{equation*}

Next, let's consider the two equality constraints. Showing zero weights explicitly, these are:

.. math::
2x_1 + 8x_2 + 1x_3 + 0x_4 &= 60\\
4x_1 + 4x_2 + 0x_3 + 1x_4 &= 60\\

These equations can be converted to matrix form:

.. math::
A_{eq} x = b_{eq}\\

where

.. math::
:nowrap:

\begin{equation*} A_{eq} =
\begin{bmatrix} 2 & 8 & 1 & 0 \\
4 & 4 & 0 & 1
\end{bmatrix}
\end{equation*}

.. math::
:nowrap:

\begin{equation*} b_{eq} =
\begin{bmatrix} 60 \\
60
\end{bmatrix}
\end{equation*}

Lastly, let's consider the separate inequality constraints on individual decision variables, which are known as
"box constraints" or "simple bounds". These constraints can be applied using the ``bounds`` argument of :func:`linprog`.
As noted in the :func:`linprog` documentation, the default value of ``bounds`` is ``(0, None)``, meaning that the
lower bound on each decision variable is 0, and the upper bound on each decision variable is infinity:
all the decision variables are non-negative. Our bounds are different, so we will need to specify the lower and upper bound on each
decision variable as a tuple and group these tuples into a list.
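
For illustration, here is a small sketch of the accepted forms, as described in the :func:`linprog` documentation (the variable names are ours):

::

>>> n = 4
>>> bounds_default = [(0, None)] * n  # what linprog assumes when bounds is omitted
>>> bounds_shared = (0, None)  # a single (min, max) tuple applies to every variable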


Finally, we can solve the transformed problem using :func:`linprog`.

::

>>> import numpy as np
>>> from scipy.optimize import linprog
>>> c = np.array([-29.0, -45.0, 0.0, 0.0])
>>> A_ub = np.array([[1.0, -1.0, -3.0, 0.0],
... [-2.0, 3.0, 7.0, -3.0]])
>>> b_ub = np.array([5.0, -10.0])
>>> A_eq = np.array([[2.0, 8.0, 1.0, 0.0],
... [4.0, 4.0, 0.0, 1.0]])
>>> b_eq = np.array([60.0, 60.0])
>>> x1_bounds = (0, None)
>>> x2_bounds = (0, 5.0)
>>> x3_bounds = (-np.inf, 0.5)  # +/- np.inf can be used instead of None
>>> x4_bounds = (-3.0, None)
>>> bounds = [x1_bounds, x2_bounds, x3_bounds, x4_bounds]
>>> result = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
>>> print(result)
con: array([15.5361242 , 16.61288005]) # may vary
fun: -370.2321976308326 # may vary
message: 'The algorithm terminated successfully and determined that the problem is infeasible.'
nit: 6 # may vary
slack: array([ 0.79314989, -1.76308532]) # may vary
status: 2
success: False
x: array([ 6.60059391, 3.97366609, -0.52664076, 1.09007993]) # may vary

The result states that our problem is infeasible, meaning that there is no solution vector that satisfies all the
constraints. That doesn't necessarily mean we did anything wrong; some problems truly are infeasible.
Suppose, however, that we were to decide that our bound constraint on :math:`x_2` was too tight and that it could be loosened
to :math:`0 \leq x_2 \leq 6`. After adjusting our code with ``x2_bounds = (0, 6)`` to reflect the change and executing it again:

::

>>> x2_bounds = (0, 6)
>>> bounds = [x1_bounds, x2_bounds, x3_bounds, x4_bounds]
>>> result = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
>>> print(result)
con: array([9.78840831e-09, 1.04662945e-08]) # may vary
fun: -505.97435889013434 # may vary
message: 'Optimization terminated successfully.'
nit: 4 # may vary
slack: array([ 6.52747190e-10, -2.26730279e-09]) # may vary
status: 0
success: True
x: array([ 9.41025641, 5.17948718, -0.25641026, 1.64102564]) # may vary

The result shows the optimization was successful.
We can check that the objective value (``result.fun``) is the same as :math:`c^T x`:

::

>>> x = np.array(result.x)
>>> print(c @ x)
-505.97435889013434 # may vary
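
When comparing floating-point results like these, a tolerance-aware check is more robust than exact equality; a small sketch using ``np.isclose``:

::

>>> print(np.isclose(result.fun, c @ x))
True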

We can also check that all constraints are satisfied within reasonable tolerances:

::

>>> print(b_ub - (A_ub @ x).flatten())  # this is equivalent to result.slack
[ 6.52747190e-10 -2.26730279e-09] # may vary
>>> print(b_eq - (A_eq @ x).flatten())  # this is equivalent to result.con
[9.78840831e-09 1.04662945e-08] # may vary
>>> print([0 <= result.x[0], 0 <= result.x[1] <= 6.0, result.x[2] <= 0.5, -3.0 <= result.x[3]])
[True, True, True, True]
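
In a script, rather than reading the printed message, we would check the result object before using ``result.x``; the ``result.success`` flag and the ``result.status`` code (``2`` indicated infeasibility in the first run above) exist for exactly this purpose. A minimal sketch:

::

>>> if result.success:
...     print("optimal value:", result.fun)
... else:
...     print("failed with status code:", result.status)
optimal value: -505.97435889013434 # may vary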

If we need greater accuracy, typically at the expense of speed, we can solve the problem using the ``revised simplex`` method:

::

>>> result = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method='revised simplex')
>>> print(result)
con: array([0.00000000e+00, 7.10542736e-15]) # may vary
fun: -505.97435897435895 # may vary
message: 'Optimization terminated successfully.'
nit: 5 # may vary
slack: array([ 1.77635684e-15, -3.55271368e-15]) # may vary
status: 0
success: True
x: array([ 9.41025641, 5.17948718, -0.25641026, 1.64102564]) # may vary
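
A simplex solution lies exactly on a vertex of the feasible region, so recomputing the objective from ``result.x`` should reproduce ``result.fun`` up to floating-point roundoff (a quick check):

::

>>> print(c @ result.x)
-505.97435897435895 # may vary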


.. rubric:: References

Some further reading and related software, such as Newton-Krylov [KK]_,