Commit
[docs] why RO, and more.
1ozturkbe committed Jun 25, 2019
1 parent 0861a48 commit 6688d7a
Showing 8 changed files with 109 additions and 29 deletions.
4 changes: 3 additions & 1 deletion docs/source/goal.rst
@@ -1,2 +1,4 @@
Goal programming
****************

Work in progress...
2 changes: 1 addition & 1 deletion docs/source/index.rst
@@ -26,8 +26,8 @@ Table of contents:
robust101
installation
whyro
math
methods
goal
references

4 changes: 1 addition & 3 deletions docs/source/installation.rst
@@ -9,15 +9,13 @@ To be able to use **robust**, you will need the following software installed on
- `GPkit`_
- `numpy`_
- `scipy`_

.. _Python 2.7 or higher: https://www.python.org/downloads/
.. _GPkit: http://gpkit.readthedocs.io/en/latest/installation.html
.. _numpy: https://docs.scipy.org/doc/numpy/user/index.html
.. _scipy: https://www.scipy.org/install.html

Please click on each link to see installation instructions.

Clone + install **robust**
--------------------------
15 changes: 13 additions & 2 deletions docs/source/math.rst
@@ -1,2 +1,13 @@
Mathematical moves for robust GPs/SPs
*************************************

Five mathematical steps allow principles from linear robust
optimization to be applied to geometric and signomial programming:

- Linear programs (LPs) have tractable robust counterparts.
- Two-term posynomials are LP-approximable.
- All posynomials are LP-approximable.
- GPs have robust formulations.
- Robust SPs (RSPs) can be represented as sequences of robust GPs (RGPs).
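
The first step above can be sketched numerically: for a linear constraint whose
coefficients lie in a box uncertainty set (with nonnegative variables), the worst
case over the whole set collapses to a single deterministic inequality. A minimal
sketch with illustrative numbers; nothing below is taken from the **robust** package:

```python
# A linear constraint a^T x <= b, with each coefficient a_i uncertain in
# the box [abar_i - delta_i, abar_i + delta_i]. For x >= 0 the worst case
# is attained at a = abar + delta, so the robust counterpart is a single
# deterministic inequality. All numbers are illustrative.
import itertools

abar = [1.0, 2.0]    # nominal coefficients (assumed values)
delta = [0.2, 0.1]   # uncertainty half-widths (assumed values)
b = 4.0

def robust_feasible(x):
    """Tractable robust counterpart: one deterministic inequality."""
    return sum((a + d) * xi for a, d, xi in zip(abar, delta, x)) <= b

def worst_case_feasible(x):
    """Brute force: check every corner of the uncertainty box."""
    corners = itertools.product(*[(a - d, a + d) for a, d in zip(abar, delta)])
    return all(sum(ai * xi for ai, xi in zip(corner, x)) <= b
               for corner in corners)

# The two checks agree for any x >= 0.
assert robust_feasible([1.0, 1.2]) == worst_case_feasible([1.0, 1.2])  # both True
assert robust_feasible([2.0, 1.0]) == worst_case_feasible([2.0, 1.0])  # both False
```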

Work in progress...
28 changes: 27 additions & 1 deletion docs/source/methods.rst
@@ -1,2 +1,28 @@
Robustification methods
***********************

Within **robust**, there are three tractable approximate robust formulations for
GPs and SPs. The methods are outlined at a high level below, in decreasing order of
conservativeness; please see [Saab, 2018] for further details.

Simple Conservative Approximation
---------------------------------

The simple conservative approximation maximizes each monomial term separately.
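
In notation assumed here (not taken from the package), the uncertain posynomial
constraint is replaced term by term:

.. math::

   \sum_i M_i(x, u) \leq 1
   \quad \Longrightarrow \quad
   \sum_i \max_{u \in \mathcal{U}} M_i(x, u) \leq 1

Each maximization of a monomial over the uncertainty set :math:`\mathcal{U}` is
tractable, but the result is conservative because the worst-case :math:`u` may
differ from term to term.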

Linearized Perturbations
------------------------

The Linearized Perturbations formulation separates large posynomials
into decoupled posynomials according to the dependencies among their monomial terms.
It then robustifies these smaller posynomials using robust linear programming techniques.

Best Pairs
----------

The Best Pairs methodology separates large posynomials into decoupled
posynomials, just like Linearized Perturbations. However, it then solves an
inner-loop problem to find the least conservative combination of monomial pairs.


Work in progress...
4 changes: 3 additions & 1 deletion docs/source/references.rst
@@ -1,2 +1,4 @@
References
**********

Work in progress...
15 changes: 12 additions & 3 deletions docs/source/robust101.rst
@@ -1,7 +1,16 @@
Robust 101
**********

This section will help you understand the basic ideas behind robust optimization (RO),
and get started with **robust** provided that you have a GP- or SP-compatible model.

What is RO?
-----------

RO is a tractable method for optimization under uncertainty, specifically under
uncertain parameters. It optimizes the worst-case objective outcome over an uncertainty set,
unlike general stochastic optimization (SO) methods, which optimize statistics of the
distribution of the objective induced by probability distributions of the uncertain
parameters. As such, RO sacrifices generality for tractability, probabilistic guarantees,
and engineering intuition.
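
Schematically, with notation assumed here, RO solves

.. math::

   \min_{x} \; \max_{u \in \mathcal{U}} f(x, u)
   \quad \text{s.t.} \quad
   g_i(x, u) \leq 0 \quad \forall u \in \mathcal{U}, \; i = 1, \ldots, m

where :math:`\mathcal{U}` is the uncertainty set: every constraint must hold for all
realizations of the uncertain parameters :math:`u`, not just on average.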

Work in progress...
66 changes: 49 additions & 17 deletions docs/source/whyro.rst
@@ -1,30 +1,62 @@
Why robust optimization?
************************

Firstly, why optimization under uncertainty? Simply put,
we want to preserve constraint feasibility under perturbations of uncertain parameters,
with as small a penalty as possible to objective performance. In other words,
we want designs that protect against uncertainty *least conservatively*, especially when compared
with designs that leverage conventional methods such as design with margins and multimission design.

By using RO, we aim to reduce the sensitivity of our design performance to uncertain parameters, thereby
reducing program risk and introducing mathematical rigor to design under uncertainty.

Comparison of general SO methods with RO
========================================

|picOUC| |picSO|

.. |picOUC| image:: ouc.png
:width: 48%

.. |picSO| image:: so.png
:width: 48%

Tractability
------------
General optimization 'under certainty', e.g. gradient descent, uses methods that sample the
objective function and exploit local information to converge toward an optimal solution.
Stochastic optimization uses the same principles, but with the addition of uncertain
parameters sampled from distributions. It then optimizes some characteristic of the
distribution of the objective, such as a risk measure or an expectation.

Stochastic optimization has many benefits. It makes best use of available data
about parameters, and it is extremely general. However, design outcomes can be
significantly affected by the ability to sample from the parameter distribution, which
in many cases is not well known. An even worse prospect for SO is the combinatorics
and computational cost of probability distribution function (PDF) propagation through problem physics.
The propagation of probability distributions of parameters
requires the integration of PDFs with objective and constraint
outcomes. Since this is difficult, this is often achieved
through high-dimensional quadrature and the enumeration of
potential outcomes into scenarios. And even so, SO has significant computational requirements.

|picOUC| |picRO|


.. |picRO| image:: ro.png
:width: 48%

RO takes a different approach, optimizing designs for worst-case objective outcomes
over well-defined uncertainty sets. RO takes advantage of mathematical structure, requiring that
the design problem be formulated as a program with a tractable robust counterpart,
such as an LP, QP, SDP, GP or SP. This is restrictive, but many engineering
problems of interest can be formulated in these forms, with significant benefits over general SO.

Within RO, the problem is monolithic: there is no sampling from probability distributions, and no
separate evaluation step and optimization loop. RO problems are deterministic, with probabilistic
guarantees of feasibility, and solve orders
of magnitude faster than SO formulations with the same constraints. Furthermore,
only the mild assumption of bounded uncertainty sets is required;
no problem-specific approximations, assumptions or algorithms are needed.
Any feasible GP or SP can be solved as an RO problem. As such, RO is especially
suited to problems that are data-deprived, such as conceptual design problems.
