[docs] methods complete.
1ozturkbe committed Jun 28, 2019
1 parent fc0ad2f commit 56c4e28
Showing 8 changed files with 69 additions and 30 deletions.
11 changes: 5 additions & 6 deletions docs/source/gettingstarted.rst
@@ -8,17 +8,16 @@ out of GPkit, and *robust* to describe models that have been robustified using *
The uncertainties in **robust** are defined by adding the attribute *pr* to any variable
in your model. This attribute
describes the :math:`3\sigma` uncertainty for the given parameter, normalized by its mean (otherwise known
as 3 times the coefficient of variation). Note that these attributes
are carried by nominal models but only come into effect when **robust** is applied.

.. code-block:: python

    from gpkit import Variable, Model
    x = Variable('x', pr=12)  # 3CV = 12%
    # ...
    # after more variables, constraints
    # ...
    m = Model(objective, constraints, substitutions)
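To build intuition for what *pr* encodes, here is a small self-contained sketch; the helper function below is hypothetical (not part of **robust** or GPkit) and only illustrates the arithmetic:

```python
# Hypothetical helper, for illustration only: a pr of 12 means the
# 3-sigma half-width equals 12% of the parameter's mean value.
def three_sigma_interval(mean, pr):
    """Return the (lo, hi) 3-sigma interval implied by pr = 3*CV in percent."""
    half_width = mean * pr / 100.0
    return (mean - half_width, mean + half_width)

lo, hi = three_sigma_interval(10.0, 12)
print(lo, hi)  # -> 8.8 11.2
```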
Once you have added uncertainties to parameters and created a GPkit model,
2 changes: 1 addition & 1 deletion docs/source/index.rst
@@ -4,7 +4,7 @@
contain the root `toctree` directive.
Welcome to **robust**'s documentation!
======================================

**robust** is a framework for engineering system optimization
under uncertainty using geometric and signomial programming.
4 changes: 2 additions & 2 deletions docs/source/math.rst
@@ -51,7 +51,7 @@ of posynomials.
|picHsiung|

.. |picHsiung| image:: picHsiung.png
   :width: 90%

For the derivation of robust GPs, the central finding is Corollary 1 of [Hsiung, 2008],
which asserts that there is an analytical solution for the lowest-error
@@ -99,7 +99,7 @@ We show an example of such a partition, borrowed from Ozturk et al.
|partitioning|

.. |partitioning| image:: partitioning.png
   :width: 90%

RSPs can be represented as sequential RGPs.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
54 changes: 42 additions & 12 deletions docs/source/methods.rst
@@ -1,29 +1,59 @@
Approximations for tractable robust GPs
***************************************

Within **robust**, there are 3 tractable approximate robust formulations for
GPs, which can then be extended to SPs through heuristics.
The methods are detailed at a high level below, in decreasing order of conservativeness.
Please see [Saab, 2018] for further details.

*(The following overview has been paraphrased from [Ozturk, 2019].)*

The robust counterpart of an uncertain geometric program is:

.. math::

    \begin{split}
    \min &~~f_0\left(\mathbf{x}\right)\\
    \text{s.t.} &~~\max_{\mathbf{\zeta} \in \mathcal{Z}} \left\{\textstyle{\sum}_{k=1}^{K_i}e^{\mathbf{a_{ik}}\left(\zeta\right)\mathbf{x} + b_{ik}\left(\zeta\right)}\right\} \leq 1, ~\forall i \in 1,...,m\\
    \end{split}

which is co-NP-hard in its natural posynomial form [Chassein, 2014]. We will present three approximate formulations of an RGP.

Simple Conservative Approximation
---------------------------------

One way to approach the intractability of the above problem is to replace each constraint
with a tractable approximation. The simple conservative approximation maximizes each monomial
term separately: replacing the max-of-sum with the sum-of-max leads to the following formulation.

.. math::

    \begin{split}
    \min &~~f_0\left(\mathbf{x}\right)\\
    \text{s.t.} &~~\textstyle{\sum}_{k=1}^{K_i} {\displaystyle \max_{\mathbf{\zeta} \in \mathcal{Z}}} \left\{e^{\mathbf{a_{ik}}\left(\zeta\right)\mathbf{x} + b_{ik}\left(\zeta\right)}\right\} \leq 1, ~\forall i \in 1,...,m
    \end{split}
Maximizing a monomial term is equivalent to maximizing an affine function, so the simple conservative approximation is tractable.
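For intuition (a hedged illustration; :math:`m_k` is shorthand introduced here for the :math:`k`-th monomial of a two-term constraint), the approximation is conservative because the maximum of a sum is bounded above by the sum of maxima:

.. math::

    \max_{\mathbf{\zeta} \in \mathcal{Z}} \left\{m_1\left(\zeta\right) + m_2\left(\zeta\right)\right\} \leq \max_{\mathbf{\zeta} \in \mathcal{Z}} m_1\left(\zeta\right) + \max_{\mathbf{\zeta} \in \mathcal{Z}} m_2\left(\zeta\right) \leq 1

so any :math:`\mathbf{x}` feasible for the approximate constraint is also feasible for the exact robust constraint.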

Linearized Perturbations
------------------------

The Linearized Perturbations formulation separates large posynomials
into decoupled posynomials, depending on the dependence of monomial terms.
It then robustifies these smaller posynomials using robust linear programming techniques.
If the exponents are known and certain, then large posynomial constraints can be approximated as signomial constraints.
The exponential perturbations in each posynomial are linearized using a modified least squares method, and the
posynomial is then robustified using techniques from robust linear programming. The resulting set of constraints is
SP-compatible; a robust GP can therefore be approximated as an SP.
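As a schematic sketch (the notation here is assumed for illustration, not taken verbatim from [Saab, 2018]): once a perturbed exponent is approximated by an affine function of :math:`\zeta`, robust linear programming yields a deterministic counterpart. For a box uncertainty set :math:`\zeta \in \left[-1, 1\right]^L`,

.. math::

    \mathbf{a^0}\mathbf{x} + b^0 + \max_{\zeta \in \left[-1,1\right]^L} \textstyle{\sum}_{j=1}^{L} \zeta_j \left(\mathbf{A^j}\mathbf{x} + c^j\right) \leq 0
    ~~\iff~~ \mathbf{a^0}\mathbf{x} + b^0 + \textstyle{\sum}_{j=1}^{L} \left|\mathbf{A^j}\mathbf{x} + c^j\right| \leq 0

since each :math:`\zeta_j` is chosen adversarially to make its term positive.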

Best Pairs
----------

The Best Pairs methodology separates large posynomials into decoupled
posynomials, just like Linearized Perturbations. However, it then solves an
inner-loop problem to find the least conservative combination of monomial pairs.

If the exponents of a posynomial are uncertain as well as its coefficients,
then large posynomials cannot be approximated as an SP, and further simplification is needed.
This formulation allows for uncertain exponents by maximizing each pair of monomials in a posynomial,
while finding the combination of monomial pairs that gives the least conservative solution.
[Saab, 2018] provides a descent algorithm to find locally optimal combinations of the monomials,
and shows how the uncertain GP can be approximated as a GP for polyhedral uncertainty,
and a conic optimization problem for elliptical uncertainty with uncertain exponents.
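Schematically (again with notation assumed for illustration), partitioning the :math:`K_i` monomials of a posynomial into pairs :math:`P` and maximizing each pair jointly is at least as tight as maximizing every monomial separately:

.. math::

    \textstyle{\sum}_{\left(k,l\right) \in P} {\displaystyle \max_{\mathbf{\zeta} \in \mathcal{Z}}} \left\{m_k\left(\zeta\right) + m_l\left(\zeta\right)\right\} \leq \textstyle{\sum}_{k=1}^{K_i} {\displaystyle \max_{\mathbf{\zeta} \in \mathcal{Z}}} m_k\left(\zeta\right)

and the descent algorithm searches over pairings :math:`P` for the least conservative choice.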

To reiterate, please refer to [Saab, 2018] for further details
on robust GP approximations.
2 changes: 2 additions & 0 deletions docs/source/references.rst
@@ -3,6 +3,8 @@ References

[Ben-Tal, 1999] Ben-Tal, A., and Nemirovski, A., “Robust solutions of uncertain linear programs,” Operations Research Letters, 1999.

[Chassein, 2014] Chassein, A., and Goerigk, M., “Robust Geometric Programming is co-NP hard,” Fachbereich Mathematik, Technische Universität Kaiserslautern, Germany, 2014, pp. 1–6.

[Hsiung, 2008] Hsiung, K. L., Kim, S. J., and Boyd, S., “Tractable approximate robust geometric programming,” Optimization and Engineering, vol. 9, 2008, pp. 95–118.

[Ozturk, 2019] Ozturk, B. and Saab, A., "Optimal Aircraft Design Decisions Under Uncertainty via Robust Signomial Programming", AIAA Aviation 2019 Conference Proceedings.
3 changes: 1 addition & 2 deletions docs/source/robust101.rst
@@ -45,6 +45,5 @@ bounded support
\end{split}
where :math:`\Gamma` is defined by the user as a global uncertainty bound. The larger the :math:`\Gamma`,
the greater the size of the uncertainty set that is protected against.

9 changes: 9 additions & 0 deletions docs/source/rspapproaches.rst
@@ -1,4 +1,13 @@
.. _rspapproaches:

Approaches to solving robust SPs
================================

*(borrowed from [Ozturk, 2019])*

|rspSolve|

.. |rspSolve| image:: rspSolve.png
   :width: 80%

Work in progress...
14 changes: 7 additions & 7 deletions docs/source/whyro.rst
@@ -5,17 +5,17 @@ Firstly, why optimization under uncertainty? Simply put,
we want to preserve constraint feasibility under perturbations of uncertain parameters,
with as little a penalty as possible to objective performance. In other words,
we want designs that protect against uncertainty *least conservatively*, especially when compared
with designs that leverage conventional design methods such as design with margins or multimission design.

RO introduces mathematical rigor to design under uncertainty, and aims to reduce the sensitivity of
design performance to uncertain parameters, thereby reducing risk.

Comparison of general SO methods with RO
========================================

|picOUC1| |picSO|

.. |picOUC1| image:: ouc.png
   :width: 48%

.. |picSO| image:: so.png
@@ -38,9 +38,9 @@ outcomes. Since this is difficult, this is often achieved
through high-dimensional quadrature and the enumeration of
potential outcomes into scenarios. Even so, SO has large computational requirements.

|picOUC2| |picRO|

.. |picOUC2| image:: ouc.png
   :width: 48%

.. |picRO| image:: ro.png
