Commit 743a717
Merge 717e42f into 4c6293d
ericphanson committed Nov 29, 2019
2 parents 4c6293d + 717e42f commit 743a717
Showing 21 changed files with 165 additions and 140 deletions.
3 changes: 3 additions & 0 deletions docs/Project.toml
@@ -17,3 +17,6 @@ Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
SCS = "c946c3f1-0d1f-5ce8-9dea-7daa1f7e2d13"
SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"

[compat]
Documenter = "0.24"
14 changes: 7 additions & 7 deletions docs/examples_literate/general_examples/basic_usage.jl
@@ -14,8 +14,8 @@ solver = SCSSolver(verbose=0)
#
# $$
# \begin{array}{ll}
# \mbox{maximize} & c^T x \\
# \mbox{subject to} & A x \leq b\\
# \text{maximize} & c^T x \\
# \text{subject to} & A x \leq b\\
# & x \geq 1 \\
# & x \leq 10 \\
# & x_2 \leq 5 \\
@@ -41,8 +41,8 @@ println(evaluate(x[1] + x[4] - x[2]))
#
# $$
# \begin{array}{ll}
# \mbox{minimize} & \| X \|_F + y \\
# \mbox{subject to} & 2 X \leq 1\\
# \text{minimize} & \| X \|_F + y \\
# \text{subject to} & 2 X \leq 1\\
# & X' + y \geq 1 \\
# & X \geq 0 \\
# & y \geq 0 \\
@@ -63,7 +63,7 @@ p.optval
#
# $$
# \begin{array}{ll}
# \mbox{satisfy} & \| x \|_2 \leq 100 \\
# \text{satisfy} & \| x \|_2 \leq 100 \\
# & e^{x_1} \leq 5 \\
# & x_2 \geq 7 \\
# & \sqrt{x_3 x_4} \geq x_2
@@ -98,8 +98,8 @@ y.value
#
# $$
# \begin{array}{ll}
# \mbox{minimize} & \sum_{i=1}^n x_i \\
# \mbox{subject to} & x \in \mathbb{Z}^n \\
# \text{minimize} & \sum_{i=1}^n x_i \\
# \text{subject to} & x \in \mathbb{Z}^n \\
# & x \geq 0.5 \\
# \end{array}
# $$
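For orientation, here is a minimal Convex.jl sketch of the first problem in this file (maximize $c^T x$ subject to $A x \leq b$ and the box constraints above), in the same style as the example. The data `c`, `A`, `b` and the extra constraints are illustrative placeholders, not the file's actual values:

```julia
using Convex, SCS

c = [1.0, 2.0, 3.0, 4.0]        # illustrative cost vector
A = ones(2, 4)                  # illustrative constraint matrix
b = [10.0, 10.0]

x = Variable(4)
p = maximize(dot(c, x),
             A * x <= b,
             x >= 1,
             x <= 10,
             x[2] <= 5,
             x[1] + x[4] - x[2] <= 10)
solve!(p, SCSSolver(verbose=0))
p.optval                        # optimal objective value
evaluate(x[1] + x[4] - x[2])    # evaluate an expression at the solution
```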
25 changes: 12 additions & 13 deletions docs/examples_literate/general_examples/logistic_regression.jl
@@ -5,28 +5,27 @@ using RDatasets
using Convex
using SCS

#-

## we'll use iris data
## predict whether the iris species is versicolor using the sepal length and width and petal length and width
iris = dataset("datasets", "iris")
## outcome variable: +1 for versicolor, -1 otherwise
# This is an example logistic regression using `RDatasets`'s iris data.
# Our goal is to predict whether the iris species is versicolor
# using the sepal length and width and petal length and width.
iris = dataset("datasets", "iris");
iris[1:10,:]

# We'll define `Y` as the outcome variable: +1 for versicolor, -1 otherwise.
Y = [species == "versicolor" ? 1.0 : -1.0 for species in iris.Species]
## create data matrix with one column for each feature (first column corresponds to offset)
X = hcat(ones(size(iris, 1)), iris.SepalLength, iris.SepalWidth, iris.PetalLength, iris.PetalWidth);

#-
# We'll create our data matrix with one column for each feature
# (first column corresponds to offset).
X = hcat(ones(size(iris, 1)), iris.SepalLength, iris.SepalWidth, iris.PetalLength, iris.PetalWidth);

## solve the logistic regression problem
# Now to solve the logistic regression problem.
n, p = size(X)
beta = Variable(p)
problem = minimize(logisticloss(-Y.*(X*beta)))

solve!(problem, SCSSolver(verbose=false))

#-

## let's see how well the model fits
# Let's see how well the model fits.
using Plots
logistic(x::Real) = inv(exp(-x) + one(x))
perm = sortperm(vec(X*beta.value))
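A possible follow-up usage sketch for the fitted model; it assumes `X`, `Y`, `beta`, and `logistic` from the example above are in scope, and the 0.5 threshold is an illustrative choice:

```julia
# Predicted probability of versicolor for each observation.
probs = logistic.(vec(X * beta.value))

# Classify at an (assumed) 0.5 threshold and compare against the labels.
predictions = ifelse.(probs .> 0.5, 1.0, -1.0)
accuracy = count(predictions .== Y) / length(Y)
```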
6 changes: 3 additions & 3 deletions docs/examples_literate/general_examples/max_entropy.jl
@@ -4,8 +4,8 @@
#
# $$
# \begin{array}{ll}
# \mbox{maximize} & -\sum_{i=1}^n x_i \log x_i \\
# \mbox{subject to} & \mathbf{1}' x = 1 \\
# \text{maximize} & -\sum_{i=1}^n x_i \log x_i \\
# \text{subject to} & \mathbf{1}' x = 1 \\
# & Ax \leq b
# \end{array}
# $$
@@ -23,7 +23,7 @@ b = rand(m, 1);

x = Variable(n);
problem = maximize(entropy(x), sum(x) == 1, A * x <= b)
solve!(problem, SCSSolver(verbose=0))
solve!(problem, SCSSolver(verbose=false))
problem.optval

#-
@@ -2,7 +2,7 @@

# This example is taken from <https://web.stanford.edu/~boyd/papers/pdf/cvx_applications.pdf>.

# Setup
# Setup:
#
# * We have $m$ adverts and $n$ timeslots
# * The total traffic in time slot $t$ is $T_t$
@@ -6,7 +6,7 @@
#
# Adapted for Convex.jl by Karanveer Mohan and David Zeng - 26/05/14
# Original cvx code and plots here:
# http://web.cvxr.com/cvx/examples/cvxbook/Ch06_approx_fitting/html/fig6_15.html
# <http://web.cvxr.com/cvx/examples/cvxbook/Ch06_approx_fitting/html/fig6_15.html>
#
# Consider the least-squares problem:
# minimize $\|(A + tB)x - b\|_2$
@@ -20,7 +20,7 @@
# (reduces to minimizing $\mathbb{E} \|(A+tB)x-b\|^2 = \|A*x-b\|^2 + x^TPx$
# where $P = \mathbb{E}(t^2) B^TB = (1/3) B^TB$ )
# 3. worst-case robust approximation:
# minimize $sup_{-1\leq u\leq 1} \|(A+tB)x - b\|_2$
# minimize $\mathrm{sup}_{-1\leq u\leq 1} \|(A+tB)x - b\|_2$
# (reduces to minimizing $\max\{\|(A-B)x - b\|_2, \|(A+B)x - b\|_2\}$ ).
#
using Convex, LinearAlgebra, SCS
@@ -57,7 +57,7 @@ p = minimize(max(norm((A - B) * x - b), norm((A + B) * x - b)))
solve!(p, SCSSolver(verbose=0))
x_wc = evaluate(x)

# plot residuals
# Plot residuals:
parvals = range(-2, stop=2, length=100);

errvals(x) = [ norm((A + parvals[k] * B) * x - b) for k = eachindex(parvals)]
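For completeness, a rough Convex.jl sketch of case 2 above (stochastic robust approximation), with stand-in data in place of the file's `A`, `B`, and `b`:

```julia
using Convex, LinearAlgebra, SCS

m, n = 20, 10
A = randn(m, n); B = randn(m, n); b = randn(m)   # stand-in data

P = (1 / 3) * (B' * B)             # E[t^2] * B'B for t uniform on [-1, 1]
x = Variable(n)
p = minimize(sumsquares(A * x - b) + quadform(x, P))
solve!(p, SCSSolver(verbose=0))
x_stoch = evaluate(x)
```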
2 changes: 1 addition & 1 deletion docs/examples_literate/general_examples/svm.jl
@@ -8,7 +8,7 @@
#
# $$
# \begin{array}{ll}
# \mbox{minimize} & \|w\|^2 + C * (\sum_{i=1}^N \text{max} \{1 + b - w^T x_i, 0\} + \sum_{i=1}^M \text{max} \{1 - b + w^T y_i, 0\})
# \text{minimize} & \|w\|^2 + C * (\sum_{i=1}^N \text{max} \{1 + b - w^T x_i, 0\} + \sum_{i=1}^M \text{max} \{1 - b + w^T y_i, 0\})
# \end{array},
# $$
#
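A minimal sketch of the hinge-loss problem above in Convex.jl; the data, the value of `C`, and the variable names are illustrative assumptions:

```julia
using Convex, SCS

N, M = 50, 50
pos_data = randn(2, N) .+ 1.5      # illustrative class +1 points (columns)
neg_data = randn(2, M) .- 1.5      # illustrative class -1 points (columns)
C = 10.0

w = Variable(2)
b = Variable()
hinge = sum(pos(1 + b - w' * pos_data)) + sum(pos(1 - b + w' * neg_data))
p = minimize(sumsquares(w) + C * hinge)
solve!(p, SCSSolver(verbose=0))
evaluate(w), evaluate(b)
```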
4 changes: 2 additions & 2 deletions docs/examples_literate/mixed_integer/binary_knapsack.jl
@@ -5,8 +5,8 @@
#
# $$
# \begin{array}{ll}
# \mbox{maximize} & x' p \\
# \mbox{subject to} & x \in \{0, 1\} \\
# \text{maximize} & x' p \\
# \text{subject to} & x \in \{0, 1\} \\
# & w' x \leq C \\
# \end{array}
# $$
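As a sketch, the problem above might be set up like this in Convex.jl; the weights, values, capacity, and the choice of GLPK as a MIP-capable solver are illustrative assumptions:

```julia
using Convex
using GLPKMathProgInterface      # assumed MIP-capable solver choice

w = [23.0, 31, 29, 44, 53, 38, 63, 85, 89, 82]   # illustrative weights
v = [92.0, 57, 49, 68, 60, 43, 67, 84, 87, 72]   # illustrative values
C = 165.0                                        # illustrative capacity

x = Variable(length(w), :Bin)
p = maximize(dot(v, x), dot(w, x) <= C)
solve!(p, GLPKSolverMIP())
evaluate(x)
```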
2 changes: 1 addition & 1 deletion docs/examples_literate/mixed_integer/n_queens.jl
@@ -5,7 +5,7 @@ aux(str) = joinpath(@__DIR__, "aux", str) # path to auxiliary files
include(aux("antidiag.jl"))

n = 8
# We encode the locations of the queens with a matrix of binary random variables
# We encode the locations of the queens with a matrix of binary random variables.
x = Variable((n, n), :Bin)

# Now we impose the constraints: at most one queen on any anti-diagonal, at most one queen on any diagonal, and we must have exactly one queen per row and per column.
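The constraints described above might be written along these lines. This is only a sketch: it assumes the `antidiag(x, k)` helper included above accepts an offset `k` just like `diag`, and a MIP-capable solver would then be passed to `solve!`.

```julia
using Convex

n = 8
x = Variable((n, n), :Bin)

constraints = vcat(
    [sum(diag(x, k)) <= 1 for k in -(n - 2):(n - 2)],      # at most one queen per diagonal
    [sum(antidiag(x, k)) <= 1 for k in -(n - 2):(n - 2)],  # at most one queen per anti-diagonal (assumed helper)
    [sum(x[i, :]) == 1 for i in 1:n],                      # exactly one queen per row
    [sum(x[:, j]) == 1 for j in 1:n])                      # exactly one queen per column

problem = satisfy(constraints)
```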
@@ -11,8 +11,8 @@
#
# $$
# \begin{array}{ll}
# \mbox{maximize} & \frac{1}{2}\text{tr}(Z+Z^\dagger) \\
# \mbox{subject to} &\\
# \text{maximize} & \frac{1}{2}\text{tr}(Z+Z^\dagger) \\
# \text{subject to} &\\
# & \left[\begin{array}{cc}P&Z\\{Z}^{\dagger}&Q\end{array}\right] \succeq 0\\
# & Z \in \mathbf {C}^{n \times n}\\
# \end{array}
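A sketch of how this complex SDP might look in Convex.jl, using `ComplexVariable` and the `in :SDP` constraint style shown elsewhere in this diff; `P` and `Q` below are illustrative Hermitian positive definite matrices, not the example's data:

```julia
using Convex, SCS, LinearAlgebra

n = 3
A = randn(n, n) + im * randn(n, n)
B = randn(n, n) + im * randn(n, n)
P = Matrix(Hermitian(A * A')) + I        # illustrative Hermitian PSD data
Q = Matrix(Hermitian(B * B')) + I

Z = ComplexVariable(n, n)
objective = 0.5 * real(tr(Z + Z'))
p = maximize(objective, [P Z; Z' Q] in :SDP)
solve!(p, SCSSolver(verbose=0))
p.optval
```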
@@ -1,34 +1,55 @@
# # Phase recovery using MaxCut
# In this example, we relax the phase retrieval problem similar to the classical [MaxCut](http://www-math.mit.edu/~goemans/PAPERS/maxcut-jacm.pdf) semidefinite program and recover the phase of the signal given the magnitude of the linear measurements.
#
# In this example, we relax the phase retrieval problem similar to the classical
# [MaxCut](http://www-math.mit.edu/~goemans/PAPERS/maxcut-jacm.pdf) semidefinite
# program and recover the phase of the signal given the magnitude of the linear
# measurements.
#
# Phase recovery has wide applications such as in X-ray and crystallography imaging, diffraction imaging or microscopy and audio signal processing. In all these applications, the detectors cannot measure the phase of the incoming wave and only record its amplitude i.e complex measurements of a signal $x \in \mathbb{C}^p$ are obtained from a linear injective operator A, **but we can only measure the magnitude vector Ax, not the phase fo Ax**.
# Phase recovery has wide applications such as in X-ray and crystallography
# imaging, diffraction imaging or microscopy and audio signal processing. In all
# these applications, the detectors cannot measure the phase of the incoming wave
# and only record its amplitude, i.e., complex measurements of a signal
# $x \in \mathbb{C}^p$ are obtained from a linear injective operator $A$, **but we
# can only measure the magnitude vector $Ax$, not the phase of $Ax$**.
#
# Recovering the phase of $Ax$ from $|Ax|$ is a **nonconvex optimization problem**. Using results from [this paper](https://arxiv.org/abs/1206.0102), the problem can be relaxed to a (complex) semidefinite program (complex SDP).
#
# The original representation of the problem is as follows:
#
# >>>> find $x$
# $$
# \begin{array}{ll}
# \text{find} & x \in \mathbb{C}^p \\
# \text{subject to} & |Ax| = b
# \end{array}
# $$
#
# >>>> such that $|Ax| = b$
#
# >>>> where $x \in \mathbb{C}^p$, $A \in \mathbb{C}^{n \times p}$ and $b \in \mathbb{R}^n$.

#-
# where $A \in \mathbb{C}^{n \times p}$ and $b \in \mathbb{R}^n$.

# In this example, **the problem is to find the phase of Ax given the value |Ax|**. Given a linear operator $A$ and a vector $b= |Ax|$ of measured amplitudes, in the noiseless case, we can write Ax = diag(b)u where $u \in \mathbb{C}^n$ is a phase vector, satisfying |$\mathbb{u}_i$| = 1 for i = 1,. . . , n.
# In this example, **the problem is to find the phase of $Ax$ given the value $|Ax|$**.
# Given a linear operator $A$ and a vector $b= |Ax|$ of measured amplitudes,
# in the noiseless case, we can write $Ax = \text{diag}(b)u$ where
# $u \in \mathbb{C}^n$ is a phase vector, satisfying
# $|u_i| = 1$ for $i = 1,\ldots, n$.
#
# We relax this problem as Complex Semidefinite Programming.
#
# ### Relaxed Problem similar to [MaxCut](http://www-math.mit.edu/~goemans/PAPERS/maxcut-jacm.pdf)
#
# Define the positive semidefinite hermitian matrix $M = \text{diag}(b) (I - A A^*) \text{diag}(b)$. The problem is:
# Define the positive semidefinite hermitian matrix
# $M = \text{diag}(b) (I - A A^*) \text{diag}(b)$. The problem is:
#
# minimize < U,M >
# subject to
# diag(U) = 1
# U in :HermitianSemiDefinite
# $$
# \begin{array}{ll}
# \text{minimize} & \langle U, M \rangle \\
# \text{subject to} & \text{diag}(U) = 1\\
# & U \succeq 0
# \end{array}
# $$
#
# Here the variable $U$ must be hermitian ($U \in \mathbb{H}_n $), and we have a solution to the phase recovery problem if $U = u u^*$ has rank one. Otherwise, the leading singular vector of $U$ can be used to approximate the solution.
# Here the variable $U$ must be hermitian ($U \in \mathbb{H}_n $),
# and we have a solution to the phase recovery problem if $U = u u^*$
# has rank one. Otherwise, the leading singular vector of $U$ can be used
# to approximate the solution.

using Convex, SCS, LinearAlgebra
if VERSION < v"1.2.0-DEV.0"
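A condensed sketch of the relaxation above; the problem sizes and random data are illustrative, and `M` follows the definition given in the text:

```julia
using Convex, SCS, LinearAlgebra

n, p = 20, 8
A = randn(n, p) + im * randn(n, p)
x_true = randn(p) + im * randn(p)
b = abs.(A * x_true)                              # measured magnitudes |Ax|

M = Diagonal(b) * (I - A * A') * Diagonal(b)      # M = diag(b) (I - A A*) diag(b), as above

U = HermitianSemidefinite(n)
problem = minimize(real(tr(U * M)), diag(U) == 1)
solve!(problem, SCSSolver(verbose=0))

# Approximate the phase vector by the leading eigenvector of U.
u = eigen(Hermitian(evaluate(U))).vectors[:, end]
```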
@@ -1,17 +1,17 @@
# # Portfolio Optimization
#
# In this problem, we will find the portfolio allocation that minimizes risk while achieving a given expected return $R_\mbox{target}$.
# In this problem, we will find the portfolio allocation that minimizes risk while achieving a given expected return $R_\text{target}$.
#
# Suppose that we know the mean returns $\mu \in \mathbf{R}^n$ and the covariance $\Sigma \in \mathbf{R}^{n \times n}$ of the $n$ assets. We would like to find a portfolio allocation $w \in \mathbf{R}^n$, $\sum_i w_i = 1$, minimizing the *risk* of the portfolio, which we measure as the variance $w^T \Sigma w$ of the portfolio. The requirement that the portfolio allocation achieve the target expected return can be expressed as $w^T \mu >= R_\mbox{target}$. We suppose further that our portfolio allocation must comply with some lower and upper bounds on the allocation, $w_\mbox{lower} \leq w \leq w_\mbox{upper}$.
# Suppose that we know the mean returns $\mu \in \mathbf{R}^n$ and the covariance $\Sigma \in \mathbf{R}^{n \times n}$ of the $n$ assets. We would like to find a portfolio allocation $w \in \mathbf{R}^n$, $\sum_i w_i = 1$, minimizing the *risk* of the portfolio, which we measure as the variance $w^T \Sigma w$ of the portfolio. The requirement that the portfolio allocation achieve the target expected return can be expressed as $w^T \mu >= R_\text{target}$. We suppose further that our portfolio allocation must comply with some lower and upper bounds on the allocation, $w_\text{lower} \leq w \leq w_\text{upper}$.
#
# This problem can be written as
#
# $$
# \begin{array}{ll}
# \mbox{minimize} & w^T \Sigma w \\
# \mbox{subject to} & w^T \mu >= R_\mbox{target} \\
# \text{minimize} & w^T \Sigma w \\
# \text{subject to} & w^T \mu >= R_\text{target} \\
# & \sum_i w_i = 1 \\
# & w_\mbox{lower} \leq w \leq w_\mbox{upper}
# & w_\text{lower} \leq w \leq w_\text{upper}
# \end{array}
# $$
#
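For illustration, a minimal sketch of this problem in Convex.jl; the return vector, covariance, target return, and bounds below are made-up placeholders rather than the example's data:

```julia
using Convex, SCS, LinearAlgebra

n = 5
μ = [0.05, 0.07, 0.12, 0.03, 0.09]        # illustrative mean returns
F = randn(n, n)
Σ = F * F' / n + 0.01 * I                 # illustrative positive definite covariance
R_target = 0.06
w_lower, w_upper = 0.0, 0.5

w = Variable(n)
problem = minimize(quadform(w, Σ),
                   sum(w) == 1,
                   dot(w, μ) >= R_target,
                   w >= w_lower,
                   w <= w_upper)
solve!(problem, SCSSolver(verbose=0))
evaluate(w)
```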
@@ -8,8 +8,8 @@
#
# $$
# \begin{array}{ll}
# \mbox{minimize} & \lambda*w^T \Sigma w - (1-\lambda)*w^T \mu \\
# \mbox{subject to} & \sum_i w_i = 1
# \text{minimize} & \lambda*w^T \Sigma w - (1-\lambda)*w^T \mu \\
# \text{subject to} & \sum_i w_i = 1
# \end{array}
# $$
#
10 changes: 5 additions & 5 deletions docs/examples_literate/supplemental_material/paper_examples.jl
@@ -13,7 +13,7 @@ e = 0;
end
p = minimize(e, x>=1);
end
@time solve!(p, ECOSSolver())
@time solve!(p, ECOSSolver(verbose=0))

# Indexing.
println("Indexing example")
@@ -26,7 +26,7 @@ e = 0;
end
p = minimize(e, x >= ones(1000, 1));
end
@time solve!(p, ECOSSolver())
@time solve!(p, ECOSSolver(verbose=0))

# Matrix constraints.
println("Matrix constraint example")
@@ -37,7 +37,7 @@ b = randn(p, n);
@time begin
p = minimize(norm(vec(X)), A * X == b);
end
@time solve!(p, ECOSSolver())
@time solve!(p, ECOSSolver(verbose=0))

# Transpose.
println("Transpose example")
@@ -46,12 +46,12 @@ A = randn(5, 5);
@time begin
p = minimize(norm2(X - A), X' == X);
end
@time solve!(p, ECOSSolver())
@time solve!(p, ECOSSolver(verbose=0))

n = 3
A = randn(n, n);
#@time begin
X = Variable(n, n);
p = minimize(norm(vec(X' - A)), X[1,1] == 1);
solve!(p, ECOSSolver())
solve!(p, ECOSSolver(verbose=0))
#end
4 changes: 2 additions & 2 deletions docs/examples_literate/time_series/time_series.jl
@@ -60,13 +60,13 @@ root_mean_square_error = sqrt(sum( x -> x^2, residuals) / length(residuals))
# We now make the hypothesis that the residual temperature on a given day is some linear combination of the previous $5$ days. Such a model is called autoregressive. We are essentially trying to fit the residuals as a function of other parts of the data itself. We want to find a vector of coefficients $a$ such that
#
# $$
# \mbox{r}(i) \approx \sum_{j = 1}^5 a_j \mbox{r}(i - j)
# \text{r}(i) \approx \sum_{j = 1}^5 a_j \text{r}(i - j)
# $$
#
# This can be done by simply minimizing the following sum of squares objective
#
# $$
# \sum_{i = 6}^n \left(\mbox{r}(i) - \sum_{j = 1}^5 a_j \mbox{r}(i - j)\right)^2
# \sum_{i = 6}^n \left(\text{r}(i) - \sum_{j = 1}^5 a_j \text{r}(i - j)\right)^2
# $$
#
# The following Convex code solves this problem and plots our autoregressive model against the actual residual temperatures:
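A compact sketch of the least-squares fit described above; `residuals` here is a synthetic stand-in for the residual temperature vector computed earlier in the example:

```julia
using Convex, SCS

residuals = randn(365)                       # stand-in for the example's residuals
lag = 5
n = length(residuals)

# Row k of R holds the `lag` residuals preceding day lag + k, so that
# R * a approximates residuals[lag+1:n].
R = hcat([residuals[lag + 1 - j : n - j] for j in 1:lag]...)
a = Variable(lag)
problem = minimize(sumsquares(R * a - residuals[lag + 1 : n]))
solve!(problem, SCSSolver(verbose=0))
ar_coefficients = evaluate(a)
```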
2 changes: 1 addition & 1 deletion docs/make.jl
@@ -149,7 +149,7 @@ makedocs(;
repo = "https://github.com/JuliaOpt/Convex.jl/blob/{commit}{path}#L{line}",
sitename = "Convex.jl")

deploydocs(repo = "github.com/JuliaOpt/Convex.jl.git")
deploydocs(repo = "github.com/JuliaOpt/Convex.jl.git", push_preview = true)

# restore the environmental variable `GKSwstype`.
ENV["GKSwstype"] = previous_GKSwstype;
2 changes: 1 addition & 1 deletion docs/src/complex-domain_optimization.md
@@ -106,7 +106,7 @@ constraints = [partialtrace(ρ, 1, [2; 2]) == [1 0; 0 0]
tr(ρ) == 1
ρ in :SDP]
p = satisfy(constraints)
solve!(p, SCSSolver())
solve!(p, SCSSolver(verbose=false))
p.status
```
