Commit

Merge 7718bc6 into d645486
ericphanson committed Jan 14, 2020
2 parents d645486 + 7718bc6 commit 1752d4e
Showing 35 changed files with 822 additions and 198 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -7,3 +7,4 @@
Manifest.toml
benchmark/*.json
test/Project.toml
dev
24 changes: 24 additions & 0 deletions NEWS.md
@@ -0,0 +1,24 @@
# Changes in v0.13.0

* The intermediate layer has changed from MathProgBase.jl to
[MathOptInterface.jl](https://github.com/JuliaOpt/MathOptInterface.jl)
([#330](https://github.com/JuliaOpt/Convex.jl/pull/330)).
* `lambdamin` and `lambdamax` have been deprecated in favor of `eigmin` and
`eigmax`. ([#357](https://github.com/JuliaOpt/Convex.jl/pull/357))
* You can now create your own variable types by subtyping `AbstractVariable`.
See the
[docs](https://www.juliaopt.org/Convex.jl/dev/advanced/#Custom-Variable-Types-1)
for more information. You can also add constraints directly to a variable
using `add_constraint!`. ([#358](https://github.com/JuliaOpt/Convex.jl/pull/358))
* Accessors `vexity(x::Variable)`, `sign(x::Variable)`, and
`evaluate(x::Variable)` should now be the preferred way to access properties
of a variable; likewise use `set_value!` to set the initial value of a
variable. ([#358](https://github.com/JuliaOpt/Convex.jl/pull/358))
* To create integer or binary constraints, use the `VarType` enum (e.g.
`Variable(BinVar)`). Access or set this via `vartype` and `vartype!`.
([#358](https://github.com/JuliaOpt/Convex.jl/pull/358))
* `evaluate(x::Variable)` and `evaluate(c::Constant)` now return scalars and
vectors as appropriate, instead of `(1,1)`- and `(d,1)`-matrices.
([#359](https://github.com/JuliaOpt/Convex.jl/pull/359)). This affects
functions which used to return `(1,1)`-matrices; e.g., now
`evaluate(quadform(...))` yields a scalar.
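
Taken together, the new API from this release might be exercised as in the following sketch (hedged: it assumes Convex.jl v0.13 with the SCS solver installed, and the variables shown are illustrative, not from this diff):

```julia
using Convex, SCS

# Attach a constraint directly to a variable (#358)
x = Variable(2)
add_constraint!(x, x >= 1)

p = minimize(sum(x))
solve!(p, SCS.Optimizer(verbose=0))

evaluate(x)     # now a 2-element vector (#359), not a (2,1)-matrix
sign(x)         # preferred accessors instead of reading fields directly
vexity(x)

# Integer/binary variables via the VarType enum (#358)
b = Variable(BinVar)
vartype(b)      # BinVar
```
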
12 changes: 6 additions & 6 deletions docs/examples_literate/general_examples/basic_usage.jl
@@ -34,7 +34,7 @@ p.constraints += [x >= 1; x <= 10; x[2] <= 5; x[1] + x[4] - x[2] <= 10]
solve!(p, solver)

println(round(p.optval, digits=2))
println(round.(x.value, digits=2))
println(round.(evaluate(x), digits=2))
println(evaluate(x[1] + x[4] - x[2]))

# ### Matrix Variables and promotions
@@ -55,8 +55,8 @@ y = Variable()
## X is a 2 x 2 variable, and y is scalar. X' + y promotes y to a 2 x 2 variable before adding them
p = minimize(norm(X) + y, 2 * X <= 1, X' + y >= 1, X >= 0, y >= 0)
solve!(p, solver)
println(round.(X.value, digits=2))
println(y.value)
println(round.(evaluate(X), digits=2))
println(evaluate(y))
p.optval

# ### Norm, exponential and geometric mean
@@ -75,7 +75,7 @@ x = Variable(4)
p = satisfy(norm(x) <= 100, exp(x[1]) <= 5, x[2] >= 7, geomean(x[3], x[4]) >= x[2])
solve!(p, SCS.Optimizer(verbose=0))
println(p.status)
x.value
evaluate(x)

# ### SDP cone and Eigenvalues

@@ -92,7 +92,7 @@ y = Variable((2, 2))
## SDP constraints
p = minimize(x + y[1, 1], isposdef(y), x >= 1, y[2, 1] == 1)
solve!(p, solver)
y.value
evaluate(y)

# ### Mixed integer program
#
@@ -109,6 +109,6 @@ using GLPK
x = Variable(4, :Int)
p = minimize(sum(x), x >= 0.5)
solve!(p, GLPK.Optimizer())
x.value
evaluate(x)

#-
@@ -36,5 +36,5 @@ plot(x, x -> -x * a1[1] / a1[2] + b[1] / a1[2])
plot!(x, x -> -x * a2[1]/ a2[2] + b[2] / a2[2])
plot!(x, x -> -x * a3[1]/ a3[2] + b[3] / a3[2])
plot!(x, x -> -x * a4[1]/ a4[2] + b[4] / a4[2])
plot!(x_c.value[1] .+ r.value * cos.(theta), x_c.value[2] .+ r.value * sin.(theta), linewidth = 2)
plot!(evaluate(x_c)[1] .+ evaluate(r) * cos.(theta), evaluate(x_c)[2] .+ evaluate(r) * sin.(theta), linewidth = 2)
plot!(title ="Largest Euclidean ball lying in a 2D polyhedron", legend = nothing)
@@ -28,6 +28,6 @@ solve!(problem, SCS.Optimizer(verbose=false))
# Let's see how well the model fits.
using Plots
logistic(x::Real) = inv(exp(-x) + one(x))
perm = sortperm(vec(X*beta.value))
perm = sortperm(vec(X*evaluate(beta)))
plot(1:n, (Y[perm] .+ 1)/2, st=:scatter)
plot!(1:n, logistic.(X*beta.value)[perm])
plot!(1:n, logistic.(X*evaluate(beta))[perm])
2 changes: 1 addition & 1 deletion docs/examples_literate/general_examples/max_entropy.jl
@@ -28,4 +28,4 @@ problem.optval

#-

x.value
evaluate(x)
3 changes: 2 additions & 1 deletion docs/examples_literate/mixed_integer/aux_files/antidiag.jl
@@ -9,6 +9,7 @@
# Please read expressions.jl first.
#############################################################################
import Convex.sign, Convex.monotonicity, Convex.curvature, Convex.evaluate, Convex.conic_form!
using Convex: AbstractExpr, ConstVexity, Nondecreasing, has_conic_form, cache_conic_form, get_conic_form
export antidiag

### Diagonal
@@ -65,7 +66,7 @@ antidiag(x::AbstractExpr, k::Int=0) = AntidiagAtom(x, k)
# 3. We populate coeff with 1s at the correct indices
# The canonical form will then be:
# coeff * x - d = 0
function conic_form!(x::AntidiagAtom, unique_conic_forms::UniqueConicForms=UniqueConicForms())
function conic_form!(x::AntidiagAtom, unique_conic_forms::Convex.UniqueConicForms)
if !has_conic_form(unique_conic_forms, x)
(num_rows, num_cols) = x.children[1].size
k = x.k
@@ -70,12 +70,12 @@ c1 = diag(U) == 1
c2 = U in :SDP
p = minimize(objective,c1,c2)
solve!(p, SCS.Optimizer(verbose=0))
U.value
evaluate(U)

#-

# Verify if the rank of $U$ is 1:
B, C = eigen(U.value);
B, C = eigen(evaluate(U));
length([e for e in B if(abs(real(e))>1e-4)])

#-
@@ -36,12 +36,12 @@ output = matopen("Res.mat")
names(output)
outputData = read(output, "Wres");
Wres = outputData
real_diff = real(W.value) - real(Wres);
imag_diff = imag(W.value) - imag(Wres);
real_diff = real(evaluate(W)) - real(Wres);
imag_diff = imag(evaluate(W)) - imag(Wres);
@test real_diff ≈ zeros(n, n) atol = TOL
@test imag_diff ≈ zeros(n, n) atol = TOL

real_diff = real(W.value) - (real(W.value))';
imag_sum = imag(W.value) + (imag(W.value))';
real_diff = real(evaluate(W)) - (real(evaluate(W)))';
imag_sum = imag(evaluate(W)) + (imag(evaluate(W)))';
@test real_diff ≈ zeros(n, n) atol = TOL
@test imag_diff ≈ zeros(n, n) atol = TOL
@@ -61,8 +61,8 @@ solve!(p, SCS.Optimizer()) #use SCS.Optimizer(verbose = false) to suppress p
#-

# Optimal portfolio weights:
w.value
evaluate(w)

#-

sum(w.value)
sum(evaluate(w))
@@ -133,7 +133,7 @@ p.status

#-

x.value
evaluate(x)

#-

1 change: 1 addition & 0 deletions docs/make.jl
@@ -144,6 +144,7 @@ makedocs(;
"Problem Depot" => "problem_depot.md",
"Contributing" => "contributing.md",
"Credits" => "credits.md",
"Reference" => "reference.md",
"Examples" => examples_nav,
],
repo = "https://github.com/JuliaOpt/Convex.jl/blob/{commit}{path}#L{line}",
128 changes: 103 additions & 25 deletions docs/src/advanced.md
@@ -53,26 +53,21 @@ lambda = 105
Fixing and freeing variables
----------------------------

Convex.jl allows you to fix a variable `x` to a value by
calling the `fix!` method. Fixing the variable essentially
turns it into a constant. Fixed variables are sometimes also called
parameters.
Convex.jl allows you to fix a variable `x` to a value by calling the `fix!`
method. Fixing the variable essentially turns it into a constant. Fixed
variables are sometimes also called parameters.

`fix!(x, v)` fixes the variable `x` to the value
`v`.
`fix!(x, v)` fixes the variable `x` to the value `v`.

`fix!(x)` fixes `x` to the value
`x.value`, which might be the value obtained by solving
another problem involving the variable `x`.
`fix!(x)` fixes `x` to its current value, which might be the value obtained by
solving another problem involving the variable `x`.

To allow the variable `x` to vary again, call
`free!(x)`.
To allow the variable `x` to vary again, call `free!(x)`.
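
The basic pattern can be sketched as follows (a minimal sketch assuming SCS is installed; the problem shown is illustrative):

```julia
using Convex, SCS

x = Variable()
y = Variable()
p = minimize(sum_squares(x + y), y >= 1)

fix!(x, 2.0)    # x now behaves like the constant 2.0
solve!(p, SCS.Optimizer(verbose=0))

free!(x)        # x is a decision variable again
solve!(p, SCS.Optimizer(verbose=0))
```
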

Fixing and freeing variables can be particularly useful as a tool for
performing alternating minimization on nonconvex problems. For example,
we can find an approximate solution to a nonnegative matrix
factorization problem with alternating minimization as follows. We use
warmstarts to speed up the solution.
Fixing and freeing variables can be particularly useful as a tool for performing
alternating minimization on nonconvex problems. For example, we can find an
approximate solution to a nonnegative matrix factorization problem with
alternating minimization as follows. We use warmstarts to speed up the solution.

```julia
# initialize nonconvex problem
@@ -83,7 +78,7 @@ y = Variable(k, n)
problem = minimize(sum_squares(A - x*y), x>=0, y>=0)

# initialize value of y
y.value = rand(k, n)
set_value!(y, rand(k, n))
# we'll do 10 iterations of alternating minimization
for i=1:10
# first solve for x
@@ -99,20 +94,103 @@ for i=1:10
end
```


Custom Variable Types
---------------------

By making subtypes of [`Convex.AbstractVariable`](@ref) that conform to the appropriate
interface (see the [`Convex.AbstractVariable`](@ref) docstring for details), one can
easily provide custom variable types for specific constructions. These aren't
always necessary, though; for example, one can define the following function
`probabilityvector`:
```@example prob
using Convex
function probabilityvector(d::Int)
x = Variable(d, Positive())
add_constraint!(x, sum(x) == 1)
return x
end
```
and then use, say, `p = probabilityvector(3)` in any Convex.jl problem. The
constraints that the entries of `p` are non-negative and sum to 1 will be
automatically added to any problem `p` is used in.

Custom types are necessary when one wants to dispatch on custom variables, use
them as callable types, or provide a different implementation. Continuing with
the probability vector example, let's say we often use probability vector
variables to take expectation values, and we want to use function notation for
this. To do so, we define

```@example 1
using Convex
mutable struct ProbabilityVector <: Convex.AbstractVariable
head::Symbol
id_hash::UInt64
size::Tuple{Int, Int}
value::Convex.ValueOrNothing
vexity::Convex.Vexity
function ProbabilityVector(d)
this = new(:ProbabilityVector, 0, (d,1), nothing, Convex.AffineVexity())
this.id_hash = objectid(this)
this
end
end
Convex.constraints(p::ProbabilityVector) = [ sum(p) == 1 ]
Convex.sign(::ProbabilityVector) = Convex.Positive()
Convex.vartype(::ProbabilityVector) = Convex.ContVar
(p::ProbabilityVector)(x) = dot(p, x)
```

Then one can call `p = ProbabilityVector(3)` to construct our custom variable,
which can be used in Convex.jl problems, already encodes the appropriate
constraints (its entries are non-negative and sum to 1), and can act on
constants via `p(x)`. For example,

```@example 1
using SCS
p = ProbabilityVector(3)
x = [1.0, 2.0, 3.0]
prob = minimize( p(x) )
solve!(prob, SCS.Optimizer(verbose=false))
evaluate(p) # [1.0, 0.0, 0.0]
```

Subtypes of `AbstractVariable` must have the fields `head`, `id_hash`, and
`size`, and `id_hash` must be populated as shown in the example. They must then
also

* either have a field `value`, or implement [`Convex._value`](@ref) and
  [`Convex.set_value!`](@ref),
* either have a field `vexity`, or implement [`Convex.vexity`](@ref) and
  [`Convex.vexity!`](@ref) (though the latter is only necessary if you wish to
  support [`Convex.fix!`](@ref) and [`Convex.free!`](@ref)),
* either have a field `constraints`, or implement [`Convex.constraints`](@ref)
  (optionally, implement [`Convex.add_constraint!`](@ref) to be able to add
  constraints to your variable after its creation),
* either have a field `sign`, or implement [`Convex.sign`](@ref), and
* either have a field `vartype`, or implement [`Convex.vartype`](@ref)
  (optionally, implement [`Convex.vartype!`](@ref) to be able to change a
  variable's `vartype` after construction).
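
For instance, the all-fields route might look like the following sketch. This is a hedged illustration, not part of the package: `UnitInterval` is a hypothetical type, and the field types `Convex.ValueOrNothing`, `Convex.Constraint`, and `Convex.Sign` are assumptions modeled on the `ProbabilityVector` example above.

```julia
using Convex

# A hypothetical variable type whose entries are constrained to [0, 1].
# Every interface requirement is satisfied by a field, so no methods
# need to be implemented.
mutable struct UnitInterval <: Convex.AbstractVariable
    head::Symbol
    id_hash::UInt64
    size::Tuple{Int, Int}
    value::Convex.ValueOrNothing
    vexity::Convex.Vexity
    constraints::Vector{Convex.Constraint}
    sign::Convex.Sign
    vartype::Convex.VarType
    function UnitInterval(d)
        this = new(:UnitInterval, 0, (d, 1), nothing, Convex.AffineVexity(),
                   Convex.Constraint[], Convex.Positive(), Convex.ContVar)
        this.id_hash = objectid(this)
        add_constraint!(this, this <= 1)  # sign already enforces >= 0
        this
    end
end
```
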


Printing and the tree structure
-------------------------------

A Convex problem is structured as a *tree*, with the *root* being the
problem object, with branches to the objective and the set of constraints.
The objective is an `AbstractExpr` which itself is a tree, with each atom
being a node and having `children` which are other atoms, variables, or
constants. Convex provides `children` methods from
A Convex problem is structured as a *tree*, with the *root* being the problem
object, with branches to the objective and the set of constraints. The objective
is an `AbstractExpr` which itself is a tree, with each atom being a node and
having `children` which are other atoms, variables, or constants. Convex
provides `children` methods from
[AbstractTrees.jl](https://github.com/Keno/AbstractTrees.jl) so that the
tree-traversal functions of that package can be used with Convex.jl problems
and structures. This is what powers the printing of problems, expressions,
tree-traversal functions of that package can be used with Convex.jl problems and
structures. This is what powers the printing of problems, expressions,
and constraints. The depth to which the tree corresponding to a problem,
expression, or constraint is printed is controlled by the global variable
[`Convex.MAXDEPTH`](@ref), which defaults to 3. This can be changed by e.g. setting
[`Convex.MAXDEPTH`](@ref), which defaults to 3. This can be changed by e.g.
setting

```julia
Convex.MAXDEPTH[] = 5
```
8 changes: 4 additions & 4 deletions docs/src/problem_depot.md
@@ -46,20 +46,20 @@ this should be the same for every problem, except for the name, which is a descr

Then begins the body of the problem. It is set up like any other Convex.jl problem, only `handle_problem!` is called instead of `solve!`. This allows particular solvers to be used (via e.g. choosing `handle_problem! = p -> solve!(p, solver)`), or for any other function of the problem. Tests should be included and gated behind `if test` blocks, so that tests can be skipped for benchmarking, or in the case that the problem is not in fact solved during `handle_problem!`.

The fact that the problems may not be solved during `handle_problem!` brings with it a small complication: any command that assumes the problem has been solved should be behind an `if test` check. For example, in some of the problems, `real(x.value)` is used, for a variable `x`; perhaps as
The fact that the problems may not be solved during `handle_problem!` brings with it a small complication: any command that assumes the problem has been solved should be behind an `if test` check. For example, in some of the problems, `real(evaluate(x))` is used, for a variable `x`; perhaps as

```julia
x_re = real(x.value)
x_re = real(evaluate(x))
if test
@test x_re ≈ ...
end
```

However, if the problem `x` is used in has not been solved, then `x.value === nothing`, and `real(nothing)` throws an error. So instead, this should be rewritten as
However, if the problem that `x` is used in has not been solved, then `evaluate(x) === nothing`, and `real(nothing)` throws an error. So instead, this should be rewritten as

```julia
if test
x_re = real(x.value)
x_re = real(evaluate(x))
@test x_re ≈ ...
end
```
26 changes: 26 additions & 0 deletions docs/src/reference.md
@@ -0,0 +1,26 @@
# Reference

The `AbstractVariable` interface:

```@docs
Convex.AbstractVariable
Convex._value
Convex.set_value!
Convex.constraints
Convex.add_constraint!
Convex.vexity
Convex.vexity!
Convex.sign
Convex.sign!
Convex.VarType
Convex.vartype
Convex.vartype!
```

Functions:

```@docs
Convex.fix!
Convex.free!
Convex.evaluate
```