search_index.js
var documenterSearchIndex = {"docs":
[{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"__START_TIME = time_ns()\n@info \"Starting example povm_simulation\"","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/optimization_with_complex_variables/povm_simulation.jl\"","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/#POVM-simulation","page":"POVM simulation","title":"POVM simulation","text":"","category":"section"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"This notebook shows how we can check how much depolarizing noise a qubit positive operator-valued measure (POVM) can take before it becomes simulable by projective measurements. The general method is described in arXiv:1609.06139. The question of simulability by projective measurements boils down to an SDP problem. Eq. 
(8) from the paper defines the noisy POVM that we obtain by subjecting a POVM \\mathbf{M} to a depolarizing channel \\Phi_t:","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"\\left[\\Phi_t\\left(\\mathbf{M}\\right)\\right]_i = t M_i + (1-t)\\frac{\\mathrm{tr}(M_i)}{d}\\mathbb{1}","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"If this visibility t\\in[0,1] is one, the POVM \\mathbf{M} is simulable.","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"We will use Convex.jl to solve the SDP problem.","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"using Convex, SCS, LinearAlgebra\nif VERSION < v\"1.2.0-DEV.0\"\n (I::UniformScaling)(n::Integer) = Diagonal(fill(I.λ, n))\n LinearAlgebra.diagm(v::AbstractVector) = diagm(0 => v)\nend","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"For the qubit case, a four-outcome qubit POVM \\mathbf{M}\\in\\mathcal{P}(2,4) is simulable if and only if","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"M_1=N_{12}^{+}+N_{13}^{+}+N_{14}^{+}","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"M_2=N_{12}^{-}+N_{23}^{+}+N_{24}^{+}","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM 
simulation","text":"M_3=N_{13}^{-}+N_{23}^{-}+N_{34}^{+}","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"M_4=N_{14}^{-}+N_{24}^{-}+N_{34}^{-}","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"where the Hermitian operators N_{ij}^{\\pm} satisfy N_{ij}^{\\pm}\\geq 0 and N_{ij}^{+}+N_{ij}^{-}=p_{ij}\\mathbb{1}, where i<j, i,j=1,2,3,4, and p_{ij}\\geq 0 as well as \\sum_{i<j}p_{ij}=1, that is, the p_{ij} values form a probability vector. This forms an SDP feasibility problem, which we can rephrase as an optimization problem by adding depolarizing noise to the left-hand side of the above equations and maximizing the visibility t:","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"\\max_{t\\in[0,1]} t","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"such that","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"tM_1+(1-t)\\mathrm{tr}(M_1)\\frac{\\mathbb{1}}{2}=N_{12}^{+}+N_{13}^{+}+N_{14}^{+}","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"tM_2+(1-t)\\mathrm{tr}(M_2)\\frac{\\mathbb{1}}{2}=N_{12}^{-}+N_{23}^{+}+N_{24}^{+}","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"tM_3+(1-t)\\mathrm{tr}(M_3)\\frac{\\mathbb{1}}{2}=N_{13}^{-}+N_{23}^{-}+N_{34}^{+}","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM 
simulation","text":"tM_4+(1-t)\\mathrm{tr}(M_4)\\frac{\\mathbb{1}}{2}=N_{14}^{-}+N_{24}^{-}+N_{34}^{-}","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"We organize these constraints in a function that takes a four-outcome qubit POVM as its argument:","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"function get_visibility(K)\n noise = real([tr(K[i])*I(2)/2 for i=1:size(K, 1)])\n P = [[ComplexVariable(2, 2) for i=1:2] for j=1:6]\n q = Variable(6, Positive())\n t = Variable(1, Positive())\n constraints = [P[i][j] in :SDP for i=1:6 for j=1:2]\n constraints += sum(q)==1\n constraints += t<=1\n constraints += [P[i][1]+P[i][2] == q[i]*I(2) for i=1:6]\n constraints += t*K[1] + (1-t)*noise[1] == P[1][1] + P[2][1] + P[3][1]\n constraints += t*K[2] + (1-t)*noise[2] == P[1][2] + P[4][1] + P[5][1]\n constraints += t*K[3] + (1-t)*noise[3] == P[2][2] + P[4][2] + P[6][1]\n constraints += t*K[4] + (1-t)*noise[4] == P[3][2] + P[5][2] + P[6][2]\n p = maximize(t, constraints)\n solve!(p, () -> SCS.Optimizer(verbose=0))\n return p.optval\nend","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"We check this function using the tetrahedron measurement (see Appendix B in arXiv:quant-ph/0702021). 
This measurement is non-simulable, so we expect a value below one.","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"function dp(v)\n I(2) + v[1]*[0 1; 1 0] + v[2]*[0 -im; im 0] + v[3]*[1 0; 0 -1]\nend\nb = [ 1 1 1;\n -1 -1 1;\n -1 1 -1;\n 1 -1 -1]/sqrt(3)\nM = [dp(b[i, :]) for i=1:size(b,1)]/4;\nget_visibility(M)","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"This value matches the one we obtained using PICOS.","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/optimization_with_complex_variables/povm_simulation/","page":"POVM simulation","title":"POVM simulation","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example povm_simulation after \" * elapsed","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"__START_TIME = time_ns()\n@info \"Starting example Convex.jl_intro_ISMP2015\"","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"EditURL = 
\"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/supplemental_material/Convex.jl_intro_ISMP2015.jl\"","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/#Convex-Optimization-in-Julia","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"","category":"section"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/#Madeleine-Udell-ISMP-2015","page":"Convex Optimization in Julia","title":"Madeleine Udell | ISMP 2015","text":"","category":"section"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/#Convex.jl-team","page":"Convex Optimization in Julia","title":"Convex.jl team","text":"","category":"section"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"Convex.jl: Madeleine Udell, Karanveer Mohan, David Zeng, Jenny Hong","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/#Collaborators/Inspiration:","page":"Convex Optimization in Julia","title":"Collaborators/Inspiration:","text":"","category":"section"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"CVX: Michael Grant, Stephen Boyd\nCVXPY: Steven Diamond, Eric Chu, Stephen Boyd\nJuliaOpt: Miles Lubin, Iain Dunning, Joey Huchette","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"# initial package installation","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"# Make the Convex.jl module available\nusing Convex, SparseArrays, LinearAlgebra\nusing SCS # first order splitting conic solver 
[O'Donoghue et al., 2014]\n\n# Generate random problem data\nm = 50; n = 100\nA = randn(m, n)\nx♮ = sprand(n, 1, .5) # true (sparse nonnegative) parameter vector\nnoise = .1*randn(m) # gaussian noise\nb = A*x♮ + noise # noisy linear observations\n\n# Create a (column vector) variable of size n.\nx = Variable(n)\n\n# nonnegative elastic net with regularization\nλ = 1\nμ = 1\nproblem = minimize(square(norm(A * x - b)) + λ*square(norm(x)) + μ*norm(x, 1),\n x >= 0)\n\n# Solve the problem by calling solve!\nsolve!(problem, () -> SCS.Optimizer(verbose=0))\n\nprintln(\"problem status is \", problem.status) # :Optimal, :Infeasible, :Unbounded etc.\nprintln(\"optimal value is \", problem.optval)","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"using Interact, Plots\n# Interact.WebIO.install_jupyter_nbextension() # might be helpful if you see `WebIO` warnings in Jupyter\n@manipulate throttle=.1 for λ=0:.1:5, μ=0:.1:5\n global A\n problem = minimize(square(norm(A * x - b)) + λ*square(norm(x)) + μ*norm(x, 1),\n x >= 0)\n solve!(problem, () -> SCS.Optimizer(verbose=0))\n histogram(evaluate(x), xlims=(0,3.5), label=\"x\")\nend","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/#Quick-convex-prototyping","page":"Convex Optimization in Julia","title":"Quick convex prototyping","text":"","category":"section"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/#Variables","page":"Convex Optimization in Julia","title":"Variables","text":"","category":"section"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"# Scalar variable\nx = Variable()","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization 
in Julia","text":"# (Column) vector variable\ny = Variable(4)","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"# Matrix variable\nZ = Variable(4, 4)","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/#Expressions","page":"Convex Optimization in Julia","title":"Expressions","text":"","category":"section"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"Convex.jl allows you to use a wide variety of functions on variables and on expressions to form new expressions.","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"x + 2x","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"e = y[1] + logdet(Z) + sqrt(x) + minimum(y)","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/#Examine-the-expression-tree","page":"Convex Optimization in Julia","title":"Examine the expression tree","text":"","category":"section"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"e.children[2]","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/#Constraints","page":"Convex Optimization in Julia","title":"Constraints","text":"","category":"section"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"A constraint is convex if convex combinations of feasible points are also feasible. 
Equivalently, feasible sets are convex sets.","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"In other words, convex constraints are of the form","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"convexExpr <= 0\nconcaveExpr >= 0\naffineExpr == 0","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"x <= 0","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"square(x) <= sum(y)","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"M = Z\nfor i = 1:length(y)\n global M += rand(size(Z)...)*y[i]\nend\nM ⪰ 0","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/#Problems","page":"Convex Optimization in Julia","title":"Problems","text":"","category":"section"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"x = Variable()\ny = Variable(4)\nobjective = 2*x + 1 - sqrt(sum(y))\nconstraint = x >= maximum(y)\np = minimize(objective, constraint)","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"# solve the problem\nsolve!(p, () -> SCS.Optimizer(verbose=0))\np.status","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in 
Julia","title":"Convex Optimization in Julia","text":"evaluate(x)","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"# can evaluate expressions directly\nevaluate(objective)","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/#Pass-to-solver","page":"Convex Optimization in Julia","title":"Pass to solver","text":"","category":"section"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"call a MathProgBase solver suited for your problem class","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"see the list of Convex.jl operations to find which cones you're using\nsee the list of solvers for an up-to-date list of solvers and which cones they support","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"to solve the problem using a different solver, just import the solver package and pass the solver to the solve! 
method, e.g.","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"using Mosek\nsolve!(p, Mosek.Optimizer)","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/#Warmstart","page":"Convex Optimization in Julia","title":"Warmstart","text":"","category":"section"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"# Generate random problem data\nm = 50; n = 100\nA = randn(m, n)\nx♮ = sprand(n, 1, .5) # true (sparse nonnegative) parameter vector\nnoise = .1*randn(m) # gaussian noise\nb = A*x♮ + noise # noisy linear observations\n\n# Create a (column vector) variable of size n.\nx = Variable(n)\n\n# nonnegative elastic net with regularization\nλ = 1\nμ = 1\nproblem = minimize(square(norm(A * x - b)) + λ*square(norm(x)) + μ*norm(x, 1),\n x >= 0)\n@time solve!(problem, () -> SCS.Optimizer(verbose=0))\nλ = 1.5\n@time solve!(problem, () -> SCS.Optimizer(verbose=0), warmstart = true)","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/#DCP-examples","page":"Convex Optimization in Julia","title":"DCP examples","text":"","category":"section"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"# affine\nx = Variable(4)\ny = Variable(2)\nsum(x) + y[2]","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"2*maximum(x) + 4*sum(y) - sqrt(y[1] + x[1]) - 7 * minimum(x[2:4])","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"# not DCP 
compliant\nlog(x) + square(x)","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"# $f$ is convex increasing and $g$ is convex\nsquare(pos(x))","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"# $f$ is convex decreasing and $g$ is concave\ninvpos(sqrt(x))","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"# $f$ is concave increasing and $g$ is concave\nsqrt(sqrt(x))","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/supplemental_material/Convex.jl_intro_ISMP2015/","page":"Convex Optimization in Julia","title":"Convex Optimization in Julia","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example Convex.jl_intro_ISMP2015 after \" * elapsed","category":"page"},{"location":"problem_depot/#Problem-Depot","page":"Problem Depot","title":"Problem Depot","text":"","category":"section"},{"location":"problem_depot/","page":"Problem Depot","title":"Problem Depot","text":"Convex.jl has a submodule, ProblemDepot, which holds a collection of convex optimization problems. The problems are used by Convex itself to test and benchmark its code, but can also be used by solvers to test and benchmark their code. 
These tests have been used with many solvers at ConvexTests.jl.","category":"page"},{"location":"problem_depot/","page":"Problem Depot","title":"Problem Depot","text":"ProblemDepot has two main methods for accessing these problems: Convex.ProblemDepot.run_tests and Convex.ProblemDepot.benchmark_suite.","category":"page"},{"location":"problem_depot/","page":"Problem Depot","title":"Problem Depot","text":"For example, to test the solver SCS on all the problems of the depot except the mixed-integer problems (which it cannot handle), run","category":"page"},{"location":"problem_depot/","page":"Problem Depot","title":"Problem Depot","text":"using Convex, SCS, Test\n@testset \"SCS\" begin\n Convex.ProblemDepot.run_tests(; exclude=[r\"mip\"]) do p\n solve!(p, () -> SCS.Optimizer(verbose=0, eps=1e-6))\n end\nend","category":"page"},{"location":"problem_depot/#How-to-write-a-ProblemDepot-problem","page":"Problem Depot","title":"How to write a ProblemDepot problem","text":"","category":"section"},{"location":"problem_depot/","page":"Problem Depot","title":"Problem Depot","text":"The problems are organized into folders in src/problem_depot/problems. Each is written as a function, annotated by @add_problem, and a name, which is used to group the problems. 
For example, here is a simple problem:","category":"page"},{"location":"problem_depot/","page":"Problem Depot","title":"Problem Depot","text":"@add_problem affine function affine_negate_atom(handle_problem!, ::Val{test}, atol, rtol, ::Type{T}) where {T, test}\n x = Variable()\n p = minimize(-x, [x <= 0])\n if test\n @test vexity(p) == AffineVexity()\n end\n handle_problem!(p)\n if test\n @test p.optval ≈ 0 atol=atol rtol=rtol\n @test evaluate(-x) ≈ 0 atol=atol rtol=rtol\n end\nend","category":"page"},{"location":"problem_depot/","page":"Problem Depot","title":"Problem Depot","text":"The @add_problem call adds the problem to the registry of problems in Convex.ProblemDepot.PROBLEMS, which in turn is used by Convex.ProblemDepot.run_tests and Convex.ProblemDepot.benchmark_suite. Next, affine is the grouping of the problem; this problem came from one of the affine tests, and in particular is testing the negation atom. Next is the function signature:","category":"page"},{"location":"problem_depot/","page":"Problem Depot","title":"Problem Depot","text":"function affine_negate_atom(handle_problem!, ::Val{test}, atol, rtol, ::Type{T}) where {T, test}","category":"page"},{"location":"problem_depot/","page":"Problem Depot","title":"Problem Depot","text":"this should be the same for every problem, except for the name, which is a description of the problem. It should include what kind of atoms it uses (affine in this case), so that certain kinds of atoms can be ruled out by the exclude keyword to Convex.ProblemDepot.run_tests and Convex.ProblemDepot.benchmark_suite; for example, many solvers cannot solve mixed-integer problems, so mip is included in the name of such problems.","category":"page"},{"location":"problem_depot/","page":"Problem Depot","title":"Problem Depot","text":"Then begins the body of the problem. It is setup like any other Convex.jl problem, only handle_problem! is called instead of solve!. This allows particular solvers to be used (via e.g. 
choosing handle_problem! = p -> solve!(p, solver)), or for any other function of the problem. Tests should be included and gated behind if test blocks, so that tests can be skipped for benchmarking, or in the case that the problem is not in fact solved during handle_problem!.","category":"page"},{"location":"problem_depot/","page":"Problem Depot","title":"Problem Depot","text":"The fact that the problems may not be solved during handle_problem! brings with it a small complication: any command that assumes the problem has been solved should be behind an if test check. For example, in some of the problems, real(evaluate(x)) is used, for a variable x; perhaps as","category":"page"},{"location":"problem_depot/","page":"Problem Depot","title":"Problem Depot","text":"x_re = real(evaluate(x))\nif test\n @test x_re ≈ ...\nend","category":"page"},{"location":"problem_depot/","page":"Problem Depot","title":"Problem Depot","text":"However, if the problem that x is used in has not been solved, then evaluate(x) === nothing, and real(nothing) throws an error. So instead, this should be rewritten as","category":"page"},{"location":"problem_depot/","page":"Problem Depot","title":"Problem Depot","text":"if test\n x_re = real(evaluate(x))\n @test x_re ≈ ...\nend","category":"page"},{"location":"problem_depot/#Benchmark-only-problems","page":"Problem Depot","title":"Benchmark-only problems","text":"","category":"section"},{"location":"problem_depot/","page":"Problem Depot","title":"Problem Depot","text":"To add problems for benchmarking without tests, place problems in src/problem_depot/problems/benchmark, and include benchmark in the name. These problems will be automatically skipped during run_tests calls. 
For example, to benchmark the time it takes to add an SDP constraint, we have the problem","category":"page"},{"location":"problem_depot/","page":"Problem Depot","title":"Problem Depot","text":"@add_problem constraints_benchmark function sdp_constraint(handle_problem!, args...)\n p = satisfy()\n x = Variable(44, 44) # 990 vectorized entries\n push!(p.constraints, x ⪰ 0)\n handle_problem!(p)\n nothing\nend","category":"page"},{"location":"problem_depot/","page":"Problem Depot","title":"Problem Depot","text":"However, this \"problem\" has no tests or interesting content for testing, so we skip it during testing. Note that we use args... in the function signature so that it may be called with the standard function signature","category":"page"},{"location":"problem_depot/","page":"Problem Depot","title":"Problem Depot","text":"f(handle_problem!, ::Val{test}, atol, rtol, ::Type{T}) where {T, test}","category":"page"},{"location":"problem_depot/#Reference","page":"Problem Depot","title":"Reference","text":"","category":"section"},{"location":"problem_depot/","page":"Problem Depot","title":"Problem Depot","text":"Convex.ProblemDepot.run_tests\nConvex.ProblemDepot.benchmark_suite\nConvex.ProblemDepot.foreach_problem\nConvex.ProblemDepot.PROBLEMS","category":"page"},{"location":"problem_depot/#Convex.ProblemDepot.run_tests","page":"Problem Depot","title":"Convex.ProblemDepot.run_tests","text":"run_tests(\n handle_problem!::Function,\n problems::Union{Nothing, Vector{String}, Vector{Regex}} = nothing; \n exclude::Vector{Regex} = Regex[],\n T=Float64, atol=1e-3, rtol=0.0, \n)\n\nRun a set of tests. handle_problem! should be a function that takes one argument, a Convex.jl Problem and processes it (e.g. solve! the problem with a specific solver).\n\nUse exclude to exclude a subset of the problems; r\"benchmark\" is automatically excluded. Optionally, pass a second argument problems to only allow certain problems (specified by exact names or regex). 
The test tolerances are specified by atol and rtol. Set T to choose a numeric type for the problem. Currently this is only used for choosing the type parameter of the underlying MathOptInterface model, but not for the actual problem data.\n\nExamples\n\nrun_tests(exclude=[r\"mip\"]) do p\n solve!(p, () -> SCS.Optimizer(verbose=0))\nend\n\n\n\n\n\n","category":"function"},{"location":"problem_depot/#Convex.ProblemDepot.benchmark_suite","page":"Problem Depot","title":"Convex.ProblemDepot.benchmark_suite","text":"benchmark_suite(\n handle_problem!::Function,\n problems::Union{Nothing, Vector{String}, Vector{Regex}} = nothing; \n exclude::Vector{Regex} = Regex[],\n test = Val(false),\n T=Float64, atol=1e-3, rtol=0.0, \n)\n\nCreate a suite of benchmarks. handle_problem! should be a function that takes one argument, a Convex.jl Problem and processes it (e.g. solve! the problem with a specific solver). Pass a second argument problems to run benchmarks only with certain problems (specified by exact names or regex).\n\nUse exclude to exclude a subset of benchmarks. Set test=true to also check the answers, with tolerances specified by atol and rtol. Set T to choose a numeric type for the problem. Currently this is only used for choosing the type parameter of the underlying MathOptInterface model, but not for the actual problem data.\n\nExamples\n\nbenchmark_suite(exclude=[r\"mip\"]) do p\n solve!(p, () -> SCS.Optimizer(verbose=0))\nend\n\n\n\n\n\n","category":"function"},{"location":"problem_depot/#Convex.ProblemDepot.foreach_problem","page":"Problem Depot","title":"Convex.ProblemDepot.foreach_problem","text":"foreach_problem(apply::Function, [class::String],\n problems::Union{Nothing, Vector{String}, Vector{Regex}} = nothing; \n exclude::Vector{Regex} = Regex[])\n\nProvides a convenience method for iterating over problems in PROBLEMS. 
For each problem in PROBLEMS, apply the function apply, which takes two arguments: the name of the function associated to the problem, and the function associated to the problem itself.\n\nOptionally, pass a second argument class to only iterate over a class of problems (class should satisfy class ∈ keys(PROBLEMS)), and pass a third argument problems to only allow certain problems (specified by exact names or regex). Use the exclude keyword argument to exclude problems by regex.\n\n\n\n\n\n","category":"function"},{"location":"problem_depot/#Convex.ProblemDepot.PROBLEMS","page":"Problem Depot","title":"Convex.ProblemDepot.PROBLEMS","text":"const PROBLEMS = Dict{String, Dict{String, Function}}()\n\nA \"depot\" of Convex.jl problems, subdivided into categories. Each problem is stored as a function with the signature\n\nf(handle_problem!, ::Val{test}, atol, rtol, ::Type{T}) where {T, test}\n\nwhere handle_problem! specifies what to do with the Problem instance (e.g., solve! it with a chosen solver), an option test to choose whether or not to test the values (assuming it has been solved), tolerances for the tests, and a numeric type in which the problem should be specified (currently, this is not respected and all problems are specified in Float64 precision).\n\nSee also run_tests and benchmark_suite for helpers to use these problems in testing or benchmarking.\n\nExamples\n\njulia> PROBLEMS[\"affine\"][\"affine_diag_atom\"]\naffine_diag_atom (generic function with 1 method)\n\n\n\n\n\n","category":"constant"},{"location":"contributing/#Contributing","page":"Contributing","title":"Contributing","text":"","category":"section"},{"location":"contributing/","page":"Contributing","title":"Contributing","text":"We'd welcome contributions to the Convex.jl package. Here are some short instructions on how to get started. 
If you don't know what you'd like to contribute, you could","category":"page"},{"location":"contributing/","page":"Contributing","title":"Contributing","text":"take a look at the current issues and pick one. (Feature requests are probably the easiest to tackle.)\nadd a usage example.","category":"page"},{"location":"contributing/","page":"Contributing","title":"Contributing","text":"Then submit a pull request (PR). (Let us know if it's a work in progress by putting [WIP] in the name of the PR.)","category":"page"},{"location":"contributing/#Adding-examples","page":"Contributing","title":"Adding examples","text":"","category":"section"},{"location":"contributing/","page":"Contributing","title":"Contributing","text":"Take a look at our existing usage examples and add another in a similar style.\nSubmit a PR. (Let us know if it's a work in progress by putting [WIP] in the name of the PR.)\nWe'll look it over, fix up anything that doesn't work, and merge it!","category":"page"},{"location":"contributing/#Adding-atoms","page":"Contributing","title":"Adding atoms","text":"","category":"section"},{"location":"contributing/","page":"Contributing","title":"Contributing","text":"Here are the steps to add a new function or operation (atom) to Convex.jl. Let's say you're adding the new function f.","category":"page"},{"location":"contributing/","page":"Contributing","title":"Contributing","text":"Take a look at the nuclear norm atom for an example of how to construct atoms, and see the norm atom for an example of an atom that depends on a parameter.\nCopy-paste the nuclear norm file (for example), replace anything saying nuclear norm with the name of the atom f, and fill in the monotonicity, curvature, etc. Save it in the appropriate subfolder of src/atoms/.\nAdd as a comment a description of what the atom does and its parameters.\nThe most mathematically interesting part is the conic_form! function. 
Following the example in the nuclear norm atom, you'll see that you can just construct the problem whose optimal value is f(x), introducing any auxiliary variables you need, exactly as you would normally in Convex.jl, and then call cache_conic_form! on that problem.\nAdd a test for the atom in src/problem_depot/problem/<cone>, where <cone> matches the subfolder of src/atoms, so we can verify it works. See How to write a ProblemDepot problem for details on how to write the tests.\nSubmit a PR, including a description of what the atom does and its parameters. (Let us know if it's a work in progress by putting [WIP] in the name of the PR.)\nWe'll look it over, fix up anything that doesn't work, and merge it!","category":"page"},{"location":"contributing/#Fixing-the-guts","page":"Contributing","title":"Fixing the guts","text":"","category":"section"},{"location":"contributing/","page":"Contributing","title":"Contributing","text":"If you want to do a more major bug fix, you may need to understand how Convex.jl thinks about conic form. To do this, start by reading the Convex.jl paper. You may find our JuliaCon 2014 talk helpful as well; you can find the IPython notebook presented in the talk here.","category":"page"},{"location":"contributing/","page":"Contributing","title":"Contributing","text":"Then read the conic form code:","category":"page"},{"location":"contributing/","page":"Contributing","title":"Contributing","text":"We define data structures for conic objectives and conic constraints, and simple ways of combining them, in conic_form.jl.\nWe load the internal conic form representation into the MathOptInterface model in the function load_MOI_model!.\nWe solve problems (that is, pass the standard form of the problem to a solver, and put the solution back into the values of the appropriate variables) in solve!.","category":"page"},{"location":"contributing/","page":"Contributing","title":"Contributing","text":"You're now armed and dangerous. 
Go ahead and open an issue (or comment on a previous one) if you can't figure something out, or submit a PR if you can figure it out. (Let us know if it's a work in progress by putting [WIP] in the name of the PR.)","category":"page"},{"location":"contributing/","page":"Contributing","title":"Contributing","text":"PRs that comment the code more thoroughly will also be welcomed.","category":"page"},{"location":"quick_tutorial/#Quick-Tutorial","page":"Quick Tutorial","title":"Quick Tutorial","text":"","category":"section"},{"location":"quick_tutorial/","page":"Quick Tutorial","title":"Quick Tutorial","text":"Consider a constrained least squares problem","category":"page"},{"location":"quick_tutorial/","page":"Quick Tutorial","title":"Quick Tutorial","text":"beginaligned\nbeginarrayll\ntextminimize Ax - b_2^2 \ntextsubject to x geq 0\nendarray\nendaligned","category":"page"},{"location":"quick_tutorial/","page":"Quick Tutorial","title":"Quick Tutorial","text":"with variable xin mathbfR^n, and problem data A in mathbfR^m times n, b in mathbfR^m.","category":"page"},{"location":"quick_tutorial/","page":"Quick Tutorial","title":"Quick Tutorial","text":"This problem can be solved in Convex.jl as follows:","category":"page"},{"location":"quick_tutorial/","page":"Quick Tutorial","title":"Quick Tutorial","text":"# Make the Convex.jl module available\nusing Convex, SCS\n\n# Generate random problem data\nm = 4; n = 5\nA = randn(m, n); b = randn(m, 1)\n\n# Create a (column vector) variable of size n x 1.\nx = Variable(n)\n\n# The problem is to minimize ||Ax - b||^2 subject to x >= 0\n# This can be done by: minimize(objective, constraints)\nproblem = minimize(sumsquares(A * x - b), [x >= 0])\n\n# Solve the problem by calling solve!\nsolve!(problem, () -> SCS.Optimizer(verbose=false))\n\n# Check the status of the problem\nproblem.status # :Optimal, :Infeasible, :Unbounded etc.\n\n# Get the optimum 
value\nproblem.optval","category":"page"},{"location":"installation/#Installation","page":"Installation","title":"Installation","text":"","category":"section"},{"location":"installation/","page":"Installation","title":"Installation","text":"Installing Convex.jl is a one-step process. Open up Julia and type:","category":"page"},{"location":"installation/","page":"Installation","title":"Installation","text":"Pkg.update()\nPkg.add(\"Convex\")","category":"page"},{"location":"installation/","page":"Installation","title":"Installation","text":"This does not install any solvers. If you don't have a solver installed already, you will want to install a solver such as SCS by running:","category":"page"},{"location":"installation/","page":"Installation","title":"Installation","text":"Pkg.add(\"SCS\")","category":"page"},{"location":"installation/","page":"Installation","title":"Installation","text":"To solve certain problems, such as mixed integer programming problems, you will need to install another solver as well, such as GLPK. 
If you wish to use other solvers, please read the section on Solvers.","category":"page"},{"location":"examples/general_examples/huber_regression/","page":"Huber regression","title":"Huber regression","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/general_examples/huber_regression/","page":"Huber regression","title":"Huber regression","text":"__START_TIME = time_ns()\n@info \"Starting example huber_regression\"","category":"page"},{"location":"examples/general_examples/huber_regression/","page":"Huber regression","title":"Huber regression","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/general_examples/huber_regression.jl\"","category":"page"},{"location":"examples/general_examples/huber_regression/#Huber-regression","page":"Huber regression","title":"Huber regression","text":"","category":"section"},{"location":"examples/general_examples/huber_regression/","page":"Huber regression","title":"Huber regression","text":"This example can be found here: https://web.stanford.edu/~boyd/papers/pdf/cvx_applications.pdf. 
Here we set big_example = false to only generate a small example which takes less time to run.","category":"page"},{"location":"examples/general_examples/huber_regression/","page":"Huber regression","title":"Huber regression","text":"big_example = false\nif big_example\n n = 300\n number_tests = 50\nelse\n n = 50\n number_tests = 10\nend","category":"page"},{"location":"examples/general_examples/huber_regression/","page":"Huber regression","title":"Huber regression","text":"Generate data for Huber regression.","category":"page"},{"location":"examples/general_examples/huber_regression/","page":"Huber regression","title":"Huber regression","text":"using Random\nRandom.seed!(1);\nnumber_samples = round(Int,1.5*n);\nbeta_true = 5*randn(n);\nX = randn(n, number_samples);\nY = zeros(number_samples);\nv = randn(number_samples);\nnothing #hide","category":"page"},{"location":"examples/general_examples/huber_regression/","page":"Huber regression","title":"Huber regression","text":"# Generate data for different values of p.\n# Solve the resulting problems.\nusing Convex, SCS, Distributions\nlsq_data = zeros(number_tests);\nhuber_data = zeros(number_tests);\nprescient_data = zeros(number_tests);\np_vals = range(0, stop=0.15, length=number_tests);\nfor i=1:length(p_vals)\n p = p_vals[i];\n # Generate the sign changes.\n factor = 2 * rand(Binomial(1, 1-p), number_samples) .- 1;\n Y = factor .* X' * beta_true + v;\n\n # Form and solve a standard regression problem.\n beta = Variable(n);\n fit = norm(beta - beta_true) / norm(beta_true);\n cost = norm(X' * beta - Y);\n prob = minimize(cost);\n solve!(prob, () -> SCS.Optimizer(verbose=0));\n lsq_data[i] = evaluate(fit);\n\n # Form and solve a prescient regression problem,\n # i.e., where the sign changes are known.\n cost = norm(factor .* (X'*beta) - Y);\n solve!(minimize(cost), () -> SCS.Optimizer(verbose=0))\n prescient_data[i] = evaluate(fit);\n\n # Form and solve the Huber regression problem.\n cost = sum(huber(X' * beta - Y, 
1));\n solve!(minimize(cost), () -> SCS.Optimizer(verbose=0))\n huber_data[i] = evaluate(fit);\nend","category":"page"},{"location":"examples/general_examples/huber_regression/","page":"Huber regression","title":"Huber regression","text":"using Plots\n\nplot(p_vals, huber_data, label=\"Huber\", xlabel=\"p\", ylabel=\"Fit\" )\nplot!(p_vals, lsq_data, label=\"Least squares\")\nplot!(p_vals, prescient_data, label=\"Prescient\")","category":"page"},{"location":"examples/general_examples/huber_regression/","page":"Huber regression","title":"Huber regression","text":"# Plot the relative reconstruction error for Huber and prescient regression,\n# zooming in on smaller values of p.\nindices = findall(p_vals .<= 0.08);\nplot(p_vals[indices], huber_data[indices], label=\"Huber\")\nplot!(p_vals[indices], prescient_data[indices], label=\"Prescient\")","category":"page"},{"location":"examples/general_examples/huber_regression/","page":"Huber regression","title":"Huber regression","text":"","category":"page"},{"location":"examples/general_examples/huber_regression/","page":"Huber regression","title":"Huber regression","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/general_examples/huber_regression/","page":"Huber regression","title":"Huber regression","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example huber_regression after \" * elapsed","category":"page"},{"location":"examples/general_examples/trade_off_curves/","page":"Trade-off curves","title":"Trade-off curves","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/general_examples/trade_off_curves/","page":"Trade-off curves","title":"Trade-off curves","text":"__START_TIME = time_ns()\n@info \"Starting example trade_off_curves\"","category":"page"},{"location":"examples/general_examples/trade_off_curves/","page":"Trade-off 
curves","title":"Trade-off curves","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/general_examples/trade_off_curves.jl\"","category":"page"},{"location":"examples/general_examples/trade_off_curves/#Trade-off-curves","page":"Trade-off curves","title":"Trade-off curves","text":"","category":"section"},{"location":"examples/general_examples/trade_off_curves/","page":"Trade-off curves","title":"Trade-off curves","text":"using Random\nRandom.seed!(1)\nm = 25;\nn = 10;\nA = randn(m, n);\nb = randn(m, 1);\nnothing #hide","category":"page"},{"location":"examples/general_examples/trade_off_curves/","page":"Trade-off curves","title":"Trade-off curves","text":"using Convex, SCS, LinearAlgebra\n\n\ngammas = exp10.(range(-4, stop=2, length=100));\n\nx_values = zeros(n, length(gammas));\nx = Variable(n);\nfor i=1:length(gammas)\n cost = sumsquares(A*x - b) + gammas[i]*norm(x,1);\n problem = minimize(cost, [norm(x, Inf) <= 1]);\n solve!(problem, () -> SCS.Optimizer(verbose=0));\n x_values[:,i] = evaluate(x);\nend","category":"page"},{"location":"examples/general_examples/trade_off_curves/","page":"Trade-off curves","title":"Trade-off curves","text":"Plot the regularization path.","category":"page"},{"location":"examples/general_examples/trade_off_curves/","page":"Trade-off curves","title":"Trade-off curves","text":"using Plots\nplot(title = \"Entries of x vs lambda\", xaxis=:log, xlabel=\"lambda\", ylabel=\"x\" )\nfor i = 1:n\n plot!(gammas, x_values[i,:], label=\"x$i\")\nend\nplot!()","category":"page"},{"location":"examples/general_examples/trade_off_curves/","page":"Trade-off curves","title":"Trade-off curves","text":"","category":"page"},{"location":"examples/general_examples/trade_off_curves/","page":"Trade-off curves","title":"Trade-off curves","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/general_examples/trade_off_curves/","page":"Trade-off curves","title":"Trade-off 
curves","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example trade_off_curves after \" * elapsed","category":"page"},{"location":"examples/general_examples/svm_l1regularization/","page":"SVM with L^1 regularization","title":"SVM with L^1 regularization","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/general_examples/svm_l1regularization/","page":"SVM with L^1 regularization","title":"SVM with L^1 regularization","text":"__START_TIME = time_ns()\n@info \"Starting example svm_l1regularization\"","category":"page"},{"location":"examples/general_examples/svm_l1regularization/","page":"SVM with L^1 regularization","title":"SVM with L^1 regularization","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/general_examples/svm_l1regularization.jl\"","category":"page"},{"location":"examples/general_examples/svm_l1regularization/#SVM-with-L1-regularization","page":"SVM with L^1 regularization","title":"SVM with L^1 regularization","text":"","category":"section"},{"location":"examples/general_examples/svm_l1regularization/","page":"SVM with L^1 regularization","title":"SVM with L^1 regularization","text":"# Generate data for SVM classifier with L1 regularization.\nusing Random\nRandom.seed!(3);\nn = 20;\nm = 1000;\nTEST = m;\nDENSITY = 0.2;\nbeta_true = randn(n,1);\nidxs = randperm(n)[1:round(Int, (1-DENSITY)*n)];\nbeta_true[idxs] .= 0\noffset = 0;\nsigma = 45;\nX = 5 * randn(m, n);\nY = sign.(X * beta_true .+ offset .+ sigma * randn(m,1));\nX_test = 5 * randn(TEST, n);\nnothing #hide","category":"page"},{"location":"examples/general_examples/svm_l1regularization/","page":"SVM with L^1 regularization","title":"SVM with L^1 regularization","text":"# Form SVM with L1 regularization problem.\nusing Convex, SCS, ECOS\n\nbeta = Variable(n);\nv = Variable();\nloss = sum(pos(1 - Y .* (X*beta - 
v)));\nreg = norm(beta, 1);\n\n# Compute a trade-off curve and record train and test error.\nTRIALS = 100\ntrain_error = zeros(TRIALS);\ntest_error = zeros(TRIALS);\nlambda_vals = exp10.(range(-2, stop=0, length=TRIALS));\nbeta_vals = zeros(length(beta), TRIALS);\nfor i = 1:TRIALS\n lambda = lambda_vals[i];\n problem = minimize(loss/m + lambda*reg);\n solve!(problem, () -> ECOS.Optimizer(verbose=0));\n # solve!(problem, SCS.Optimizer(verbose=0,linear_solver=SCS.Direct, eps=1e-3))\n train_error[i] = sum(float(sign.(X*beta_true .+ offset) .!= sign.(evaluate(X*beta - v))))/m;\n test_error[i] = sum(float(sign.(X_test*beta_true .+ offset) .!= sign.(evaluate(X_test*beta - v))))/TEST;\n beta_vals[:, i] = evaluate(beta);\nend","category":"page"},{"location":"examples/general_examples/svm_l1regularization/","page":"SVM with L^1 regularization","title":"SVM with L^1 regularization","text":"Plot the train and test error over the trade-off curve.","category":"page"},{"location":"examples/general_examples/svm_l1regularization/","page":"SVM with L^1 regularization","title":"SVM with L^1 regularization","text":"using Plots\nplot(lambda_vals, train_error, label=\"Train error\");\nplot!(lambda_vals, test_error, label=\"Test error\");\nplot!(xscale=:log, yscale=:log, ylabel=\"errors\", xlabel=\"lambda\")","category":"page"},{"location":"examples/general_examples/svm_l1regularization/","page":"SVM with L^1 regularization","title":"SVM with L^1 regularization","text":"Plot the regularization path for beta.","category":"page"},{"location":"examples/general_examples/svm_l1regularization/","page":"SVM with L^1 regularization","title":"SVM with L^1 regularization","text":"plot()\nfor i = 1:n\n plot!(lambda_vals, vec(beta_vals[i,:]), label=\"beta$i\")\nend\nplot!(xscale=:log, ylabel=\"betas\", xlabel=\"lambda\")","category":"page"},{"location":"examples/general_examples/svm_l1regularization/","page":"SVM with L^1 regularization","title":"SVM with L^1 
regularization","text":"","category":"page"},{"location":"examples/general_examples/svm_l1regularization/","page":"SVM with L^1 regularization","title":"SVM with L^1 regularization","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/general_examples/svm_l1regularization/","page":"SVM with L^1 regularization","title":"SVM with L^1 regularization","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example svm_l1regularization after \" * elapsed","category":"page"},{"location":"examples/optimization_with_complex_variables/Fidelity in Quantum Information Theory/","page":"Fidelity in quantum information theory","title":"Fidelity in quantum information theory","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/optimization_with_complex_variables/Fidelity in Quantum Information Theory/","page":"Fidelity in quantum information theory","title":"Fidelity in quantum information theory","text":"__START_TIME = time_ns()\n@info \"Starting example Fidelity in Quantum Information Theory\"","category":"page"},{"location":"examples/optimization_with_complex_variables/Fidelity in Quantum Information Theory/","page":"Fidelity in quantum information theory","title":"Fidelity in quantum information theory","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/optimization_with_complex_variables/Fidelity in Quantum Information Theory.jl\"","category":"page"},{"location":"examples/optimization_with_complex_variables/Fidelity in Quantum Information Theory/#Fidelity-in-quantum-information-theory","page":"Fidelity in quantum information theory","title":"Fidelity in quantum information theory","text":"","category":"section"},{"location":"examples/optimization_with_complex_variables/Fidelity in Quantum Information Theory/","page":"Fidelity in quantum information 
theory","title":"Fidelity in quantum information theory","text":"This example is inspired by a lecture of John Watrous in the course on Theory of Quantum Information.","category":"page"},{"location":"examples/optimization_with_complex_variables/Fidelity in Quantum Information Theory/","page":"Fidelity in quantum information theory","title":"Fidelity in quantum information theory","text":"The fidelity between two Hermitian semidefinite matrices P and Q is defined as:","category":"page"},{"location":"examples/optimization_with_complex_variables/Fidelity in Quantum Information Theory/","page":"Fidelity in quantum information theory","title":"Fidelity in quantum information theory","text":"F(P Q) = P^12Q^12_texttr = max_U mathrmtr(P^12U Q^12)","category":"page"},{"location":"examples/optimization_with_complex_variables/Fidelity in Quantum Information Theory/","page":"Fidelity in quantum information theory","title":"Fidelity in quantum information theory","text":"where the trace norm cdot_texttr is the sum of the singular values, and the maximization goes over the set of all unitary matrices U. 
This quantity can be expressed as the optimal value of the following complex-valued SDP:","category":"page"},{"location":"examples/optimization_with_complex_variables/Fidelity in Quantum Information Theory/","page":"Fidelity in quantum information theory","title":"Fidelity in quantum information theory","text":"beginarrayll\n textmaximize frac12texttr(Z+Z^dagger) \n textsubject to \n leftbeginarrayccPZZ^daggerQendarrayright succeq 0\n Z in mathbf C^n times n\nendarray","category":"page"},{"location":"examples/optimization_with_complex_variables/Fidelity in Quantum Information Theory/","page":"Fidelity in quantum information theory","title":"Fidelity in quantum information theory","text":"using Convex, SCS, LinearAlgebra\nif VERSION < v\"1.2.0-DEV.0\"\n LinearAlgebra.diagm(v::AbstractVector) = diagm(0 => v)\nend\n\nn = 20\nP = randn(n,n) + im*randn(n,n)\nP = P*P'\nQ = randn(n,n) + im*randn(n,n)\nQ = Q*Q'\nZ = ComplexVariable(n,n)\nobjective = 0.5*real(tr(Z+Z'))\nconstraint = [P Z;Z' Q] ⪰ 0\nproblem = maximize(objective,constraint)\nsolve!(problem, () -> SCS.Optimizer(verbose=0))\ncomputed_fidelity = evaluate(objective)","category":"page"},{"location":"examples/optimization_with_complex_variables/Fidelity in Quantum Information Theory/","page":"Fidelity in quantum information theory","title":"Fidelity in quantum information theory","text":"# Verify that the computed fidelity is equal to the actual fidelity\nP1,P2 = eigen(P)\nsqP = P2 * diagm([p1^0.5 for p1 in P1]) * P2'\nQ1,Q2 = eigen(Q)\nsqQ = Q2 * diagm([q1^0.5 for q1 in Q1]) * Q2'","category":"page"},{"location":"examples/optimization_with_complex_variables/Fidelity in Quantum Information Theory/","page":"Fidelity in quantum information theory","title":"Fidelity in quantum information theory","text":"actual_fidelity = sum(svdvals(sqP * sqQ))","category":"page"},{"location":"examples/optimization_with_complex_variables/Fidelity in Quantum Information 
theory","title":"Fidelity in quantum information theory","text":"We can see that the actual fidelity value is very close to the computed fidelity value.","category":"page"},{"location":"examples/optimization_with_complex_variables/Fidelity in Quantum Information Theory/","page":"Fidelity in quantum information theory","title":"Fidelity in quantum information theory","text":"","category":"page"},{"location":"examples/optimization_with_complex_variables/Fidelity in Quantum Information Theory/","page":"Fidelity in quantum information theory","title":"Fidelity in quantum information theory","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/optimization_with_complex_variables/Fidelity in Quantum Information Theory/","page":"Fidelity in quantum information theory","title":"Fidelity in quantum information theory","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example Fidelity in Quantum Information Theory after \" * elapsed","category":"page"},{"location":"complex-domain_optimization/#Optimization-with-Complex-Variables","page":"Complex-domain Optimization","title":"Optimization with Complex Variables","text":"","category":"section"},{"location":"complex-domain_optimization/","page":"Complex-domain Optimization","title":"Complex-domain Optimization","text":"Convex.jl also supports optimization with complex variables. Below, we present a quick start guide on how to use Convex.jl for optimization with complex variables, and then list the operations supported on complex variables in Convex.jl. In general, any operation available in Convex.jl that is well-defined and DCP-compliant on complex variables should be available. We list these functions below, 
organized by the type of cone (linear, second-order, or semidefinite) used to represent that operation.","category":"page"},{"location":"complex-domain_optimization/","page":"Complex-domain Optimization","title":"Complex-domain Optimization","text":"Internally, Convex.jl transforms the complex-domain problem to a larger real-domain problem using a bijective mapping. It then solves the real-domain problem and transforms the solution back to the complex domain.","category":"page"},{"location":"complex-domain_optimization/#Complex-Variables","page":"Complex-domain Optimization","title":"Complex Variables","text":"","category":"section"},{"location":"complex-domain_optimization/","page":"Complex-domain Optimization","title":"Complex-domain Optimization","text":"Complex variables in Convex.jl are declared in the same way as real variables, but using the keyword ComplexVariable.","category":"page"},{"location":"complex-domain_optimization/","page":"Complex-domain Optimization","title":"Complex-domain Optimization","text":" # Scalar complex variable\n z = ComplexVariable()\n\n # Column vector variable\n z = ComplexVariable(5)\n\n # Matrix variable\n z = ComplexVariable(4, 6)\n\n # Complex Positive Semidefinite variable\n z = HermitianSemidefinite(4)","category":"page"},{"location":"complex-domain_optimization/#Linear-Program-Representable-Functions-(complex-variables)","page":"Complex-domain Optimization","title":"Linear Program Representable Functions (complex variables)","text":"","category":"section"},{"location":"complex-domain_optimization/","page":"Complex-domain Optimization","title":"Complex-domain Optimization","text":"All of the linear functions that are listed under Linear Program Representable Functions operate on complex variables as well. 
In addition, several specialized functions for complex variables are available:","category":"page"},{"location":"complex-domain_optimization/","page":"Complex-domain Optimization","title":"Complex-domain Optimization","text":"operation description vexity slope notes\nreal(z) real part of complex variable affine increasing none\nimag(z) imaginary part of complex variable affine increasing none\nconj(x) element-wise complex conjugate affine increasing none\ninnerproduct(x,y) real(trace(x'*y)) affine increasing PR: one argument is constant","category":"page"},{"location":"complex-domain_optimization/#Second-Order-Cone-Representable-Functions-(complex-variables)","page":"Complex-domain Optimization","title":"Second-Order Cone Representable Functions (complex variables)","text":"","category":"section"},{"location":"complex-domain_optimization/","page":"Complex-domain Optimization","title":"Complex-domain Optimization","text":"Most of the second-order cone functions listed under Second-Order Cone Representable Functions operate on complex variables as well. 
Notable exceptions include:","category":"page"},{"location":"complex-domain_optimization/","page":"Complex-domain Optimization","title":"Complex-domain Optimization","text":"inverse\nsquare\nquadoverlin\nsqrt\ngeomean\nhuber","category":"page"},{"location":"complex-domain_optimization/","page":"Complex-domain Optimization","title":"Complex-domain Optimization","text":"One new function is available:","category":"page"},{"location":"complex-domain_optimization/","page":"Complex-domain Optimization","title":"Complex-domain Optimization","text":"operation description vexity slope notes\nabs2(z) square(abs(z)) convex increasing none","category":"page"},{"location":"complex-domain_optimization/#Semidefinite-Program-Representable-Functions-(complex-variables)","page":"Complex-domain Optimization","title":"Semidefinite Program Representable Functions (complex variables)","text":"","category":"section"},{"location":"complex-domain_optimization/","page":"Complex-domain Optimization","title":"Complex-domain Optimization","text":"All SDP-representable functions listed under Semidefinite Program Representable Functions work for complex variables.","category":"page"},{"location":"complex-domain_optimization/#Exponential-SDP-representable-Functions-(complex-variables)","page":"Complex-domain Optimization","title":"Exponential + SDP representable Functions (complex variables)","text":"","category":"section"},{"location":"complex-domain_optimization/","page":"Complex-domain Optimization","title":"Complex-domain Optimization","text":"Complex variables also support the logdet function.","category":"page"},{"location":"complex-domain_optimization/#Optimizing-over-quantum-states","page":"Complex-domain Optimization","title":"Optimizing over quantum states","text":"","category":"section"},{"location":"complex-domain_optimization/","page":"Complex-domain Optimization","title":"Complex-domain Optimization","text":"The complex and Hermitian matrix variables, along with the kron and 
partialtrace operations, enable the definition of a wide range of problems in quantum information theory. As a simple example, let us consider a state rho over a composite Hilbert space mathcalH_AotimesmathcalH_B, where both component spaces are isomorphic to mathbbC^2. Assume that rho is a product state, with its component in mathcalH_A given as A, a complex-valued matrix. We can optimize over the second component B to meet some requirement. Here we simply fix the second component too, but via the partialtrace operator:","category":"page"},{"location":"complex-domain_optimization/","page":"Complex-domain Optimization","title":"Complex-domain Optimization","text":"using Convex, SCS\nA = [ 0.47213595 0.11469794+0.48586827im; 0.11469794-0.48586827im 0.52786405]\nB = ComplexVariable(2, 2)\nρ = kron(A, B)\nconstraints = [partialtrace(ρ, 1, [2; 2]) == [1 0; 0 0]\n tr(ρ) == 1\n ρ in :SDP]\np = satisfy(constraints)\nsolve!(p, () -> SCS.Optimizer(verbose=false))\np.status","category":"page"},{"location":"complex-domain_optimization/","page":"Complex-domain Optimization","title":"Complex-domain Optimization","text":"Since we fix both components as trace-1 positive semidefinite matrices, the last two constraints are actually redundant in this case.","category":"page"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"__START_TIME = time_ns()\n@info \"Starting example lasso_regression\"","category":"page"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"EditURL = 
\"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/general_examples/lasso_regression.jl\"","category":"page"},{"location":"examples/general_examples/lasso_regression/#Lasso,-Ridge-and-Elastic-Net-Regressions","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"","category":"section"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"This notebook presents a simple implementation of Lasso and elastic net regressions.","category":"page"},{"location":"examples/general_examples/lasso_regression/#Load-Packages-and-Extra-Functions","page":"Lasso, Ridge and Elastic Net Regressions","title":"Load Packages and Extra Functions","text":"","category":"section"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"using DelimitedFiles, LinearAlgebra, Statistics, Plots, Convex, SCS\n\nimport MathOptInterface\nconst MOI = MathOptInterface","category":"page"},{"location":"examples/general_examples/lasso_regression/#Loading-Data","page":"Lasso, Ridge and Elastic Net Regressions","title":"Loading Data","text":"","category":"section"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"We use the diabetes data from Efron et al, downloaded from https://web.stanford.edu/~hastie/StatLearnSparsity_files/DATA/diabetes.html and then converted from a tab to a comma delimited file.","category":"page"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"All data series are standardised (see below) to have zero means and unit standard deviation, which improves 
the numerical stability. (Efron et al do not standardise the scale of the response variable.)","category":"page"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"(x,header) = readdlm(\"aux_files/diabetes.csv\",',',header=true)\n#display(header)\n#display(x)\n\nx = (x .- mean(x,dims=1))./std(x,dims=1) #standardise\n\n(Y,X) = (x[:,end],x[:,1:end-1]); #to get traditional names\nxNames = header[1:end-1];\nnothing #hide","category":"page"},{"location":"examples/general_examples/lasso_regression/#Lasso,-Ridge-and-Elastic-Net-Regressions-2","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"","category":"section"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"(a) The regression is Y = Xb + u, where Y and u are T times 1, X is T times K, and b is the K-vector of regression coefficients.","category":"page"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"(b) We want to minimize (Y-Xb)(Y-Xb) + gamma sum b_i + lambda sum b_i^2.","category":"page"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"(c) We can equally well minimise bQb - 2cb + gamma sum b_i + lambda sum b_i^2, where Q = XX and c=XY","category":"page"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"(d) Lasso: gamma0lambda=0; Ridge: gamma=0lambda0; elastic net: gamma0lambda0.","category":"page"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge 
and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"\"\"\"\n LassoEN(Y,X,γ,λ)\n\nDo Lasso (set γ>0,λ=0), ridge (set γ=0,λ>0) or elastic net regression (set γ>0,λ>0).\n\n\n# Input\n- `Y::Vector`: T-vector with the response (dependent) variable\n- `X::VecOrMat`: TxK matrix of covariates (regressors)\n- `γ::Number`: penalty on sum(abs.(b))\n- `λ::Number`: penalty on sum(b.^2)\n\n\"\"\"\nfunction LassoEN(Y,X,γ,λ=0.0)\n\n K = size(X,2)\n\n b_ls = X\\Y #LS estimate of weights, no restrictions\n\n Q = X'X\n c = X'Y #c'b = Y'X*b\n\n b = Variable(K) #define variables to optimize over\n L1 = quadform(b,Q) #b'Q*b\n L2 = dot(c,b) #c'b\n L3 = norm(b,1) #sum(|b|)\n L4 = sumsquares(b) #sum(b^2)\n\n Sol = minimize(L1-2*L2+γ*L3+λ*L4) #u'u + γ*sum(|b|) + λ*sum(b^2), where u = Y-Xb\n solve!(Sol,()->SCS.Optimizer(verbose = false))\n b_i = Sol.status == MOI.OPTIMAL ? vec(evaluate(b)) : NaN\n\n return b_i, b_ls\n\nend","category":"page"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"The next cell makes a Lasso regression for a single value of γ.","category":"page"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"K = size(X,2)\nγ = 100\n\n(b,b_ls) = LassoEN(Y,X,γ)\n\nprintln(\"OLS and Lasso coeffs (with γ=$γ)\")\ndisplay([[\"\" \"OLS\" \"Lasso\"];xNames b_ls b])","category":"page"},{"location":"examples/general_examples/lasso_regression/#Redo-the-Lasso-Regression-with-Different-Gamma-Values","page":"Lasso, Ridge and Elastic Net Regressions","title":"Redo the Lasso Regression with Different Gamma Values","text":"","category":"section"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"We now loop 
over gamma values.","category":"page"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"Remark: it would be quicker to put this loop inside the LassoEN() function so as to not recreate L1-L4.","category":"page"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"nγ = 101\nγM = range(0; stop=600, length=nγ) #different γ values\n\nbLasso = fill(NaN,size(X,2),nγ) #results for γM[i] are in bLasso[:,i]\nfor i = 1:nγ\n bLasso[:,i], = LassoEN(Y,X,γM[i])\nend","category":"page"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"plot(log.(γM),bLasso',\n title = \"Lasso regression coefficients\",\n xlabel = \"log(γ)\",\n label = permutedims(xNames),\n size = (600,400))","category":"page"},{"location":"examples/general_examples/lasso_regression/#Ridge-Regression","page":"Lasso, Ridge and Elastic Net Regressions","title":"Ridge Regression","text":"","category":"section"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"We use the same function to do a ridge regression. 
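For the pure ridge case (γ=0, λ>0) there is also a closed-form solution, which makes a handy sanity check on the solver output. A minimal sketch with synthetic data (the variable names here are illustrative, not taken from the example above):

```julia
using LinearAlgebra, Random
Random.seed!(1)

# Ridge: minimizing (Y-Xb)'(Y-Xb) + λ*b'b has the closed form
# b = (X'X + λI) \ (X'Y); with λ = 0 it reduces to ordinary least squares.
T, K = 200, 4
X = randn(T, K)
Y = X * [1.0, -2.0, 0.5, 0.0] + 0.1 * randn(T)

λ = 10.0
b_ridge = (X'X + λ * I) \ (X'Y)
b_ols   = X \ Y

# Ridge shrinks the coefficient vector relative to OLS:
norm(b_ridge) <= norm(b_ols)
```

For well-conditioned problems the λ = 0 case agrees with `X \ Y` up to numerical error, so the closed form doubles as a check on the optimization-based route.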
Alternatively, do b = inv(X'X + λ*I)*X'Y.","category":"page"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"nλ = 101\nλM = range(0; stop=3000, length=nλ)\n\nbRidge = fill(NaN,size(X,2),nλ)\nfor i = 1:nλ\n bRidge[:,i], = LassoEN(Y,X,0,λM[i])\nend","category":"page"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"plot(log.(λM),bRidge',\n title = \"Ridge regression coefficients\",\n xlabel = \"log(λ)\",\n label = permutedims(xNames),\n size = (600,400))","category":"page"},{"location":"examples/general_examples/lasso_regression/#Elastic-Net-Regression","page":"Lasso, Ridge and Elastic Net Regressions","title":"Elastic Net Regression","text":"","category":"section"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"λ = 200\nprintln(\"redo the Lasso regression, but with λ=$λ: an elastic net regression\")\n\nbEN = fill(NaN,size(X,2),nγ)\nfor i = 1:nγ\n bEN[:,i], = LassoEN(Y,X,γM[i],λ)\nend","category":"page"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"plot(log.(γM),bEN',\n title = \"Elastic Net regression coefficients\",\n xlabel = \"log(γ)\",\n label = permutedims(xNames),\n size = (600,400))","category":"page"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"","category":"page"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"This page was generated using 
Literate.jl.","category":"page"},{"location":"examples/general_examples/lasso_regression/","page":"Lasso, Ridge and Elastic Net Regressions","title":"Lasso, Ridge and Elastic Net Regressions","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example lasso_regression after \" * elapsed","category":"page"},{"location":"examples/mixed_integer/section_allocation/","page":"Section Allocation","title":"Section Allocation","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/mixed_integer/section_allocation/","page":"Section Allocation","title":"Section Allocation","text":"__START_TIME = time_ns()\n@info \"Starting example section_allocation\"","category":"page"},{"location":"examples/mixed_integer/section_allocation/","page":"Section Allocation","title":"Section Allocation","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/mixed_integer/section_allocation.jl\"","category":"page"},{"location":"examples/mixed_integer/section_allocation/#Section-Allocation","page":"Section Allocation","title":"Section Allocation","text":"","category":"section"},{"location":"examples/mixed_integer/section_allocation/","page":"Section Allocation","title":"Section Allocation","text":"Suppose you have n students in a class who need to be assigned to m discussion sections. Each student needs to be assigned to exactly one section. Each discussion section should have between 6 and 10 students. 
Suppose an n times m preference matrix P is given, where P_ij gives student i's ranking for section j (1 would mean it is the student's top choice, 10,000 or a large number would mean the student cannot attend that section).","category":"page"},{"location":"examples/mixed_integer/section_allocation/","page":"Section Allocation","title":"Section Allocation","text":"The goal will be to get an allocation matrix X, where X_ij = 1 if student i is assigned to section j and 0 otherwise.","category":"page"},{"location":"examples/mixed_integer/section_allocation/","page":"Section Allocation","title":"Section Allocation","text":"using Convex, GLPK\naux(str) = joinpath(@__DIR__, \"aux_files\", str) # path to auxiliary files","category":"page"},{"location":"examples/mixed_integer/section_allocation/","page":"Section Allocation","title":"Section Allocation","text":"Load our preference matrix, P","category":"page"},{"location":"examples/mixed_integer/section_allocation/","page":"Section Allocation","title":"Section Allocation","text":"include(aux(\"data.jl\"))\n\nX = Variable(size(P), :Bin)","category":"page"},{"location":"examples/mixed_integer/section_allocation/","page":"Section Allocation","title":"Section Allocation","text":"We want every student to be assigned to exactly one section. So, every row must have exactly one non-zero entry. In other words, the sum of all the columns for every row is 1. 
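Before forming the optimization problem, the bookkeeping can be checked on a tiny instance with plain arrays (hypothetical data, no solver involved):

```julia
using LinearAlgebra

# A toy assignment of 3 students to 2 sections (0/1 matrix, one 1 per row)
X = [1 0;
     0 1;
     1 0]
P = [1 2;          # student 1 ranks section 1 first
     3 1;          # student 2 ranks section 2 first
     2 1]

all(sum(X, dims=2) .== 1)          # every student is in exactly one section
sum(X .* P)                        # total preference cost of this assignment
sum(X .* P) == vec(X)' * vec(P)    # both compute the Frobenius inner product
```

Here the total cost is 1 + 1 + 2 = 4, and the two objective forms agree because both sum the entrywise products of X and P.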
We also want each section to have between 6 and 10 students, so the sum of all the rows for every column should lie between these bounds.","category":"page"},{"location":"examples/mixed_integer/section_allocation/","page":"Section Allocation","title":"Section Allocation","text":"constraints = [sum(X, dims=2) == 1, sum(X, dims=1) <= 10, sum(X, dims=1) >= 6]","category":"page"},{"location":"examples/mixed_integer/section_allocation/","page":"Section Allocation","title":"Section Allocation","text":"Our objective is simply sum(X .* P), which can be more efficiently represented as vec(X)' * vec(P). Since each entry of X is either 0 or 1, this sums up the rankings of the sections that the students were assigned to. If all students got their first choice, this value will be the number of students, since the ranking of the first choice is 1.","category":"page"},{"location":"examples/mixed_integer/section_allocation/","page":"Section Allocation","title":"Section Allocation","text":"p = minimize(vec(X)' * vec(P), constraints)\n\nsolve!(p, GLPK.Optimizer)\np.optval","category":"page"},{"location":"examples/mixed_integer/section_allocation/","page":"Section Allocation","title":"Section Allocation","text":"","category":"page"},{"location":"examples/mixed_integer/section_allocation/","page":"Section Allocation","title":"Section Allocation","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/mixed_integer/section_allocation/","page":"Section Allocation","title":"Section Allocation","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example section_allocation after \" * elapsed","category":"page"},{"location":"solvers/#Solvers","page":"Solvers","title":"Solvers","text":"","category":"section"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"Convex.jl transforms each problem into an equivalent cone program in order to pass the problem to a specialized 
solver. Depending on the types of functions used in the problem, the conic constraints may include linear, second-order, exponential, or semidefinite constraints, as well as any binary or integer constraints placed on the variables.","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"By default, Convex.jl does not install any solvers. Many users use the solver SCS, which is able to solve problems with linear constraints, second-order cone constraints (SOCPs), exponential constraints, and semidefinite constraints (SDPs). Likewise, COSMO is a pure-Julia solver which can handle every cone that Convex.jl itself supports. Any other solver in JuliaOpt may also be used, so long as it supports the conic constraints used to represent the problem. Many other solvers in the JuliaOpt ecosystem can be used to solve (mixed-integer) linear programs (LPs and MILPs). Mosek and Gurobi can be used to solve SOCPs (even with binary or integer constraints), and Mosek can also solve SDPs. For up-to-date information about solver capabilities, please see the table here describing which solvers can solve which kinds of problems. See also ConvexTests.jl to see the results of running test problems with Convex.jl for many solvers.","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"Installing these solvers is very simple. Just follow the instructions in the documentation for that solver.","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"To use a specific solver, you can use the following syntax:","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"solve!(p, Gurobi.Optimizer)\nsolve!(p, Mosek.Optimizer)\nsolve!(p, GLPK.Optimizer)\nsolve!(p, ECOS.Optimizer)\nsolve!(p, SCS.Optimizer)","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"(Of course, the solver must be installed first.) 
For example, we can use GLPK to solve a MILP","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"using GLPK\nsolve!(p, GLPK.Optimizer)","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"Many of the solvers also allow options to be passed in. More details can be found in each solver's documentation.","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"For example, if we wish to turn off printing for the SCS solver (ie, run in quiet mode), we can do so by","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"using SCS\nopt = () -> SCS.Optimizer(verbose=false)\nsolve!(p, opt)","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"or equivalently,","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"solve!(p, () -> SCS.Optimizer(verbose=false))","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"If we wish to increase the maximum number of iterations for ECOS or SCS, we can do so by","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"using ECOS\nsolve!(p, () -> ECOS.Optimizer(maxit=10000))\nusing SCS\nsolve!(p, () -> SCS.Optimizer(max_iters=10000))","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"To turn off the problem status warning issued by Convex when a solver is not able to solve a problem to optimality, use the keyword argument verbose=false of the solve method, along with any desired solver parameters:","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"solve!(p, () -> SCS.Optimizer(verbose=false), verbose=false)","category":"page"},{"location":"examples/mixed_integer/n_queens/","page":"N queens","title":"N queens","text":"All of the examples can be found in Jupyter notebook form 
here.","category":"page"},{"location":"examples/mixed_integer/n_queens/","page":"N queens","title":"N queens","text":"__START_TIME = time_ns()\n@info \"Starting example n_queens\"","category":"page"},{"location":"examples/mixed_integer/n_queens/","page":"N queens","title":"N queens","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/mixed_integer/n_queens.jl\"","category":"page"},{"location":"examples/mixed_integer/n_queens/#N-queens","page":"N queens","title":"N queens","text":"","category":"section"},{"location":"examples/mixed_integer/n_queens/","page":"N queens","title":"N queens","text":"using Convex, GLPK, LinearAlgebra, SparseArrays, Test\naux(str) = joinpath(@__DIR__, \"aux_files\", str) # path to auxiliary files\ninclude(aux(\"antidiag.jl\"))\n\nn = 8","category":"page"},{"location":"examples/mixed_integer/n_queens/","page":"N queens","title":"N queens","text":"We encode the locations of the queens with a matrix of binary random variables.","category":"page"},{"location":"examples/mixed_integer/n_queens/","page":"N queens","title":"N queens","text":"x = Variable((n, n), :Bin)","category":"page"},{"location":"examples/mixed_integer/n_queens/","page":"N queens","title":"N queens","text":"Now we impose the constraints: at most one queen on any anti-diagonal, at most one queen on any diagonal, and we must have exactly one queen per row and per column.","category":"page"},{"location":"examples/mixed_integer/n_queens/","page":"N queens","title":"N queens","text":"# At most one queen on any anti-diagonal\nconstr = Constraint[sum(antidiag(x, k)) <= 1 for k = -n+2:n-2]\n# At most one queen on any diagonal\nconstr += Constraint[sum(diag(x, k)) <= 1 for k = -n+2:n-2]\n# Exactly one queen per row and one queen per column\nconstr += Constraint[sum(x, dims=1) == 1, sum(x, dims=2) == 1]\np = satisfy(constr)\nsolve!(p, GLPK.Optimizer)","category":"page"},{"location":"examples/mixed_integer/n_queens/","page":"N queens","title":"N 
queens","text":"Let us test the results:","category":"page"},{"location":"examples/mixed_integer/n_queens/","page":"N queens","title":"N queens","text":"for k = -n+2:n-2\n\t@test evaluate(sum(antidiag(x, k))) <= 1\n\t@test evaluate(sum(diag(x, k))) <= 1\nend\n@test all(evaluate(sum(x, dims=1)) .≈ 1)\n@test all(evaluate(sum(x, dims=2)) .≈ 1)","category":"page"},{"location":"examples/mixed_integer/n_queens/","page":"N queens","title":"N queens","text":"","category":"page"},{"location":"examples/mixed_integer/n_queens/","page":"N queens","title":"N queens","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/mixed_integer/n_queens/","page":"N queens","title":"N queens","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example n_queens after \" * elapsed","category":"page"},{"location":"examples/general_examples/optimal_advertising/","page":"Optimal advertising","title":"Optimal advertising","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/general_examples/optimal_advertising/","page":"Optimal advertising","title":"Optimal advertising","text":"__START_TIME = time_ns()\n@info \"Starting example optimal_advertising\"","category":"page"},{"location":"examples/general_examples/optimal_advertising/","page":"Optimal advertising","title":"Optimal advertising","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/general_examples/optimal_advertising.jl\"","category":"page"},{"location":"examples/general_examples/optimal_advertising/#Optimal-advertising","page":"Optimal advertising","title":"Optimal advertising","text":"","category":"section"},{"location":"examples/general_examples/optimal_advertising/","page":"Optimal advertising","title":"Optimal advertising","text":"This example is taken from 
https://web.stanford.edu/~boyd/papers/pdf/cvx_applications.pdf.","category":"page"},{"location":"examples/general_examples/optimal_advertising/","page":"Optimal advertising","title":"Optimal advertising","text":"Setup:","category":"page"},{"location":"examples/general_examples/optimal_advertising/","page":"Optimal advertising","title":"Optimal advertising","text":"We have m adverts and n timeslots\nThe total traffic in time slot t is T_t\nThe number of ad i displayed in period t is D_it geq 0\nWe require sum_i=1^m D_it leq T_t since we cannot show more than T_t ads during time slot t.\nWe require sum_t=1^n D_it geq c_i to fulfill a contract to show advertisement i at least c_i times.","category":"page"},{"location":"examples/general_examples/optimal_advertising/","page":"Optimal advertising","title":"Optimal advertising","text":"Goal: Choose D_it.","category":"page"},{"location":"examples/general_examples/optimal_advertising/","page":"Optimal advertising","title":"Optimal advertising","text":"For some empirical P_it with 0 leq P_it leq 1, we obtain C_it = P_itD_it clicks for ad i, which pays us some number R_i 0 up to a budget B_i. The ad revenue for ad i is S_i = min( R_i sum_t C_it B_i ) which is concave in D. 
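Concavity of this revenue can be spot-checked numerically, since the minimum of an affine function and a constant is concave. A small sketch with hypothetical numbers (plain Julia, no solver):

```julia
using LinearAlgebra

# S(d) = min(r * p'd, b): an affine function of d capped at the budget b.
# Concavity means S((d1+d2)/2) >= (S(d1) + S(d2))/2 for any d1, d2.
r, b = 2.0, 5.0
p = [0.2, 0.5, 0.3]
S(d) = min(r * dot(p, d), b)

d1 = [1.0, 2.0, 3.0]
d2 = [10.0, 0.0, 4.0]
S((d1 .+ d2) ./ 2) >= (S(d1) + S(d2)) / 2   # true
```

Here d2 hits the budget cap (S(d2) = 5.0) while d1 does not, yet the midpoint inequality still holds, which is exactly the concavity the maximization relies on.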
We aim to maximize sum_i S_i.","category":"page"},{"location":"examples/general_examples/optimal_advertising/","page":"Optimal advertising","title":"Optimal advertising","text":"using Random\nusing Distributions: LogNormal\nRandom.seed!(1);\n\n\nm = 5; # number of adverts\nn = 24; # number of timeslots\nSCALE = 10000;\nB = rand(LogNormal(8), m) .+ 10000;\nB = round.(B, digits=3); # Budget\n\nP_ad = rand(m);\nP_time = rand(1,n);\nP = P_ad * P_time;\n\nT = sin.(range(-2*pi/2, stop=2*pi-2*pi/2, length=n)) * SCALE;\nT .+= -minimum(T) + SCALE; # traffic\nc = rand(m); # contractual minimum\nc *= 0.6*sum(T)/sum(c);\nc = round.(c, digits=3);\nR = [rand(LogNormal(minimum(c)/c[i]), 1) for i=1:m]; # revenue\nnothing #hide","category":"page"},{"location":"examples/general_examples/optimal_advertising/","page":"Optimal advertising","title":"Optimal advertising","text":"# Form and solve the optimal advertising problem.\nusing Convex, SCS;\nD = Variable(m, n);\nSi = [min(R[i]*dot(P[i,:], D[i,:]'), B[i]) for i=1:m];\nproblem = maximize(sum(Si),\n [D >= 0, sum(D, dims=1)' <= T, sum(D, dims=2) >= c]);\nsolve!(problem, () -> SCS.Optimizer(verbose=0));\nnothing #hide","category":"page"},{"location":"examples/general_examples/optimal_advertising/","page":"Optimal advertising","title":"Optimal advertising","text":"Plot traffic.","category":"page"},{"location":"examples/general_examples/optimal_advertising/","page":"Optimal advertising","title":"Optimal advertising","text":"using Plots\nplot(1:length(T), T, xlabel=\"hour\", ylabel=\"Traffic\")","category":"page"},{"location":"examples/general_examples/optimal_advertising/","page":"Optimal advertising","title":"Optimal advertising","text":"Plot P.","category":"page"},{"location":"examples/general_examples/optimal_advertising/","page":"Optimal advertising","title":"Optimal advertising","text":"heatmap(P)","category":"page"},{"location":"examples/general_examples/optimal_advertising/","page":"Optimal advertising","title":"Optimal 
advertising","text":"Plot optimal D.","category":"page"},{"location":"examples/general_examples/optimal_advertising/","page":"Optimal advertising","title":"Optimal advertising","text":"heatmap(evaluate(D))","category":"page"},{"location":"examples/general_examples/optimal_advertising/","page":"Optimal advertising","title":"Optimal advertising","text":"","category":"page"},{"location":"examples/general_examples/optimal_advertising/","page":"Optimal advertising","title":"Optimal advertising","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/general_examples/optimal_advertising/","page":"Optimal advertising","title":"Optimal advertising","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example optimal_advertising after \" * elapsed","category":"page"},{"location":"advanced/#Advanced-Features","page":"Advanced","title":"Advanced Features","text":"","category":"section"},{"location":"advanced/#DCP-warnings","page":"Advanced","title":"DCP warnings","text":"","category":"section"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"When an expression is created which is not of DCP form, a warning is emitted. For example,","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"x = Variable()\ny = Variable()\nx*y","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"To disable this, run","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"Convex.emit_dcp_warnings() = false","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"to redefine the method. 
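The toggle works by ordinary Julia method redefinition. The same pattern can be seen with a toy module (Demo and its functions are made up for illustration; they are not part of Convex.jl):

```julia
# A stand-in for how Convex.emit_dcp_warnings() = false takes effect:
# redefining a zero-argument method changes behaviour everywhere it is called.
module Demo
emit_warnings() = true
check() = emit_warnings() ? "warned" : "silent"
end

Demo.check()                  # "warned"
Demo.emit_warnings() = false  # redefine the method, as Convex.emit_dcp_warnings() = false does
Demo.check()                  # "silent"
```

Because `check` calls `emit_warnings` dynamically, the redefinition takes effect for all later calls without touching `check` itself.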
See Convex.emit_dcp_warnings for more details.","category":"page"},{"location":"advanced/#Dual-Variables","page":"Advanced","title":"Dual Variables","text":"","category":"section"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"Convex.jl also returns the optimal dual variables for a problem. These are stored in the dual field associated with each constraint.","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"using Convex, SCS\n\nx = Variable()\nconstraint = x >= 0\np = minimize(x, constraint)\nsolve!(p, SCS.Optimizer())\n\n# Get the dual value for the constraint\np.constraints[1].dual\n# or\nconstraint.dual","category":"page"},{"location":"advanced/#Warmstarting","page":"Advanced","title":"Warmstarting","text":"","category":"section"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"If you're solving the same problem many times with different values of a parameter, Convex.jl can initialize many solvers with the solution to the previous problem, which sometimes speeds up the solution time. This is called a warm start.","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"To use this feature, pass the optional argument warmstart=true to the solve! 
method.","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"# initialize data\nn = 1000\ny = rand(n)\nx = Variable(n)\n\n# first solve\nlambda = 100\nproblem = minimize(sumsquares(y - x) + lambda * sumsquares(x - 10))\n@time solve!(problem, SCS.Optimizer)\n\n# now warmstart\n# if the solver takes advantage of warmstarts, \n# this run will be faster\nlambda = 105\n@time solve!(problem, SCS.Optimizer, warmstart=true)","category":"page"},{"location":"advanced/#Fixing-and-freeing-variables","page":"Advanced","title":"Fixing and freeing variables","text":"","category":"section"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"Convex.jl allows you to fix a variable x to a value by calling the fix! method. Fixing the variable essentially turns it into a constant. Fixed variables are sometimes also called parameters.","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"fix!(x, v) fixes the variable x to the value v.","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"fix!(x) fixes x to its current value, which might be the value obtained by solving another problem involving the variable x.","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"To allow the variable x to vary again, call free!(x).","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"Fixing and freeing variables can be particularly useful as a tool for performing alternating minimization on nonconvex problems. For example, we can find an approximate solution to a nonnegative matrix factorization problem with alternating minimization as follows. 
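The same alternating idea can be sketched without any solver, using plain least-squares half-steps with a nonnegativity clip (a rough stand-in for the Convex.jl version, not equivalent to it):

```julia
using LinearAlgebra, Random
Random.seed!(0)

# Alternating minimization for A ≈ x*y with x, y .>= 0:
# fix y and solve for x in least squares, clip at zero; then swap roles.
function alternating_nmf(A, k; iters = 10)
    x = rand(size(A, 1), k)
    y = rand(k, size(A, 2))
    for _ in 1:iters
        x = max.(A / y, 0)   # min ‖A - x*y‖ over x, projected onto x .>= 0
        y = max.(x \ A, 0)   # min ‖A - x*y‖ over y, projected onto y .>= 0
    end
    return x, y
end

A = rand(10, 1) * rand(1, 10)   # an exactly rank-1 nonnegative matrix
x, y = alternating_nmf(A, 1)
norm(A - x * y)                 # essentially zero for this easy case
```

Each half-step is a convex subproblem, which is the same structure the fix!/free! loop below exploits; the clip here is a crude projection, whereas Convex.jl enforces the nonnegativity constraints exactly.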
We use warmstarts to speed up the solution.","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"# initialize nonconvex problem\nn, k = 10, 1\nA = rand(n, k) * rand(k, n)\nx = Variable(n, k)\ny = Variable(k, n)\nproblem = minimize(sumsquares(A - x*y), x>=0, y>=0)\n\n# initialize value of y\nset_value!(y, rand(k, n))\n# we'll do 10 iterations of alternating minimization\nfor i=1:10 \n # first solve for x\n # with y fixed, the problem is convex\n fix!(y)\n solve!(problem, SCS.Optimizer, warmstart = i > 1)\n free!(y)\n\n # now solve for y with x fixed at the previous solution\n fix!(x)\n solve!(problem, SCS.Optimizer, warmstart = true)\n free!(x)\nend","category":"page"},{"location":"advanced/#Custom-Variable-Types","page":"Advanced","title":"Custom Variable Types","text":"","category":"section"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"By making subtypes of Convex.AbstractVariable that conform to the appropriate interface (see the Convex.AbstractVariable docstring for details), one can easily provide custom variable types for specific constructions. These aren't always necessary though; for example, one can define the following function probabilityvector:","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"using Convex\n\nfunction probabilityvector(d::Int)\n x = Variable(d, Positive())\n add_constraint!(x, sum(x) == 1)\n return x\nend","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"and then use, say, p = probabilityvector(3) in any Convex.jl problem. The constraints that the entries of p are non-negative and sum to 1 will be automatically added to any problem p is used in.","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"Custom types are necessary when one wants to dispatch on custom variables, use them as callable types, or provide a different implementation. 
Continuing with the probability vector example, let's say we often use probability vector variables in taking expectation values, and we want to use function notation for this. To do so, we define","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"using Convex\nmutable struct ProbabilityVector <: Convex.AbstractVariable\n head::Symbol\n id_hash::UInt64\n size::Tuple{Int, Int}\n value::Convex.ValueOrNothing\n vexity::Convex.Vexity\n function ProbabilityVector(d)\n this = new(:ProbabilityVector, 0, (d,1), nothing, Convex.AffineVexity())\n this.id_hash = objectid(this)\n this\n end\nend\n\nConvex.constraints(p::ProbabilityVector) = [ sum(p) == 1 ]\nConvex.sign(::ProbabilityVector) = Convex.Positive()\nConvex.vartype(::ProbabilityVector) = Convex.ContVar\n\n(p::ProbabilityVector)(x) = dot(p, x)","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"Then one can call p = ProbabilityVector(3) to construct our custom variable, which can be used in Convex.jl, already encodes the appropriate constraints (non-negative and sums to 1), and can act on constants via p(x). For example,","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"using SCS\np = ProbabilityVector(3)\nx = [1.0, 2.0, 3.0]\nprob = minimize( p(x) )\nsolve!(prob, SCS.Optimizer(verbose=false))\nevaluate(p) # [1.0, 0.0, 0.0]","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"Subtypes of AbstractVariable must have the fields head, id_hash, and size, and id_hash must be populated as shown in the example. Then they must also","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"either have a field value, or implement Convex._value and Convex.set_value!\neither have a field vexity, or implement Convex.vexity and Convex.vexity! (though the latter is only necessary if you wish to support Convex.fix! 
and Convex.free!)\nhave a field constraints or implement Convex.constraints (optionally, implement Convex.add_constraint! to be able to add constraints to your variable after its creation),\neither have a field sign or implement Convex.sign, and \neither have a field vartype, or implement Convex.vartype (optionally, implement Convex.vartype! to be able to change a variable's vartype after construction).","category":"page"},{"location":"advanced/#Printing-and-the-tree-structure","page":"Advanced","title":"Printing and the tree structure","text":"","category":"section"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"A Convex problem is structured as a tree, with the root being the problem object, and branches to the objective and the set of constraints. The objective is an AbstractExpr which itself is a tree, with each atom being a node and having children which are other atoms, variables, or constants. Convex provides children methods from AbstractTrees.jl so that the tree-traversal functions of that package can be used with Convex.jl problems and structures. This is what powers the printing of problems, expressions, and constraints. The depth to which the tree corresponding to a problem, expression, or constraint is printed is controlled by the global variable Convex.MAXDEPTH, which defaults to 3. This can be changed by e.g. setting","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"Convex.MAXDEPTH[] = 5","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"Likewise, Convex.MAXWIDTH, which defaults to 15, controls the \"width\" of the printed tree. For example, when printing a problem with 20 constraints, only the first MAXWIDTH of the constraints will be printed. 
Vertical dots, \"⋮\", will be printed indicating that some constraints were omitted in the printing.","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"A related setting is Convex.MAXDIGITS, which controls printing the internal IDs of atoms: if the string representation of an ID is longer than double the value of MAXDIGITS, then it is shortened by printing only the first and last MAXDIGITS characters.","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"The AbstractTrees methods can also be used to analyze the structure of a Convex.jl problem. For example,","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"using Convex, AbstractTrees\nx = Variable()\np = maximize( log(x), x >= 1, x <= 3 )\nfor leaf in AbstractTrees.Leaves(p)\n println(\"Here's a leaf: $(summary(leaf))\")\nend","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"We can also iterate over the problem in various orders. 
The following descriptions are taken from the AbstractTrees.jl docstrings, which have more information.","category":"page"},{"location":"advanced/#PostOrderDFS","page":"Advanced","title":"PostOrderDFS","text":"","category":"section"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"Iterator to visit the nodes of a tree, guaranteeing that children will be visited before their parents.","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"for (i, node) in enumerate(AbstractTrees.PostOrderDFS(p))\n println(\"Here's node $i via PostOrderDFS: $(summary(node))\")\nend","category":"page"},{"location":"advanced/#PreOrderDFS","page":"Advanced","title":"PreOrderDFS","text":"","category":"section"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"Iterator to visit the nodes of a tree, guaranteeing that parents will be visited before their children.","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"for (i, node) in enumerate(AbstractTrees.PreOrderDFS(p))\n println(\"Here's node $i via PreOrderDFS: $(summary(node))\")\nend","category":"page"},{"location":"advanced/#StatelessBFS","page":"Advanced","title":"StatelessBFS","text":"","category":"section"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"Iterator to visit the nodes of a tree, guaranteeing that all nodes of a level will be visited before their children.","category":"page"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"for (i, node) in enumerate(AbstractTrees.StatelessBFS(p))\n println(\"Here's node $i via StatelessBFS: 
$(summary(node))\")\nend","category":"page"},{"location":"advanced/#Reference","page":"Advanced","title":"Reference","text":"","category":"section"},{"location":"advanced/","page":"Advanced","title":"Advanced","text":"Convex.MAXDEPTH\nConvex.MAXWIDTH\nConvex.MAXDIGITS","category":"page"},{"location":"advanced/#Convex.MAXDEPTH","page":"Advanced","title":"Convex.MAXDEPTH","text":"MAXDEPTH\n\nControls depth of tree printing globally for Convex.jl; defaults to 3. Set via\n\nConvex.MAXDEPTH[] = 5\n\n\n\n\n\n","category":"constant"},{"location":"advanced/#Convex.MAXWIDTH","page":"Advanced","title":"Convex.MAXWIDTH","text":"MAXWIDTH\n\nControls width of tree printing globally for Convex.jl; defaults to 15. Set via\n\nConvex.MAXWIDTH[] = 15\n\n\n\n\n\n","category":"constant"},{"location":"advanced/#Convex.MAXDIGITS","page":"Advanced","title":"Convex.MAXDIGITS","text":"MAXDIGITS\n\nWhen printing IDs of variables, only show the initial and final digits if the full ID has more than double the number of digits specified here. So, with the default setting MAXDIGITS=3, any ID longer than 7 digits would be shortened; for example, ID 14656210999710729289 would be printed as 146…289.\n\nThis setting controls tree printing globally for Convex.jl; defaults to 3.\n\nSet via:\n\nConvex.MAXDIGITS[] = 3\n\n\n\n\n\n","category":"constant"},{"location":"faq/#FAQ","page":"FAQ","title":"FAQ","text":"","category":"section"},{"location":"faq/#Where-can-I-get-help?","page":"FAQ","title":"Where can I get help?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"For usage questions, please contact us via the Julia Discourse. 
If you're running into bugs or have feature requests, please use the Github Issue Tracker.","category":"page"},{"location":"faq/#How-does-Convex.jl-differ-from-JuMP?","page":"FAQ","title":"How does Convex.jl differ from JuMP?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Convex.jl and JuMP are both modelling languages for mathematical programming embedded in Julia, and both interface with solvers via MathOptInterface, so many of the same solvers are available in both. Convex.jl converts problems to a standard conic form. This approach requires (and certifies) that the problem is convex and DCP compliant, and guarantees global optimality of the resulting solution. JuMP allows nonlinear programming through an interface that learns about functions via their derivatives. This approach is more flexible (for example, you can optimize non-convex functions), but can't guarantee global optimality if your function is not convex, or warn you if you've entered a non-convex formulation.","category":"page"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"For linear programming, the difference is more stylistic. JuMP's syntax is scalar-based and similar to AMPL and GAMS making it easy and fast to create constraints by indexing and summation (like sum{x[i], i=1:numLocation}). Convex.jl allows (and prioritizes) linear algebraic and functional constructions (like max(x,y) < A*z); indexing and summation are also supported in Convex.jl, but are somewhat slower than in JuMP. 
JuMP also lets you efficiently solve a sequence of problems when new constraints are added or when coefficients are modified, whereas Convex.jl parses the problem again whenever the solve! method is called.","category":"page"},{"location":"faq/#Where-can-I-learn-more-about-Convex-Optimization?","page":"FAQ","title":"Where can I learn more about Convex Optimization?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"See the freely available book Convex Optimization by Boyd and Vandenberghe for general background on convex optimization. For help understanding the rules of Disciplined Convex Programming, we recommend this DCP tutorial website.","category":"page"},{"location":"faq/#Are-there-similar-packages-available-in-other-languages?","page":"FAQ","title":"Are there similar packages available in other languages?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Indeed! You might use CVXPY in Python, or CVX in Matlab.","category":"page"},{"location":"faq/#How-does-Convex.jl-work?","page":"FAQ","title":"How does Convex.jl work?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"For a detailed discussion of how Convex.jl works, see our paper.","category":"page"},{"location":"faq/#How-do-I-cite-this-package?","page":"FAQ","title":"How do I cite this package?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"If you use Convex.jl for published work, we encourage you to cite the software using the following BibTeX citation:","category":"page"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"@article{convexjl,\n title = {Convex Optimization in {J}ulia},\n author = {Udell, Madeleine and Mohan, Karanveer and Zeng, David and Hong, Jenny and Diamond, Steven and Boyd, Stephen},\n year = {2014},\n journal = {SC14 Workshop on High Performance Technical Computing in Dynamic Languages},\n archivePrefix = \"arXiv\",\n 
eprint = {1410.4821},\n primaryClass = \"math-oc\",\n}","category":"page"},{"location":"examples/tomography/tomography/","page":"Tomography","title":"Tomography","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/tomography/tomography/","page":"Tomography","title":"Tomography","text":"__START_TIME = time_ns()\n@info \"Starting example tomography\"","category":"page"},{"location":"examples/tomography/tomography/","page":"Tomography","title":"Tomography","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/tomography/tomography.jl\"","category":"page"},{"location":"examples/tomography/tomography/#Tomography","page":"Tomography","title":"Tomography","text":"","category":"section"},{"location":"examples/tomography/tomography/","page":"Tomography","title":"Tomography","text":"Tomography is the process of reconstructing a density distribution from given integrals over sections of the distribution. In our example, we will work with tomography on black and white images. Let x be the vector of n pixel densities, with x_j denoting how white pixel j is. Let y be the vector of m line integrals over the image, with y_i denoting the integral for line i. We can define a matrix A to describe the geometry of the lines. Entry A_ij describes how much of pixel j is intersected by line i. Assuming our measurements of the line integrals are perfect, we have the relationship that","category":"page"},{"location":"examples/tomography/tomography/","page":"Tomography","title":"Tomography","text":" y = Ax","category":"page"},{"location":"examples/tomography/tomography/","page":"Tomography","title":"Tomography","text":"However, anytime we have measurements, there are usually small errors that occur. 
Therefore it makes sense to try to minimize","category":"page"},{"location":"examples/tomography/tomography/","page":"Tomography","title":"Tomography","text":" y - Ax_2^2","category":"page"},{"location":"examples/tomography/tomography/","page":"Tomography","title":"Tomography","text":"This is simply an unconstrained least squares problem; something we can readily solve!","category":"page"},{"location":"examples/tomography/tomography/","page":"Tomography","title":"Tomography","text":"using Convex, ECOS, DelimitedFiles, SparseArrays\naux(str) = joinpath(@__DIR__, \"aux_files\", str) # path to auxiliary files\nline_mat_x = readdlm(aux(\"tux_sparse_x.txt\"))\nsummary(line_mat_x)\n\nline_mat_y = readdlm(aux(\"tux_sparse_y.txt\"))\nsummary(line_mat_y)\n\nline_mat_val = readdlm(aux(\"tux_sparse_val.txt\"))\nsummary(line_mat_val)\n\nline_vals = readdlm(aux(\"tux_sparse_lines.txt\"))\nsummary(line_vals)","category":"page"},{"location":"examples/tomography/tomography/","page":"Tomography","title":"Tomography","text":"Form the sparse matrix from the data Image is 50 x 50","category":"page"},{"location":"examples/tomography/tomography/","page":"Tomography","title":"Tomography","text":"img_size = 50","category":"page"},{"location":"examples/tomography/tomography/","page":"Tomography","title":"Tomography","text":"The number of pixels in the image","category":"page"},{"location":"examples/tomography/tomography/","page":"Tomography","title":"Tomography","text":"num_pixels = img_size * img_size\n\nline_mat = spzeros(length(line_vals), num_pixels)\n\nnum_vals = length(line_mat_val)\n\nfor i in 1:num_vals\n x = Int(line_mat_x[i])\n y = Int(line_mat_y[i])\n line_mat[x + 1, y + 1] = line_mat_val[i]\nend\n\npixel_colors = Variable(num_pixels)\n# line_mat * pixel_colors should be close to the line_integral_values\n# to reflect that, we minimize a norm\nobjective = sumsquares(line_mat * pixel_colors - line_vals)\nproblem = minimize(objective)\nsolve!(problem, () -> 
ECOS.Optimizer(verbose=0))\n\nrows = zeros(img_size*img_size)\ncols = zeros(img_size*img_size)\nfor i = 1:img_size\n for j = 1:img_size\n rows[(i-1)*img_size + j] = i\n cols[(i-1)*img_size + j] = img_size + 1 - j\n end\nend","category":"page"},{"location":"examples/tomography/tomography/","page":"Tomography","title":"Tomography","text":"Plot the image using the pixel values obtained!","category":"page"},{"location":"examples/tomography/tomography/","page":"Tomography","title":"Tomography","text":"using Plots\nimage = reshape(evaluate(pixel_colors), img_size, img_size)\nheatmap(image, yflip=true, aspect_ratio=1, colorbar=nothing, color=:grays)","category":"page"},{"location":"examples/tomography/tomography/","page":"Tomography","title":"Tomography","text":"","category":"page"},{"location":"examples/tomography/tomography/","page":"Tomography","title":"Tomography","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/tomography/tomography/","page":"Tomography","title":"Tomography","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example tomography after \" * elapsed","category":"page"},{"location":"credits/#Credits","page":"Credits","title":"Credits","text":"","category":"section"},{"location":"credits/","page":"Credits","title":"Credits","text":"Convex.jl was created, developed, and maintained by:","category":"page"},{"location":"credits/","page":"Credits","title":"Credits","text":"Jenny Hong\nKaranveer Mohan\nMadeleine Udell\nDavid Zeng","category":"page"},{"location":"credits/","page":"Credits","title":"Credits","text":"Convex.jl is currently developed and maintained by the Julia community; see Contributors for more.","category":"page"},{"location":"credits/","page":"Credits","title":"Credits","text":"The Convex.jl developers also thank:","category":"page"},{"location":"credits/","page":"Credits","title":"Credits","text":"the JuliaOpt team: Iain 
Dunning, Joey Huchette and Miles Lubin\nStephen Boyd, co-author of the book Convex Optimization\nSteven Diamond, developer of CVXPY and of a DCP tutorial website to teach disciplined convex programming.\nMichael Grant, developer of CVX.\nJohn Duchi and Hongseok Namkoong for developing the representation of power cones in terms of SOCP constraints used in this package.","category":"page"},{"location":"examples/mixed_integer/binary_knapsack/","page":"Binary (or 0-1) knapsack problem","title":"Binary (or 0-1) knapsack problem","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/mixed_integer/binary_knapsack/","page":"Binary (or 0-1) knapsack problem","title":"Binary (or 0-1) knapsack problem","text":"__START_TIME = time_ns()\n@info \"Starting example binary_knapsack\"","category":"page"},{"location":"examples/mixed_integer/binary_knapsack/","page":"Binary (or 0-1) knapsack problem","title":"Binary (or 0-1) knapsack problem","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/mixed_integer/binary_knapsack.jl\"","category":"page"},{"location":"examples/mixed_integer/binary_knapsack/#Binary-(or-0-1)-knapsack-problem","page":"Binary (or 0-1) knapsack problem","title":"Binary (or 0-1) knapsack problem","text":"","category":"section"},{"location":"examples/mixed_integer/binary_knapsack/","page":"Binary (or 0-1) knapsack problem","title":"Binary (or 0-1) knapsack problem","text":"Given a knapsack of some capacity C and n objects with object i having weight w_i and profit p_i, the goal is to choose some subset of the objects that can fit in the knapsack (i.e. 
the sum of their weights is no more than C) while maximizing profit.","category":"page"},{"location":"examples/mixed_integer/binary_knapsack/","page":"Binary (or 0-1) knapsack problem","title":"Binary (or 0-1) knapsack problem","text":"This can be formulated as a mixed-integer program as:","category":"page"},{"location":"examples/mixed_integer/binary_knapsack/","page":"Binary (or 0-1) knapsack problem","title":"Binary (or 0-1) knapsack problem","text":"beginarrayll\n textmaximize x p \n textsubject to x in 0 1 \n w x leq C \nendarray","category":"page"},{"location":"examples/mixed_integer/binary_knapsack/","page":"Binary (or 0-1) knapsack problem","title":"Binary (or 0-1) knapsack problem","text":"where x is a vector of size n, with x_i equal to one if we chose to keep object i in the knapsack and zero otherwise.","category":"page"},{"location":"examples/mixed_integer/binary_knapsack/","page":"Binary (or 0-1) knapsack problem","title":"Binary (or 0-1) knapsack problem","text":"# Data taken from http://people.sc.fsu.edu/~jburkardt/datasets/knapsack_01/knapsack_01.html\nw = [23; 31; 29; 44; 53; 38; 63; 85; 89; 82]\nC = 165\np = [92; 57; 49; 68; 60; 43; 67; 84; 87; 72];\nn = length(w)","category":"page"},{"location":"examples/mixed_integer/binary_knapsack/","page":"Binary (or 0-1) knapsack problem","title":"Binary (or 0-1) knapsack problem","text":"using Convex, GLPK\nx = Variable(n, :Bin)\nproblem = maximize(dot(p, x), dot(w, x) <= C)\nsolve!(problem, GLPK.Optimizer)\nevaluate(x)","category":"page"},{"location":"examples/mixed_integer/binary_knapsack/","page":"Binary (or 0-1) knapsack problem","title":"Binary (or 0-1) knapsack problem","text":"","category":"page"},{"location":"examples/mixed_integer/binary_knapsack/","page":"Binary (or 0-1) knapsack problem","title":"Binary (or 0-1) knapsack problem","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/mixed_integer/binary_knapsack/","page":"Binary (or 0-1) knapsack 
problem","title":"Binary (or 0-1) knapsack problem","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example binary_knapsack after \" * elapsed","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization/","page":"Portfolio Optimization","title":"Portfolio Optimization","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization/","page":"Portfolio Optimization","title":"Portfolio Optimization","text":"__START_TIME = time_ns()\n@info \"Starting example portfolio_optimization\"","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization/","page":"Portfolio Optimization","title":"Portfolio Optimization","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/portfolio_optimization/portfolio_optimization.jl\"","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization/#Portfolio-Optimization","page":"Portfolio Optimization","title":"Portfolio Optimization","text":"","category":"section"},{"location":"examples/portfolio_optimization/portfolio_optimization/","page":"Portfolio Optimization","title":"Portfolio Optimization","text":"In this problem, we will find the portfolio allocation that minimizes risk while achieving a given expected return R_texttarget.","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization/","page":"Portfolio Optimization","title":"Portfolio Optimization","text":"Suppose that we know the mean returns mu in mathbfR^n and the covariance Sigma in mathbfR^n times n of the n assets. We would like to find a portfolio allocation w in mathbfR^n, sum_i w_i = 1, minimizing the risk of the portfolio, which we measure as the variance w^T Sigma w of the portfolio. 
The requirement that the portfolio allocation achieve the target expected return can be expressed as w^T mu = R_texttarget. We suppose further that our portfolio allocation must comply with some lower and upper bounds on the allocation, w_textlower leq w leq w_textupper.","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization/","page":"Portfolio Optimization","title":"Portfolio Optimization","text":"This problem can be written as","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization/","page":"Portfolio Optimization","title":"Portfolio Optimization","text":"beginarrayll\n textminimize w^T Sigma w \n textsubject to w^T mu = R_texttarget \n sum_i w_i = 1 \n w_textlower leq w leq w_textupper\nendarray","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization/","page":"Portfolio Optimization","title":"Portfolio Optimization","text":"where w in mathbfR^n is our optimization variable.","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization/","page":"Portfolio Optimization","title":"Portfolio Optimization","text":"using Convex, SCS\n\n# generate problem data\nμ = [11.5; 9.5; 6]/100 #expected returns\nΣ = [166 34 58; #covariance matrix\n 34 64 4;\n 58 4 100]/100^2\n\nn = length(μ) #number of assets\n\nR_target = 0.1\nw_lower = 0\nw_upper = 0.5;\nnothing #hide","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization/","page":"Portfolio Optimization","title":"Portfolio Optimization","text":"If you want to try the optimization with more assets, uncomment and run the next cell. 
It creates a vector of average returns and a variance-covariance matrix that have scales similar to the numbers above.","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization/","page":"Portfolio Optimization","title":"Portfolio Optimization","text":"using Random\nRandom.seed!(123)","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization/","page":"Portfolio Optimization","title":"Portfolio Optimization","text":"n = 15 #number of assets, CHANGE IT?","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization/","page":"Portfolio Optimization","title":"Portfolio Optimization","text":"μ = (6 .+ (11.5-6)*rand(n))/100 #mean\nA = randn(n,n)\nΣ = (A * A' + diagm(0=>rand(n)))/500; #covariance matrix","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization/","page":"Portfolio Optimization","title":"Portfolio Optimization","text":"w = Variable(n)\nret = dot(w,μ)\nrisk = quadform(w,Σ)\n\np = minimize( risk,\n ret >= R_target,\n sum(w) == 1,\n w_lower <= w,\n w <= w_upper )\n\nsolve!(p, () -> SCS.Optimizer()) #use SCS.Optimizer(verbose = false) to suppress printing","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization/","page":"Portfolio Optimization","title":"Portfolio Optimization","text":"Optimal portfolio weights:","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization/","page":"Portfolio Optimization","title":"Portfolio Optimization","text":"evaluate(w)","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization/","page":"Portfolio Optimization","title":"Portfolio Optimization","text":"sum(evaluate(w))","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization/","page":"Portfolio Optimization","title":"Portfolio 
Optimization","text":"","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization/","page":"Portfolio Optimization","title":"Portfolio Optimization","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization/","page":"Portfolio Optimization","title":"Portfolio Optimization","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example portfolio_optimization after \" * elapsed","category":"page"},{"location":"examples/general_examples/svm/","page":"Support vector machine","title":"Support vector machine","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/general_examples/svm/","page":"Support vector machine","title":"Support vector machine","text":"__START_TIME = time_ns()\n@info \"Starting example svm\"","category":"page"},{"location":"examples/general_examples/svm/","page":"Support vector machine","title":"Support vector machine","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/general_examples/svm.jl\"","category":"page"},{"location":"examples/general_examples/svm/#Support-vector-machine","page":"Support vector machine","title":"Support vector machine","text":"","category":"section"},{"location":"examples/general_examples/svm/#Support-Vector-Machine-(SVM)","page":"Support vector machine","title":"Support Vector Machine (SVM)","text":"","category":"section"},{"location":"examples/general_examples/svm/","page":"Support vector machine","title":"Support vector machine","text":"We are given two sets of points in bf R^n, x_1 ldots x_N and y_1 ldots y_M, and wish to find a function f(x) = w^T x - b that linearly separates the points, i.e. f(x_i) geq 1 for i = 1 ldots N and f(y_i) leq -1 for i = 1 ldots M. 
That is, the points are separated by two hyperplanes, w^T x - b = 1 and w^T x - b = -1.","category":"page"},{"location":"examples/general_examples/svm/","page":"Support vector machine","title":"Support vector machine","text":"Perfect linear separation is not always possible, so we seek to minimize the amount that these inequalities are violated. The violation of point x_i is textmax 1 + b - w^T x_i 0, and the violation of point y_i is textmax 1 - b + w^T y_i 0. We trade off the error sum_i=1^N textmax 1 + b - w^T x_i 0 + sum_i=1^M textmax 1 - b + w^T y_i 0 with the distance between the two hyperplanes, which we want to be large, via minimizing w^2.","category":"page"},{"location":"examples/general_examples/svm/","page":"Support vector machine","title":"Support vector machine","text":"We can write this problem as","category":"page"},{"location":"examples/general_examples/svm/","page":"Support vector machine","title":"Support vector machine","text":"beginarrayll\n textminimize w^2 + C * (sum_i=1^N textmax 1 + b - w^T x_i 0 + sum_i=1^M textmax 1 - b + w^T y_i 0)\nendarray","category":"page"},{"location":"examples/general_examples/svm/","page":"Support vector machine","title":"Support vector machine","text":"where w in bf R^n and b in bf R are our optimization variables.","category":"page"},{"location":"examples/general_examples/svm/","page":"Support vector machine","title":"Support vector machine","text":"We can solve the problem as follows.","category":"page"},{"location":"examples/general_examples/svm/","page":"Support vector machine","title":"Support vector machine","text":"using Convex, SCS","category":"page"},{"location":"examples/general_examples/svm/","page":"Support vector machine","title":"Support vector machine","text":"# Generate data.\nn = 2; # dimensionality of data\nC = 10; # inverse regularization parameter in the objective\nN = 10; # number of positive examples\nM = 10; # number of negative examples\n\nusing Distributions: MvNormal\n# positive data 
points\npos_data = rand(MvNormal([1.0, 2.0], 1.0), N);\n# negative data points\nneg_data = rand(MvNormal([-1.0, 2.0], 1.0), M);\nnothing #hide","category":"page"},{"location":"examples/general_examples/svm/","page":"Support vector machine","title":"Support vector machine","text":"function svm(pos_data, neg_data, solver=() -> SCS.Optimizer(verbose=0))\n # Create variables for the separating hyperplane w'*x = b.\n w = Variable(n)\n b = Variable()\n # Form the objective.\n obj = sumsquares(w) + C*sum(max(1+b-w'*pos_data, 0)) + C*sum(max(1-b+w'*neg_data, 0))\n # Form and solve problem.\n problem = minimize(obj)\n solve!(problem, solver)\n return evaluate(w), evaluate(b)\nend;\nnothing #hide","category":"page"},{"location":"examples/general_examples/svm/","page":"Support vector machine","title":"Support vector machine","text":"w, b = svm(pos_data, neg_data);\nnothing #hide","category":"page"},{"location":"examples/general_examples/svm/","page":"Support vector machine","title":"Support vector machine","text":"# Plot our results.\nusing Plots","category":"page"},{"location":"examples/general_examples/svm/","page":"Support vector machine","title":"Support vector machine","text":"Generate the separating hyperplane","category":"page"},{"location":"examples/general_examples/svm/","page":"Support vector machine","title":"Support vector machine","text":"line_x = -2:0.1:2;\nline_y = (-w[1] * line_x .+ b)/w[2];\nnothing #hide","category":"page"},{"location":"examples/general_examples/svm/","page":"Support vector machine","title":"Support vector machine","text":"Plot the positive points, negative points, and separating hyperplane.","category":"page"},{"location":"examples/general_examples/svm/","page":"Support vector machine","title":"Support vector machine","text":"plot(pos_data[1,:], pos_data[2,:], st=:scatter, label=\"Positive points\")\nplot!(neg_data[1,:], neg_data[2,:], st=:scatter, label=\"Negative points\")\nplot!(line_x, line_y, label=\"Separating 
hyperplane\")","category":"page"},{"location":"examples/general_examples/svm/","page":"Support vector machine","title":"Support vector machine","text":"","category":"page"},{"location":"examples/general_examples/svm/","page":"Support vector machine","title":"Support vector machine","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/general_examples/svm/","page":"Support vector machine","title":"Support vector machine","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example svm after \" * elapsed","category":"page"},{"location":"reference/#Reference","page":"Reference","title":"Reference","text":"","category":"section"},{"location":"reference/","page":"Reference","title":"Reference","text":"The AbstractVariable interface:","category":"page"},{"location":"reference/","page":"Reference","title":"Reference","text":"Convex.AbstractVariable\nConvex._value\nConvex.set_value!\nConvex.constraints\nConvex.add_constraint!\nConvex.vexity\nConvex.vexity!\nConvex.sign\nConvex.sign!\nConvex.VarType\nConvex.vartype\nConvex.vartype!","category":"page"},{"location":"reference/#Convex.AbstractVariable","page":"Reference","title":"Convex.AbstractVariable","text":"abstract type AbstractVariable <: AbstractExpr end\n\nAn AbstractVariable should have head field, an id_hash field and a size field to conform to the AbstractExpr interface, and implement methods (or use the field-access fallbacks) for\n\n_value, set_value!: get or set the numeric value of the variable. _value should return nothing when no numeric value is set. Note: evaluate is the user-facing method to access the value of x.\nvexity, vexity!: get or set the vexity of the variable. 
The vexity should be AffineVexity() unless the variable has been fix!'d, in which case it is ConstVexity().\nsign, vartype, and constraints: get the Sign, VarType, numeric type, and a (possibly empty) vector of constraints which are to be applied to any problem in which the variable is used.\n\nOptionally, also implement sign!, vartype!, and add_constraint! to allow users to modify those values or add a constraint.\n\n\n\n\n\n","category":"type"},{"location":"reference/#Convex._value","page":"Reference","title":"Convex._value","text":"_value(x::AbstractVariable)\n\nRaw access to the current value of x; used internally by Convex.jl.\n\n\n\n\n\n","category":"function"},{"location":"reference/#Convex.set_value!","page":"Reference","title":"Convex.set_value!","text":"set_value!(x::AbstractVariable, v)\n\nSets the current value of x to v.\n\n\n\n\n\n","category":"function"},{"location":"reference/#Convex.constraints","page":"Reference","title":"Convex.constraints","text":"constraints(x::AbstractVariable)\n\nReturns the current constraints carried by x.\n\n\n\n\n\n","category":"function"},{"location":"reference/#Convex.add_constraint!","page":"Reference","title":"Convex.add_constraint!","text":"add_constraint!(x::AbstractVariable, C::Constraint)\n\nAdds a constraint to those carried by x.\n\n\n\n\n\n","category":"function"},{"location":"reference/#Convex.vexity","page":"Reference","title":"Convex.vexity","text":"vexity(x::AbstractVariable)\n\nReturns the current vexity of x.\n\n\n\n\n\n","category":"function"},{"location":"reference/#Convex.vexity!","page":"Reference","title":"Convex.vexity!","text":"vexity!(x::AbstractVariable, v::Vexity)\n\nSets the current vexity of x to v. Should only be called by fix! 
and free!.\n\n\n\n\n\n","category":"function"},{"location":"reference/#Base.sign","page":"Reference","title":"Base.sign","text":"sign(x::AbstractVariable)\n\nReturns the current sign of x.\n\n\n\n\n\n","category":"function"},{"location":"reference/#Convex.sign!","page":"Reference","title":"Convex.sign!","text":"sign!(x::AbstractVariable, s::Sign)\n\nSets the current sign of x to s.\n\n\n\n\n\n","category":"function"},{"location":"reference/#Convex.VarType","page":"Reference","title":"Convex.VarType","text":"VarType\n\nDescribes the type of an AbstractVariable: either continuous (ContVar), integer-valued (IntVar), or binary (BinVar).\n\n\n\n\n\n","category":"type"},{"location":"reference/#Convex.vartype","page":"Reference","title":"Convex.vartype","text":"vartype(x::AbstractVariable)\n\nReturns the current VarType of x.\n\n\n\n\n\n","category":"function"},{"location":"reference/#Convex.vartype!","page":"Reference","title":"Convex.vartype!","text":"vartype!(x::AbstractVariable, vt::VarType)\n\nSets the current VarType of x to vt.\n\n\n\n\n\n","category":"function"},{"location":"reference/","page":"Reference","title":"Reference","text":"Functions:","category":"page"},{"location":"reference/","page":"Reference","title":"Reference","text":"Convex.fix!\nConvex.free!\nConvex.evaluate\nConvex.solve!\nConvex.emit_dcp_warnings","category":"page"},{"location":"reference/#Convex.fix!","page":"Reference","title":"Convex.fix!","text":"fix!(x::AbstractVariable, v = value(x))\n\nFixes x to v. It is subsequently treated as a constant in future optimization problems. 
See also free!.\n\n\n\n\n\n","category":"function"},{"location":"reference/#Convex.free!","page":"Reference","title":"Convex.free!","text":"free!(x::AbstractVariable)\n\nFrees a previously fix!'d variable x, to treat it once again as a variable to optimize over.\n\n\n\n\n\n","category":"function"},{"location":"reference/#Convex.evaluate","page":"Reference","title":"Convex.evaluate","text":"evaluate(x::AbstractExpr)\n\nReturns the current value of x if assigned; errors otherwise.\n\n\n\n\n\n","category":"function"},{"location":"reference/#Convex.solve!","page":"Reference","title":"Convex.solve!","text":"solve!(problem::Problem{T}, optimizer::MOI.ModelLike;\n check_vexity::Bool = true,\n verbose::Bool = true,\n warmstart::Bool = false,\n silent_solver::Bool = false) where {T}\n\nSolves the problem, populating problem.optval with the optimal value, as well as the values of the variables (accessed by evaluate) and constraint duals (accessed by cons.dual), where applicable.\n\nOptional keyword arguments:\n\ncheck_vexity (default: true): emits a warning if the problem is not DCP\nverbose (default: true): emits a warning if the problem was not solved optimally or warmstart=true but is not supported by the solver.\nwarmstart (default: false): whether the solver should start the optimization from a previous optimal value (according to the current value of the variables in the problem, which can be set by set_value! and accessed by evaluate).\nsilent_solver: whether the solver should be silent (and not emit output or logs) during the solution process.\n\n\n\n\n\n","category":"function"},{"location":"reference/#Convex.emit_dcp_warnings","page":"Reference","title":"Convex.emit_dcp_warnings","text":"emit_dcp_warnings\n\nControls whether or not warnings are emitted for when an expression fails to be of disciplined convex form. 
To turn warnings off, override the method via\n\nConvex.emit_dcp_warnings() = false\n\nThis will cause Julia's method invalidation to recompile any functions emitting DCP warnings and remove them. This should be run from top-level (not within a function).\n\n\n\n\n\n","category":"function"},{"location":"examples/general_examples/DCP_analysis/","page":"DCP analysis","title":"DCP analysis","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/general_examples/DCP_analysis/","page":"DCP analysis","title":"DCP analysis","text":"__START_TIME = time_ns()\n@info \"Starting example DCP_analysis\"","category":"page"},{"location":"examples/general_examples/DCP_analysis/","page":"DCP analysis","title":"DCP analysis","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/general_examples/DCP_analysis.jl\"","category":"page"},{"location":"examples/general_examples/DCP_analysis/#DCP-analysis","page":"DCP analysis","title":"DCP analysis","text":"","category":"section"},{"location":"examples/general_examples/DCP_analysis/","page":"DCP analysis","title":"DCP analysis","text":"using Convex\nx = Variable();\ny = Variable();\nexpr = quadoverlin(x - y, 1 - max(x, y));\nprintln(\"expression curvature = \", vexity(expr));\nprintln(\"expression sign = \", sign(expr));\nnothing #hide","category":"page"},{"location":"examples/general_examples/DCP_analysis/","page":"DCP analysis","title":"DCP analysis","text":"","category":"page"},{"location":"examples/general_examples/DCP_analysis/","page":"DCP analysis","title":"DCP analysis","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/general_examples/DCP_analysis/","page":"DCP analysis","title":"DCP analysis","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example DCP_analysis after \" * 
elapsed","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"__START_TIME = time_ns()\n@info \"Starting example control\"","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/general_examples/control.jl\"","category":"page"},{"location":"examples/general_examples/control/#Control","page":"Control","title":"Control","text":"","category":"section"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"A simple control problem on a system usually involves a variable x(t) that denotes the state of the system over time, and a variable u(t) that denotes the input into the system over time. Linear constraints are used to capture the evolution of the system over time:","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"x(t) = Ax(t - 1) + Bu(t) textfor t = 1ldots T","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"where the numerical matrices A and B are called the dynamics and input matrices, respectively.","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"The goal of the control problem is to find a sequence of inputs u(t) that will allow the state x(t) to achieve specified values at certain times. 
For example, we can specify initial and final states of the system:","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"beginaligned\n x(0) = x_i \n x(T) = x_f\nendaligned","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"Additional states between the initial and final states can also be specified. These are known as waypoint constraints. Often, the input and state of the system have physical meaning, so we want to find a sequence of inputs that also minimizes a least squares objective like the following:","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":" sum_t = 0^T Fx(t)^2_2 + sum_t = 1^TGu(t)^2_2","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"where F and G are numerical matrices.","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"We'll now apply the basic format of the control problem to an example of controlling the motion of an object in a fluid over T intervals, each of h seconds. The state of the system at time interval t will be given by the position and the velocity of the object, denoted p(t) and v(t), while the input will be forces applied to the object, denoted by f(t). 
By the basic laws of physics, the relationship between force, velocity, and position must satisfy:","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":" beginaligned\n p(t+1) = p(t) + h v(t) \n v(t+1) = v(t) + h a(t)\n endaligned","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"Here, a(t) denotes the acceleration at time t, for which we use a(t) = f(t) m + g - d v(t), where m, d, g are constants for the mass of the object, the drag coefficient of the fluid, and the acceleration from gravity, respectively.","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"Additionally, we have our initial/final position/velocity conditions:","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":" beginaligned\n p(1) = p_i\n v(1) = v_i\n p(T+1) = p_f\n v(T+1) = 0\n endaligned","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"One reasonable objective to minimize would be","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":" textobjective = mu sum_t = 1^T+1 (v(t))^2 + sum_t = 1^T (f(t))^2","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"We would like to keep the forces small, perhaps to save fuel, and keep the velocities small for safety concerns. 
Here mu serves as a parameter to control which part of the objective we deem more important, keeping the velocity small or keeping the force small.","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"The following code builds and solves our control example:","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"using Convex, SCS, Plots\n\n# Some constraints on our motion\n# The object should start from the origin, and end at rest\ninitial_velocity = [-20; 100]\nfinal_position = [100; 100]\n\nT = 100 # The number of timesteps\nh = 0.1 # The time between time intervals\nmass = 1 # Mass of object\ndrag = 0.1 # Drag on object\ng = [0, -9.8] # Gravity on object\n\n# Declare the variables we need\nposition = Variable(2, T)\nvelocity = Variable(2, T)\nforce = Variable(2, T - 1)\n\n# Weight on the velocity term of the objective\nmu = 1\n\n# Add constraints on our variables\nconstraints = Constraint[ position[:, i + 1] == position[:, i] + h * velocity[:, i] for i in 1 : T - 1]\n\nfor i in 1 : T - 1\n acceleration = force[:, i]/mass + g - drag * velocity[:, i]\n push!(constraints, velocity[:, i + 1] == velocity[:, i] + h * acceleration)\nend\n\n# Add position constraints\npush!(constraints, position[:, 1] == 0)\npush!(constraints, position[:, T] == final_position)\n\n# Add velocity constraints\npush!(constraints, velocity[:, 1] == initial_velocity)\npush!(constraints, velocity[:, T] == 0)\n\n# Solve the problem\nproblem = minimize(mu * sumsquares(velocity) + sumsquares(force), constraints)\nsolve!(problem, () -> SCS.Optimizer(verbose=0))","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"We can plot the trajectory taken by the object.","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"pos = evaluate(position)\nplot([pos[1, 1]], [pos[2, 1]], st=:scatter, label=\"initial 
point\")\nplot!([pos[1, T]], [pos[2, T]], st=:scatter, label=\"final point\")\nplot!(pos[1, :], pos[2, :], label=\"trajectory\")","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"We can also see how the magnitude of the force changes over time.","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"plot(vec(sum(evaluate(force).^2, dims=1)), label=\"force (magnitude)\")","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/general_examples/control/","page":"Control","title":"Control","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example control after \" * elapsed","category":"page"},{"location":"operations/#Operations","page":"Supported Operations","title":"Operations","text":"","category":"section"},{"location":"operations/","page":"Supported Operations","title":"Supported Operations","text":"Convex.jl currently supports the following functions. These functions may be composed according to the DCP composition rules to form new convex, concave, or affine expressions. Convex.jl transforms each problem into an equivalent conic program in order to pass the problem to a specialized solver. Depending on the types of functions used in the problem, the conic constraints may include linear, second-order, exponential, or semidefinite constraints, as well as any binary or integer constraints placed on the variables. Below, we list each function available in Convex.jl organized by the (most complex) type of cone used to represent that function, and indicate which solvers may be used to solve problems with those cones. 
Problems mixing many different conic constraints can be solved by any solver that supports every kind of cone present in the problem.","category":"page"},{"location":"operations/","page":"Supported Operations","title":"Supported Operations","text":"In the notes column in the tables below, we denote implicit constraints imposed on the arguments to the function by IC, and parameter restrictions that the arguments must obey by PR. (Convex.jl will automatically impose ICs; the user must make sure to satisfy PRs.) Elementwise means that the function operates elementwise on vector arguments, returning a vector of the same size.","category":"page"},{"location":"operations/#Linear-Program-Representable-Functions","page":"Supported Operations","title":"Linear Program Representable Functions","text":"","category":"section"},{"location":"operations/","page":"Supported Operations","title":"Supported Operations","text":"An optimization problem using only these functions can be solved by any LP solver.","category":"page"},{"location":"operations/","page":"Supported Operations","title":"Supported Operations","text":"operation description vexity slope notes\nx+y or x.+y addition affine increasing none\nx-y or x.-y subtraction affine increasing in x decreasing in y none\nx*y multiplication affine increasing if constant term ge 0 decreasing if constant term le 0 not monotonic otherwise PR: one argument is constant\nx/y division affine increasing PR: y is scalar constant\ndot(*)(x, y) elementwise multiplication affine increasing PR: one argument is constant\ndot(/)(x, y) elementwise division affine increasing PR: one argument is constant\nx[1:4, 2:3] indexing and slicing affine increasing none\ndiag(x, k) k-th diagonal of a matrix affine increasing none\ndiagm(x) construct diagonal matrix affine increasing PR: x is a vector\nx' transpose affine increasing none\nvec(x) vector representation affine increasing none\ndot(x,y) sum_i x_i y_i affine increasing PR: one argument is 
constant\nkron(x,y) Kronecker product affine increasing PR: one argument is constant\nvecdot(x,y) dot(vec(x),vec(y)) affine increasing PR: one argument is constant\nsum(x) sum_ij x_ij affine increasing none\nsum(x, k) sum elements across dimension k affine increasing none\nsumlargest(x, k) sum of k largest elements of x convex increasing none\nsumsmallest(x, k) sum of k smallest elements of x concave increasing none\ndotsort(a, b) dot(sort(a),sort(b)) convex increasing PR: one argument is constant\nreshape(x, m, n) reshape into m times n affine increasing none\nminimum(x) min(x) concave increasing none\nmaximum(x) max(x) convex increasing none\n[x y] or [x; y] hcat(x, y) or vcat(x, y) stacking affine increasing none\ntr(x) mathrmtr left(X right) affine increasing none\npartialtrace(x,sys,dims) Partial trace affine increasing none\npartialtranspose(x,sys,dims) Partial transpose affine increasing none\nconv(h,x) h in mathbbR^m, x in mathbbR^n, hstar x in mathbbR^m+n-1; entry i is given by sum_j=1^m h_jx_i-j+1 with x_k=0 for k out of bounds affine increasing if hge 0 decreasing if hle 0 not monotonic otherwise PR: h is constant\nmin(x,y) min(xy) concave increasing none\nmax(x,y) max(xy) convex increasing none\npos(x) max(x0) convex increasing none\nneg(x) max(-x0) convex decreasing none\ninvpos(x) 1x convex decreasing IC: x0\nabs(x) leftxright convex increasing on x ge 0 decreasing on x le 0 none\nopnorm(x, 1) maximum absolute column sum: max_1 j n sum_i=1^m leftx_ijright convex increasing on x ge 0 decreasing on x le 0 \nopnorm(x, Inf) maximum absolute row sum: max_1 i m sum_j=1^n leftx_ijright convex increasing on x ge 0 decreasing on x le 0 ","category":"page"},{"location":"operations/#Second-Order-Cone-Representable-Functions","page":"Supported Operations","title":"Second-Order Cone Representable Functions","text":"","category":"section"},{"location":"operations/","page":"Supported Operations","title":"Supported Operations","text":"An optimization problem using 
these functions can be solved by any SOCP solver (including ECOS, SCS, Mosek, Gurobi, and CPLEX). Of course, if an optimization problem has both LP and SOCP representable functions, then any solver that can solve both LPs and SOCPs can solve the problem.","category":"page"},{"location":"operations/","page":"Supported Operations","title":"Supported Operations","text":"operation description vexity slope notes\nnorm(x, p) (sum x_i^p)^1p convex increasing on x ge 0 decreasing on x le 0 PR: p >= 1\nquadform(x, P) x^T P x convex in x affine in P increasing on x ge 0 decreasing on x le 0 increasing in P PR: either x or P must be constant; if x is not constant, then P must be symmetric and positive semidefinite\nquadoverlin(x, y) x^T xy convex increasing on x ge 0 decreasing on x le 0 decreasing in y IC: y 0\nsumsquares(x) sum x_i^2 convex increasing on x ge 0 decreasing on x le 0 none\nsqrt(x) sqrtx concave increasing IC: x0\nsquare(x), x^2 x^2 convex increasing on x ge 0 decreasing on x le 0 PR: x is scalar\ndot(^)(x,2) x^2 convex increasing on x ge 0 decreasing on x le 0 elementwise\ngeomean(x, y) sqrtxy concave increasing IC: xge0, yge0\nhuber(x, M=1) begincases x^2 x leq M 2Mx - M^2 x M endcases convex increasing on x ge 0 decreasing on x le 0 PR: M=1","category":"page"},{"location":"operations/","page":"Supported Operations","title":"Supported Operations","text":"Note that for p=1 and p=Inf, the function norm(x,p) is linear-program representable and does not need an SOCP solver, and for a matrix x, norm(x,p) is defined as norm(vec(x), p).","category":"page"},{"location":"operations/#Exponential-Cone-Representable-Functions","page":"Supported Operations","title":"Exponential Cone Representable Functions","text":"","category":"section"},{"location":"operations/","page":"Supported Operations","title":"Supported Operations","text":"An optimization problem using these functions can be solved by any exponential cone solver 
(SCS).","category":"page"},{"location":"operations/","page":"Supported Operations","title":"Supported Operations","text":"operation description vexity slope notes\nlogsumexp(x) log(sum_i exp(x_i)) convex increasing none\nexp(x) exp(x) convex increasing none\nlog(x) log(x) concave increasing IC: x0\nentropy(x) sum_ij -x_ij log (x_ij) concave not monotonic IC: x0\nlogisticloss(x) log(1 + exp(x_i)) convex increasing none","category":"page"},{"location":"operations/#Semidefinite-Program-Representable-Functions","page":"Supported Operations","title":"Semidefinite Program Representable Functions","text":"","category":"section"},{"location":"operations/","page":"Supported Operations","title":"Supported Operations","text":"An optimization problem using these functions can be solved by any SDP solver (including SCS and Mosek).","category":"page"},{"location":"operations/","page":"Supported Operations","title":"Supported Operations","text":"operation description vexity slope notes\nnuclearnorm(x) sum of singular values of x convex not monotonic none\nopnorm(x, 2) (operatornorm(x)) max of singular values of x convex not monotonic none\neigmax(x) max eigenvalue of x convex not monotonic none\neigmin(x) min eigenvalue of x concave not monotonic none\nmatrixfrac(x, P) x^TP^-1x convex not monotonic IC: P is positive semidefinite\nsumlargesteigs(x, k) sum of top k eigenvalues of x convex not monotonic IC: x symmetric","category":"page"},{"location":"operations/#Exponential-SDP-representable-Functions","page":"Supported Operations","title":"Exponential + SDP representable Functions","text":"","category":"section"},{"location":"operations/","page":"Supported Operations","title":"Supported Operations","text":"An optimization problem using these functions can be solved by any solver that supports exponential constraints and semidefinite constraints simultaneously (SCS).","category":"page"},{"location":"operations/","page":"Supported Operations","title":"Supported 
Operations","text":"operation description vexity slope notes\nlogdet(x) log of determinant of x concave increasing IC: x is positive semidefinite","category":"page"},{"location":"operations/#Promotions","page":"Supported Operations","title":"Promotions","text":"","category":"section"},{"location":"operations/","page":"Supported Operations","title":"Supported Operations","text":"When an atom or constraint is applied to a scalar and a higher-dimensional variable, the scalar is promoted. For example, max(x, 0) gives an expression with the shape of x whose elements are the maximum of the corresponding element of x and 0.","category":"page"},{"location":"examples/general_examples/worst_case_analysis/","page":"Worst case risk analysis","title":"Worst case risk analysis","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/general_examples/worst_case_analysis/","page":"Worst case risk analysis","title":"Worst case risk analysis","text":"__START_TIME = time_ns()\n@info \"Starting example worst_case_analysis\"","category":"page"},{"location":"examples/general_examples/worst_case_analysis/","page":"Worst case risk analysis","title":"Worst case risk analysis","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/general_examples/worst_case_analysis.jl\"","category":"page"},{"location":"examples/general_examples/worst_case_analysis/#Worst-case-risk-analysis","page":"Worst case risk analysis","title":"Worst case risk analysis","text":"","category":"section"},{"location":"examples/general_examples/worst_case_analysis/","page":"Worst case risk analysis","title":"Worst case risk analysis","text":"Generate data for worst-case risk analysis.","category":"page"},{"location":"examples/general_examples/worst_case_analysis/","page":"Worst case risk analysis","title":"Worst case risk analysis","text":"using Random\n\nRandom.seed!(2);\nn = 5;\nr = abs.(randn(n, 1))/15;\nSigma = 
0.9 * rand(n, n) .- 0.15;\nSigma_nom = Sigma' * Sigma;\nSigma_nom .-= (maximum(Sigma_nom) - 0.9)","category":"page"},{"location":"examples/general_examples/worst_case_analysis/","page":"Worst case risk analysis","title":"Worst case risk analysis","text":"Form and solve portfolio optimization problem. Here we minimize risk while requiring a 0.1 return.","category":"page"},{"location":"examples/general_examples/worst_case_analysis/","page":"Worst case risk analysis","title":"Worst case risk analysis","text":"using Convex, SCS\nw = Variable(n);\nret = dot(r, w);\nrisk = sum(quadform(w, Sigma_nom));\nproblem = minimize(risk, [sum(w) == 1, ret >= 0.1, norm(w, 1) <= 2])\nsolve!(problem, () -> SCS.Optimizer(verbose=0));\nwval = vec(evaluate(w))","category":"page"},{"location":"examples/general_examples/worst_case_analysis/","page":"Worst case risk analysis","title":"Worst case risk analysis","text":"Form and solve worst-case risk analysis problem.","category":"page"},{"location":"examples/general_examples/worst_case_analysis/","page":"Worst case risk analysis","title":"Worst case risk analysis","text":"Sigma = Semidefinite(n);\nDelta = Variable(n, n);\nrisk = sum(quadform(wval, Sigma));\nproblem = maximize(risk, [Sigma == Sigma_nom + Delta,\n diag(Delta) == 0,\n abs(Delta) <= 0.2,\n Delta == Delta']);\nsolve!(problem, () -> SCS.Optimizer(verbose=0));\nprintln(\"standard deviation = \", round(sqrt(wval' * Sigma_nom * wval), sigdigits=2));\nprintln(\"worst-case standard deviation = \", round(sqrt(evaluate(risk)), sigdigits=2));\nprintln(\"worst-case Delta = \");\nprintln(round.(evaluate(Delta), sigdigits=2));\nnothing #hide","category":"page"},{"location":"examples/general_examples/worst_case_analysis/","page":"Worst case risk analysis","title":"Worst case risk analysis","text":"","category":"page"},{"location":"examples/general_examples/worst_case_analysis/","page":"Worst case risk analysis","title":"Worst case risk analysis","text":"This page was generated using 
Literate.jl.","category":"page"},{"location":"examples/general_examples/worst_case_analysis/","page":"Worst case risk analysis","title":"Worst case risk analysis","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example worst_case_analysis after \" * elapsed","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using MaxCut","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using MaxCut","text":"__START_TIME = time_ns()\n@info \"Starting example phase_recovery_using_MaxCut\"","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using MaxCut","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/optimization_with_complex_variables/phase_recovery_using_MaxCut.jl\"","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/#Phase-recovery-using-MaxCut","page":"Phase recovery using MaxCut","title":"Phase recovery using MaxCut","text":"","category":"section"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using MaxCut","text":"In this example, we relax the phase retrieval problem similar to the classical MaxCut semidefinite program and recover the phase of the signal given the magnitude of the linear measurements.","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using 
MaxCut","text":"Phase recovery has wide applications, such as X-ray crystallography imaging, diffraction imaging, microscopy, and audio signal processing. In all these applications, the detectors cannot measure the phase of the incoming wave and only record its amplitude; i.e., complex measurements of a signal x in mathbbC^p are obtained from a linear injective operator A, but we can only measure the magnitude vector Ax, not the phase of Ax.","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using MaxCut","text":"Recovering the phase of Ax from Ax is a nonconvex optimization problem. Using results from this paper, the problem can be relaxed to a (complex) semidefinite program (complex SDP).","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using MaxCut","text":"The original representation of the problem is as follows:","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using MaxCut","text":"beginarrayll\n textfind x in mathbbC^p \n textsubject to Ax = b\nendarray","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using MaxCut","text":"where A in mathbbC^n times p and b in mathbbR^n.","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using MaxCut","text":"In this example, the problem is to find the phase of Ax given the value Ax. 
Given a linear operator A and a vector b = Ax of measured amplitudes, in the noiseless case, we can write Ax = textdiag(b)u where u in mathbbC^n is a phase vector, satisfying u_i = 1 for i = 1ldots n.","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using MaxCut","text":"We relax this problem to a complex semidefinite program.","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/#Relaxed-Problem-similar-to-[MaxCut](http://www-math.mit.edu/goemans/PAPERS/maxcut-jacm.pdf)","page":"Phase recovery using MaxCut","title":"Relaxed Problem similar to MaxCut","text":"","category":"section"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using MaxCut","text":"Define the positive semidefinite hermitian matrix M = textdiag(b) (I - A A^*) textdiag(b). The problem is:","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using MaxCut","text":"beginarrayll\n textminimize langle U M rangle \n textsubject to textdiag(U) = 1\n U succeq 0\nendarray","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using MaxCut","text":"Here the variable U must be hermitian (U in mathbbH_n ) and we have a solution to the phase recovery problem if U = u u^* has rank one. 
Otherwise, the leading singular vector of U can be used to approximate the solution.","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using MaxCut","text":"using Convex, SCS, LinearAlgebra\nif VERSION < v\"1.2.0-DEV.0\"\n (I::UniformScaling)(n::Integer) = Diagonal(fill(I.λ, n))\n LinearAlgebra.diagm(v::AbstractVector) = diagm(0 => v)\nend\n\nn = 20\np = 2\nA = rand(n,p) + im*randn(n,p)\nx = rand(p) + im*randn(p)\nb = abs.(A*x) + rand(n)\n\nM = diagm(b)*(I(n)-A*A')*diagm(b)\nU = ComplexVariable(n,n)\nobjective = inner_product(U,M)\nc1 = diag(U) == 1\nc2 = U in :SDP\np = minimize(objective,c1,c2)\nsolve!(p, () -> SCS.Optimizer(verbose=0))\nevaluate(U)","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using MaxCut","text":"Verify if the rank of U is 1:","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using MaxCut","text":"B, C = eigen(evaluate(U));\nlength([e for e in B if(abs(real(e))>1e-4)])","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using MaxCut","text":"Decompose U = uu^* where u is the phase of Ax","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using MaxCut","text":"u = C[:,1];\nfor i in 1:n\n u[i] = u[i]/abs(u[i])\nend\nu","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using 
MaxCut","text":"","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using MaxCut","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/optimization_with_complex_variables/phase_recovery_using_MaxCut/","page":"Phase recovery using MaxCut","title":"Phase recovery using MaxCut","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example phase_recovery_using_MaxCut after \" * elapsed","category":"page"},{"location":"examples/supplemental_material/paper_examples/","page":"Paper examples","title":"Paper examples","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/supplemental_material/paper_examples/","page":"Paper examples","title":"Paper examples","text":"__START_TIME = time_ns()\n@info \"Starting example paper_examples\"","category":"page"},{"location":"examples/supplemental_material/paper_examples/","page":"Paper examples","title":"Paper examples","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/supplemental_material/paper_examples.jl\"","category":"page"},{"location":"examples/supplemental_material/paper_examples/#Paper-examples","page":"Paper examples","title":"Paper examples","text":"","category":"section"},{"location":"examples/supplemental_material/paper_examples/","page":"Paper examples","title":"Paper examples","text":"using Convex, ECOS","category":"page"},{"location":"examples/supplemental_material/paper_examples/","page":"Paper examples","title":"Paper examples","text":"Summation.","category":"page"},{"location":"examples/supplemental_material/paper_examples/","page":"Paper examples","title":"Paper examples","text":"println(\"Summation example\")\nx = Variable();\ne = 0;\n@time begin\n for i = 1:1000\n global e\n e += 
x;\n end\n p = minimize(e, x>=1);\nend\n@time solve!(p, () -> ECOS.Optimizer(verbose=0))","category":"page"},{"location":"examples/supplemental_material/paper_examples/","page":"Paper examples","title":"Paper examples","text":"Indexing.","category":"page"},{"location":"examples/supplemental_material/paper_examples/","page":"Paper examples","title":"Paper examples","text":"println(\"Indexing example\")\nx = Variable(1000, 1);\ne = 0;\n@time begin\n for i = 1:1000\n global e\n e += x[i];\n end\n p = minimize(e, x >= ones(1000, 1));\nend\n@time solve!(p, () -> ECOS.Optimizer(verbose=0))","category":"page"},{"location":"examples/supplemental_material/paper_examples/","page":"Paper examples","title":"Paper examples","text":"Matrix constraints.","category":"page"},{"location":"examples/supplemental_material/paper_examples/","page":"Paper examples","title":"Paper examples","text":"println(\"Matrix constraint example\")\nn, m, p = 100, 100, 100\nX = Variable(m, n);\nA = randn(p, m);\nb = randn(p, n);\n@time begin\n p = minimize(norm(vec(X)), A * X == b);\nend\n@time solve!(p, () -> ECOS.Optimizer(verbose=0))","category":"page"},{"location":"examples/supplemental_material/paper_examples/","page":"Paper examples","title":"Paper examples","text":"Transpose.","category":"page"},{"location":"examples/supplemental_material/paper_examples/","page":"Paper examples","title":"Paper examples","text":"println(\"Transpose example\")\nX = Variable(5, 5);\nA = randn(5, 5);\n@time begin\n p = minimize(norm2(X - A), X' == X);\nend\n@time solve!(p, () -> ECOS.Optimizer(verbose=0))\n\nn = 3\nA = randn(n, n);\n#@time begin\n X = Variable(n, n);\n p = minimize(norm(vec(X' - A)), X[1,1] == 1);\n solve!(p, () -> ECOS.Optimizer(verbose=0))\n#end","category":"page"},{"location":"examples/supplemental_material/paper_examples/","page":"Paper examples","title":"Paper examples","text":"","category":"page"},{"location":"examples/supplemental_material/paper_examples/","page":"Paper 
examples","title":"Paper examples","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/supplemental_material/paper_examples/","page":"Paper examples","title":"Paper examples","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example paper_examples after \" * elapsed","category":"page"},{"location":"examples/general_examples/max_entropy/","page":"Entropy Maximization","title":"Entropy Maximization","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/general_examples/max_entropy/","page":"Entropy Maximization","title":"Entropy Maximization","text":"__START_TIME = time_ns()\n@info \"Starting example max_entropy\"","category":"page"},{"location":"examples/general_examples/max_entropy/","page":"Entropy Maximization","title":"Entropy Maximization","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/general_examples/max_entropy.jl\"","category":"page"},{"location":"examples/general_examples/max_entropy/#Entropy-Maximization","page":"Entropy Maximization","title":"Entropy Maximization","text":"","category":"section"},{"location":"examples/general_examples/max_entropy/","page":"Entropy Maximization","title":"Entropy Maximization","text":"Here is a constrained entropy maximization problem:","category":"page"},{"location":"examples/general_examples/max_entropy/","page":"Entropy Maximization","title":"Entropy Maximization","text":"beginarrayll\n textmaximize -sum_i=1^n x_i log x_i \n textsubject to mathbf1 x = 1 \n Ax leq b\nendarray","category":"page"},{"location":"examples/general_examples/max_entropy/","page":"Entropy Maximization","title":"Entropy Maximization","text":"where x in mathbfR^n is our optimization variable and A in mathbfR^m times n b in mathbfR^m.","category":"page"},{"location":"examples/general_examples/max_entropy/","page":"Entropy 
Maximization","title":"Entropy Maximization","text":"To solve this, we can simply use the entropy operation Convex.jl provides.","category":"page"},{"location":"examples/general_examples/max_entropy/","page":"Entropy Maximization","title":"Entropy Maximization","text":"using Convex, SCS\n\nn = 25;\nm = 15;\nA = randn(m, n);\nb = rand(m, 1);\n\nx = Variable(n);\nproblem = maximize(entropy(x), sum(x) == 1, A * x <= b)\nsolve!(problem, () -> SCS.Optimizer(verbose=false))\nproblem.optval","category":"page"},{"location":"examples/general_examples/max_entropy/","page":"Entropy Maximization","title":"Entropy Maximization","text":"evaluate(x)","category":"page"},{"location":"examples/general_examples/max_entropy/","page":"Entropy Maximization","title":"Entropy Maximization","text":"","category":"page"},{"location":"examples/general_examples/max_entropy/","page":"Entropy Maximization","title":"Entropy Maximization","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/general_examples/max_entropy/","page":"Entropy Maximization","title":"Entropy Maximization","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example max_entropy after \" * elapsed","category":"page"},{"location":"examples/general_examples/logistic_regression/","page":"Logistic regression","title":"Logistic regression","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/general_examples/logistic_regression/","page":"Logistic regression","title":"Logistic regression","text":"__START_TIME = time_ns()\n@info \"Starting example logistic_regression\"","category":"page"},{"location":"examples/general_examples/logistic_regression/","page":"Logistic regression","title":"Logistic regression","text":"EditURL = 
\"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/general_examples/logistic_regression.jl\"","category":"page"},{"location":"examples/general_examples/logistic_regression/#Logistic-regression","page":"Logistic regression","title":"Logistic regression","text":"","category":"section"},{"location":"examples/general_examples/logistic_regression/","page":"Logistic regression","title":"Logistic regression","text":"using DataFrames\nusing Plots\nusing RDatasets\nusing Convex\nusing SCS","category":"page"},{"location":"examples/general_examples/logistic_regression/","page":"Logistic regression","title":"Logistic regression","text":"This is an example logistic regression using RDatasets' iris data. Our goal is to predict whether the iris species is versicolor using the sepal length and width and petal length and width.","category":"page"},{"location":"examples/general_examples/logistic_regression/","page":"Logistic regression","title":"Logistic regression","text":"iris = dataset(\"datasets\", \"iris\");\niris[1:10,:]","category":"page"},{"location":"examples/general_examples/logistic_regression/","page":"Logistic regression","title":"Logistic regression","text":"We'll define Y as the outcome variable: +1 for versicolor, -1 otherwise.","category":"page"},{"location":"examples/general_examples/logistic_regression/","page":"Logistic regression","title":"Logistic regression","text":"Y = [species == \"versicolor\" ? 
1.0 : -1.0 for species in iris.Species]","category":"page"},{"location":"examples/general_examples/logistic_regression/","page":"Logistic regression","title":"Logistic regression","text":"We'll create our data matrix with one column for each feature (first column corresponds to offset).","category":"page"},{"location":"examples/general_examples/logistic_regression/","page":"Logistic regression","title":"Logistic regression","text":"X = hcat(ones(size(iris, 1)), iris.SepalLength, iris.SepalWidth, iris.PetalLength, iris.PetalWidth);\nnothing #hide","category":"page"},{"location":"examples/general_examples/logistic_regression/","page":"Logistic regression","title":"Logistic regression","text":"Now to solve the logistic regression problem.","category":"page"},{"location":"examples/general_examples/logistic_regression/","page":"Logistic regression","title":"Logistic regression","text":"n, p = size(X)\nbeta = Variable(p)\nproblem = minimize(logisticloss(-Y.*(X*beta)))\nsolve!(problem, () -> SCS.Optimizer(verbose=false))","category":"page"},{"location":"examples/general_examples/logistic_regression/","page":"Logistic regression","title":"Logistic regression","text":"Let's see how well the model fits.","category":"page"},{"location":"examples/general_examples/logistic_regression/","page":"Logistic regression","title":"Logistic regression","text":"using Plots\nlogistic(x::Real) = inv(exp(-x) + one(x))\nperm = sortperm(vec(X*evaluate(beta)))\nplot(1:n, (Y[perm] .+ 1)/2, st=:scatter)\nplot!(1:n, logistic.(X*evaluate(beta))[perm])","category":"page"},{"location":"examples/general_examples/logistic_regression/","page":"Logistic regression","title":"Logistic regression","text":"","category":"page"},{"location":"examples/general_examples/logistic_regression/","page":"Logistic regression","title":"Logistic regression","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/general_examples/logistic_regression/","page":"Logistic 
regression","title":"Logistic regression","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example logistic_regression after \" * elapsed","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"__START_TIME = time_ns()\n@info \"Starting example portfolio_optimization2\"","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/portfolio_optimization/portfolio_optimization2.jl\"","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/#Portfolio-Optimization-Markowitz-Efficient-Frontier","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"","category":"section"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"In this problem, we will find the unconstrained portfolio allocation where we introduce the weighting parameter lambda (0 leq lambda leq 1) and minimize lambda * textrisk - (1-lambda)* textexpected return. 
By varying the values of lambda, we trace out the efficient frontier.","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"Suppose that we know the mean returns mu in mathbfR^n of each asset and the covariance Sigma in mathbfR^n times n between the assets. Our objective is to find a portfolio allocation that simultaneously minimizes the risk (which we measure as the variance w^T Sigma w) and maximizes the expected return (w^T mu) of the portfolio. We require w in mathbfR^n and sum_i w_i = 1.","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"This problem can be written as","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"beginarrayll\n textminimize lambda*w^T Sigma w - (1-lambda)*w^T mu \n textsubject to sum_i w_i = 1\nendarray","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"where w in mathbfR^n is the vector containing weights allocated to each asset.","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"using Convex, SCS #We are using SCS solver. 
Install using Pkg.add(\"SCS\")\n\n# generate problem data\nμ = [11.5; 9.5; 6]/100 #expected returns\nΣ = [166 34 58; #covariance matrix\n 34 64 4;\n 58 4 100]/100^2\n\nn = length(μ) #number of assets","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"If you want to try the optimization with more assets, uncomment and run the next cell. It creates a vector of average returns and a variance-covariance matrix that have scales similar to the numbers above.","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"using Random Random.seed!(123)","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"n = 15 #number of assets, CHANGE IT?","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"μ = (6 .+ (11.5-6)*rand(n))/100 #mean A = randn(n,n) Σ = (A * A' + diagm(0=>rand(n)))/500; #covariance matrix","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"First we solve without any bounds on w","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"N = 101\nλ_vals = 
range(0.01,stop=0.99,length=N)\n\nw = Variable(n)\nret = dot(w,μ)\nrisk = quadform(w,Σ)\n\nMeanVarA = zeros(N,2)\nfor i = 1:N\n λ = λ_vals[i]\n p = minimize( λ*risk - (1-λ)*ret,\n sum(w) == 1 )\n solve!(p, () -> SCS.Optimizer(verbose = false))\n MeanVarA[i,:]= [evaluate(ret),evaluate(risk)]\nend","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"Now we solve with the bounds 0le w_i le 1","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"w_lower = 0 #bounds on w\nw_upper = 1\n\nMeanVarB = zeros(N,2) #repeat, but with 0<w[i]<1\nfor i = 1:N\n λ = λ_vals[i]\n p = minimize( λ*risk - (1-λ)*ret,\n sum(w) == 1,\n w_lower <= w, #w[i] is bounded\n w <= w_upper )\n solve!(p, () -> SCS.Optimizer(verbose = false))\n MeanVarB[i,:]= [evaluate(ret),evaluate(risk)]\nend","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"using Plots\nplot( sqrt.([MeanVarA[:,2] MeanVarB[:,2]]),\n [MeanVarA[:,1] MeanVarB[:,1]],\n xlim = (0,0.25),\n ylim = (0,0.15),\n title = \"Markowitz Efficient Frontier\",\n xlabel = \"Standard deviation\",\n ylabel = \"Expected return\",\n label = [\"no bounds on w\" \"with 0<w<1\"])\nscatter!(sqrt.(diag(Σ)),μ,color=:red,label = \"assets\")","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"We now instead impose a restriction on sum_i w_i - 1, allowing for varying degrees of 
\"leverage\".","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"Lmax = 0.5\n\nMeanVarC = zeros(N,2) #repeat, but with restriction on Sum(|w[i]|)\nfor i = 1:N\n λ = λ_vals[i]\n p = minimize( λ*risk - (1-λ)*ret,\n sum(w) == 1,\n (norm(w, 1)-1) <= Lmax)\n solve!(p, () -> SCS.Optimizer(verbose = false))\n MeanVarC[i,:]= [evaluate(ret),evaluate(risk)]\nend","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"plot( sqrt.([MeanVarA[:,2] MeanVarB[:,2] MeanVarC[:,2]]),\n [MeanVarA[:,1] MeanVarB[:,1] MeanVarC[:,1]],\n xlim = (0,0.25),\n ylim = (0,0.15),\n title = \"Markowitz Efficient Frontier\",\n xlabel = \"Standard deviation\",\n ylabel = \"Expected return\",\n label = [\"no bounds on w\" \"with 0<w<1\" \"restriction on sum(|w|)\"])\nscatter!(sqrt.(diag(Σ)),μ,color=:red,label = \"assets\")","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/portfolio_optimization/portfolio_optimization2/","page":"Portfolio Optimization - Markowitz Efficient Frontier","title":"Portfolio Optimization - Markowitz Efficient Frontier","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example 
portfolio_optimization2 after \" * elapsed","category":"page"},{"location":"examples/optimization_with_complex_variables/power_flow_optimization/","page":"Power flow optimization","title":"Power flow optimization","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/optimization_with_complex_variables/power_flow_optimization/","page":"Power flow optimization","title":"Power flow optimization","text":"__START_TIME = time_ns()\n@info \"Starting example power_flow_optimization\"","category":"page"},{"location":"examples/optimization_with_complex_variables/power_flow_optimization/","page":"Power flow optimization","title":"Power flow optimization","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/optimization_with_complex_variables/power_flow_optimization.jl\"","category":"page"},{"location":"examples/optimization_with_complex_variables/power_flow_optimization/#Power-flow-optimization","page":"Power flow optimization","title":"Power flow optimization","text":"","category":"section"},{"location":"examples/optimization_with_complex_variables/power_flow_optimization/","page":"Power flow optimization","title":"Power flow optimization","text":"The data for this example is taken from the MATPOWER website. 
MATPOWER is a Matlab package for solving power flow and optimal power flow problems.","category":"page"},{"location":"examples/optimization_with_complex_variables/power_flow_optimization/","page":"Power flow optimization","title":"Power flow optimization","text":"using Convex, SCS\nusing Test\nusing MAT #Pkg.add(\"MAT\")\naux(str) = joinpath(@__DIR__, \"aux_files\", str) # path to auxiliary files\n\nTOL = 1e-2;\ninput = matopen(aux(\"Data.mat\"))\nvarnames = names(input)\nData = read(input, \"inj\", \"Y\");\n\nn = size(Data[2], 1);\nY = Data[2];\ninj = Data[1];\nW = ComplexVariable(n, n);\nobjective = real(sum(diag(W)));\nc1 = Constraint[];\nfor i = 2:n\n push!(c1, sum(W[i,:] .* (Y[i,:]')) == inj[i]);\nend\nc2 = W in :SDP\nc3 = real(W[1,1]) == 1.06^2;\npush!(c1, c2)\npush!(c1, c3)\np = maximize(objective, c1);\nsolve!(p, () -> SCS.Optimizer(verbose = 0))\np.optval\n#15.125857662600703\nevaluate(objective)\n#15.1258578588357\n\n\noutput = matopen(\"Res.mat\")\nnames(output)\noutputData = read(output, \"Wres\");\nWres = outputData\nreal_diff = real(evaluate(W)) - real(Wres);\nimag_diff = imag(evaluate(W)) - imag(Wres);\n@test real_diff ≈ zeros(n, n) atol = TOL\n@test imag_diff ≈ zeros(n, n) atol = TOL\n\n# check W is Hermitian: real part symmetric, imaginary part antisymmetric\nreal_diff = real(evaluate(W)) - (real(evaluate(W)))';\nimag_sum = imag(evaluate(W)) + (imag(evaluate(W)))';\n@test real_diff ≈ zeros(n, n) atol = TOL\n@test imag_sum ≈ zeros(n, n) atol = TOL","category":"page"},{"location":"examples/optimization_with_complex_variables/power_flow_optimization/","page":"Power flow optimization","title":"Power flow optimization","text":"","category":"page"},{"location":"examples/optimization_with_complex_variables/power_flow_optimization/","page":"Power flow optimization","title":"Power flow optimization","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/optimization_with_complex_variables/power_flow_optimization/","page":"Power flow optimization","title":"Power flow 
optimization","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example power_flow_optimization after \" * elapsed","category":"page"},{"location":"examples/general_examples/basic_usage/","page":"Basic Usage","title":"Basic Usage","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/general_examples/basic_usage/","page":"Basic Usage","title":"Basic Usage","text":"__START_TIME = time_ns()\n@info \"Starting example basic_usage\"","category":"page"},{"location":"examples/general_examples/basic_usage/","page":"Basic Usage","title":"Basic Usage","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/general_examples/basic_usage.jl\"","category":"page"},{"location":"examples/general_examples/basic_usage/#Basic-Usage","page":"Basic Usage","title":"Basic Usage","text":"","category":"section"},{"location":"examples/general_examples/basic_usage/","page":"Basic Usage","title":"Basic Usage","text":"using Convex\nusing LinearAlgebra\nif VERSION < v\"1.2.0-DEV.0\"\n (I::UniformScaling)(n::Integer) = Diagonal(fill(I.λ, n))\nend\n\nusing SCS\n# passing in verbose=0 to hide output from SCS\nsolver = () -> SCS.Optimizer(verbose=0)","category":"page"},{"location":"examples/general_examples/basic_usage/#Linear-program","page":"Basic Usage","title":"Linear program","text":"","category":"section"},{"location":"examples/general_examples/basic_usage/","page":"Basic Usage","title":"Basic Usage","text":"beginarrayll\n textmaximize c^T x \n textsubject to A x leq b\n x geq 1 \n x leq 10 \n x_2 leq 5 \n x_1 + x_4 - x_2 leq 10 \nendarray","category":"page"},{"location":"examples/general_examples/basic_usage/","page":"Basic Usage","title":"Basic Usage","text":"x = Variable(4)\nc = [1; 2; 3; 4]\nA = I(4)\nb = [10; 10; 10; 10]\np = minimize(dot(c, x)) # or c' * x\np.constraints += A * x <= b\np.constraints += [x >= 1; x 
<= 10; x[2] <= 5; x[1] + x[4] - x[2] <= 10]\nsolve!(p, solver)\n\nprintln(round(p.optval, digits=2))\nprintln(round.(evaluate(x), digits=2))\nprintln(evaluate(x[1] + x[4] - x[2]))","category":"page"},{"location":"examples/general_examples/basic_usage/#Matrix-Variables-and-promotions","page":"Basic Usage","title":"Matrix Variables and promotions","text":"","category":"section"},{"location":"examples/general_examples/basic_usage/","page":"Basic Usage","title":"Basic Usage","text":"beginarrayll\n textminimize X _F + y \n textsubject to 2 X leq 1\n X + y geq 1 \n X geq 0 \n y geq 0 \nendarray","category":"page"},{"location":"examples/general_examples/basic_usage/","page":"Basic Usage","title":"Basic Usage","text":"X = Variable(2, 2)\ny = Variable()\n# X is a 2 x 2 variable, and y is scalar. X' + y promotes y to a 2 x 2 variable before adding them\np = minimize(norm(vec(X)) + y, 2 * X <= 1, X' + y >= 1, X >= 0, y >= 0)\nsolve!(p, solver)\nprintln(round.(evaluate(X), digits=2))\nprintln(evaluate(y))\np.optval","category":"page"},{"location":"examples/general_examples/basic_usage/#Norm,-exponential-and-geometric-mean","page":"Basic Usage","title":"Norm, exponential and geometric mean","text":"","category":"section"},{"location":"examples/general_examples/basic_usage/","page":"Basic Usage","title":"Basic Usage","text":"beginarrayll\n textsatisfy x _2 leq 100 \n e^x_1 leq 5 \n x_2 geq 7 \n sqrtx_3 x_4 geq x_2\nendarray","category":"page"},{"location":"examples/general_examples/basic_usage/","page":"Basic Usage","title":"Basic Usage","text":"x = Variable(4)\np = satisfy(norm(x) <= 100, exp(x[1]) <= 5, x[2] >= 7, geomean(x[3], x[4]) >= x[2])\nsolve!(p, solver)\nprintln(p.status)\nevaluate(x)","category":"page"},{"location":"examples/general_examples/basic_usage/#SDP-cone-and-Eigenvalues","page":"Basic Usage","title":"SDP cone and Eigenvalues","text":"","category":"section"},{"location":"examples/general_examples/basic_usage/","page":"Basic Usage","title":"Basic 
Usage","text":"y = Semidefinite(2)\np = maximize(eigmin(y), tr(y)<=6)\nsolve!(p, solver)\np.optval","category":"page"},{"location":"examples/general_examples/basic_usage/","page":"Basic Usage","title":"Basic Usage","text":"x = Variable()\ny = Variable((2, 2))\n# SDP constraints\np = minimize(x + y[1, 1], isposdef(y), x >= 1, y[2, 1] == 1)\nsolve!(p, solver)\nevaluate(y)","category":"page"},{"location":"examples/general_examples/basic_usage/#Mixed-integer-program","page":"Basic Usage","title":"Mixed integer program","text":"","category":"section"},{"location":"examples/general_examples/basic_usage/","page":"Basic Usage","title":"Basic Usage","text":"beginarrayll\n textminimize sum_i=1^n x_i \n textsubject to x in mathbbZ^n \n x geq 05 \nendarray","category":"page"},{"location":"examples/general_examples/basic_usage/","page":"Basic Usage","title":"Basic Usage","text":"using GLPK\nx = Variable(4, :Int)\np = minimize(sum(x), x >= 0.5)\nsolve!(p, GLPK.Optimizer)\nevaluate(x)","category":"page"},{"location":"examples/general_examples/basic_usage/","page":"Basic Usage","title":"Basic Usage","text":"","category":"page"},{"location":"examples/general_examples/basic_usage/","page":"Basic Usage","title":"Basic Usage","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/general_examples/basic_usage/","page":"Basic Usage","title":"Basic Usage","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example basic_usage after \" * elapsed","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"__START_TIME = time_ns()\n@info \"Starting example 
time_series\"","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/time_series/time_series.jl\"","category":"page"},{"location":"examples/time_series/time_series/#Time-Series-Analysis","page":"Time Series Analysis","title":"Time Series Analysis","text":"","category":"section"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"A time series is a sequence of data points, each associated with a time. In our example, we will work with a time series of daily temperatures in the city of Melbourne, Australia over a period of a few years. Let x be the vector of the time series, and x_i denote the temperature in Melbourne on day i. Here is a picture of the time series:","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"using Plots, Convex, ECOS, DelimitedFiles\naux(str) = joinpath(@__DIR__, \"aux_files\", str) # path to auxiliary files\n\ntemps = readdlm(aux(\"melbourne_temps.txt\"), ',')\nn = size(temps, 1)\nplot(1:n, temps[1:n], ylabel=\"Temperature (°C)\", label=\"data\", xlabel = \"Time (days)\", xticks=0:365:n)","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"We can quickly compute the mean of the time series to be 11.2. If we were to always guess the mean as the temperature of Melbourne on a given day, the RMS error of our guesswork would be 4.1. 
We'll try to lower this RMS error by coming up with better ways to model the temperature than guessing the mean.","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"A simple way to model this time series would be to find a smooth curve that approximates the yearly ups and downs. We can represent this model as a vector s where s_i denotes the temperature on the i-th day. To force this trend to repeat yearly, we simply want","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":" s_i = s_i + 365","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"for each applicable i.","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"We also want our model to have two more properties:","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"The first is that the temperature on each day in our model should be relatively close to the actual temperature of that day.\nThe second is that our model needs to be smooth, so the change in temperature from day to day should be relatively small. The following objective would capture both properties:","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":" sum_i = 1^n (s_i - x_i)^2 + lambda sum_i = 2^n(s_i - s_i - 1)^2","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"where lambda is the smoothing parameter. 
The larger lambda is, the smoother our model will be.","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"The following code uses Convex to find and plot the model:","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"yearly = Variable(n)\neq_constraints = [ yearly[i] == yearly[i - 365] for i in 365 + 1 : n ]\n\nsmoothing = 100\nsmooth_objective = sumsquares(yearly[1 : n - 1] - yearly[2 : n])\nproblem = minimize(sumsquares(temps - yearly) + smoothing * smooth_objective, eq_constraints);\nsolve!(problem, () -> ECOS.Optimizer(maxit=200, verbose=0))\nresiduals = temps - evaluate(yearly)\n\n# Plot smooth fit\nplot(1:n, temps[1:n], label=\"data\")\nplot!(1:n, evaluate(yearly)[1:n], linewidth=2, label=\"smooth fit\", ylabel=\"Temperature (°C)\", xticks=0:365:n, xlabel=\"Time (days)\")","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"We can also plot the residual temperatures, r, defined as r = x - s.","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"# Plot residuals for a few days\nplot(1:100, residuals[1:100], ylabel=\"Residuals\", xlabel=\"Time (days)\")","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"root_mean_square_error = sqrt(sum( x -> x^2, residuals) / length(residuals))","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"Our smooth model has an RMS error of 2.7, a significant improvement from just guessing the mean, but we can do better.","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series 
Analysis","text":"We now make the hypothesis that the residual temperature on a given day is some linear combination of the previous 5 days. Such a model is called autoregressive. We are essentially trying to fit the residuals as a function of other parts of the data itself. We want to find a vector of coefficients a such that","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":" textr(i) approx sum_j = 1^5 a_j textr(i - j)","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"This can be done by simply minimizing the following sum of squares objective","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":" sum_i = 6^n left(textr(i) - sum_j = 1^5 a_j textr(i - j)right)^2","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"The following Convex code solves this problem and plots our autoregressive model against the actual residual temperatures:","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"# Generate the residuals matrix\nar_len = 5\n\nresiduals_mat = Matrix{Float64}(undef, length(residuals) - ar_len, ar_len)\nfor i = 1:ar_len\n residuals_mat[:, i] = residuals[ar_len - i + 1 : n - i]\nend\n\n# Solve autoregressive problem\nar_coef = Variable(ar_len)\nproblem = minimize(sumsquares(residuals_mat * ar_coef - residuals[ar_len + 1 : end]))\nsolve!(problem, () -> ECOS.Optimizer(max_iters=200, verbose=0))\n\n# plot autoregressive fit of daily fluctuations for a few days\nar_range = 1:145\nday_range = ar_range .+ ar_len\nplot(day_range, residuals[day_range], label=\"fluctuations from smooth fit\", ylabel=\"Temperature difference (°C)\")\nplot!(day_range, 
residuals_mat[ar_range, :] * evaluate(ar_coef), label=\"autoregressive estimate\", xlabel=\"Time (days)\")","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"Now, we can add our autoregressive model for the residual temperatures to our smooth model to get a better-fitting model for the daily temperatures in the city of Melbourne:","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"total_estimate = evaluate(yearly)\ntotal_estimate[ar_len + 1 : end] += residuals_mat * evaluate(ar_coef)","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"We can plot the final fit of data across the whole time range:","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"plot(1:n, temps, label=\"data\", ylabel=\"Temperature (°C)\")\nplot!(1:n, total_estimate, label=\"estimate\", xticks=0:365:n, xlabel=\"Time (days)\")","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"The RMS error of this final model is approximately 2.3:","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"root_mean_square_error = sqrt(sum( x -> x^2, total_estimate - temps) / length(temps))","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series Analysis","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/time_series/time_series/","page":"Time Series Analysis","title":"Time Series 
Analysis","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example time_series after \" * elapsed","category":"page"},{"location":"examples/general_examples/robust_approx_fitting/","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/general_examples/robust_approx_fitting/","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"__START_TIME = time_ns()\n@info \"Starting example robust_approx_fitting\"","category":"page"},{"location":"examples/general_examples/robust_approx_fitting/","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/general_examples/robust_approx_fitting.jl\"","category":"page"},{"location":"examples/general_examples/robust_approx_fitting/#Robust-approximate-fitting","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"","category":"section"},{"location":"examples/general_examples/robust_approx_fitting/","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"Section 6.4.2 Boyd & Vandenberghe \"Convex Optimization\" Original by Lieven Vandenberghe Adapted for Convex by Joelle Skaf - 10/03/05","category":"page"},{"location":"examples/general_examples/robust_approx_fitting/","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"Adapted for Convex.jl by Karanveer Mohan and David Zeng - 26/05/14 Original cvx code and plots here: http://web.cvxr.com/cvx/examples/cvxbook/Ch06_approx_fitting/html/fig6_15.html","category":"page"},{"location":"examples/general_examples/robust_approx_fitting/","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"Consider the least-squares problem: minimize (A + tB)x - 
b_2 where t is an uncertain parameter in [-1,1]. Three approximate solutions are found:","category":"page"},{"location":"examples/general_examples/robust_approx_fitting/","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"nominal optimal (i.e. letting t=0)\nstochastic robust approximation: minimize mathbbE(A+tB)x - b_2 assuming t is uniformly distributed on [-1,1]. (reduces to minimizing mathbbE (A+tB)x-b^2 = A*x-b^2 + x^TPx where P = mathbbE(t^2) B^TB = (13) B^TB )\nworst-case robust approximation: minimize mathrmsup_-1leq tleq 1 (A+tB)x - b_2 (reduces to minimizing max(A-B)x - b_2 (A+B)x - b_2 ).","category":"page"},{"location":"examples/general_examples/robust_approx_fitting/","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"using Convex, LinearAlgebra, SCS\nif VERSION < v\"1.2.0-DEV.0\"\n LinearAlgebra.diagm(v::AbstractVector) = diagm(0 => v)\nend","category":"page"},{"location":"examples/general_examples/robust_approx_fitting/","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"Input Data","category":"page"},{"location":"examples/general_examples/robust_approx_fitting/","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"m = 20;\nn = 10;\nA = randn(m,n);\n(U,S,V) = svd(A);\nS = diagm(exp10.(range(-1, stop=1, length=n)));\nA = U[:, 1:n] * S * V';\n\nB = randn(m, n);\nB = B / norm(B);\n\nb = randn(m, 1);\nx = Variable(n)","category":"page"},{"location":"examples/general_examples/robust_approx_fitting/","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"Case 1: Nominal optimal solution","category":"page"},{"location":"examples/general_examples/robust_approx_fitting/","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"p = minimize(norm(A * x - b, 2))\nsolve!(p, () -> SCS.Optimizer(verbose=0))\nx_nom = 
evaluate(x)","category":"page"},{"location":"examples/general_examples/robust_approx_fitting/","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"Case 2: Stochastic robust approximation","category":"page"},{"location":"examples/general_examples/robust_approx_fitting/","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"P = 1 / 3 * B' * B;\np = minimize(square(pos(norm(A * x - b))) + quadform(x, Symmetric(P)))\nsolve!(p, () -> SCS.Optimizer(verbose=0))\nx_stoch = evaluate(x)","category":"page"},{"location":"examples/general_examples/robust_approx_fitting/","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"Case 3: Worst-case robust approximation","category":"page"},{"location":"examples/general_examples/robust_approx_fitting/","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"p = minimize(max(norm((A - B) * x - b), norm((A + B) * x - b)))\nsolve!(p, () -> SCS.Optimizer(verbose=0))\nx_wc = evaluate(x)","category":"page"},{"location":"examples/general_examples/robust_approx_fitting/","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"Plot residuals:","category":"page"},{"location":"examples/general_examples/robust_approx_fitting/","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"parvals = range(-2, stop=2, length=100);\n\nerrvals(x) = [ norm((A + parvals[k] * B) * x - b) for k = eachindex(parvals)]\nerrvals_ls = errvals(x_nom)\nerrvals_stoch = errvals(x_stoch)\nerrvals_wc = errvals(x_wc)\n\n\nusing Plots\nplot(parvals, errvals_ls, label=\"Nominal problem\")\nplot!(parvals, errvals_stoch, label=\"Stochastic Robust Approximation\")\nplot!(parvals, errvals_wc, label=\"Worst-Case Robust Approximation\")\nplot!(title=\"Residual r(u) vs a parameter u for three approximate solutions\", xlabel=\"u\", ylabel=\"r(u) = 
||A(u)x-b||_2\")","category":"page"},{"location":"examples/general_examples/robust_approx_fitting/","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"","category":"page"},{"location":"examples/general_examples/robust_approx_fitting/","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/general_examples/robust_approx_fitting/","page":"Robust approximate fitting","title":"Robust approximate fitting","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example robust_approx_fitting after \" * elapsed","category":"page"},{"location":"#Convex.jl-Convex-Optimization-in-Julia","page":"Home","title":"Convex.jl - Convex Optimization in Julia","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Convex.jl is a Julia package for Disciplined Convex Programming (DCP). Convex.jl makes it easy to describe optimization problems in a natural, mathematical syntax, and to solve those problems using a variety of different (commercial and open-source) solvers. Convex.jl can solve","category":"page"},{"location":"","page":"Home","title":"Home","text":"linear programs\nmixed-integer linear programs and mixed-integer second-order cone programs\ndcp-compliant convex programs including\nsecond-order cone programs (SOCP)\nexponential cone programs\nsemidefinite programs (SDP)","category":"page"},{"location":"","page":"Home","title":"Home","text":"Convex.jl supports many solvers, including COSMO, Mosek, Gurobi, ECOS, SCS and GLPK, through MathOptInterface.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Note that Convex.jl was previously called CVX.jl. This package is under active development; we welcome bug reports and feature requests. 
For usage questions, please contact us via the Julia Discourse.","category":"page"},{"location":"#Extended-formulations-and-the-DCP-ruleset","page":"Home","title":"Extended formulations and the DCP ruleset","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Convex.jl works by transforming the problem—which possibly has nonsmooth, nonlinear constructions like the nuclear norm, the log determinant, and so forth—into a linear optimization problem subject to conic constraints. This reformulation often involves adding auxiliary variables, and is called an \"extended formulation\", since the original problem has been extended with additional variables. These formulations rely on the problem being modelled by combining Convex.jl's \"atoms\" or primitives according to certain rules which ensure convexity, called the disciplined convex programming (DCP) ruleset. If these atoms are combined in a way that does not ensure convexity, the extended formulations are often invalid. As a simple example, consider the problem","category":"page"},{"location":"","page":"Home","title":"Home","text":"minimize( abs(x), x >= 1, x <= 2)","category":"page"},{"location":"","page":"Home","title":"Home","text":"Obviously, the optimum occurs at x=1, but let us imagine we want to solve this problem via Convex.jl using a linear programming (LP) solver. Since abs is a nonlinear function, we need to reformulate the problem to pass it to the LP solver. We do this by introducing an auxiliary variable t and instead solving","category":"page"},{"location":"","page":"Home","title":"Home","text":"minimize(t, x >= 1, x <= 2, t >= x, t >= -x)","category":"page"},{"location":"","page":"Home","title":"Home","text":"That is, we add the constraints t >= x and t >= -x, and replace abs(x) by t. Since we are minimizing over t and the smallest possible t satisfying these constraints is the absolute value of x, we get the right answer. 
That is, this reformulation worked because we were minimizing abs(x), and that is a valid way to use the primitive abs.","category":"page"},{"location":"","page":"Home","title":"Home","text":"If we were maximizing abs, Convex.jl would print","category":"page"},{"location":"","page":"Home","title":"Home","text":"Warning: Problem not DCP compliant: objective is not DCP","category":"page"},{"location":"","page":"Home","title":"Home","text":"Why? Well, let us consider the same reformulation for a maximization problem. The original problem is now","category":"page"},{"location":"","page":"Home","title":"Home","text":"maximize( abs(x), x >= 1, x <= 2)","category":"page"},{"location":"","page":"Home","title":"Home","text":"and trivially the optimum is 2, obtained at x=2. If we do the same replacements as above, however, we arrive at the problem","category":"page"},{"location":"","page":"Home","title":"Home","text":"maximize(t, x >= 1, x <= 2, t >= x, t >= -x)","category":"page"},{"location":"","page":"Home","title":"Home","text":"whose solution is infinity. In other words, we got the wrong answer by using the reformulation, since the extended formulation was only valid for a minimization problem. Convex.jl always performs these reformulations, but they are only guaranteed to be valid when the DCP ruleset is followed. Therefore, Convex.jl programmatically checks whether these rules were satisfied and warns if they were not. 
One should not take these DCP warnings lightly!","category":"page"},{"location":"examples/general_examples/chebyshev_center/","page":"Chebyshev center","title":"Chebyshev center","text":"All of the examples can be found in Jupyter notebook form here.","category":"page"},{"location":"examples/general_examples/chebyshev_center/","page":"Chebyshev center","title":"Chebyshev center","text":"__START_TIME = time_ns()\n@info \"Starting example chebyshev_center\"","category":"page"},{"location":"examples/general_examples/chebyshev_center/","page":"Chebyshev center","title":"Chebyshev center","text":"EditURL = \"https://github.com/jump-dev/Convex.jl/blob/master/docs/examples_literate/general_examples/chebyshev_center.jl\"","category":"page"},{"location":"examples/general_examples/chebyshev_center/#Chebyshev-center","page":"Chebyshev center","title":"Chebyshev center","text":"","category":"section"},{"location":"examples/general_examples/chebyshev_center/","page":"Chebyshev center","title":"Chebyshev center","text":"Boyd & Vandenberghe, \"Convex Optimization\" Joëlle Skaf - 08/16/05","category":"page"},{"location":"examples/general_examples/chebyshev_center/","page":"Chebyshev center","title":"Chebyshev center","text":"Adapted for Convex.jl by Karanveer Mohan and David Zeng - 26/05/14","category":"page"},{"location":"examples/general_examples/chebyshev_center/","page":"Chebyshev center","title":"Chebyshev center","text":"The goal is to find the largest Euclidean ball (i.e. 
its center and radius) that lies in a polyhedron described by affine inequalities in this fashion: P = x a_i*x leq b_i i=1ldotsm where x in mathbbR^2.","category":"page"},{"location":"examples/general_examples/chebyshev_center/","page":"Chebyshev center","title":"Chebyshev center","text":"using Convex, LinearAlgebra, SCS","category":"page"},{"location":"examples/general_examples/chebyshev_center/","page":"Chebyshev center","title":"Chebyshev center","text":"Generate the input data","category":"page"},{"location":"examples/general_examples/chebyshev_center/","page":"Chebyshev center","title":"Chebyshev center","text":"a1 = [ 2; 1];\na2 = [ 2; -1];\na3 = [-1; 2];\na4 = [-1; -2];\nb = ones(4, 1);\nnothing #hide","category":"page"},{"location":"examples/general_examples/chebyshev_center/","page":"Chebyshev center","title":"Chebyshev center","text":"Create and solve the model","category":"page"},{"location":"examples/general_examples/chebyshev_center/","page":"Chebyshev center","title":"Chebyshev center","text":"r = Variable(1)\nx_c = Variable(2)\np = maximize(r)\np.constraints += a1' * x_c + r * norm(a1, 2) <= b[1];\np.constraints += a2' * x_c + r * norm(a2, 2) <= b[2];\np.constraints += a3' * x_c + r * norm(a3, 2) <= b[3];\np.constraints += a4' * x_c + r * norm(a4, 2) <= b[4];\nsolve!(p, () -> SCS.Optimizer(verbose=0))\np.optval","category":"page"},{"location":"examples/general_examples/chebyshev_center/","page":"Chebyshev center","title":"Chebyshev center","text":"Generate the figure","category":"page"},{"location":"examples/general_examples/chebyshev_center/","page":"Chebyshev center","title":"Chebyshev center","text":"x = range(-1.5, stop=1.5, length=100);\ntheta = 0:pi/100:2*pi;\nusing Plots\nplot(x, x -> -x * a1[1] / a1[2] + b[1] / a1[2])\nplot!(x, x -> -x * a2[1]/ a2[2] + b[2] / a2[2])\nplot!(x, x -> -x * a3[1]/ a3[2] + b[3] / a3[2])\nplot!(x, x -> -x * a4[1]/ a4[2] + b[4] / a4[2])\nplot!(evaluate(x_c)[1] .+ evaluate(r) * cos.(theta), evaluate(x_c)[2] .+ 
evaluate(r) * sin.(theta), linewidth = 2)\nplot!(title =\"Largest Euclidean ball lying in a 2D polyhedron\", legend = nothing)","category":"page"},{"location":"examples/general_examples/chebyshev_center/","page":"Chebyshev center","title":"Chebyshev center","text":"","category":"page"},{"location":"examples/general_examples/chebyshev_center/","page":"Chebyshev center","title":"Chebyshev center","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/general_examples/chebyshev_center/","page":"Chebyshev center","title":"Chebyshev center","text":"__END_TIME = time_ns()\nelapsed = string(round((__END_TIME - __START_TIME)*1e-9; sigdigits = 3), \"s\")\n@info \"Finished example chebyshev_center after \" * elapsed","category":"page"},{"location":"types/#Basic-Types","page":"Basic Types","title":"Basic Types","text":"","category":"section"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"The basic building block of Convex.jl is called an expression, which can represent a variable, a constant, or a function of another expression. We discuss each kind of expression in turn.","category":"page"},{"location":"types/#Variables","page":"Basic Types","title":"Variables","text":"","category":"section"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"The simplest kind of expression in Convex.jl is a variable. 
Variables in Convex.jl are declared using the Variable keyword, along with the dimensions of the variable.","category":"page"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"# Scalar variable\nx = Variable()\n\n# Column vector variable\nx = Variable(5)\n\n# Matrix variable\nx = Variable(4, 6)","category":"page"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"Variables may also be declared as having special properties, such as being","category":"page"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"(entrywise) positive: x = Variable(4, Positive())\n(entrywise) negative: x = Variable(4, Negative())\nintegral: x = Variable(4, IntVar)\nbinary: x = Variable(4, BinVar)\n(for a matrix) being symmetric, with nonnegative eigenvalues (ie, positive semidefinite): z = Semidefinite(4)","category":"page"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"The order of the arguments is the size, the sign, and then the Convex.VarType (i.e., integer, binary, or continuous), and any may be omitted to use the default. The current value of a variable x can be accessed with evaluate(x). 
After solve!ing a problem, the value of each variable used in the problem is set to its optimal value.","category":"page"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"See also Custom Variable Types for how to implement your own variable types.","category":"page"},{"location":"types/#Constants","page":"Basic Types","title":"Constants","text":"","category":"section"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"Numbers, vectors, and matrices present in the Julia environment are wrapped automatically into a Constant expression when used in a Convex.jl expression.","category":"page"},{"location":"types/#Expressions","page":"Basic Types","title":"Expressions","text":"","category":"section"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"Expressions in Convex.jl are formed by applying any atom (mathematical function defined in Convex.jl) to variables, constants, and other expressions. For a list of these functions, see Operations. Atoms are applied to expressions using operator overloading. For example, 2+2 calls Julia's built-in addition operator, while 2+x calls the Convex.jl addition method and returns a Convex.jl expression. Many of the useful language features in Julia, such as arithmetic, array indexing, and matrix transpose are overloaded in Convex.jl so they may be used with variables and expressions just as they are used with native Julia types.","category":"page"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"Expressions that are created must be DCP-compliant. More information on DCP can be found here. 
:","category":"page"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"x = Variable(5)\n# The following are all expressions\ny = sum(x)\nz = 4 * x + y\nz_1 = z[1]","category":"page"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"Convex.jl allows the values of the expressions to be evaluated directly.","category":"page"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"x = Variable()\ny = Variable()\nz = Variable()\nexpr = x + y + z\nproblem = minimize(expr, x >= 1, y >= x, 4 * z >= y)\nsolve!(problem, SCS.Optimizer)\n\n# Once the problem is solved, we can call evaluate() on expr:\nevaluate(expr)","category":"page"},{"location":"types/#Constraints","page":"Basic Types","title":"Constraints","text":"","category":"section"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"Constraints in Convex.jl are declared using the standard comparison operators <=, >=, and ==. They specify relations that must hold between two expressions. Convex.jl does not distinguish between strict and non-strict inequality constraints.","category":"page"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"x = Variable(5, 5)\n# Equality constraint\nconstraint = x == 0\n# Inequality constraint\nconstraint = x >= 1","category":"page"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"Matrices can also be constrained to be positive semidefinite.","category":"page"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"x = Variable(3, 3)\ny = Variable(3, 1)\nz = Variable()\n# constrain [x y; y' z] to be positive semidefinite\nconstraint = ([x y; y' z] in :SDP)\n# or equivalently,\nconstraint = ([x y; y' z] ⪰ 0)","category":"page"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"Constraints can also be added to variables after their construction, to automatically apply constraints to any problem which uses the variable. 
For example,","category":"page"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"x = Variable(3)\nadd_constraint!(x, sum(x) == 1)","category":"page"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"Now, in any problem in which x is used, the constraint sum(x) == 1 will be added.","category":"page"},{"location":"types/#Objective","page":"Basic Types","title":"Objective","text":"","category":"section"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"The objective of the problem is a scalar expression to be maximized or minimized by using maximize or minimize respectively. Feasibility problems can be expressed by either giving a constant as the objective, or using problem = satisfy(constraints).","category":"page"},{"location":"types/#Problem","page":"Basic Types","title":"Problem","text":"","category":"section"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"A problem in Convex.jl consists of a sense (minimize, maximize, or satisfy), an objective (an expression to which the sense verb is to be applied), and zero or more constraints that must be satisfied at the solution. 
Problems may be constructed as","category":"page"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"problem = minimize(objective, constraints)\n# or\nproblem = maximize(objective, constraints)\n# or\nproblem = satisfy(constraints)","category":"page"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"Constraints can be added at any time before the problem is solved.","category":"page"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"# No constraints given\nproblem = minimize(objective)\n# Add some constraint\nproblem.constraints += constraint\n# Add many more constraints\nproblem.constraints += [constraint1, constraint2, ...]","category":"page"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"A problem can be solved by calling solve!","category":"page"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"solve!(problem, solver)","category":"page"},{"location":"types/","page":"Basic Types","title":"Basic Types","text":"passing a solver such as SCS.Optimizer() from the package SCS as the second argument. After the problem is solved, problem.status records the status returned by the optimization solver, and can be :Optimal, :Infeasible, :Unbounded, :Indeterminate or :Error. If the status is :Optimal, problem.optval will record the optimum value of the problem. The optimal value for each variable x participating in the problem can be found in evaluate(x). The optimal value of an expression can be found by calling the evaluate() function on the expression as follows: evaluate(expr).","category":"page"}]
}