From 9fc0bde5be34eda5bf24b96bdf9f237a566bb750 Mon Sep 17 00:00:00 2001
From: James Foster
Date: Fri, 6 Aug 2021 14:01:40 +1000
Subject: [PATCH] Stylistic and typo fixes in intro docs

---
 docs/src/background/duality.md     | 16 +++++++--------
 docs/src/manual/standard_form.md   | 31 +++++++++++++++---------------
 docs/src/tutorials/example.md      |  6 ++++++
 docs/src/tutorials/implementing.md | 10 +++++-----
 4 files changed, 34 insertions(+), 29 deletions(-)

diff --git a/docs/src/background/duality.md b/docs/src/background/duality.md
index 34b125e8fe..5f6b4f1586 100644
--- a/docs/src/background/duality.md
+++ b/docs/src/background/duality.md
@@ -108,8 +108,8 @@ and similarly, the dual is:
 ```

 !!! warning
-    For the LP case is that the signs of the feasible duals depend only on the
-    sense of the inequality and not on the objective sense.
+    For the LP case, the signs of the feasible dual variables depend only on the
+    sense of the corresponding primal inequality and not on the objective sense.

 ## Duality and scalar product

@@ -133,7 +133,7 @@ Given a problem with quadratic functions:
 & \;\;\text{s.t.} & \frac{1}{2}x^TQ_ix + a_i^T x + b_i & \in \mathcal{C}_i & i = 1 \ldots m
 \end{align*}
 ```
-Consider the Lagrangian function
+with cones ``\mathcal{C}_i \subseteq \mathbb{R}`` for ``i = 1 \ldots m``, consider the Lagrangian function
 ```math
 L(x, y) = \frac{1}{2}x^TQ_0x + a_0^T x + b_0 - \sum_{i = 1}^m y_i (\frac{1}{2}x^TQ_ix + a_i^T x + b_i)
 ```
@@ -152,9 +152,9 @@ A pair of primal-dual variables $(x^\star, y^\star)$ is optimal if
 ```
     That is, for all ``i = 1, \ldots, m``, ``\frac{1}{2}x^TQ_ix + a_i^T x + b_i`` is
     either zero or in the normal cone of ``\mathcal{C}_i^*`` at ``y^\star``.
-    For instance, if ``\mathcal{C}_i`` is ``\{ x \in \mathbb{R} : x \le 0 \}``, it means that
-    if ``\frac{1}{2}x^TQ_ix + a_i^T x + b_i`` is nonzero then ``\lambda_i = 0``,
-    this is the classical complementary slackness condition.
+    For instance, if ``\mathcal{C}_i`` is ``\{ x \in \mathbb{R} : x \le 0 \}``, this means that
+    if ``\frac{1}{2}x^TQ_ix + a_i^T x + b_i`` is nonzero at ``x^\star`` then ``y_i^\star = 0``.
+    This is the classical complementary slackness condition.

 If ``\mathcal{C}_i`` is a vector set, the discussion remains valid with
 ``y_i(\frac{1}{2}x^TQ_ix + a_i^T x + b_i)`` replaced with the scalar product
@@ -162,13 +162,13 @@ between ``y_i`` and the vector of scalar-valued quadratic functions.

 !!! note
     For quadratic programs with only affine constraints, the optimality condition
-    ``\nabla_x L(x, y^\star) = 0`` can be simplified as follows
+    ``\nabla_x L(x, y^\star) = 0`` can be simplified as follows:
     ```math
     0 = \nabla_x L(x, y^\star) = Q_0x + a_0 - \sum_{i = 1}^m y_i^\star a_i
     ```
     which gives
     ```math
-    Q_0x = \sum_{i = 1}^m y_i^\star a_i - a_0
+    Q_0x = \sum_{i = 1}^m y_i^\star a_i - a_0 .
     ```
     The Lagrangian function
     ```math
diff --git a/docs/src/manual/standard_form.md b/docs/src/manual/standard_form.md
index 4a0fcc9343..59508a9cc6 100644
--- a/docs/src/manual/standard_form.md
+++ b/docs/src/manual/standard_form.md
@@ -63,9 +63,9 @@ The one-dimensional set types implemented in MathOptInterface.jl are:
 * [`Integer()`](@ref MathOptInterface.Integer): ``\mathbb{Z}``
 * [`ZeroOne()`](@ref MathOptInterface.ZeroOne): ``\{ 0, 1 \}``
 * [`Semicontinuous(lower,upper)`](@ref MathOptInterface.Semicontinuous):
-  ``\{ 0\} \cup [lower,upper]``
+  ``\{ 0\} \cup [\mbox{lower},\mbox{upper}]``
 * [`Semiinteger(lower,upper)`](@ref MathOptInterface.Semiinteger):
-  ``\{ 0\} \cup \{lower,lower+1,\ldots,upper-1,upper\}``
+  ``\{ 0\} \cup \{\mbox{lower},\mbox{lower}+1,\ldots,\mbox{upper}-1,\mbox{upper}\}``

 ## Vector cones

@@ -85,18 +85,17 @@ The vector-valued set types implemented in MathOptInterface.jl are:
 * [`ExponentialCone()`](@ref MathOptInterface.ExponentialCone):
   ``\{ (x,y,z) \in \mathbb{R}^3 : y \exp (x/y) \le z, y > 0 \}``
 * [`DualExponentialCone()`](@ref MathOptInterface.DualExponentialCone):
-  ``\{ (u,v,w) \in \mathbb{R}^3 : -u \exp (v/u) \le exp(1) w, u < 0 \}``
+  ``\{ (u,v,w) \in \mathbb{R}^3 : -u \exp (v/u) \le \exp(1) w, u < 0 \}``
 * [`GeometricMeanCone(dimension)`](@ref MathOptInterface.GeometricMeanCone):
   ``\{ (t,x) \in \mathbb{R}^{n+1} : x \ge 0, t \le \sqrt[n]{x_1 x_2 \cdots x_n} \}``
-  where ``n`` is ``dimension - 1``
+  where ``n`` is ``\mbox{dimension} - 1``
 * [`PowerCone(exponent)`](@ref MathOptInterface.PowerCone):
   ``\{ (x,y,z) \in \mathbb{R}^3 : x^\mbox{exponent} y^{1-\mbox{exponent}} \ge |z|, x,y \ge 0 \}``
 * [`DualPowerCone(exponent)`](@ref MathOptInterface.DualPowerCone):
   ``\{ (u,v,w) \in \mathbb{R}^3 : \frac{u}{\mbox{exponent}}^\mbox{exponent}\frac{v}{1-\mbox{exponent}}^{1-\mbox{exponent}} \ge |w|, u,v \ge 0 \}``
-* [`NormOneCone(dimension)`](@ref MathOptInterface.NormOneCone):
-``\{ (t,x) \in \mathbb{R}^\mbox{dimension} : t \ge \lVert x \rVert_1 = \sum_i \lvert x_i \rvert \}``
+* [`NormOneCone(dimension)`](@ref MathOptInterface.NormOneCone): ``\{ (t,x) \in \mathbb{R}^\mbox{dimension} : t \ge \lVert x \rVert_1 \}`` where ``\lVert x \rVert_1 = \sum_i \lvert x_i \rvert``
 * [`NormInfinityCone(dimension)`](@ref MathOptInterface.NormInfinityCone):
-  ``\{ (t,x) \in \mathbb{R}^\mbox{dimension} : t \ge \lVert x \rVert_\infty = \max_i \lvert x_i \rvert \}``
+  ``\{ (t,x) \in \mathbb{R}^\mbox{dimension} : t \ge \lVert x \rVert_\infty \}`` where ``\lVert x \rVert_\infty = \max_i \lvert x_i \rvert``.
 * [`RelativeEntropyCone(dimension)`](@ref MathOptInterface.RelativeEntropyCone):
   ``\{ (u, v, w) \in \mathbb{R}^\mbox{dimension} : u \ge \sum_i w_i \log (\frac{w_i}{v_i}), v_i \ge 0, w_i \ge 0 \}``

@@ -105,24 +104,24 @@ The matrix-valued set types implemented in MathOptInterface.jl are:
 * [`RootDetConeTriangle(dimension)`](@ref MathOptInterface.RootDetConeTriangle):
-  ``\{ (t,X) \in \mathbb{R}^{1+\mbox{dimension}(1+\mbox{dimension})/2} : t \le det(X)^{1/\mbox{dimension}}, X \mbox{is the upper triangle of a PSD matrix} \}``
+  ``\{ (t,X) \in \mathbb{R}^{1+\mbox{dimension}(1+\mbox{dimension})/2} : t \le \det(X)^{1/\mbox{dimension}}, X \mbox{ is the upper triangle of a PSD matrix} \}``
 * [`RootDetConeSquare(dimension)`](@ref MathOptInterface.RootDetConeSquare):
-  ``\{ (t,X) \in \mathbb{R}^{1+\mbox{dimension}^2} : t \le \det(X)^{1/\mbox{dimension}}, X \mbox{is a PSD matrix} \}``
+  ``\{ (t,X) \in \mathbb{R}^{1+\mbox{dimension}^2} : t \le \det(X)^{1/\mbox{dimension}}, X \mbox{ is a PSD matrix} \}``
 * [`PositiveSemidefiniteConeTriangle(dimension)`](@ref MathOptInterface.PositiveSemidefiniteConeTriangle):
-  ``\{ X \in \mathbb{R}^{\mbox{dimension}(\mbox{dimension}+1)/2} : X \mbox{is the upper triangle of a PSD matrix} \}``
+  ``\{ X \in \mathbb{R}^{\mbox{dimension}(\mbox{dimension}+1)/2} : X \mbox{ is the upper triangle of a PSD matrix} \}``
 * [`PositiveSemidefiniteConeSquare(dimension)`](@ref MathOptInterface.PositiveSemidefiniteConeSquare):
-  ``\{ X \in \mathbb{R}^{\mbox{dimension}^2} : X \mbox{is a PSD matrix} \}``
+  ``\{ X \in \mathbb{R}^{\mbox{dimension}^2} : X \mbox{ is a PSD matrix} \}``
 * [`LogDetConeTriangle(dimension)`](@ref MathOptInterface.LogDetConeTriangle):
-  ``\{ (t,u,X) \in \mathbb{R}^{2+\mbox{dimension}(1+\mbox{dimension})/2} : t \le u\log(\det(X/u)), X \mbox{is the upper triangle of a PSD matrix}, u > 0 \}``
+  ``\{ (t,u,X) \in \mathbb{R}^{2+\mbox{dimension}(1+\mbox{dimension})/2} : t \le u\log(\det(X/u)), X \mbox{ is the upper triangle of a PSD matrix}, u > 0 \}``
 * [`LogDetConeSquare(dimension)`](@ref MathOptInterface.LogDetConeSquare):
-  ``\{ (t,u,X) \in \mathbb{R}^{2+\mbox{dimension}^2} : t \le u \log(\det(X/u)), X \mbox{is a PSD matrix}, u > 0 \}``
+  ``\{ (t,u,X) \in \mathbb{R}^{2+\mbox{dimension}^2} : t \le u \log(\det(X/u)), X \mbox{ is a PSD matrix}, u > 0 \}``
 * [`NormSpectralCone(row_dim, column_dim)`](@ref MathOptInterface.NormSpectralCone):
-  ``\{ (t, X) \in \mathbb{R}^{1 + \mbox{row_dim} \times \mbox{column_dim}} : t \ge \sigma_1(X), X \mbox{is a matrix with row_dim rows and column_dim columns} \}``
+  ``\{ (t, X) \in \mathbb{R}^{1 + \mbox{row_dim} \times \mbox{column_dim}} : t \ge \sigma_1(X), X \mbox{ is a matrix with row_dim rows and column_dim columns} \}``
 * [`NormNuclearCone(row_dim, column_dim)`](@ref MathOptInterface.NormNuclearCone):
-  ``\{ (t, X) \in \mathbb{R}^{1 + \mbox{row_dim} \times \mbox{column_dim}} : t \ge \sum_i \sigma_i(X), X \mbox{is a matrix with row_dim rows and column_dim columns} \}``
+  ``\{ (t, X) \in \mathbb{R}^{1 + \mbox{row_dim} \times \mbox{column_dim}} : t \ge \sum_i \sigma_i(X), X \mbox{ is a matrix with row_dim rows and column_dim columns} \}``

 Some of these cones can take two forms: `XXXConeTriangle` and `XXXConeSquare`.

@@ -151,5 +150,5 @@ or solver developers.
   A special ordered set of Type II.
 * [`Indicator(set)`](@ref MathOptInterface.Indicator):
   A set to specify indicator constraints.
-* [`Complements`](@ref MathOptInterface.Complements):
+* [`Complements(dimension)`](@ref MathOptInterface.Complements):
   A set for mixed complementarity constraints.
diff --git a/docs/src/tutorials/example.md b/docs/src/tutorials/example.md
index 1d44178a8f..5a1d46f040 100644
--- a/docs/src/tutorials/example.md
+++ b/docs/src/tutorials/example.md
@@ -18,6 +18,12 @@ s.t. \; & w^\top x \le C \\
 \end{aligned}
 ```

+Load the MathOptInterface module and define the shorthand `MOI`:
+```julia
+using MathOptInterface
+const MOI = MathOptInterface
+```
+
 As an optimizer, we choose GLPK:
 ```julia
 using GLPK
diff --git a/docs/src/tutorials/implementing.md b/docs/src/tutorials/implementing.md
index 8b5a0523fc..f8e54d68be 100644
--- a/docs/src/tutorials/implementing.md
+++ b/docs/src/tutorials/implementing.md
@@ -51,7 +51,7 @@ has a good list of solvers, along with the problem classes they support.

 ### Create a low-level interface

-Before writing a MathOptInterface, you first need to be able to call the solver
+Before writing a MathOptInterface wrapper, you first need to be able to call the solver
 from Julia.

 #### Wrapping solvers written in Julia
@@ -77,7 +77,7 @@ easiest way to do this is to copy an existing solver.
 Good examples to follow are the [COIN-OR solvers](https://github.com/JuliaPackaging/Yggdrasil/tree/master/C/Coin-OR).

 !!! warning
-    Building the solver via Yggdrasil is non-trivial. please ask the
+    Building the solver via Yggdrasil is non-trivial. Please ask the
     [Developer chatroom](https://gitter.im/JuliaOpt/JuMP-dev) for help.

 If the code is commercial or not publicly available, the user will need to
@@ -252,7 +252,7 @@ Now that we have an `Optimizer`, we need to implement a few basic methods.
 It is also very helpful to look at an existing wrapper for a similar solver.

 You should also implement `Base.show(::IO, ::Optimizer)` to print a nice string
-when some prints your model. For example
+when someone prints your model. For example
 ```julia
 function Base.show(io::IO, model::Optimizer)
     return print(io, "NewSolver with the pointer $(model.ptr)")
@@ -660,12 +660,12 @@ For example, Gurobi.jl adds attributes for multiobjective optimization by
 struct NumberOfObjectives <: MOI.AbstractModelAttribute end

 function MOI.set(model::Optimizer, ::NumberOfObjectives, n::Integer)
-    # Code to set NumberOfOBjectives
+    # Code to set NumberOfObjectives
     return
 end

 function MOI.get(model::Optimizer, ::NumberOfObjectives)
-    n = # Code to get NumberOfobjectives
+    n = # Code to get NumberOfObjectives
     return n
 end
 ```
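
Solver-specific attributes such as the `NumberOfObjectives` example in the last hunk are set and queried with the same `MOI.set`/`MOI.get` calls as built-in attributes. The snippet below is a minimal usage sketch, not part of the patch: `NewSolver` is a hypothetical wrapper package standing in for one that defines the `Optimizer` and `NumberOfObjectives` types shown above.

```julia
# Hedged sketch: `NewSolver` is hypothetical; substitute any real wrapper that
# defines `Optimizer` and a `NumberOfObjectives` model attribute as shown above.
using MathOptInterface
const MOI = MathOptInterface
import NewSolver

model = NewSolver.Optimizer()
# Solver-specific attributes use the same get/set API as built-in MOI attributes.
MOI.set(model, NewSolver.NumberOfObjectives(), 2)
n = MOI.get(model, NewSolver.NumberOfObjectives())  # returns 2
```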