From 909a251a985e2bac6750bff75d1740531e38d47c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Legat?= Date: Thu, 11 Jul 2019 13:23:52 +0200 Subject: [PATCH 1/6] Address TODOs --- docs/Project.toml | 2 +- docs/src/apimanual.md | 155 ++++++++++++++++++++++++++++++++---------- 2 files changed, 121 insertions(+), 36 deletions(-) diff --git a/docs/Project.toml b/docs/Project.toml index 0a586d964f..1b561e25f2 100644 --- a/docs/Project.toml +++ b/docs/Project.toml @@ -3,4 +3,4 @@ Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4" MathOptInterface = "b8f27783-ece8-5eb3-8dc8-9495eed66fee" [compat] -Documenter = "~0.21" +Documenter = "~0.22" diff --git a/docs/src/apimanual.md b/docs/src/apimanual.md index bfccf15eab..e1dc6d45f7 100644 --- a/docs/src/apimanual.md +++ b/docs/src/apimanual.md @@ -1,5 +1,9 @@ ```@meta CurrentModule = MathOptInterface +DocTestSetup = quote + using MathOptInterface + const MOI = MathOptInterface +end ``` # Manual @@ -138,15 +142,25 @@ from the [`ModelLike`](@ref) abstract type. Notably missing from the model API is the method to solve an optimization problem. `ModelLike` objects may store an instance (e.g., in memory or backed by a file format) without being linked to a particular solver. In addition to the model API, MOI -defines [`AbstractOptimizer`](@ref). *Optimizers* (or solvers) implement the model API (inheriting from `ModelLike`) and additionally -provide methods to solve the model. +defines [`AbstractOptimizer`](@ref). *Optimizers* (or solvers) implement the +model API (inheriting from `ModelLike`) and additionally provide methods to +solve the model. Through the rest of the manual, `model` is used as a generic `ModelLike`, and `optimizer` is used as a generic `AbstractOptimizer`. -[Discuss how models are constructed, optimizer attributes.] 
+Models are constructed by
+* adding variables using [`add_variable`](@ref) (or [`add_variables`](@ref)),
+  see [Adding variables](@ref);
+* setting an objective sense and function using [`set`](@ref),
+  see [Setting objective](@ref).
+* and adding constraints using [`add_constraint`](@ref) (or
+  [`add_constraints`](@ref)), see [Sets and Constraints](@ref).
+
+The way the problem is solved by the optimizer is controlled by
+[`AbstractOptimizerAttribute`](@ref)s, see [Solver-specific attributes](@ref).
 
-## Variables
+## Adding variables
 
 All variables in MOI are scalar variables.
 New scalar variables are created with [`add_variable`](@ref) or
@@ -210,6 +224,8 @@ the function ``5x_1 - 2.3x_2 + 1``.
 `[ScalarAffineTerm(5.0, x[1]), ScalarAffineTerm(-2.3, x[2])]`. This is Julia's
 broadcast syntax and is used quite often.
 
+### Setting objective
+
 Objective functions are assigned to a model by setting the
 [`ObjectiveFunction`](@ref) attribute.
 The [`ObjectiveSense`](@ref) attribute is used for setting the optimization sense.
@@ -290,9 +306,8 @@ add_constraint(model, VectorOfVariables([x,y,z]), SecondOrderCone(3))
 
 Below is a list of common constraint types and how they are represented
 as function-set pairs in MOI.
 In the notation below, ``x`` is a vector of decision variables,
-``x_i`` is a scalar decision variable, and all other terms are fixed constants.
-
-[Define notation more precisely. ``a`` vector; ``A`` matrix; don't reuse ``u,l,b`` as scalar and vector]
+``x_i`` is a scalar decision variable, ``\alpha, \beta`` are scalar constants,
+``a, b`` are a constant vectors and `A` is a constant matrix.
 
 #### Linear constraints
 
@@ -301,11 +316,11 @@
In the notation below, ``x`` is a vector of decisi
 | ``a^Tx \le u`` | `ScalarAffineFunction` | `LessThan` |
 | ``a^Tx \ge l`` | `ScalarAffineFunction` | `GreaterThan` |
 | ``a^Tx = b`` | `ScalarAffineFunction` | `EqualTo` |
-| ``l \le a^Tx \le u`` | `ScalarAffineFunction` | `Interval` |
-| ``x_i \le u`` | `SingleVariable` | `LessThan` |
-| ``x_i \ge l`` | `SingleVariable` | `GreaterThan` |
-| ``x_i = b`` | `SingleVariable` | `EqualTo` |
-| ``l \le x_i \le u`` | `SingleVariable` | `Interval` |
+| ``\alpha \le a^Tx \le \beta`` | `ScalarAffineFunction` | `Interval` |
+| ``x_i \le \beta`` | `SingleVariable` | `LessThan` |
+| ``x_i \ge \alpha`` | `SingleVariable` | `GreaterThan` |
+| ``x_i = \beta`` | `SingleVariable` | `EqualTo` |
+| ``\alpha \le x_i \le \beta`` | `SingleVariable` | `Interval` |
 | ``Ax + b \in \mathbb{R}_+^n`` | `VectorAffineFunction` | `Nonnegatives` |
 | ``Ax + b \in \mathbb{R}_-^n`` | `VectorAffineFunction` | `Nonpositives` |
 | ``Ax + b = 0`` | `VectorAffineFunction` | `Zeros` |
@@ -470,58 +485,128 @@ non-global tree search solvers like
 
 ## A complete example: solving a knapsack problem
 
 [ needs formatting help, doc tests ]
-
+We first need to select a solver supporting the given problem (see
+[`supports`](@ref) and [`supports_constraint`](@ref)). In this example, we
+want to solve a binary-constrained knapsack problem:
+`max c'x: w'x <= C, x binary`. Suppose we choose GLPK:
 ```julia
-using MathOptInterface
-const MOI = MathOptInterface
 using GLPK
+optimizer = GLPK.Optimizer()
+```
+we can check that it supports the objective as follows:
+```jldoctest knapsack; setup = :(optimizer = MOI.Utilities.MockOptimizer(MOI.Utilities.Model{Float64}()); MOI.Utilities.set_mock_optimize!(optimizer, mock -> MOI.Utilities.mock_optimize!(mock, ones(3))))
+MOI.supports(optimizer, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}())
 
-# Solves the binary-constrained knapsack problem:
-# max c'x: w'x <= C, x binary using GLPK.
+# output +true +``` +we can check that it supports the knapsack constraint as follows: +```jldoctest knapsack +MOI.supports_constraint(optimizer, MOI.ScalarAffineFunction{Float64}, MOI.LessThan{Float64}) + +# output + +true +``` +and we can check that it supports binary variables as follows: +```jldoctest knapsack +MOI.supports_constraint(optimizer, MOI.SingleVariable, MOI.ZeroOne) + +# output + +true +``` +We first define the constants of the problem: +```jldoctest knapsack c = [1.0, 2.0, 3.0] w = [0.3, 0.5, 1.0] C = 3.2 num_variables = length(c) -optimizer = GLPK.Optimizer() +# output -# Create the variables in the problem. +3 +``` +We create the variables of the problem and set the objective function: +```jldoctest knapsack x = MOI.add_variables(optimizer, num_variables) - -# Set the objective function. objective_function = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(c, x), 0.0) MOI.set(optimizer, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(), objective_function) MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MAX_SENSE) -# Add the knapsack constraint. +# output + +MAX_SENSE::OptimizationSense = 1 +``` +We add the knapsack constraint and integrality constraints: +```jldoctest knapsack knapsack_function = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(w, x), 0.0) MOI.add_constraint(optimizer, knapsack_function, MOI.LessThan(C)) - -# Add integrality constraints. for i in 1:num_variables MOI.add_constraint(optimizer, MOI.SingleVariable(x[i]), MOI.ZeroOne()) end -# All set! +# output + +``` +We are all set! 
We can now call [`optimize!`](@ref) and wait for the solver to
+find the solution:
+```jldoctest knapsack
 MOI.optimize!(optimizer)
 
-termination_status = MOI.get(optimizer, MOI.TerminationStatus())
-obj_value = MOI.get(optimizer, MOI.ObjectiveValue())
-if termination_status != MOI.OPTIMAL
-    error("Solver terminated with status $termination_status")
-end
+# output
+
+```
+The first thing to check after optimization is why the solver stopped, e.g.,
+did it stop because of a time limit or did it stop because it found the optimal
+solution ?
+```jldoctest knapsack
+MOI.get(optimizer, MOI.TerminationStatus())
 
-@assert MOI.get(optimizer, MOI.ResultCount()) > 0
+# output
 
-@assert MOI.get(optimizer, MOI.PrimalStatus()) == MOI.FEASIBLE_POINT
-primal_variable_result = MOI.get(optimizer, MOI.VariablePrimal(), x)
+OPTIMAL::TerminationStatusCode = 1
+```
+It found the optimal solution! Now let's see what that solution is.
+But first, let's check if it has more than one solution to share:
+```jldoctest knapsack
+MOI.get(optimizer, MOI.ResultCount())
+
+# output
+
+1
+```
+Only one. As the termination status is `MOI.OPTIMAL` and there is only one
+result, this result should be a feasible solution, let's check to confirm:
+```jldoctest knapsack
+MOI.get(optimizer, MOI.PrimalStatus())
+
+# output
+
+FEASIBLE_POINT::ResultStatusCode = 1
+```
+Good, so this is indeed the optimal solution! What is its objective value:
+```jldoctest knapsack
+MOI.get(optimizer, MOI.ObjectiveValue())
+
+# output
+
+6.0
+```
+And what is the value of the variables `x`?
+```jldoctest knapsack +MOI.get(optimizer, MOI.VariablePrimal(), x) + +# output -@show obj_value -@show primal_variable_result +3-element Array{Float64,1}: + 1.0 + 1.0 + 1.0 ``` ## Problem modification From af821d93ce337178540e6d2179ded4f8d5c77f81 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Legat?= Date: Sun, 14 Jul 2019 18:03:28 -0400 Subject: [PATCH 2/6] Address @mlubin comments --- docs/src/apimanual.md | 52 ++++++++++++++++--------------------------- 1 file changed, 19 insertions(+), 33 deletions(-) diff --git a/docs/src/apimanual.md b/docs/src/apimanual.md index e1dc6d45f7..b8d4b70d2d 100644 --- a/docs/src/apimanual.md +++ b/docs/src/apimanual.md @@ -153,7 +153,7 @@ Models are constructed by * adding variables using [`add_variables`](@ref) (or [`add_variables`](@ref)), see [Adding variables](@ref); * setting an objective sense and function using [`set`](@ref), - see [Setting objective](@ref). + see [Setting an objective](@ref). * and adding constraints using [`add_constraint`](@ref) (or [`add_constraints`](@ref)), see [Sets and Constraints](@ref). @@ -224,7 +224,7 @@ the function ``5x_1 - 2.3x_2 + 1``. `[ScalarAffineTerm(5.0, x[1]), ScalarAffineTerm(-2.3, x[2])]`. This is Julia's broadcast syntax and is used quite often. -### Setting objective +### Setting an objective Objective functions are assigned to a model by setting the [`ObjectiveFunction`](@ref) attribute. The [`ObjectiveSense`](@ref) attribute is @@ -484,7 +484,6 @@ non-global tree search solvers like ## A complete example: solving a knapsack problem -[ needs formatting help, doc tests ] We first need to select a solver supporting the given problem (see [`supports`](@ref) and [`supports_constraint`](@ref)). 
In this example, we want to solve a binary-constrained knapsack problem: @@ -493,37 +492,13 @@ want to solve a binary-constrained knapsack problem: using GLPK optimizer = GLPK.Optimizer() ``` -we can check that it supports the objective as follows: -```jldoctest knapsack; setup = :(optimizer = MOI.Utilities.MockOptimizer(MOI.Utilities.Model{Float64}()); MOI.Utilities.set_mock_optimize!(optimizer, mock -> MOI.Utilities.mock_optimize!(mock, ones(3)))) -MOI.supports(optimizer, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}()) - -# output - -true -``` -we can check that it supports the knapsack constraint as follows: -```jldoctest knapsack -MOI.supports_constraint(optimizer, MOI.ScalarAffineFunction{Float64}, MOI.LessThan{Float64}) - -# output - -true -``` -and we can check that it supports binary variables as follows: -```jldoctest knapsack -MOI.supports_constraint(optimizer, MOI.SingleVariable, MOI.ZeroOne) - -# output - -true -``` We first define the constants of the problem: -```jldoctest knapsack +```jldoctest knapsack; setup = :(optimizer = MOI.Utilities.MockOptimizer(MOI.Utilities.Model{Float64}()); MOI.Utilities.set_mock_optimize!(optimizer, mock -> MOI.Utilities.mock_optimize!(mock, ones(3)))) c = [1.0, 2.0, 3.0] w = [0.3, 0.5, 1.0] C = 3.2 -num_variables = length(c) +num_variables_to_create = length(c) # output @@ -531,7 +506,7 @@ num_variables = length(c) ``` We create the variables of the problem and set the objective function: ```jldoctest knapsack -x = MOI.add_variables(optimizer, num_variables) +x = MOI.add_variables(optimizer, num_variables_to_create) objective_function = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(c, x), 0.0) MOI.set(optimizer, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(), objective_function) @@ -545,7 +520,7 @@ We add the knapsack constraint and integrality constraints: ```jldoctest knapsack knapsack_function = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(w, x), 0.0) MOI.add_constraint(optimizer, 
knapsack_function, MOI.LessThan(C)) -for i in 1:num_variables +for i in 1:num_variables_to_create MOI.add_constraint(optimizer, MOI.SingleVariable(x[i]), MOI.ZeroOne()) end @@ -580,8 +555,19 @@ MOI.get(optimizer, MOI.ResultCount()) 1 ``` -Only one. As the termination status is `MOI.OPTIMAL` and there is only one -result, this result should be a feasible solution, let's check to confirm: +Only one. + +!!! note + While the value of `MOI.get(optimizer, MOI.ResultCount())` is often one, it + is important to check its value in order to write a robust code. For + instance, when the problem is unbounded, the solver might return two + results: one feasible primal solution `x` showing that the primal is + feasible and one infeasibility ray `r` showing that the dual in infeasible. + The unbounded ray is given by `x + λ * r` with `λ ≥ 0`. Note that each + result is insufficient alone to certify unboundedness. + +As the termination status is `MOI.OPTIMAL` and there is only one result, this +result should be a feasible solution. Let's check to confirm: ```jldoctest knapsack MOI.get(optimizer, MOI.PrimalStatus()) From e3e1943185cb76f99412a72afb970df9bddaadea Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Legat?= Date: Sun, 14 Jul 2019 18:39:26 -0400 Subject: [PATCH 3/6] Address comments --- docs/src/apimanual.md | 36 +++++++++++++++++++----------------- 1 file changed, 19 insertions(+), 17 deletions(-) diff --git a/docs/src/apimanual.md b/docs/src/apimanual.md index b8d4b70d2d..1f88b5c4d1 100644 --- a/docs/src/apimanual.md +++ b/docs/src/apimanual.md @@ -305,9 +305,11 @@ add_constraint(model, VectorOfVariables([x,y,z]), SecondOrderCone(3)) ### Constraints by function-set pairs Below is a list of common constraint types and how they are represented -as function-set pairs in MOI. 
In the notation below, ``x`` is a vector of decision variables,
-``x_i`` is a scalar decision variable, ``\alpha, \beta`` are scalar constants,
-``a, b`` are a constant vectors and `A` is a constant matrix.
+as function-set pairs in MOI. In the notation below, ``x`` is a vector of
+decision variables, ``x_i`` is a scalar decision variable, ``\alpha, \beta`` are
+scalar constants, ``a, b`` are constant vectors, ``A`` is a constant matrix and
+``\mathbb{R}_+`` (resp. ``\mathbb{R}_-``) is the set of nonnegative (resp.
+nonpositive) real numbers.
 
 #### Linear constraints
 
@@ -329,8 +331,6 @@ By convention, solvers are not expected to support nonzero constant terms in the
 
 Constraints with `SingleVariable` in `LessThan`, `GreaterThan`, `EqualTo`, or `Interval` sets have a natural interpretation as variable bounds. As such, it is typically not natural to impose multiple lower or upper bounds on the same variable, and by convention we do not ask solver interfaces to support this. It is natural, however, to impose upper and lower bounds separately as two different constraints on a single variable. The difference between imposing bounds by using a single `Interval` constraint and by using separate `LessThan` and `GreaterThan` constraints is that the latter will allow the solver to return separate dual multipliers for the two bounds, while the former will allow the solver to return only a single dual for the interval constraint.
-[Define ``\mathbb{R}_+, \mathbb{R}_-``] - #### Conic constraints @@ -341,12 +341,14 @@ Constraints with `SingleVariable` in `LessThan`, `GreaterThan`, `EqualTo`, or `I | ``2yz \ge \lVert x \rVert_2^2, y,z \ge 0`` | `VectorOfVariables` | `RotatedSecondOrderCone` | | ``(a_1^Tx + b_1,a_2^Tx + b_2,a_3^Tx + b_3) \in \mathcal{E}`` | `VectorAffineFunction` | `ExponentialCone` | | ``A(x) \in \mathcal{S}_+`` | `VectorAffineFunction` | `PositiveSemidefiniteConeTriangle` | -| ``A(x) \in \mathcal{S}'_+`` | `VectorAffineFunction` | `PositiveSemidefiniteConeSquare` | +| ``B(x) \in \mathcal{S}_+`` | `VectorAffineFunction` | `PositiveSemidefiniteConeSquare` | | ``x \in \mathcal{S}_+`` | `VectorOfVariables` | `PositiveSemidefiniteConeTriangle` | -| ``x \in \mathcal{S}'_+`` | `VectorOfVariables` | `PositiveSemidefiniteConeSquare` | - +| ``x \in \mathcal{S}_+`` | `VectorOfVariables` | `PositiveSemidefiniteConeSquare` | -[Define ``\mathcal{E}`` (exponential cone), ``\mathcal{S}_+`` (smat), ``\mathcal{S}'_+`` (svec). ``A(x)`` is an affine function of ``x`` that outputs a matrix.] +where ``\mathcal{E}`` is the exponential cone (see [`ExponentialCone`](@ref)), +``\mathcal{S}_+`` is the set of positive semidefinite symmetric matrices, +``A`` is an affine map that outputs symmetric matrices and +``B`` is an affine map that outputs square matrices. #### Quadratic constraints @@ -537,7 +539,7 @@ MOI.optimize!(optimizer) ``` The first thing to check after optimization is why the solver stopped, e.g., did it stop because of a time limit or did it stop because it found the optimal -solution ? +solution? ```jldoctest knapsack MOI.get(optimizer, MOI.TerminationStatus()) @@ -558,13 +560,13 @@ MOI.get(optimizer, MOI.ResultCount()) Only one. !!! note - While the value of `MOI.get(optimizer, MOI.ResultCount())` is often one, it - is important to check its value in order to write a robust code. 
For
-    instance, when the problem is unbounded, the solver might return two
-    results: one feasible primal solution `x` showing that the primal is
-    feasible and one infeasibility ray `r` showing that the dual in infeasible.
-    The unbounded ray is given by `x + λ * r` with `λ ≥ 0`. Note that each
-    result is insufficient alone to certify unboundedness.
+    While the value of `MOI.get(optimizer, MOI.ResultCount())` is often one,
+    robust code should check its value. For instance, when the problem is
+    unbounded, the solver might return two results: one feasible primal solution
+    `x` showing that the primal is feasible and one infeasibility ray `r`
+    showing that the dual in infeasible. The unbounded ray is given by
+    `x + λ * r` with `λ ≥ 0`. Note that each result is insufficient alone to
+    certify unboundedness.
 
 As the termination status is `MOI.OPTIMAL` and there is only one result, this
 result should be a feasible solution. Let's check to confirm:

From 63dd7221ab159c53cbffda880a9fd517f8e70b84 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Beno=C3=AEt=20Legat?=
Date: Sun, 14 Jul 2019 18:43:23 -0600
Subject: [PATCH 4/6] Remove resultcount

---
 docs/src/apimanual.md | 23 +----------------------
 1 file changed, 1 insertion(+), 22 deletions(-)

diff --git a/docs/src/apimanual.md b/docs/src/apimanual.md
index 1f88b5c4d1..8ef3d7a5b0 100644
--- a/docs/src/apimanual.md
+++ b/docs/src/apimanual.md
@@ -549,27 +549,6 @@ MOI.get(optimizer, MOI.TerminationStatus())
 
 OPTIMAL::TerminationStatusCode = 1
 ```
 It found the optimal solution! Now let's see what that solution is.
-But first, let's check if it has more than one solution to share:
-```jldoctest knapsack
-MOI.get(optimizer, MOI.ResultCount())
-
-# output
-
-1
-```
-Only one.
-
-!!! note
-    While the value of `MOI.get(optimizer, MOI.ResultCount())` is often one,
-    robust code should check its value.
For instance, when the problem is - unbounded, the solver might return two results: one feasible primal solution - `x` showing that the primal is feasible and one infeasibility ray `r` - showing that the dual in infeasible. The unbounded ray is given by - `x + λ * r` with `λ ≥ 0`. Note that each result is insufficient alone to - certify unboundedness. - -As the termination status is `MOI.OPTIMAL` and there is only one result, this -result should be a feasible solution. Let's check to confirm: ```jldoctest knapsack MOI.get(optimizer, MOI.PrimalStatus()) @@ -577,7 +556,7 @@ MOI.get(optimizer, MOI.PrimalStatus()) FEASIBLE_POINT::ResultStatusCode = 1 ``` -Good, so this is indeed the optimal solution! What is its objective value: +What is its objective value? ```jldoctest knapsack MOI.get(optimizer, MOI.ObjectiveValue()) From c728a4961878fde7c1c5fe80b14081298573efc7 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Legat?= Date: Sun, 14 Jul 2019 22:14:00 -0600 Subject: [PATCH 5/6] Remove num_variables --- docs/src/apimanual.md | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/docs/src/apimanual.md b/docs/src/apimanual.md index 8ef3d7a5b0..3236595c28 100644 --- a/docs/src/apimanual.md +++ b/docs/src/apimanual.md @@ -500,15 +500,13 @@ c = [1.0, 2.0, 3.0] w = [0.3, 0.5, 1.0] C = 3.2 -num_variables_to_create = length(c) - # output -3 +3.2 ``` We create the variables of the problem and set the objective function: ```jldoctest knapsack -x = MOI.add_variables(optimizer, num_variables_to_create) +x = MOI.add_variables(optimizer, length(c)) objective_function = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(c, x), 0.0) MOI.set(optimizer, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(), objective_function) @@ -522,8 +520,8 @@ We add the knapsack constraint and integrality constraints: ```jldoctest knapsack knapsack_function = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(w, x), 0.0) MOI.add_constraint(optimizer, knapsack_function, 
MOI.LessThan(C)) -for i in 1:num_variables_to_create - MOI.add_constraint(optimizer, MOI.SingleVariable(x[i]), MOI.ZeroOne()) +for x_i in x + MOI.add_constraint(optimizer, MOI.SingleVariable(x_i), MOI.ZeroOne()) end # output From fc2dc7e64c44857f20777f94051b354503e34f2e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Beno=C3=AEt=20Legat?= Date: Tue, 16 Jul 2019 07:49:40 -0600 Subject: [PATCH 6/6] [ci skip] . -> ; --- docs/src/apimanual.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/src/apimanual.md b/docs/src/apimanual.md index 3236595c28..012a551300 100644 --- a/docs/src/apimanual.md +++ b/docs/src/apimanual.md @@ -153,7 +153,7 @@ Models are constructed by * adding variables using [`add_variables`](@ref) (or [`add_variables`](@ref)), see [Adding variables](@ref); * setting an objective sense and function using [`set`](@ref), - see [Setting an objective](@ref). + see [Setting an objective](@ref); * and adding constraints using [`add_constraint`](@ref) (or [`add_constraints`](@ref)), see [Sets and Constraints](@ref).