Error on BigFloat inequalities #64

Open
danspielman opened this issue Oct 18, 2020 · 4 comments

@danspielman

When I use JuMP to create two variables and try to place a constraint on them, I get an error.
Here is a minimal example and a prefix of the error message:

julia> lp = Model(Tulip.Optimizer{BigFloat})
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: Tulip

julia> @variable(lp, x)
x

julia> @variable(lp, y)
y

julia> @constraint(lp, x - y >= 0)
ERROR: `MOI.ScalarAffineFunction{Float64}`-in-`MOI.GreaterThan{Float64}` constraints are not supported and cannot be bridged into supported constrained variables and constraints. See details below:
 [1] constrained variables in `MOI.Nonnegatives` are not supported because no added bridge supports bridging it.
   Cannot add free variables and then constrain them because:
   (5) `MOI.VectorOfVariables`-in-`MOI.Nonnegatives` constraints are not supported
 [2] constrained variables in `MOI.Nonpositives` are not supported because:
   Cannot use `MOIB.Variable.NonposToNonnegBridge{Float64}` because:
   [1] constrained variables in `MOI.Nonnegatives` are not supported
   Cannot add free variables and then constrain them because:
   (9) `MOI.VectorOfVariables`-in-`MOI.Nonpositives` constraints are not supported
 [3] constrained variables in `MOI.LessThan{Float64}` are not supported because:
   Cannot use `MOIB.Variable.VectorizeBridge{Float64,MOI.Nonpositives}` because:
   [2] constrained variables in `MOI.Nonpositives` are not supported
   Cannot add free variables and then constrain them because:
   (10) `MOI.SingleVariable`-in-`MOI.LessThan{Float64}` constraints are not supported
 [4] constrained variables in `MOI.GreaterThan{Float64}` are not supported because:
   Cannot use `MOIB.Variable.VectorizeBridge{Float64,MOI.Nonnegatives}` because:

I am using Julia 1.5.2 on a Mac running macOS 10.15.6, with JuMP v0.21.4 and Tulip v0.6.2.

Tips on how to make this work would be much appreciated!

@mtanneau
Member

mtanneau commented Oct 18, 2020

First of all: thanks for using Tulip (or at least trying) :)

The error occurs because JuMP (always) builds problems in Float64 arithmetic, while a Tulip.Optimizer{BigFloat} requires BigFloat input.
Unless you're OK with running Tulip in Float64 rather than BigFloat, there is no straightforward way around this.

Here are a few hacks to make it work nonetheless:

  • Use Convex.jl instead of JuMP. I know it can handle arithmetic other than Float64, but I've never used it myself.
  • Use MathOptInterface directly (see the sketch at the end of this comment). Remember to formulate everything in BigFloat.
  • Build your problem with JuMP, export the problem to an MPS file, then import that MPS to a Tulip.Model{BigFloat}.
    Code-wise, it would look like this:
# Build your model here
lp = JuMP.Model()
@variable(lp, x)
@variable(lp, y)
@constraint(lp, x - y >= 0)

# Export the model to an MPS file
JuMP.write_to_file(lp, "lp.mps")

# Import into Tulip, with BigFloat precision
lp_ex = Tulip.Model{BigFloat}()
Tulip.load_problem!(lp_ex, "lp.mps")

Tulip.optimize!(lp_ex)

The call to Tulip.load_problem! will read the MPS file in Float64 arithmetic, and convert everything to BigFloat before passing the problem to Tulip.
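
For the MathOptInterface route, an untested sketch of the same problem built directly in BigFloat might look like this (it mirrors the MOI calls from Tulip's tutorial; exact API details depend on your MOI version):

using Tulip
import MathOptInterface
const MOI = MathOptInterface

lp = Tulip.Optimizer{BigFloat}()

# Create variables
x = MOI.add_variable(lp)
y = MOI.add_variable(lp)

# x - y >= 0, with every coefficient and constant in BigFloat
MOI.add_constraint(lp,
    MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(BigFloat[1, -1], [x, y]), zero(BigFloat)),
    MOI.GreaterThan(zero(BigFloat))
)

MOI.optimize!(lp)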

@danspielman
Author

Using Convex worked with Double64!

The trick is to add the keyword argument numeric_type = Double64 to the problem, like this

problem = maximize(obj, cons, numeric_type = Double64)
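
For reference, an end-to-end version might look like the following untested sketch (the DoubleFloats import, the illustrative objective and constraints, and the solve! factory call are assumptions, not code from this thread):

using Convex, Tulip
using DoubleFloats  # provides Double64

x = Variable()
y = Variable()

cons = [x - y >= 0, x >= 0, x + y <= 4]  # illustrative constraints
obj = x + 2y                             # illustrative objective

# numeric_type makes Convex build the problem in Double64 arithmetic
problem = maximize(obj, cons, numeric_type = Double64)

# pass a matching Double64 optimizer
solve!(problem, () -> Tulip.Optimizer{Double64}())

Double64 is a double-double type, giving roughly 31 significant digits at a fraction of BigFloat's cost.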

I am very excited to now have usable high-precision linear programming.
Let me know if you'd like me to add this to the docs and create a pull request.

@mtanneau
Member

I am very excited to now have usable high-precision linear programming.

I have some plans to improve the performance of high-precision solving, but it's not very high on my priority list right now.
Don't hesitate to open another issue (or reach out to me directly) if you use that feature extensively.

Let me know if you'd like me to add this to the docs and create a pull request.

The docs are due for a section on higher-precision arithmetic. I'll ping you when I add it.

@matbesancon
Contributor

Running the first example of the tutorial with other types also yields conversion errors. I'm posting the example here since the issue is related; I can move it to a separate issue if required:

using Tulip
import MathOptInterface
const MOI = MathOptInterface

const T = Float32

lp = Tulip.Optimizer{T}()

# Create variables
x = MOI.add_variable(lp)
y = MOI.add_variable(lp)

# Set variable bounds
MOI.add_constraint(lp, MOI.SingleVariable(x), MOI.GreaterThan(T(0)))  # x >= 0
MOI.add_constraint(lp, MOI.SingleVariable(y), MOI.GreaterThan(T(0)))  # y >= 0

# Add constraints
row1 = MOI.add_constraint(lp,
    MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(T[1.0, -1.0], [x, y]), T(0)),
    MOI.GreaterThan(T(-2))
)
row2 = MOI.add_constraint(lp,
    MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(T[2.0, -1.0], [x, y]), T(0)),
    MOI.LessThan(T(4))
)
row3 = MOI.add_constraint(lp,
    MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(T[1.0,  2.0], [x, y]), T(0)),
    MOI.LessThan(T(7))
) 

# Set the objective
MOI.set(lp,
    MOI.ObjectiveFunction{MOI.ScalarAffineFunction{T}}(),
    MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(T[-2.0, -1.0], [x, y]), T(0))
)
MOI.set(lp, MOI.ObjectiveSense(), MOI.MIN_SENSE)

MOI.optimize!(lp)

Results in the following error:

julia> MOI.optimize!(lp)
ERROR: MethodError: Cannot `convert` an object of type 
  LDLFactorizations.LDLFactorization{Float64,Int64,Int64,Int64} to an object of type 
  LDLFactorizations.LDLFactorization{Float32,Ti,Tn,Tp} where Tp<:Integer where Tn<:Integer where Ti<:Integer
Closest candidates are:
  convert(::Type{T}, ::T) where T at essentials.jl:171
Stacktrace:
 [1] Tulip.KKT.LDLFactSQD(::SparseArrays.SparseMatrixCSC{Float32,Int64}) at /home/mbesancon/.julia/packages/Tulip/3EWmj/src/KKT/ldlfact.jl:52
 [2] setup(::Type{Tulip.KKT.LDLFactSQD}, ::SparseArrays.SparseMatrixCSC{Float32,Int64}) at /home/mbesancon/.julia/packages/Tulip/3EWmj/src/KKT/ldlfact.jl:57
 [3] Tulip.HSD(::Tulip.IPMData{Float32,Array{Float32,1},BitArray{1},SparseArrays.SparseMatrixCSC{Float32,Int64}}, ::Tulip.KKT.KKTOptions{Float32}) at /home/mbesancon/.julia/packages/Tulip/3EWmj/src/IPM/HSD/HSD.jl:54

Following the stacktrace, the issue seems to occur in the constructor of LDLFactSQD{T<:Real} <: AbstractKKTSolver{T}:

S = [
    spdiagm(0 => -θ)  A';
    spzeros(T, m, n)  spdiagm(0 => ones(m))
]
F = LDLF.ldl_analyze(Symmetric(S))

I believe S is promoted to Float64 here because spdiagm(0 => ones(m)) builds a Float64 diagonal block: for T = Float32, the block concatenation promotes the whole matrix to Float64, so ldl_analyze returns a Float64 factorization that cannot be stored in the Float32-typed solver.
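
If that diagnosis is right, a minimal sketch of a fix would be to build the identity block in T as well (assuming the rest of the constructor stays unchanged):

S = [
    spdiagm(0 => -θ)  A';
    spzeros(T, m, n)  spdiagm(0 => ones(T, m))  # ones(T, m) keeps the element type of S as T
]
F = LDLF.ldl_analyze(Symmetric(S))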
