Parametrize JuMP model in optimizer type #1348
The optimizer type allows zero overhead on the JuMP side.
This uses the benchmarking file in
After this change:
Related to JuliaOpt/MathOptInterface.jl#321
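The idea of parametrizing the model on the optimizer type can be sketched as follows. This is an illustrative sketch, not JuMP's actual internals: the names `Backend`, `UntypedModel`, `TypedModel`, and `add_var!` are hypothetical. The point is that a field declared with an abstract type forces dynamic dispatch on every call, while a type parameter lets the compiler specialize.

```julia
abstract type AbstractBackend end

# Hypothetical backend that just counts variables
struct Backend <: AbstractBackend
    nvars::Base.RefValue{Int}
end
Backend() = Backend(Ref(0))

add_var!(b::Backend) = (b.nvars[] += 1)

# Untyped: field declared with the abstract type,
# so `add_var!(m.backend)` dispatches dynamically at runtime
struct UntypedModel
    backend::AbstractBackend
end

# Parametrized: the concrete backend type is part of the model's type,
# so the compiler can devirtualize and inline the call
struct TypedModel{B<:AbstractBackend}
    backend::B
end

add_var!(m::UntypedModel) = add_var!(m.backend)
add_var!(m::TypedModel) = add_var!(m.backend)
```

With `TypedModel`, `typeof(m.backend)` is concrete, which is what removes the dispatch overhead measured in the benchmarks below.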
```
@@            Coverage Diff             @@
##           master    #1348      +/-   ##
==========================================
+ Coverage   89.33%   89.34%   +<.01%
==========================================
  Files          24       24
  Lines        3386     3368      -18
==========================================
- Hits         3025     3009      -16
+ Misses        361      359       -2
```
The overhead of having an untyped backend is now 1/3 for both time and space in
```julia
julia> using JuMP

julia> const model = Model()
A JuMP Model

julia> using BenchmarkTools

julia> @btime @variable(model)
  33.454 ns (2 allocations: 48 bytes)
noname
```
By annotating the type of the backend in
```julia
julia> using JuMP

julia> const model = Model()
A JuMP Model

julia> using BenchmarkTools

julia> @btime @variable(model)
  18.558 ns (1 allocation: 32 bytes)
noname
```
I was looking into this a bit last night. I couldn't figure out exactly why Julia is allocating (I couldn't reproduce the allocation in a small example that dispatches on an object whose type is unknown). Maybe there's a trick to fix it.
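A minimal shape for probing this kind of allocation might look like the following. This is a hedged sketch, not the example from the discussion: `Wrap`, `WrapTyped`, and `f` are made-up names, and whether the abstract-field version actually allocates depends on the Julia version and on boxing details, which is exactly what was hard to reproduce.

```julia
# Abstract field: the compiler cannot infer the type of `w.x`,
# so `f` on a `Wrap` goes through dynamic dispatch and may box the result.
struct Wrap
    x::Any
end

# Concrete type parameter: `w.x` has a known type and `f` specializes.
struct WrapTyped{T}
    x::T
end

f(w) = w.x + 1

# One could then compare `@allocated f(Wrap(1))` against
# `@allocated f(WrapTyped(1))`, or inspect `@code_warntype f(Wrap(1))`
# for red `Any` annotations.
```

`@code_warntype` is usually the more reliable diagnostic here, since `@allocated` can report 0 even for dynamic dispatch when small boxed values are interned.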
Another option without parameterizing the model is to do the work in batch and call