
Parametrize JuMP model in optimizer type #1348

Open · wants to merge 8 commits into base: master
Conversation

@blegat (Member) commented Jun 17, 2018

Parametrizing the model on the optimizer type allows zero overhead on the JuMP side.
The variable type therefore now needs to be parametrized on the model type so that its fields are concretely typed, which accounts for most of the changes in this PR.
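As a rough sketch of the idea (hypothetical, simplified types, not the actual PR code):

```julia
# Hypothetical simplification: parametrizing the model on its backend
# type makes the `backend` field concrete, so calls on it can be
# statically dispatched.
abstract type AbstractBackend end
struct FastBackend <: AbstractBackend end

struct Model{B<:AbstractBackend}
    backend::B  # concrete once `B` is known
end

# The variable type must then carry the model type parameter as well,
# so that its `model` field is concretely typed too.
struct VariableRef{M<:Model}
    model::M
    index::Int
end
```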

Benchmarking results

This uses the benchmarking file test/perf/backend_overhead.jl.
Before this change:

```
v0.6
14.526 ns (0 allocations: 0 bytes)
43.865 ns (5 allocations: 96 bytes)
v0.7
19.497 ns (0 allocations: 0 bytes)
35.113 ns (3 allocations: 64 bytes)
```

After this change:

```
v0.6
1.832 ns (0 allocations: 0 bytes)
24.567 ns (3 allocations: 64 bytes)
v0.7
1.824 ns (0 allocations: 0 bytes)
11.284 ns (1 allocation: 32 bytes)
```
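The contents of test/perf/backend_overhead.jl are not reproduced here; a minimal sketch of this kind of micro-benchmark (which operations the two timed lines correspond to is an assumption) could look like:

```julia
# Sketch only: time how long it takes to reach the backend and to add a
# variable through it, which is where an abstractly typed field hurts.
using JuMP, BenchmarkTools

const model = Model()
@btime JuMP.backend(model)   # cost of touching the backend field
@btime @variable(model)      # cost of adding a variable through it
```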

Related to JuliaOpt/MathOptInterface.jl#321

@mlubin (Member) left a comment:

I will need convincing benchmarks before approving this. The usability loss from extra complexity in the printouts and debugging statements is pretty terrible.

```diff
@@ -116,10 +116,10 @@ end

 v = [x,y,x]
 A = [x y; y x]
-io_test(REPLMode, v, "JuMP.VariableRef[x, y, x]")
+io_test(REPLMode, v, "JuMP.VariableRef{JuMP.Model{MathOptInterface.Utilities.CachingOptimizer{Union{MathOptInterface.AbstractOptimizer, Void},MathOptInterface.Utilities.UniversalFallback{JuMP.JuMPMOIModel{Float64}}}}}[x, y, x]")
```

@mlubin (Member) commented Jun 17, 2018

`JuMP.VariableRef{JuMP.Model{MathOptInterface.Utilities.CachingOptimizer{Union{MathOptInterface.AbstractOptimizer, Void},MathOptInterface.Utilities.UniversalFallback{JuMP.JuMPMOIModel{Float64}}}}}` is not ok for user-facing printouts.

@blegat (Author, Member) commented Jun 17, 2018

This is fixed.

@codecov commented Jun 17, 2018

Codecov Report

Merging #1348 into master will increase coverage by <.01%.
The diff coverage is 100%.


```
@@            Coverage Diff             @@
##           master    #1348      +/-   ##
==========================================
+ Coverage   89.33%   89.34%   +<.01%
==========================================
  Files          24       24
  Lines        3386     3368      -18
==========================================
- Hits         3025     3009      -16
+ Misses        361      359       -2
```
| Impacted Files | Coverage Δ |
|---|---|
| src/nlp.jl | 79.56% <ø> (ø) ⬆️ |
| src/print.jl | 79.45% <ø> (ø) ⬆️ |
| src/parseexpr.jl | 94.29% <100%> (ø) ⬆️ |
| src/JuMP.jl | 74.77% <100%> (-3.36%) ⬇️ |
| src/affexpr.jl | 97.45% <100%> (ø) ⬆️ |
| src/quadexpr.jl | 92.4% <100%> (ø) ⬆️ |
| src/macros.jl | 88.48% <100%> (ø) ⬆️ |
| src/variables.jl | 88.55% <100%> (ø) ⬆️ |
| src/operators.jl | 96.69% <0%> (-0.02%) ⬇️ |
| ... and 1 more | |

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 1d94c4b...1d0e847.

```diff
@@ -130,7 +130,17 @@ function var_str(::Type{IJuliaMode}, v::AbstractVariableRef; mathmode=true)
         return math("noname", mathmode)
     end
 end

+# We want arrays of variables to be printed with `JuMP.VariableRef` as eltype
```
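The body of the added code is not shown in this hunk; a hypothetical way to get this printing behavior would be a show method on the type object itself:

```julia
# Hypothetical sketch (not necessarily the PR's actual code), in the
# context of JuMP's src/print.jl where `VariableRef{M}` is defined:
# print any concrete VariableRef{...} type simply as `JuMP.VariableRef`,
# so array eltypes stay readable.
function Base.show(io::IO, ::Type{<:VariableRef})
    print(io, "JuMP.VariableRef")
end
```

This kind of overload is exactly what the next comment warns about: it hooks into printing internals that shift between Julia versions.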

@mlubin (Member) commented Jun 17, 2018

Eh, this is another chunk of code that will break between Julia versions. Weird printing will also confuse developers when they try to create arrays to store VariableRefs.

@blegat (Author, Member) commented Jun 18, 2018

Yes, but it improves printing :-P OK, I'll revert this ^^

@blegat (Member, Author) commented Jun 18, 2018

I have added a benchmark; see the initial comment for the results.

@blegat (Member, Author) commented Nov 13, 2018

The overhead of having an untyped backend is now roughly one third of both the time and the allocated bytes of `@variable(model)`.
With Julia v1.0.2:

```julia
julia> using JuMP

julia> const model = Model()
A JuMP Model

julia> using BenchmarkTools

julia> @btime @variable(model)
  33.454 ns (2 allocations: 48 bytes)
noname
```

By annotating the type of the backend in

```julia
index = MOI.add_variable(backend(m))
```

we get

```julia
julia> using JuMP

julia> const model = Model()
A JuMP Model

julia> using BenchmarkTools

julia> @btime @variable(model)
  18.558 ns (1 allocation: 32 bytes)
noname
```
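Spelled out, the annotation above would look roughly like this; the concrete backend type is an assumption taken from the type printout earlier in this thread (with `Void` renamed to `Nothing` on Julia v1.0):

```julia
using JuMP, MathOptInterface
const MOI = MathOptInterface
const MOIU = MOI.Utilities

# Hypothetical: asserting the backend's concrete type lets the compiler
# resolve the `add_variable` call statically instead of dispatching
# dynamically on an abstractly typed field.
const BackendType = MOIU.CachingOptimizer{
    Union{MOI.AbstractOptimizer, Nothing},
    MOIU.UniversalFallback{JuMP.JuMPMOIModel{Float64}},
}

m = Model()
index = MOI.add_variable(JuMP.backend(m)::BackendType)
```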

blegat added the performance label on Nov 13, 2018

@mlubin (Member) commented Nov 13, 2018

I was looking into this a bit last night. I couldn't figure out exactly why Julia is allocating (I couldn't reproduce the allocation in a small example that dispatches on an object whose type is unknown). Maybe there's a trick to fix it.

Another option, without parameterizing the model, is to do the work in batch and call add_variables instead of add_variable when we want to create one million variables; see the sketch below.
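For reference, MOI already has a vectorized `MOI.add_variables`; a sketch of the batch idea at the MOI level (the exact call site is illustrative):

```julia
using JuMP, MathOptInterface
const MOI = MathOptInterface

model = Model()
# One dynamically dispatched call on the untyped backend instead of a
# million, amortizing the dispatch and allocation cost:
indices = MOI.add_variables(JuMP.backend(model), 1_000_000)
```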

@blegat (Member, Author) commented Nov 13, 2018

The batch idea could help. As for the 32 remaining bytes: VariableRef is not isbits because it holds the model as a field, and the model itself is not isbits.
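A self-contained illustration of the isbits point (all names here are made up):

```julia
# All fields isbits => the struct is isbits and needs no heap allocation.
struct IndexOnlyRef
    index::Int
end

# A field referencing a mutable object makes the struct non-isbits, so
# constructing one typically allocates when it escapes, like the
# remaining 32 bytes above.
mutable struct ToyModel end
struct ModelHoldingRef
    model::ToyModel
    index::Int
end

isbitstype(IndexOnlyRef)     # true
isbitstype(ModelHoldingRef)  # false
```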

@odow (Member) commented Jan 9, 2019

What is the status of this?

blegat referenced this pull request on Feb 3, 2019