
Replace parse_expr and destructive_add by MutableArithmetics #2107

Merged

merged 18 commits into master from bl/mutable_arithmetics on Jan 3, 2020

Conversation

@blegat (Member) commented Nov 23, 2019

BREAKING CHANGES

  • @SDconstraint(model, A >= 1) now throws ERROR: Operation `+` between `Array{VariableRef,2}` and `Int64` is not allowed. You should use broadcast. instead of broadcasting the 1. @SDconstraint(model, A >= 0) is still equivalent to @constraint(model, A in PSDCone()); see the sketch after this list.
  • Broadcast not needed in macro #2106
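
A minimal sketch of the change (the 2×2 model is hypothetical, and `ones(2, 2)` is just one explicit alternative, not prescribed by the PR):

```julia
using JuMP

model = Model()
@variable(model, A[1:2, 1:2])

# Still allowed: comparing against the scalar zero,
# equivalent to @constraint(model, A in PSDCone()):
@SDconstraint(model, A >= 0)

# No longer allowed: `A >= 1` used to broadcast the 1 and now throws.
# Build the matrix explicitly if that is what you meant:
@SDconstraint(model, A >= ones(2, 2))
```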

JuMP implements an arithmetic between the following types:

  • Numbers (that are eventually converted to Float64)
  • AbstractVariableRef
  • GenericAffExpr
  • GenericQuadExpr.
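
For instance (a hypothetical two-variable model; `AffExpr` and `QuadExpr` are the standard aliases for the `Float64`-coefficient versions of these types):

```julia
using JuMP

model = Model()
@variable(model, x)
@variable(model, y)

x + 1        # GenericAffExpr{Float64,VariableRef}, i.e. AffExpr;
             # the Int64 constant is converted to Float64
x * y        # GenericQuadExpr{Float64,VariableRef}, i.e. QuadExpr
(x + y) * x  # also a QuadExpr
```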

This arithmetic has two key features:

  1. The return type of an operation is nontrivial to compute, and we want it to be as "simple" as possible.
  2. GenericAffExpr and GenericQuadExpr are mutable objects and creating new ones is costly.
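
Point 2. is why, for example, summing terms with `+` in a loop is slow compared to mutating a single expression in place with add_to_expression!. A sketch with a hypothetical model (the mutating loop is roughly what the macro rewriting aims for):

```julia
using JuMP

model = Model()
@variable(model, x[1:1000])

# Allocates a fresh AffExpr at every iteration:
ex = zero(AffExpr)
for i in 1:1000
    ex = ex + 2.0 * x[i]
end

# Mutates a single AffExpr in place:
ex = zero(AffExpr)
for i in 1:1000
    add_to_expression!(ex, 2.0, x[i])
end
```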

The consequence of 1) is that many Base functions do not work out of the box (see e.g. JuliaLang/julia#26344).
The reason is that these functions are usually tested with types like Float64 that are closed under multiplication, addition, and so on.
The common mistakes are:

  • the result type is not inferred correctly (the code promotes the operand types but does not take into account that there will be further operations on them), or
  • the operands are first promoted to a common type. This is a problem if, for instance, we take the product of a VariableRef and a Float64: because of the promotion we get a QuadExpr as the result instead of an AffExpr. This is the case in many places in SparseArrays; see the sketch after this list.
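
A sketch of the second pitfall, assuming JuMP's promote rules for these types (the single-variable model is hypothetical):

```julia
using JuMP

model = Model()
@variable(model, x)

x * 2.0       # AffExpr: the "simple" result type

# Promotion-based generic code first converts both operands to a common
# type (AffExpr here), and AffExpr * AffExpr is a QuadExpr:
a, b = promote(x, 2.0)
a * b         # QuadExpr, even though x * 2.0 is only an AffExpr
```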

For these reasons, many Base functions need to be rewritten for JuMP types to make them work.

The consequence of 2) is that even if generic functions are implemented correctly, they cannot exploit the mutability of the JuMP expressions without having JuMP as a dependency. This is obviously not possible for Base, and it may also be cumbersome in other cases (e.g. MultivariatePolynomials supports any coefficient type and does not want to have JuMP as a dependency just because the coefficients could be JuMP expressions).

JuMP contains code to:

  • Fix Base functions for JuMP types and make them efficient (addresses 1) and 2)).
  • Rewrite expressions in macros into code that exploits the mutability of the JuMP expressions.

This code is quite involved and could benefit other mutable types as well (e.g. BigInt, polynomials, MOI functions, ...).

Just as MOI serves as an interface between solver packages and packages writing optimization models, MutableArithmetics (MA) defines a mutable arithmetics API which allows packages defining mutable types and packages writing generic functions over arbitrary arithmetics to work together efficiently (exploiting the mutability of the types) without having to know about each other.

In this PR, parse_expr is replaced by MA.rewrite and destructive_add is replaced by MA.add_mul!. The next steps are to:

  • move the array-related code of src/operators.jl to MA.
  • check performance change with benchmarks.
  • Release MA.
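
A sketch of the two replacements, assuming the MA API as used by this PR (model and variable names are illustrative):

```julia
using JuMP
import MutableArithmetics
const MA = MutableArithmetics

model = Model()
@variable(model, x[1:3])

# destructive_add(ex, a, b) -> MA.add_mul!(ex, a, b):
# both return `ex + a * b`, mutating `ex` when its type supports it.
ex = zero(AffExpr)
ex = MA.add_mul!(ex, 2.0, x[1])

# MA.rewrite generates the same kind of mutating code that parse_expr
# used to generate; MA.@rewrite applies it to an expression directly:
ex2 = MA.@rewrite(sum(i * x[i] for i in 1:3))
```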

Benchmarks

Here are the results of the benchmarks of test/perf with Julia v1.3.0.

test/perf/macros.jl

Before the PR:

Test 1
  Running N=20...
    N=20 min 0.025025574
    N=20 min 0.05714585
  Running N=50...
    N=50 min 0.883037842
    N=50 min 2.367867279
  Running N=100...
    N=100 min 9.098013744
    N=100 min 24.331470574

After the PR:

Test 1
  Running N=20...
    N=20 min 0.022802684
    N=20 min 0.065219341
  Running N=50...
    N=50 min 0.665006041
    N=50 min 2.365446098
  Running N=100...
    N=100 min 9.532664338
    N=100 min 26.547689074

test/perf/speed.jl

Before the PR:

P-Median(100 facilities, 100 customers, 5000 locations) benchmark:
BenchmarkTools.Trial: 
  memory estimate:  704.39 MiB
  allocs estimate:  11016102
  --------------
  minimum time:     1.030 s (36.89% GC)
  median time:      1.279 s (43.63% GC)
  mean time:        1.240 s (43.24% GC)
  maximum time:     1.370 s (44.26% GC)
  --------------
  samples:          5
  evals/sample:     1

Cont5(n=500) benchmark:
BenchmarkTools.Trial: 
  memory estimate:  365.94 MiB
  allocs estimate:  5779823
  --------------
  minimum time:     735.224 ms (39.59% GC)
  median time:      918.568 ms (38.96% GC)
  mean time:        947.665 ms (39.87% GC)
  maximum time:     1.225 s (43.60% GC)
  --------------
  samples:          6
  evals/sample:     1

After the PR:

P-Median(100 facilities, 100 customers, 5000 locations) benchmark:
BenchmarkTools.Trial: 
  memory estimate:  658.60 MiB
  allocs estimate:  9515803
  --------------
  minimum time:     890.010 ms (31.47% GC)
  median time:      1.073 s (44.15% GC)
  mean time:        1.024 s (41.66% GC)
  maximum time:     1.098 s (44.23% GC)
  --------------
  samples:          5
  evals/sample:     1

Cont5(n=500) benchmark:
BenchmarkTools.Trial: 
  memory estimate:  365.83 MiB
  allocs estimate:  5776321
  --------------
  minimum time:     580.974 ms (24.95% GC)
  median time:      711.568 ms (40.87% GC)
  mean time:        700.904 ms (39.22% GC)
  maximum time:     781.495 ms (44.34% GC)
  --------------
  samples:          8
  evals/sample:     1

test/perf/matrix_product.jl

Before the PR:

n = 10
2D Matrix product `x * a`: 9.4724e-5
2D Matrix product `a * x * a`: 0.001512777
n = 20
2D Matrix product `x * a`: 0.000997618
2D Matrix product `a * x * a`: 0.028644301
n = 50
2D Matrix product `x * a`: 0.014455512
2D Matrix product `a * x * a`: 0.889454376
n = 100
2D Matrix product `x * a`: 0.097259733
2D Matrix product `a * x * a`: 17.224245268

After the PR:

n = 10
2D Matrix product `x * a`: 0.000286861
2D Matrix product `a * x * a`: 0.001942842
n = 20
2D Matrix product `x * a`: 0.001884311
2D Matrix product `a * x * a`: 0.027819503
n = 50
2D Matrix product `x * a`: 0.019070582
2D Matrix product `a * x * a`: 0.986350429
n = 100
2D Matrix product `x * a`: 0.113936104
2D Matrix product `a * x * a`: 15.724068411

test/perf/vector_speedtest.jl

Before the PR:

n = 10
Vector with sum(): 2.1711e-5
Vector with vecdot() : 0.036935638
2D Matrix with sum(): 1.8754e-5
2D Matrix with bigvecdot(): 1.6488e-5
3D Matrix with sum(): 0.000187124
3D Matrix with vecdot(): 0.000125503
n = 50
Vector with sum(): 1.0791e-5
Vector with vecdot() : 2.1002e-5
2D Matrix with sum(): 0.000409262
2D Matrix with bigvecdot(): 0.000330225
3D Matrix with sum(): 0.057521546
3D Matrix with vecdot(): 0.04789197
n = 100
Vector with sum(): 1.3681e-5
Vector with vecdot() : 1.7187e-5
2D Matrix with sum(): 0.001867746
2D Matrix with bigvecdot(): 0.001266082
3D Matrix with sum(): 1.280707065
3D Matrix with vecdot(): 0.508676454
n = 200
Vector with sum(): 2.9611e-5
Vector with vecdot() : 3.2911e-5
2D Matrix with sum(): 0.009697569
2D Matrix with bigvecdot(): 0.012156103
3D Matrix with sum(): 10.151108586
3D Matrix with vecdot(): 6.685056413
n = 300
Vector with sum(): 5.0284e-5
Vector with vecdot() : 6.1865e-5
2D Matrix with sum(): 0.040167187
2D Matrix with bigvecdot(): 0.034102709
3D Matrix with sum(): 58.472195229
3D Matrix with vecdot(): 53.44349059

After the PR:

n = 10
Vector with sum(): 2.075e-6
Vector with vecdot() : 0.009736153
2D Matrix with sum(): 2.5059e-5
2D Matrix with bigvecdot(): 3.4854e-5
3D Matrix with sum(): 0.000173879
3D Matrix with vecdot(): 0.000162925
n = 50
Vector with sum(): 1.3841e-5
Vector with vecdot() : 1.2784e-5
2D Matrix with sum(): 0.000387396
2D Matrix with bigvecdot(): 0.000433147
3D Matrix with sum(): 0.056042383
3D Matrix with vecdot(): 0.051097818
n = 100
Vector with sum(): 1.3623e-5
Vector with vecdot() : 2.0152e-5
2D Matrix with sum(): 0.001834483
2D Matrix with bigvecdot(): 0.001681267
3D Matrix with sum(): 0.781470256
3D Matrix with vecdot(): 0.762225311
n = 200
Vector with sum(): 2.8746e-5
Vector with vecdot() : 3.7019e-5
2D Matrix with sum(): 0.00825409
2D Matrix with bigvecdot(): 0.013403326
3D Matrix with sum(): 9.491300749
3D Matrix with vecdot(): 12.185332687
n = 300
Vector with sum(): 5.0791e-5
Vector with vecdot() : 6.6087e-5
2D Matrix with sum(): 0.032576904
2D Matrix with bigvecdot(): 0.034234888
3D Matrix with sum(): 56.830063836
3D Matrix with vecdot(): 55.519139182

Closes #2005
Closes #2039
Closes #2102
Closes #2106
Closes #2125

@mlubin (Member) left a comment

Nice to see all the deleted code! Here's a superficial review.

@codecov bot commented Nov 27, 2019

Codecov Report

Merging #2107 into master will increase coverage by 0.25%.
The diff coverage is 92.81%.


@@            Coverage Diff             @@
##           master    #2107      +/-   ##
==========================================
+ Coverage   91.08%   91.33%   +0.25%     
==========================================
  Files          41       41              
  Lines        4376     4053     -323     
==========================================
- Hits         3986     3702     -284     
+ Misses        390      351      -39
Impacted Files               Coverage Δ
src/operators.jl             88.96% <100%>    (+2.57%) ⬆️
src/sd.jl                    93.18% <100%>    (+0.15%) ⬆️
src/aff_expr.jl              87.85% <100%>    (-0.7%)  ⬇️
src/JuMP.jl                  80%    <33.33%>  (-0.99%) ⬇️
src/parse_nlp.jl             90.38% <50%>     (-0.59%) ⬇️
src/quad_expr.jl             92.81% <75%>     (-1.19%) ⬇️
src/mutable_arithmetics.jl   91.93% <91.93%>  (ø)
src/macros.jl                92.51% <94.28%>  (-0.62%) ⬇️
... and 3 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 04546c9...5ae961d.

@blegat blegat mentioned this pull request Dec 18, 2019
@blegat blegat marked this pull request as ready for review December 19, 2019 09:26
@blegat blegat added this to the 0.21 milestone Dec 19, 2019
@blegat (Member, Author) commented Jan 1, 2020

Ready for final review :)

@@ -11,6 +11,10 @@ function _let_code_block(ex::Expr)
return ex.args[2]
end

function _error_curly(x)

A Member commented:

Why is this in parse_nlp? We should also throw this error for the regular macros.

blegat (Member, Author) replied:

It is now called inside MutableArithmetics for the regular macro.

@blegat blegat merged commit cdd734f into master Jan 3, 2020
@odow odow deleted the bl/mutable_arithmetics branch January 22, 2020 03:21