Memory consumption and time of @constraint #969
Comments
In the first example, you're really seeing the cost of the vectorized syntax. Writing out the loop explicitly gives me about an order of magnitude speedup over the vectorized version:

```julia
for i in 1:size(coef,1)
    @constraint(m, Θ <= sum(coef[i,j]*x[j] for j in 1:size(coef,2)))
end
```

I think the syntax for adding constraints directly to the model, bypassing JuMP, would look a lot like your second code example there :) |
Using vectorized syntax in JuMP is known to consume more memory (because of temporaries) and to be slower than scalar syntax. |
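For intuition, here is a minimal sketch of the temporary that the vectorized form materializes (assuming the JuMP 0.x API used in this thread, with small sizes chosen only for illustration):

```julia
using JuMP
m = Model()
@variable(m, x[1:100] >= 0)
@variable(m, Θ >= 0)
coef = rand(1000, 100)               # small sizes, for illustration only
# The vectorized form first builds a dense temporary vector of 1000 affine
# expressions, each carrying 100 coefficient/variable pairs, before any
# constraint is added:
expr = coef * x                      # the hidden temporary
@constraint(m, Θ .<= expr)           # same as @constraint(m, Θ .<= coef*x)
```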
With:

```julia
@constraint(m, [i = 1:size(coef,1)], Θ <= sum(coef[i,j]*x[j] for j = 1:size(coef,2)))
```

Memory consumption 957 Mb and Time 18.123854547. I'm on a different computer; the results for CPLEX.add_constrs on this one are: So the scalar syntax consumes 2.53x more memory and is 5.45x slower than CPLEX.add_constrs. I don't see much difference between these values and the vectorized constraint, so the memory consumption of the vectorized constraint is almost the same as the scalar syntax. I think the memory consumption is because of the auxiliary objects created by JuMP. |
Make sure you're timing this inside of a function. Performance at the global scope will be much worse. |
Ok. Using:

```julia
function addConstraints(m,x,coef)
    @constraint(m, [i = 1:size(coef,1)], Θ <= sum(coef[i,j]*x[j] for j = 1:size(coef,2)))
end
...
coef = rand(C,N)
m1 = memuse()
tic()
addConstraints(m,x,coef)
time = toq()
gc()
m2 = memuse()
print("Memory consumption $(m2-m1) Mb and Time $(time)")
```

I got: Memory consumption 1180 Mb and Time 11.69368708. To be fair, I also tested the vectorized constraint inside a function:

```julia
function addConstraints(m,x,coef)
    @constraint(m, Θ .<= coef*x)
end
```

Memory consumption 1035 Mb and Time 14.692361745 |
```julia
using JuMP, CPLEX

function memuse()
    pid = getpid()
    return round(Int, parse(Int, readstring(`ps -p $pid -o rss=`))/1024)
end

const C = 300000
const N = 100

function t1()
    m = Model(solver=CplexSolver())
    @variable(m, x[1:N] >= 0)
    @variable(m, Θ >= 0)
    @objective(m, Max, Θ)
    coef = rand(C,N)
    gc()
    m1 = memuse()
    tic()
    @constraint(m, Θ .<= coef*x)
    time = toq()
    gc()
    m2 = memuse()
    println("Memory consumption $(m2-m1) Mb and Time $(time)")
end

function t2()
    m = Model(solver=CplexSolver())
    @variable(m, x[1:N] >= 0)
    @variable(m, Θ >= 0)
    @objective(m, Max, Θ)
    coef = rand(C,N)
    gc()
    m1 = memuse()
    tic()
    for i in 1:size(coef,1)
        @constraint(m, Θ <= sum(coef[i,j]*x[j] for j = 1:size(coef,2)))
    end
    time = toq()
    gc()
    m2 = memuse()
    println("Memory consumption $(m2-m1) Mb and Time $(time)")
end

println("t1")
t1()
t1()
println("t2")
t2()
t2()
```

gives
|
The code above has some weird behavior. I prefer to create a function to add the constraints, because I'm worried about the overall memory use, not the temporarily created objects.

```julia
using JuMP, CPLEX

function memuse()
    pid = getpid()
    return round(Int, parse(Int, readstring(`ps -p $pid -o rss=`))/1024)
end

const C = 300000
const N = 100

function addconstraints(m, x, Θ, coef, p)
    if p == 1
        @constraint(m, Θ .<= coef*x)
    elseif p == 2
        for i in 1:size(coef,1)
            @constraint(m, Θ <= sum(coef[i,j]*x[j] for j = 1:size(coef,2)))
        end
    elseif p == 3
        @constraint(m, [i = 1:size(coef,1)], Θ <= sum(coef[i,j]*x[j] for j = 1:size(coef,2)))
    else
        rhs = zeros(C)
        coef = hcat(-coef, ones(C))
        CPLEX.add_constrs!(m.internalModel.inner, coef, '<', rhs)
    end
end

function t(p)
    m = Model(solver=CplexSolver(CPX_PARAM_SCRIND=0))
    @variable(m, 0 <= x[1:N] <= 1)
    @variable(m, 0 <= Θ <= 1000)
    @objective(m, Max, Θ)
    solve(m)
    coef = rand(C,N)
    gc()
    m1 = memuse()
    tic()
    addconstraints(m, x, Θ, coef, p)
    time = toq()
    gc()
    m2 = memuse()
    println("Memory consumption $(m2-m1) Mb and Time $(time)")
end

println("Vectorized")
t(1)
t(1)
println("Scalar 1")
t(2)
t(2)
println("Scalar 2")
t(3)
t(3)
println("Low-level API")
t(4)
t(4)
```

Vectorized |
The inconsistent performance of the scalar case suggests issues in Julia with type inference. Try using |
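One standard way to look for such inference problems — an assumption on my part, since the exact suggestion isn't shown above — is `@code_warntype`:

```julia
# Assumption: one common diagnostic for type-inference problems.
# Entries reported as `Any` (shown in red in the REPL) indicate that
# Julia could not infer concrete types, e.g. for a non-constant global Θ.
@code_warntype addconstraints(m, x, Θ, coef, 2)
```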
The most important problem here is the big difference in memory consumption; the difference in performance should be a consequence of that. |
We store linear constraints as vectors of sparse affine expressions. For each coefficient we additionally store the corresponding variable. Additionally, there will be two copies of the constraint matrix kept in memory: JuMP's internal copy and the solver's internal copy. If this causes a memory bottleneck for you, you should consider not using JuMP. |
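As a rough illustration of that storage cost (the struct and field names below are illustrative, not JuMP's actual internals):

```julia
# Illustrative sketch of per-row storage, not JuMP's actual internals.
struct SparseAffRow
    coeffs::Vector{Float64}   # one Float64 per nonzero coefficient
    varidx::Vector{Int}       # plus the corresponding variable per coefficient
    lb::Float64
    ub::Float64
end
# For C = 300_000 rows with N = 100 nonzeros each, JuMP's copy alone is
# roughly C*N*(8 + 8) bytes ≈ 458 Mb, and the solver holds a second copy.
```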
Ok, I see that JuMP was not designed for that, but why not have a special function to add constraints without storing any information about them? I'm not the only one with this problem: many of my colleagues have similar problems with memory consumption or performance, and most of them work with Benders decompositions. |
If JuMP has an internal model loaded, then you're free to call |
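Presumably the elided call is `MathProgBase.addconstr!` (an assumption, though it matches the snippet a maintainer posts further down). A hedged sketch, assuming the JuMP 0.x / MathProgBase API used throughout this thread:

```julia
using JuMP, MathProgBase
# Assumes `m` already has an internal model loaded, e.g. after solve(m)
# or JuMP.build(m), and uses the era's Variable field `v.col`.
inner  = internalmodel(m)
varidx = [[v.col for v in x]; Θ.col]       # solver column indices
for i in 1:size(coef, 1)
    # Θ - coef[i,:]'x <= 0  encodes  Θ <= sum(coef[i,j]*x[j] for j)
    MathProgBase.addconstr!(inner, varidx, [-vec(coef[i, :]); 1.0], -Inf, 0.0)
end
```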
I added a flag storeconstr to the JuMP model that enables adding constraints without storing them. I just set this flag to false after creating the model in this code:

```julia
using JuMP, CPLEX

function memuse()
    pid = getpid()
    return round(Int, parse(Int, readstring(`ps -p $pid -o rss=`))/1024)
end

const C = 300000
const N = 100

function addconstraints(m, x, Θ, coef, p)
    m.storeconstr = false
    if p == 1
        @constraint(m, Θ .<= coef*x)
    elseif p == 2
        for i in 1:size(coef,1)
            @constraint(m, Θ <= sum(coef[i,j]*x[j] for j = 1:size(coef,2)))
        end
    elseif p == 3
        @constraint(m, [i = 1:size(coef,1)], Θ <= sum(coef[i,j]*x[j] for j = 1:size(coef,2)))
    else
        rhs = zeros(C)
        coef = hcat(-coef, ones(C))
        CPLEX.add_constrs!(m.internalModel.inner, coef, '<', rhs)
    end
end

function t(p)
    m = Model(solver=CplexSolver(CPX_PARAM_SCRIND=0))
    @variable(m, 0 <= x[1:N] <= 1)
    @variable(m, 0 <= Θ <= 1000)
    @objective(m, Max, Θ)
    solve(m)
    coef = rand(C,N)
    gc()
    m1 = memuse()
    tic()
    addconstraints(m, x, Θ, coef, p)
    time = toq()
    gc()
    m2 = memuse()
    println("Memory consumption $(m2-m1) Mb and Time $(time)")
end

println("Scalar 1")
t(2)
t(2)
println("Low-level API")
t(4)
t(4)
```

This has the following result: memory consumption changes from 2.65x to 1.17x and performance from 3.49x to 1.55x. |
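The patch itself isn't shown in the thread; below is a hypothetical sketch of the code path such a flag would gate (`storeconstr` comes from the comment above, while `add_to_solver!` is an invented name for illustration):

```julia
# Hypothetical sketch only -- not the actual patch.
function addconstraint!(m, c)        # illustrative stand-in for JuMP's internals
    if m.storeconstr                 # the added flag, defaulting to true
        push!(m.linconstr, c)        # normal path: JuMP keeps its own copy
    else
        add_to_solver!(m, c)         # invented name: forward the row straight
    end                              # to the solver without storing it
end
```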
We already have support in principle for keyword arguments within |
Thanks, that is all I'm asking for. |
I don't think this should be added to JuMP; it makes modifications to the JuMP problem too brittle, unless we implement a JuMP index -> MPB index. Ref JuliaOpt/MathProgBase.jl#125 JuliaOpt/MathProgBase.jl#139. In particular, there are problems with things like updating RHS vectors. Users who want this functionality should know the consequences and be forced to do something like:

```julia
varidx = [v.col for v in [ ... variables ... ]]
coef = [ ... coefficients ... ]
lb = -Inf
ub = 1
push!(m.linconstr, JuMP.LinearConstraint(0, lb, ub)) # dummy constraint for JuMP-facing side
MathProgBase.addconstr!(internalmodel(m), varidx, coef, lb, ub)
```
|
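Wrapped in a helper, that pattern might look like the sketch below (`addrow!` is a name made up here; the dummy-constraint trick is exactly the one in the snippet above):

```julia
using JuMP, MathProgBase
# Sketch of a helper around the pattern above; `addrow!` is an illustrative name.
function addrow!(m::Model, vars, coefs, lb, ub)
    varidx = [v.col for v in vars]
    # Dummy row for the JuMP-facing side, as in the snippet above:
    push!(m.linconstr, JuMP.LinearConstraint(0, lb, ub))
    # The real row goes straight to the solver:
    MathProgBase.addconstr!(internalmodel(m), varidx, coefs, lb, ub)
end

# e.g. to add Θ - coef[i,:]'x <= 0:
# addrow!(m, [x; Θ], [-vec(coef[i,:]); 1.0], -Inf, 0.0)
```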
Another stopgap could be:

```julia
function zeroconstraintmatrix!(m)
    warn("You are about to zero the constraint matrix of the JuMP model. Hopefully you know what this means!")
    for con in m.linconstr
        con.terms = 0
    end
end

function t(p, dozero)
    m = Model(solver=CplexSolver(CPX_PARAM_SCRIND=0))
    @variable(m, 0 <= x[1:N] <= 1)
    @variable(m, 0 <= Θ <= 1000)
    @objective(m, Max, Θ)
    solve(m)
    coef = rand(C,N)
    gc()
    m1 = memuse()
    tic()
    addconstraints(m, x, Θ, coef, p)
    dozero && zeroconstraintmatrix!(m)
    time = toq()
    gc()
    m2 = memuse()
    println("Memory consumption $(m2-m1) Mb and Time $(time)")
end

t(2, true)
# Memory consumption 532.87109375 Mb and Time 6.88804248
t(2, false)
# Memory consumption 1048.49609375 Mb and Time 6.702602572
t(4, true)
# Memory consumption 311.9140625 Mb and Time 1.514756458
t(4, false)
# Memory consumption 355.05078125 Mb and Time 1.491463572
```
|
I think memory consumption and performance should be a key factor for JuMP. Many of my colleagues, some of whom I persuaded to use Julia and JuMP, come from a C/C++ background and use Julia because of its simplicity and performance. However, many of them are having problems with memory consumption and performance. JuMP is a wonderful package, but to establish itself as a package for solving all kinds of mathematical optimization problems, it should be easy for beginners (as it is) and possible to be tweaked by experts. |
@odow has a point. If you're trying to do something that JuMP wasn't designed to do (i.e., not store a copy of the constraints), then your code should look ugly on principle, unless you create your own pretty layer on top. You can also do

We've never claimed that JuMP has similar memory characteristics to hand-tuned solver-specific matrix generators. I think having to keep around two copies of the constraint matrix in memory is a fair price to pay for the added generality. I'm also confused about how |
I think JuMP created an awesome environment for mathematical programming: pretty, easy to use, efficient for many situations; we all know the list is huge! Writing problems directly in matrix form should be faster and lighter. My point is that we could have something in between: not as fast as writing matrices (which are terrible for writing LPs and maintaining code), nor as slow as a few current JuMP bottlenecks for algorithms that solve hundreds of thousands of LPs. This is at least the third time some bottleneck appears while someone is implementing SDDP; myself and @blegat had some other problems, with deleting constraints I think... I am aware that the current focus is to finish JuMP 1.0 and modifying the design is not an option, and I agree with that. However, adding some keyword arguments that could improve algorithm performance at the cost of breaking some JuMP functionality would be just fine, in my opinion, as long as the user is aware of that. The pros could be way greater than the cons... Maybe we could think of some way to make JuMP as awesome for solving millions of similar LPs as it already is for MIPs! |
By the way, I liked @odow's idea of having something like JuMP index -> MPB index. This could be a way to go... |
I appreciate the enthusiasm for wanting JuMP to do everything that you want it to do and quickly, but I think the discussion will be more productive if we focus on the technical issues here. There are two separate things that are being confused at the moment.
|
Hello everybody,

```julia
@variable(model, w[1:1000,1:1000,1:620], Bin)
```

I am not that experienced in JuMP and modeling. Can you please help? |
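For scale, `w[1:1000,1:1000,1:620]` declares 620 million binary variables before a single constraint exists; a back-of-the-envelope estimate (the per-variable byte count below is a rough assumption) shows why memory runs out:

```julia
# Rough size estimate for @variable(model, w[1:1000,1:1000,1:620], Bin).
nvars = 1000 * 1000 * 620            # 620_000_000 binary variables
bytes_per_var = 16                   # rough assumption: bounds/type bookkeeping
                                     # across JuMP and the solver
gib = nvars * bytes_per_var / 2^30
println("≈ $gib GiB before any constraint is added")   # ≈ 9.2 GiB
```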
The new code presented at JuMP-dev 2019 (https://www.youtube.com/watch?v=MLunP5cdRBI):

########### Vectorized - WithoutDirect ########### |
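For readers arriving from that talk: the direct mode it describes avoids JuMP's cached copy of the problem entirely. A minimal sketch with post-0.19 JuMP (the `CPLEX.Optimizer` constructor is how current CPLEX.jl exposes the solver; details may differ by version):

```julia
using JuMP, CPLEX
# Direct mode: rows go straight to the solver; JuMP keeps no second copy.
model = direct_model(CPLEX.Optimizer())
@variable(model, x[1:100] >= 0)
@variable(model, Θ >= 0)
@objective(model, Max, Θ)
coef = rand(1000, 100)
@constraint(model, [i = 1:size(coef, 1)],
            Θ <= sum(coef[i, j] * x[j] for j in 1:size(coef, 2)))
```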
I'm having some issues with memory consumption in JuMP. I have a problem that has too many constraints, and the JuMP structures behind the @constraint macro are consuming too much memory.
Example:
Memory consumption 1022 Mb and Time 11.076951813
Changing @constraint to CPLEX.add_constrs:
Memory consumption 447 Mb and Time 1.975631208
With JuMP this constraint consumes 2.28x more memory and is 5.6x slower. Maybe JuMP should have some macro to add constraints directly to the model without storing any information about them.