Parallel Solves #119

Closed

vtjeng opened this issue Mar 22, 2018 · 1 comment

vtjeng commented Mar 22, 2018

I have a JuMP (mixed-integer) model where I'm trying to determine the upper and lower bounds on many variables. Each individual solve is very quick, but since I have many variables (~1,000), the overall solve takes a long time (~5s).

One thing I observed is that Julia often seems to be using only one core at a time, and I was hoping that parallelization might reduce solve times significantly. However, I'm running into an issue that seems to stem from there being only a single Gurobi environment created.

Here's a snippet of code that doesn't work (map runs fine, but pmap does not):

addprocs(2)

using JuMP
using Gurobi

@everywhere begin
    env = Gurobi.Env()
    m = JuMP.Model(solver = Gurobi.GurobiSolver(env, OutputFlag=0))

    @JuMP.variable(m, 0 <= x[i=1:100] <= i)

    function upperbound_mip(x)
        @JuMP.objective(x.m, Max, x)
        JuMP.solve(x.m)
        return JuMP.getobjectivevalue(x.m)
    end
end

@time map(upperbound_mip, x)
@time pmap(upperbound_mip, x)

Output

Academic license - for non-commercial use only
	From worker 3:	Academic license - for non-commercial use only
	From worker 2:	Academic license - for non-commercial use only 
 2.317813 seconds (1.72 M allocations: 90.407 MiB, 1.04% gc time)
ERROR: LoadError: On worker 2:
AssertionError: env.ptr_env != C_NULL
get_error_msg at /home/vtjeng/.julia/v0.6/Gurobi/src/grb_env.jl:38
Type at /home/vtjeng/.julia/v0.6/Gurobi/src/grb_env.jl:50 [inlined]
get_intattr at /home/vtjeng/.julia/v0.6/Gurobi/src/grb_attrs.jl:16
setvarLB! at /home/vtjeng/.julia/v0.6/Gurobi/src/GurobiSolverInterface.jl:187
#build#119 at /home/vtjeng/.julia/v0.6/JuMP/src/solvers.jl:338
#build at ./<missing>:0
#solve#116 at /home/vtjeng/.julia/v0.6/JuMP/src/solvers.jl:168
upperbound_mip at /home/vtjeng/Dropbox/Documents/MIT/UROP/adversarial_examples/pset/6_using_MIPVerify/debug/feature-parallel-solves.jl:14
#106 at ./distributed/process_messages.jl:268 [inlined]
run_work_thunk at ./distributed/process_messages.jl:56
macro expansion at ./distributed/process_messages.jl:268 [inlined]
#105 at ./event.jl:73
Stacktrace:
 [1] #571 at ./asyncmap.jl:178 [inlined]
 [2] foreach(::Base.##571#573, ::Array{Any,1}) at ./abstractarray.jl:1733
 [3] maptwice(::Function, ::Channel{Any}, ::Array{Any,1}, ::Array{JuMP.Variable,1}, ::Vararg{Array{JuMP.Variable,1},N} where N) at ./asyncmap.jl:178
 [4] wrap_n_exec_twice(::Channel{Any}, ::Array{Any,1}, ::Base.Distributed.##204#207{WorkerPool}, ::Function, ::Array{JuMP.Variable,1}, ::Vararg{Array{JuMP.Variable,1},N} where N) at ./asyncmap.jl:154
 [5] #async_usemap#556(::Function, ::Void, ::Function, ::Base.Distributed.##188#190, ::Array{JuMP.Variable,1}, ::Vararg{Array{JuMP.Variable,1},N} where N) at ./asyncmap.jl:103
 [6] (::Base.#kw##async_usemap)(::Array{Any,1}, ::Base.#async_usemap, ::Function, ::Array{JuMP.Variable,1}, ::Vararg{Array{JuMP.Variable,1},N} where N) at ./<missing>:0
 [7] (::Base.#kw##asyncmap)(::Array{Any,1}, ::Base.#asyncmap, ::Function, ::Array{JuMP.Variable,1}) at ./<missing>:0
 [8] #pmap#203(::Bool, ::Int64, ::Void, ::Array{Any,1}, ::Void, ::Function, ::WorkerPool, ::Function, ::Array{JuMP.Variable,1}) at ./distributed/pmap.jl:126
 [9] pmap(::WorkerPool, ::Function, ::Array{JuMP.Variable,1}) at ./distributed/pmap.jl:101
 [10] #pmap#213(::Array{Any,1}, ::Function, ::Function, ::Array{JuMP.Variable,1}) at ./distributed/pmap.jl:156
 [11] pmap(::Function, ::Array{JuMP.Variable,1}) at ./distributed/pmap.jl:156
 [12] include_from_node1(::String) at ./loading.jl:576
 [13] include(::String) at ./sysimg.jl:14
 [14] process_options(::Base.JLOptions) at ./client.jl:305
 [15] _start() at ./client.jl:371

One thing I've thought about trying is to create one Gurobi environment per worker process, but (1) I don't know whether that would help, and (2) I'm not sure I'd be doing it correctly. For concreteness, the kind of thing I have in mind is sketched below.
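
An untested sketch of that per-worker idea (the function name upperbound_mip_local is just for illustration). Here the environment and the model are rebuilt on every process, so pmap only ever ships an integer index:

addprocs(2)

@everywhere using JuMP, Gurobi

# Build one Gurobi environment and one copy of the model on every process,
# so nothing solver-related ever crosses a process boundary.
@everywhere begin
    env = Gurobi.Env()
    m = JuMP.Model(solver = Gurobi.GurobiSolver(env, OutputFlag=0))

    @JuMP.variable(m, 0 <= x[i=1:100] <= i)

    # Maximize the i-th variable of the process-local model.
    function upperbound_mip_local(i)
        @JuMP.objective(m, Max, x[i])
        JuMP.solve(m)
        return JuMP.getobjectivevalue(m)
    end
end

# Only the integer index is serialized to the workers.
pmap(upperbound_mip_local, 1:100)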


odow commented Mar 22, 2018

One option is to use a CachingPool: https://docs.julialang.org/en/stable/stdlib/parallel/#Base.Distributed.CachingPool

julia> addprocs(2)

julia> @everywhere using JuMP, Gurobi

julia> @everywhere function upperbound_mip(x)
           @JuMP.objective(x.m, Max, x)
           JuMP.solve(x.m)
           return JuMP.getobjectivevalue(x.m)
       end

julia> m = JuMP.Model(solver = Gurobi.GurobiSolver(OutputFlag=0));

julia> @JuMP.variable(m, 0 <= x[i=1:100] <= i);

julia> let x=x
       wp = CachingPool(workers())
       pmap(wp, (i)->upperbound_mip(x[i]), 1:100)
       end
        From worker 2:  Academic license - for non-commercial use only
        From worker 3:  Academic license - for non-commercial use only
100-element Array{Float64,1}:
   1.0
   2.0
   3.0
   4.0
   5.0
   ⋮
  95.0
  96.0
  97.0
  98.0
  99.0
 100.0
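
(Why this works, as far as I can tell: the CachingPool caches the mapped closure, together with the x and model it captures, on each worker, so they are serialized once per worker rather than once per task. And because no Gurobi.Env is constructed up front, each worker creates its own Gurobi environment the first time it solves, which is why the license line prints once per worker.)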

Closing because this is not an issue with Gurobi.jl. In the future, you can ask usage questions like this on Discourse.

odow closed this as completed Mar 22, 2018