Find the right "default" distributed setup #1568

Open
ChrisRackauckas opened this issue Jan 11, 2022 · 2 comments

@ChrisRackauckas (Member) commented:

OrdinaryDiffEq.jl works with distributed arrays because it works on the AbstractArray interface, but as mentioned in the StackOverflow response, the real issue is which library to pair it with. The current top contender is PartitionedArrays.jl, but the demo there is stuck right now: fverdugo/PartitionedArrays.jl#52 . We need to find which array type can work, and how to match it to sparse linear solvers (Elemental.jl). Maybe DistributedArrays.jl is fine?
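
As a sanity check on the DistributedArrays.jl question, a minimal smoke test could look like the sketch below (untested; the worker count and the trivial broadcast-only RHS are placeholders, and whether the adaptive stepping internals all map onto DArray-supported operations is exactly what needs verifying):

using Distributed
addprocs(4)                                   # any worker count works here
@everywhere using DistributedArrays, OrdinaryDiffEq

u0d = distribute(ones(32, 32, 2))             # DArray spread across the workers
f!(du, u, p, t) = (du .= -0.5 .* u; nothing)  # purely broadcast-based RHS
prob_smoke = ODEProblem(f!, u0d, (0.0, 1.0))
solve(prob_smoke, Tsit5(), save_everystep = false)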

This demo-ish code is a good starting point that can be modified to test DistributedArrays.jl and all of that:

using OrdinaryDiffEq, LinearAlgebra, SparseArrays, BenchmarkTools, LinearSolve

const N = 32
const xyd_brusselator = range(0,stop=1,length=N)
brusselator_f(x, y, t) = (((x-0.3)^2 + (y-0.6)^2) <= 0.1^2) * (t >= 1.1) * 5.
limit(a, N) = a == N+1 ? 1 : a == 0 ? N : a
# Update kernel for the u species; the limit function applies periodic boundaries
kernel_u! = let N=N, xyd=xyd_brusselator, dx=step(xyd_brusselator)
  @inline function (du, u, A, B, α, II, I, t)
    i, j = Tuple(I)
    x = xyd[I[1]]
    y = xyd[I[2]]
    ip1 = limit(i+1, N); im1 = limit(i-1, N)
    jp1 = limit(j+1, N); jm1 = limit(j-1, N)
    du[II[i,j,1]] = α*(u[II[im1,j,1]] + u[II[ip1,j,1]] + u[II[i,jp1,1]] + u[II[i,jm1,1]] - 4u[II[i,j,1]]) +
    B + u[II[i,j,1]]^2*u[II[i,j,2]] - (A + 1)*u[II[i,j,1]] + brusselator_f(x, y, t)
  end
end
# Update kernel for the v species
kernel_v! = let N=N, xyd=xyd_brusselator, dx=step(xyd_brusselator)
  @inline function (du, u, A, B, α, II, I, t)
    i, j = Tuple(I)
    ip1 = limit(i+1, N)
    im1 = limit(i-1, N)
    jp1 = limit(j+1, N)
    jm1 = limit(j-1, N)
    du[II[i,j,2]] = α*(u[II[im1,j,2]] + u[II[ip1,j,2]] + u[II[i,jp1,2]] + u[II[i,jm1,2]] - 4u[II[i,j,2]]) +
    A*u[II[i,j,1]] - u[II[i,j,1]]^2*u[II[i,j,2]]
  end
end
# Full 2D Brusselator RHS: broadcasts the two kernels over all grid points
brusselator_2d = let N=N, xyd=xyd_brusselator, dx=step(xyd_brusselator)
  function (du, u, p, t)
    @inbounds begin
      ii1 = N^2
      ii2 = ii1+N^2
      ii3 = ii2+2(N^2)
      A = p[1]
      B = p[2]
      α = p[3]/dx^2
      II = LinearIndices((N, N, 2))
      kernel_u!.(Ref(du), Ref(u), A, B, α, Ref(II), CartesianIndices((N, N)), t)
      kernel_v!.(Ref(du), Ref(u), A, B, α, Ref(II), CartesianIndices((N, N)), t)
      return nothing
    end
  end
end
p = (3.4, 1., 10., step(xyd_brusselator))

# Initial condition on the N×N grid
function init_brusselator_2d(xyd)
  N = length(xyd)
  u = zeros(N, N, 2)
  for I in CartesianIndices((N, N))
    x = xyd[I[1]]
    y = xyd[I[2]]
    u[I,1] = 22*(y*(1-y))^(3/2)
    u[I,2] = 27*(x*(1-x))^(3/2)
  end
  u
end
u0 = init_brusselator_2d(xyd_brusselator)
prob_ode_brusselator_2d = ODEProblem(brusselator_2d,u0,(0.,11.5),p)

du = similar(u0)
brusselator_2d(du, u0, p, 0.0)
du[34] # 802.9807693762164
du[1058] # 985.3120721709204
du[2000] # -403.5817880634729
du[end] # 1431.1460373522068
du[521] # -323.1677459142322

du2 = similar(u0)
brusselator_2d(du2, u0, p, 1.3)
du2[34] # 802.9807693762164
du2[1058] # 985.3120721709204
du2[2000] # -403.5817880634729
du2[end] # 1431.1460373522068
du2[521] # -318.1677459142322

using Symbolics, PartitionedArrays, SparseDiffTools
du0 = copy(u0)
# Detect the Jacobian sparsity pattern symbolically and compute a coloring for sparse differentiation
jac_sparsity = float.(Symbolics.jacobian_sparsity((du,u)->brusselator_2d(du,u,p,0.0),du0,u0))
colorvec = matrix_colors(jac_sparsity)

# From https://github.com/fverdugo/PartitionedArrays.jl/blob/v0.2.8/test/test_fdm.jl#L93
# A = PSparseMatrix(I,J,V,rows,cols;ids=:local)

II,J,V = findnz(jac_sparsity)
jac_sparsity_distributed = PartitionedArrays.PSparseMatrix(II,J,V,jac_sparsity.rowval,jac_sparsity.colptr)
# MethodError: no method matching PSparseMatrix(::Vector{Int64}, ::Vector{Int64}, ::Vector{Float64}, ::Vector{Int64}, ::Vector{Int64})

f = ODEFunction(brusselator_2d;jac_prototype=jac_sparsity,colorvec = colorvec)
parf = ODEFunction(brusselator_2d;jac_prototype=jac_sparsity_distributed,colorvec = colorvec)

prob_ode_brusselator_2d = ODEProblem(brusselator_2d,u0,(0.0,11.5),p,tstops=[1.1])
prob_ode_brusselator_2d_sparse = ODEProblem(f,u0,(0.0,11.5),p,tstops=[1.1])

nparts = 4
pu0 = PartitionedArrays.PVector(u0,nparts) # MethodError: no method matching PVector(::Array{Float64, 3}, ::Int64)

prob_ode_brusselator_2d_parallel = ODEProblem(brusselator_2d,pu0,(0.0,11.5),p,tstops=[1.1])
prob_ode_brusselator_2d_parallelsparse = ODEProblem(parf,pu0,(0.0,11.5),p,tstops=[1.1])

With solver codes:

@time solve(prob_ode_brusselator_2d_parallel,Rosenbrock23(),save_everystep=false);
@time solve(prob_ode_brusselator_2d_parallelsparse,Rosenbrock23(),save_everystep=false);

using AlgebraicMultigrid
# precs function: rebuild a Ruge-Stuben AMG preconditioner whenever W is refreshed
function algebraicmultigrid(W,du,u,p,t,newW,Plprev,Prprev,solverdata)
  if newW === nothing || newW
    Pl = aspreconditioner(ruge_stuben(convert(AbstractMatrix,W)))
  else
    Pl = Plprev
  end
  Pl,nothing
end

# Required due to a bug in Krylov.jl: https://github.com/JuliaSmoothOptimizers/Krylov.jl/pull/477
Base.eltype(::AlgebraicMultigrid.Preconditioner) = Float64

@time solve(prob_ode_brusselator_2d_parallelsparse,Rosenbrock23(linsolve=KrylovJL_GMRES()),save_everystep=false);
@time solve(prob_ode_brusselator_2d_parallelsparse,Rosenbrock23(linsolve=KrylovJL_GMRES(),precs=algebraicmultigrid,concrete_jac=true),save_everystep=false);
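
For the DistributedArrays.jl direction, the per-index kernels above presumably need to be rewritten against localpart, since writing element-by-element into a DArray from the calling process is not the intended access pattern. Below is a rough, untested skeleton of that structure; local_brusselator! and brusselator_2d_distributed! are hypothetical names, the local update is a placeholder, and the halo exchange between neighboring blocks plus the distributed Jacobian handling are the actual work:

using Distributed
addprocs(4)                  # skip if workers were already added
@everywhere using DistributedArrays

# Each worker updates only its own block of du via localpart.
@everywhere function local_brusselator!(du, u, p, t)
    ldu = localpart(du)
    lu  = localpart(u)
    # apply kernel_u!/kernel_v! on this block here, pulling the
    # boundary rows/columns from neighboring blocks (halo exchange)
    ldu .= lu                # placeholder so the skeleton is at least executable
    return nothing
end

# Caller-side RHS: fan the local updates out to all workers.
function brusselator_2d_distributed!(du, u, p, t)
    @sync for w in workers()
        @async remotecall_wait(local_brusselator!, w, du, u, p, t)
    end
    return nothing
end

u0d = distribute(u0)
prob_dist = ODEProblem(brusselator_2d_distributed!, u0d, (0.0, 11.5), p, tstops = [1.1])
# @time solve(prob_dist, Rosenbrock23(linsolve = KrylovJL_GMRES()), save_everystep = false)
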
@ChrisRackauckas (Member, Author) commented:

@Wimmerer could you try to cook up a DistributedArrays version?

@rayegun commented Jan 11, 2022:

Yep, I'll work on it in the next few days.
