Parallel simulations using MPI.jl #141
base: master
Conversation
@weymouth @b-fg @TzuYaoHuang My initial idea is to incorporate this into the main solver as an extension and use the custom type
The flow will be constructed using We could also bind the
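For the extension route mentioned above, MPI would become a weak dependency of WaterLily, so that `ext/WaterLilyMPIExt.jl` is loaded only when the user also loads MPI.jl. A sketch of the `Project.toml` entries, assuming the standard Julia package-extension mechanism (treat the snippet, including the UUID, as illustrative):

```toml
# Illustrative Project.toml entries: MPI as a weak dependency, so the
# extension module is compiled only when MPI.jl is present in the session.
[weakdeps]
MPI = "da04e1cc-30fd-572f-bb4f-1f8673147195"

[extensions]
WaterLilyMPIExt = "MPI"
```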
Review comment on `ParallelVTK.jl` (outdated):
```julia
function write!(w, sim::Simulation)
    k = w.count[1]; N = size(inside(sim.flow.p))
    xs = Tuple(ifelse(x==0,1,x+3):ifelse(x==0,n+4,n+x+6) for (n,x) in zip(N,grid_loc()))
    extents = MPI.Allgather(xs, mpi_grid().comm)
    # rest of write! omitted in the review excerpt
```
This can be done upon initializing the writer, to avoid a global operation at every write.
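Following this suggestion, the gathered extents could be computed once in the writer constructor and stored. A minimal sketch, where `local_extent` and `PVTKWriter` are hypothetical names and the index arithmetic mirrors the reviewed snippet:

```julia
# Hypothetical sketch: gather the per-rank extents once, at construction,
# instead of calling MPI.Allgather inside every write!.
# The index arithmetic mirrors the reviewed snippet (double ghost cells).
local_extent(N, offsets) =
    Tuple(ifelse(x==0, 1, x+3):ifelse(x==0, n+4, n+x+6) for (n, x) in zip(N, offsets))

struct PVTKWriter{T}
    count::Vector{Int}
    extents::T          # gathered once, reused by every write!
end

# In the constructor one would do (requires an initialized MPI grid):
#   xs = local_extent(size(inside(sim.flow.p)), grid_loc())
#   PVTKWriter([0], MPI.Allgather(xs, mpi_grid().comm))
```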
```diff
@@ -1,11 +1,9 @@
-#mpiexecjl --project= -n 4 julia TwoD_CircleMPI.jl
+#mpiexecjl --project=. -n 2 julia TwoD_CircleMPI.jl
```
Just FYI, `--project=.` and plain `--project` are equivalent.
This pull request is a work in progress.
I open it now to discuss how we can add parallel capabilities to WaterLily efficiently, keeping in mind that ultimately we want to be able to run multi-CPU/GPU simulations.
I have run most of the `examples/TwoD` files with the double ghost cells again, and everything works in serial. There are a lot of files changed/added in this pull request, so I will briefly describe the changes made.
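Since the double ghost cells touch most of the indexing utilities, a minimal sketch of the core change may help. Assuming `inside(a)` currently trims one cell per side of the array, the double-ghost version must trim two; the `buff` keyword below is a hypothetical name for the ghost-layer width:

```julia
# Sketch: `inside` returns the CartesianIndices of the interior of `a`.
# With a single ghost layer it trims 1 cell per side; with double ghost
# cells it must trim 2. `buff` is a hypothetical keyword for that width.
inside(a::AbstractArray; buff=2) =
    CartesianIndices(map(ax -> first(ax)+buff:last(ax)-buff, axes(a)))
```

For an 8×8 array, `inside(a)` then covers `CartesianIndices((3:6, 3:6))`, whereas the single-ghost variant (`buff=1`) covers `CartesianIndices((2:7, 2:7))`.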
Changed files

- `src/WaterLily.jl`: enables passing the type of the Poisson solver to the simulation (mainly to simplify my testing); preliminary MPI extension (not used, to be discussed).
- `src/Flow.jl`: implements double ghost cells and removes the special QUICK/CD scheme on the boundaries, as it is no longer needed.
- `src/MultiLevelPoisson.jl`: implements downsampling for double-ghost arrays and changes all utility functions accordingly. Explicitly defines the dot-product functions so that they can be overloaded with MPI versions later on. Also changes the `solver!` function, as the `PoissonMPI.jl` test did not converge properly with the `Linfty` criterion.
- `src/Poisson.jl`: adds a `perBC!` call in Jacobi (not needed, I think) and adjusts the solver.
- `src/util.jl`: adjusts all the `inside` functions and `loc` to account for the double ghost cells. Adjusts the `BC!`, `perBC!` and `exitBC!` functions for the double ghost cells.

New files
- `WaterLilyMPI.jl`: contains all the function overloads needed to perform parallel WaterLily simulations. Defines an `MPIGrid` type that stores information about the decomposition (`global` for now) and the `mpi_swap` function that performs the message passing, together with some MPI utilities.
- `MPIArray.jl`: a custom array type that also allocates send and receive buffers, to avoid allocating them at every `mpi_swap` call. This is an idea for the final implementation and has not been tested yet.
- `FlowSolverMPI.jl`: tests for some critical parts of the flow solver, from `sdf` measures to `sim_step`. Use with `vis_mpiwaterlily.jl` to plot the results on the different ranks.
- `PoissonMPI.jl`: parallel Poisson solver test on an analytical solution. Use with `vis_mpiwaterlily.jl` to plot the results on the different ranks.
- `diffusion_2D_mpi.jl`: initial test of MPI functions; deprecated.
- `vis_diffusion.jl`: use to visualize the results of `diffusion_2D_mpi.jl`; deprecated.
- `ext/WaterLilyMPIExt.jl`: my initial attempt at adding MPI as an extension, not currently used.
- `test/poisson.jl`: a simple Poisson test, will be removed.
- `test/test_mpi.jl`: initial MPI test; should be changed to use this instead.

The things that remain to do:
- `AllReduce` in `Poisson.residuals!`
- `@views()` for the send-receive buffers. This could be avoided if we allocate the send and receive buffers together with the arrays, using something similar to what is in the file `MPIArray.jl`.
- `VTK` extension to enable the writing of parallel files.

Some of the results from `FlowSolverMPI.jl`:
- basic rank and sdf check
- zeroth kernel moment vector with and without halos
- full `sim_step` check
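As a companion to these checks, here is a hedged sketch of what an `mpi_swap`-style halo exchange with preallocated buffers (the `MPIArray.jl` idea) could look like along one dimension of a 2D array. All names (`HaloBuffers`, `halo_swap!`) are hypothetical, and an initialized MPI environment with known neighbor ranks is assumed; the PR's actual `mpi_swap` operates on WaterLily's flow arrays in every dimension.

```julia
using MPI

# Hypothetical sketch of a halo exchange along the second dimension of a
# 2D array with double ghost cells (two layers per side). The buffers are
# allocated once and reused, as proposed in MPIArray.jl.
struct HaloBuffers{T}
    send::NTuple{2,Vector{T}}   # [1] = to lower neighbor, [2] = to upper
    recv::NTuple{2,Vector{T}}
end
HaloBuffers{T}(n) where {T} =
    HaloBuffers{T}((zeros(T, n), zeros(T, n)), (zeros(T, n), zeros(T, n)))

function halo_swap!(a::Matrix{T}, buf::HaloBuffers{T}, comm, lower, upper) where {T}
    # pack the two interior layers adjacent to each ghost region
    buf.send[1] .= vec(@view a[:, 3:4])
    buf.send[2] .= vec(@view a[:, end-3:end-2])
    reqs = [MPI.Isend(buf.send[1], comm; dest=lower, tag=0),
            MPI.Isend(buf.send[2], comm; dest=upper, tag=1),
            MPI.Irecv!(buf.recv[1], comm; source=lower, tag=1),
            MPI.Irecv!(buf.recv[2], comm; source=upper, tag=0)]
    MPI.Waitall(reqs)
    # unpack into the two ghost layers on each side
    m = size(a, 1)
    a[:, 1:2]       .= reshape(buf.recv[1], m, 2)
    a[:, end-1:end] .= reshape(buf.recv[2], m, 2)
    return a
end
```

At a physical domain boundary the neighbor rank would be `MPI.PROC_NULL`, for which the sends and receives become no-ops, so the same code path can be used on every rank.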