avoid duplicated data in parallel computations #2209

Open
tjhei opened this Issue Feb 19, 2016 · 2 comments


@tjhei
Member
tjhei commented Feb 19, 2016

When using a distributed::Triangulation, large amounts of memory are wasted, especially with a large coarse mesh. The problem is worst with an MPI-only approach, because every rank on a node then stores its own (redundant) copy of the same data.
There are several things to do here:

  • First, investigate the current memory consumption (p4est, Triangulation, DoFHandler, etc.).
  • Use MPI_Win_allocate_shared to allocate the memory needed for the mesh only once per node (see the sketch after this list).
  • Do not store dof_indices or any other data for artificial cells.
  • Investigate using MPI_Win_allocate_shared inside p4est as well (see the partial implementation by Toby: https://github.com/tisaac/p4est/tree/feature/shmem-array)
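To make the once-per-node allocation concrete, here is a minimal sketch using MPI-3 shared-memory windows. The buffer size and the `memset` fill are placeholders for the actual coarse-mesh data; the pattern is to split the world communicator into node-local communicators, let rank 0 of each node allocate the full window, and have the remaining ranks query its base pointer:

```cpp
#include <mpi.h>
#include <cstring>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);

  // One communicator per shared-memory node.
  MPI_Comm node_comm;
  MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED,
                      0, MPI_INFO_NULL, &node_comm);

  int node_rank;
  MPI_Comm_rank(node_comm, &node_rank);

  // Placeholder size for the coarse-mesh data that is currently
  // duplicated on every rank.
  const MPI_Aint mesh_bytes = 1024 * 1024;

  // Only node-rank 0 asks for the full window; everyone else asks for
  // zero bytes, so the data exists exactly once per node.
  char   *base = nullptr;
  MPI_Win win;
  MPI_Win_allocate_shared(node_rank == 0 ? mesh_bytes : 0,
                          /*disp_unit=*/1, MPI_INFO_NULL, node_comm,
                          &base, &win);

  // The other ranks look up the base address of rank 0's segment.
  if (node_rank != 0)
    {
      MPI_Aint size;
      int      disp_unit;
      MPI_Win_shared_query(win, 0, &size, &disp_unit,
                           reinterpret_cast<void **>(&base));
    }

  // Fences separate the fill phase from the read phase.
  MPI_Win_fence(0, win);
  if (node_rank == 0)
    std::memset(base, 0, mesh_bytes); // stand-in for building the mesh data
  MPI_Win_fence(0, win);

  // ... all ranks on this node now read the same data through 'base' ...

  MPI_Win_free(&win);
  MPI_Comm_free(&node_comm);
  MPI_Finalize();
  return 0;
}
```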
@tjhei tjhei added the Enhancement label Feb 19, 2016
@bangerth
Member

We did part of this in the distributed paper and found that the number of artificial cells is actually quite small. This may not be the case in situations where the coarse mesh has tens of thousands of cells, but that's probably a rare case.

Not storing indices for artificial cells is also something @kronbichler suggested in #2000 and is a no-brainer.
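For illustration, this is roughly what the guard against artificial cells looks like on the usage side today; the point of #2000 would be to push it down into the storage itself, so the indices are never allocated in the first place. The helper below is hypothetical (its name and gather-into-a-vector behavior are made up for this sketch):

```cpp
#include <deal.II/base/types.h>
#include <deal.II/dofs/dof_handler.h>
#include <vector>

// Hypothetical helper: gather the global DoF indices of all
// non-artificial active cells. Artificial cells hold no valid DoF data
// on this rank (calling get_dof_indices() on them is an error), so the
// storage behind them could simply never be allocated.
template <int dim>
std::vector<dealii::types::global_dof_index>
collect_owned_and_ghost_dof_indices(const dealii::DoFHandler<dim> &dof_handler)
{
  std::vector<dealii::types::global_dof_index> all_indices;
  std::vector<dealii::types::global_dof_index> cell_indices(
    dof_handler.get_fe().dofs_per_cell);

  for (const auto &cell : dof_handler.active_cell_iterators())
    {
      if (cell->is_artificial())
        continue; // nothing meaningful is (or should be) stored here

      cell->get_dof_indices(cell_indices);
      all_indices.insert(all_indices.end(),
                         cell_indices.begin(), cell_indices.end());
    }
  return all_indices;
}
```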

@tjhei
Member
tjhei commented Feb 19, 2016

the number of artificial cells is actually quite small

Yes, except when your coarse mesh is very large. Martin has cases where you end up with millions of artificial coarse cells.
