It turns out that calling MPI_Init from main rather than from the TPS::Tps object is not as simple as it sounds.
A key issue is that mfem::MPI_Session is a singleton class hard-coded to call MPI_Init.
That is, if we want parla to handle the initialization of MPI, then we must remove mfem::MPI_Session from everywhere in the tps code (note: mfem has already deprecated this class).
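For reference, here is a minimal sketch of what moving the initialization into main could look like. It assumes a hypothetical TPS::Tps constructor that accepts an already-initialized communicator instead of constructing an mfem::MPI_Session internally (that constructor does not exist today), and the header name is a placeholder:

```cpp
// Sketch only: assumes TPS::Tps no longer calls MPI_Init itself
// (today that call is buried inside mfem::MPI_Session).
#include <mpi.h>
#include "tps.hpp"  // placeholder header name

int main(int argc, char *argv[]) {
  // MPI is initialized (and finalized) by main, not by the solver object,
  // so an external runtime such as parla could own this call instead.
  MPI_Init(&argc, &argv);

  {
    // Hypothetical constructor taking the pre-initialized communicator.
    TPS::Tps tps(MPI_COMM_WORLD);
    // tps.parseCommandLineArgs(argc, argv); ... run the solver ...
  }  // solver destroyed before MPI_Finalize

  MPI_Finalize();
  return 0;
}
```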
Before we do that, I'd like to better understand the use of tps::MPI_Groups: are we actually solving different physics on different sets of processors, or are we simply using a single communicator for everything? A sketch of the first pattern follows below.
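To make the distinction concrete, this is the kind of communicator split that tps::MPI_Groups would imply if different physics really do run on disjoint process sets. The half-and-half color assignment and the "flow"/"EM" labels are purely illustrative, not tps's actual grouping scheme:

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char *argv[]) {
  MPI_Init(&argc, &argv);

  int world_rank, world_size;
  MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
  MPI_Comm_size(MPI_COMM_WORLD, &world_size);

  // Illustrative split: the first half of the ranks solve "flow",
  // the second half solve "EM"; each physics gets its own communicator.
  const int color = (world_rank < world_size / 2) ? 0 : 1;
  MPI_Comm physics_comm;
  MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &physics_comm);

  int phys_rank;
  MPI_Comm_rank(physics_comm, &phys_rank);
  std::printf("world rank %d -> physics group %d, group rank %d\n",
              world_rank, color, phys_rank);

  MPI_Comm_free(&physics_comm);
  MPI_Finalize();
  return 0;
}
```

If, on the other hand, everything already runs on MPI_COMM_WORLD, removing mfem::MPI_Session becomes much simpler because no group bookkeeping depends on who initialized MPI.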
uvilla changed the title from "Call MPI_Init from main rather then TPS::Tps" to "Call MPI_Init from main rather than TPS::Tps" on Jun 30, 2023