
Need MPI_THREAD_MULTIPLE for backward compatibility #947

Closed
eschnett opened this issue Oct 11, 2013 · 6 comments

@eschnett
Contributor

It seems I need either MPI_THREAD_MULTIPLE or MPI_THREAD_SERIALIZED for backward compatibility, so that I can gradually add HPX support to Cactus. I am using the MPI parcelport, and I tried simply calling some MPI functions. This often works, but I get strange hangs that could be explained by OpenMPI not being thread safe.

@ghost ghost assigned sithhell Oct 11, 2013
@hkaiser
Member

hkaiser commented Oct 11, 2013

What do you mean by that? Do we need to add a preprocessor constant, do we need to initialize MPI in a particular way, or what else do you want us to do?

@eschnett
Contributor Author

By default, if MPI is initialized via MPI_Init, MPI routines can be called only from one of the threads. The alternative routine MPI_Init_thread accepts several "mode" selectors, including MPI_THREAD_MULTIPLE and MPI_THREAD_SERIALIZED.

MPI_THREAD_MULTIPLE allows all threads to make MPI calls. Unfortunately, some implementations (e.g. OpenMPI) do not seem to fully support this yet.

MPI_THREAD_SERIALIZED also allows all threads to make MPI calls, but requires that the application ensures that the calls are serialized, i.e. that only one thread at a time makes MPI calls. Using this would probably require (maybe global) state changes between "HPX uses MPI" and "application uses MPI".

(I just see that MPI_THREAD_SINGLE is the default, which states that the application is single-threaded. With HPX, you probably should use at least MPI_THREAD_FUNNELED instead.)
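
For reference, a minimal sketch (independent of HPX or Cactus) of how MPI_Init_thread is typically used: the application requests a thread level and must check the level actually provided, since an implementation may grant less than was requested.

```cpp
// Minimal sketch: request a thread level via MPI_Init_thread and check
// what the MPI implementation actually granted.
#include <mpi.h>
#include <cstdio>

int main(int argc, char* argv[])
{
    int provided = 0;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_SERIALIZED) {
        // The library supports neither concurrent nor serialized calls
        // from multiple threads; fall back or abort.
        std::fprintf(stderr, "insufficient MPI thread support: %d\n", provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    // ... application / runtime work ...

    MPI_Finalize();
    return 0;
}
```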

@hkaiser
Member

hkaiser commented Oct 12, 2013

Thanks for the explanations. HPX guarantees that the MPI calls are being made from exactly one thread only, thus MPI_THREAD_FUNNELED seems to be the right flag for now. I'll go ahead and make that change to the code base.

If you need to do MPI calls yourself then I don't see any way to synchronize HPX with an external application without creating a global contention point (as MPI_THREAD_MULTIPLE is not a viable option). Any suggestions?

@eschnett
Contributor Author

You could introduce a function that switches HPX between use-MPI and don't-use-MPI mode. This function could be local to a locality. Of course, performance would not be ideal.

Since Cactus has defined times at which it performs MPI calls, I would then wrap these calls by first disabling and later re-enabling MPI in HPX.

@sithhell
Member

> You could introduce a function that switches HPX between use-MPI and
> don't-use-MPI mode. This function could be local to a locality. Of course,
> performance would not be ideal.
>
> Since Cactus has defined times at which it performs MPI calls, I would
> then wrap these calls by first disabling and later re-enabling MPI in HPX.

That's absolutely possible. I suggest introducing something that pauses the parcel handling completely.
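
To illustrate the proposal, here is a hypothetical sketch of the hand-over from the Cactus side. The pause/resume functions shown here do not exist in HPX; their names are made up purely to show the calling pattern being discussed.

```cpp
// Hypothetical sketch only: hpx_pause_mpi_parcelport / hpx_resume_mpi_parcelport
// are NOT existing HPX functions; they stand in for the proposed per-locality
// switch that would stop and restart the MPI parcel handling.
#include <mpi.h>

void hpx_pause_mpi_parcelport()  { /* pause parcel handling (hypothetical) */ }
void hpx_resume_mpi_parcelport() { /* resume parcel handling (hypothetical) */ }

// Example: Cactus wraps its own MPI communication in the switch.
void cactus_global_sum(MPI_Comm comm, double* values, int count)
{
    hpx_pause_mpi_parcelport();    // from here on, only the application calls MPI

    MPI_Allreduce(MPI_IN_PLACE, values, count, MPI_DOUBLE, MPI_SUM, comm);

    hpx_resume_mpi_parcelport();   // hand MPI back to the HPX parcelport
}
```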

@hkaiser
Member

hkaiser commented Oct 12, 2013

For now I'm going to commit the following change. I switched from MPI_Init to MPI_Init_thread. HPX passes either MPI_THREAD_MULTIPLE or MPI_THREAD_SINGLE to this function, where the default is MPI_THREAD_SINGLE (the docs for MPI_THREAD_FUNNELED explicitly say that all MPI calls have to be made from the main thread, which is not the case in HPX).

The mode used with MPI_Init_thread can be controlled by the configuration setting hpx.parcel.mpi.multithreaded (0 is the default and forces MPI_THREAD_SINGLE; anything != 0 forces MPI_THREAD_MULTIPLE). As usual, this setting can be pre-initialized using the environment variable HPX_PARCELPORT_MPI_MULTITHREADED.
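
Roughly, the initialization described above boils down to the following simplified sketch (not the actual parcelport source; the function and parameter names here are illustrative, with 'multithreaded' corresponding to hpx.parcel.mpi.multithreaded):

```cpp
// Simplified sketch of the described behaviour; not the actual HPX code.
#include <mpi.h>

void init_mpi(int* argc, char*** argv, bool multithreaded)
{
    int const required = multithreaded
        ? MPI_THREAD_MULTIPLE   // hpx.parcel.mpi.multithreaded != 0
        : MPI_THREAD_SINGLE;    // default (hpx.parcel.mpi.multithreaded == 0)

    int provided = 0;
    MPI_Init_thread(argc, argv, required, &provided);

    // A production version would verify that provided >= required and
    // report an error otherwise.
}
```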
