Pavan Balaji edited this page Nov 2, 2016 · 5 revisions


Ensure that MPI has the features necessary to facilitate efficient hybrid programming


Investigate what changes are needed in MPI to better support:

Traditional thread interfaces (e.g., Pthreads, OpenMP)
Emerging interfaces (e.g., TBB, OpenCL, CUDA, and Ct)
PGAS models (e.g., UPC, CAF)


Parallel computers are increasingly built with nodes comprising large numbers of cores, including regular CPUs as well as accelerators such as Cell or GPGPUs. To make better use of shared memory and other resources within a node or address space, users may want a hybrid programming model that uses MPI for communication between nodes or address spaces and some other programming model (X) within the node or address space. Current options for X include OpenMP, Pthreads, PGAS languages (UPC, Coarray Fortran), Intel TBB, Cilk, CUDA, OpenCL, and Intel Ct.
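As a minimal sketch of this MPI+X pattern, with X = OpenMP: the program asks MPI for full thread support at initialization and then uses OpenMP threads within each process. This is an illustrative example only, not a working-group proposal; it uses the standard MPI and OpenMP interfaces.

```c
/* Hybrid MPI+OpenMP sketch: MPI between address spaces,
 * OpenMP threads within each one. Compile with an MPI compiler
 * wrapper (e.g., mpicc -fopenmp) and launch with mpiexec. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* Request the highest thread-support level; the library reports
     * the level it can actually provide. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available\n");
        MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
    }
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Within the process, threads share the node's memory and may
     * each make MPI calls under MPI_THREAD_MULTIPLE. */
    #pragma omp parallel
    {
        printf("rank %d: thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```

Note that all threads of a process share a single MPI rank here; giving each thread its own rank is exactly the gap the endpoints proposal below addresses.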

In general, we anticipate that proposals from this committee will lead to changes to the External Interfaces chapter. In preparation for those changes, the External Interfaces Chapter Committee has prepared an update of the chapter covering minor grammatical fixes.

Mailing List

Much of the discussion about the proposals in the Hybrid Programming Models Working Group takes place on the working group mailing list: mpi3-hybridpm (at) lists (dot) mpi-forum (dot) org. You can subscribe to this list here.

Telecon Information

Biweekly webexes are held, typically at 11am US Central time on Mondays.

Click this link to join the webex at the meeting time. If prompted, the password is "hybrid".

Active Topics

MPI endpoints:

Issue #56 -> [PR #TBC]
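The endpoints proposal tracked above would let a single process create multiple ranks ("endpoints") in a new communicator, so that individual threads can communicate as independent MPI ranks. The function name and signature below follow the working group's draft proposal; they are not part of any ratified standard or implementation and may change, so treat this as a sketch.

```c
/* Sketch of the draft endpoints interface. NOT standard MPI:
 * the prototype below is the proposed (subject-to-change) API. */
#include <mpi.h>
#include <omp.h>

/* Proposed: each calling process requests my_num_ep endpoints; every
 * endpoint appears as a distinct rank in the output communicators. */
int MPI_Comm_create_endpoints(MPI_Comm parent_comm, int my_num_ep,
                              MPI_Info info, MPI_Comm out_comm_hdls[]);

void endpoints_example(void)
{
    int nthreads = omp_get_max_threads();
    MPI_Comm ep_comms[64];  /* assume nthreads <= 64 for this sketch */

    /* One endpoint per OpenMP thread in this process. */
    MPI_Comm_create_endpoints(MPI_COMM_WORLD, nthreads,
                              MPI_INFO_NULL, ep_comms);

    #pragma omp parallel
    {
        MPI_Comm my_comm = ep_comms[omp_get_thread_num()];
        int my_rank;
        MPI_Comm_rank(my_comm, &my_rank);  /* per-thread rank */
        /* ... this thread now sends/receives as its own rank ... */
        MPI_Comm_free(&my_comm);
    }
}
```

The design intent is that per-thread ranks avoid the message-matching and contention costs of funneling all of a process's threads through one rank.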

Outstanding issues
