Ensure that MPI has the features necessary to facilitate efficient hybrid programming
Investigate what changes are needed in MPI to better support:
- Traditional thread interfaces (e.g., Pthreads, OpenMP)
- Emerging interfaces (e.g., TBB, OpenCL, CUDA, and Ct)
- PGAS models (UPC, CAF, etc.)
Parallel computers are increasingly being built with nodes comprising large numbers of cores, including regular CPUs as well as accelerators such as Cell or GPGPUs. To make better use of shared memory and other resources within a node or address space, users may want a hybrid programming model that uses MPI for communication between nodes or address spaces and some other programming model (X) within the node or address space. Current options for X include OpenMP, Pthreads, PGAS languages (UPC, Co-Array Fortran), Intel TBB, Cilk, CUDA, OpenCL, and Intel Ct.
In general, we anticipate that proposals from this working group will lead to changes in the External Interfaces chapter. In preparation for those changes, the External Interfaces Chapter Committee has prepared an update of the chapter covering minor grammatical fixes.
Much of the discussion of the Hybrid Programming Models Working Group's proposals takes place on the working group mailing list: mpi3-hybridpm (at) lists (dot) mpi-forum (dot) org. The list is open for subscription.
Biweekly WebEx meetings are held, typically at 11am US Central time on Mondays. The WebEx can be joined at the meeting time; if prompted, the password is "hybrid".