Home
HPC node architectures are trending toward large numbers of cores/CPUs, as well as accelerators such as GPUs. To make better use of shared resources within a node and to program accelerators, users have turned to hybrid programming that combines MPI with node-level and data-parallel programming models. The goal of this working group is to improve the programmability and performance of MPI+X usage models.
Investigate support for MPI communication involving accelerators (a sketch of host-initiated communication follows this list):
- Hybrid programming of MPI + [CUDA, HIP, DPC++, ...]
- Host-initiated communication with accelerator memory
- Host-setup with accelerator triggering
- Host-setup, enqueued on a stream or queue
- Accelerator-initiated communication
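As a reference point for the first models, here is a minimal sketch of host-initiated communication with accelerator memory. It assumes a CUDA-aware MPI implementation that accepts device pointers directly in communication calls; this is a common implementation extension (e.g., in Open MPI and MPICH), not something the MPI standard currently guarantees.

```c
/* Host-initiated MPI send/recv on GPU memory. Assumes a CUDA-aware MPI
 * library that accepts device pointers; run with two or more ranks
 * (only ranks 0 and 1 communicate). */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;
    float *dbuf;
    cudaMalloc((void **)&dbuf, n * sizeof(float));

    if (rank == 0) {
        /* A CUDA-aware library stages or RDMAs the data directly from
         * device memory; no explicit cudaMemcpy through the host. */
        MPI_Send(dbuf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(dbuf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    cudaFree(dbuf);
    MPI_Finalize();
    return 0;
}
```

The remaining models (stream/queue-enqueued triggering and accelerator-initiated communication) have no standard API today; defining them is the subject of the proposals listed further below.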
Investigate improved compatibility and efficiency for multithreaded MPI communication (a minimal sketch follows this list):
- MPI + [Pthreads, OpenMP, C/C++ threading, TBB, ...]
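For concreteness, a minimal sketch of the MPI + OpenMP case, where every thread issues its own MPI calls; this requires the MPI_THREAD_MULTIPLE level of thread support and is the usage model whose efficiency the WG aims to improve.

```c
/* Each OpenMP thread communicates independently in a ring, which
 * requires MPI_THREAD_MULTIPLE. Assumes all ranks run the same number
 * of threads, so every per-thread message has a matching receiver. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not supported\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    #pragma omp parallel
    {
        /* The thread id doubles as the message tag so that concurrent
         * per-thread message streams do not match each other. */
        int tid  = omp_get_thread_num();
        int next = (rank + 1) % size;
        int prev = (rank - 1 + size) % size;
        int sendval = rank * 1000 + tid, recvval;

        MPI_Sendrecv(&sendval, 1, MPI_INT, next, tid,
                     &recvval, 1, MPI_INT, prev, tid,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```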
Contacts
- James Dinan -- jdinan (at) nvidia (dot) com
- Mailing list: mpiwg-hybridpm (at) lists (dot) mpi-forum (dot) org -- Subscribe
Meetings
The HACC WG is currently sharing a meeting time with the Persistence WG. We meet every two weeks on Wednesdays from 10:00 to 11:00 AM ET. Meeting details and recordings are available here.
Current topics
- Continuations proposal #6 (Joseph)
- Memory Allocation Kinds Side Document v2
  - OpenMP (Edgar, Maria)
  - Coherent Memory, std::par (Rohit)
  - Backlog: OpenCL
- Accelerator bindings for partitioned communication #4 (Ryan Grant et al.; a host-side sketch follows this list)
- Partitioned communication buffer preparation (shared with Persistence WG) #264
- File IO from GPUs (Edgar, topic shared with File IO WG)
- Accelerator Synchronous MPI Operations #11 (need someone to drive)
- MPI Teams / Helper Threads (Joseph)
  - MPI Teams proposal: https://github.com/mpi-forum/mpi-forum-historic/issues/217 (PDF)
- Clarification of thread ordering rules #117 (MPI 4.1)
- Integration with accelerator programming models:
  - Accelerator info keys follow-on
  - Memory allocation kind in MPI allocators (e.g., MPI_Win_allocate, MPI_Alloc_mem, etc.)
  - Asynchronous operations #585
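The partitioned-communication items above build on the host-side API standardized in MPI 4.0. As background, here is a minimal sketch using MPI_Psend_init / MPI_Pready, where independent threads release partitions of a single transfer as they finish producing them; the accelerator-side bindings in proposal #4 are still under discussion and are not shown.

```c
/* MPI 4.0 partitioned send: one persistent transfer whose partitions
 * are filled and released independently (here by OpenMP threads).
 * Run with two or more ranks; only ranks 0 and 1 participate. */
#include <mpi.h>
#include <stdlib.h>

#define PARTITIONS 8
#define COUNT      1024  /* elements per partition */

int main(int argc, char **argv)
{
    int provided, rank;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *buf = malloc(PARTITIONS * COUNT * sizeof(double));
    MPI_Request req;

    if (rank == 0) {
        MPI_Psend_init(buf, PARTITIONS, COUNT, MPI_DOUBLE, 1, 0,
                       MPI_COMM_WORLD, MPI_INFO_NULL, &req);
        MPI_Start(&req);
        #pragma omp parallel for
        for (int p = 0; p < PARTITIONS; p++) {
            for (int i = 0; i < COUNT; i++)
                buf[p * COUNT + i] = p + i;
            MPI_Pready(p, req);  /* partition p may now be transferred */
        }
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        MPI_Request_free(&req);
    } else if (rank == 1) {
        MPI_Precv_init(buf, PARTITIONS, COUNT, MPI_DOUBLE, 0, 0,
                       MPI_COMM_WORLD, MPI_INFO_NULL, &req);
        MPI_Start(&req);
        /* Completes once all partitions have arrived; MPI_Parrived
         * could instead poll for individual partitions. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        MPI_Request_free(&req);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```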
Meeting schedule
- 12/25 -- Canceled
- 12/11 -- Open
- 11/20 -- Open
- 11/6 -- Open
- 10/23 -- Open
- 10/9 -- Open
- 9/25 -- MPI Forum Meeting
- 9/11 -- Canceled
- 8/28 -- Continue Discussion on GPU Triggering APIs [Patrick and Tony]
- 8/14 -- GPU Triggering APIs for MPI+X Communication [Patrick Bridges]
- 7/31 -- Partitioned Communication [Ryan Grant]
- 7/17 -- Canceled
- 7/3 -- Canceled
- 6/19 -- US Juneteenth Holiday
- 6/5 -- Open
- 3/27 -- Application use case for device-side continuations (Joachim Jenke)
- 3/20 -- MPI Forum Meeting in Chicago
- 3/13 -- Continuations (Joseph Schuchart)
- 3/6 -- Continuations (Joseph Schuchart)
- 2/28 -- Memory Alloc Kinds (Rohit Zambre)
- 2/21 -- No Meeting
- 2/14 -- Topic rescheduled to 3/27
- 2/7 -- Canceled
- 1/31 -- Canceled
- 1/17 -- Planning Meeting