Working with Shared Memory #481

mpiforumbot opened this issue Jul 24, 2016 · 0 comments

@mpiforumbot mpiforumbot commented Jul 24, 2016

Originally by gropp on 2015-06-25 08:52:55 -0500

MPI 3.0 introduced a way to gain access to shared memory from within MPI. However, the tricky and subtle issues of consistent access to that shared memory using load and store operations (as opposed to MPI RMA operations) were not carefully considered or described, leading to many debates in the Forum as well as several attempts to clarify the issues (see #429, #435, #437, and also #475). This ticket is a placeholder to collect suggestions for clarifying and improving the discussion of shared memory access.

Note that correct access to shared memory requires great care in working with the underlying programming language (C, C++, or Fortran), since none of these languages defines the behavior of separate processes sharing memory. (The 2011 versions of C and C++ do provide some support for threads sharing memory, which offers some basis for discussing memory sharing within these languages.)

At the June 2015 Forum meeting, the RMA working group endorsed the idea of creating this new ticket, retiring the ones mentioned above, and considering this issue in the context of the entire RMA chapter. Providing separate text for the shared memory case, rather than attempting to write the RMA synchronization rules so as to cover both MPI RMA calls (such as MPI_Put) and load/store access from the underlying language, was considered the best approach.

Possible updates include (these are mutually exclusive: pick one):

  1. Encourage users to use the language mechanisms for working with shared memory from different threads (e.g., the C11 memory model, atomic memory operations, and consistency guarantees) rather than relying on MPI RMA calls such as MPI_Win_sync to provide memory consistency.
  2. Specify MPI_Win_sync as the only MPI call guaranteed to act as a memory barrier (in C11 terms). Note that other RMA synchronization calls may not provide such a barrier; where one is needed, the user must ensure it. As a variation, require certain MPI synchronization calls to have barrier semantics, but only when applied to a shared-memory window. In this case, be aware of the impact of letting users discover that a window is shared even though it was not allocated with MPI_Win_allocate_shared.
  3. Require users to make use of language features (if any) for working with shared memory (a change to the standard, but one that may be required for correctness).