
"WITH_INFO" review correlated for collective operations in Chapters "Collective Operations" and "Process Topologies" #85

tonyskjellum opened this issue Mar 8, 2018 · 4 comments



@tonyskjellum tonyskjellum commented Mar 8, 2018


Info keys are missing from blocking and nonblocking collective operations.

Blocking and nonblocking collective APIs (noted below) presently lack info arguments for legacy reasons; per-operation info arguments can be useful for application performance tuning in MPI implementations that support such optional arguments. Standardized info keys may also be proposed separately.

If Ticket #80 is approved, this ticket proposes extending those new APIs for collective operations. We do not intend to create a combinatorial explosion of WITH_INFO and _X APIs, but rather to merge the two changes into one new API set (_X).


The blocking and nonblocking collective APIs modified by Ticket #80 that lack info arguments will be augmented with them. For each case, we will verify that adding the info argument to the _X interface makes sense on performance and functionality grounds. In some cases, we might prefer a separate _WITH_INFO form.

In general, the info argument will be added between the comm and request arguments for nonblocking operations, and after the comm argument for blocking operations.

Example (shown with C bindings to emphasize the Ticket #80 changes too):

MPI_Ibcast(void* buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm, MPI_Request *request)

Ticket #85 addition (on its own):
MPI_Ibcast_with_info(void* buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm, MPI_Info info, MPI_Request *request)

Ticket #80 addition (on its own):
MPI_Ibcast_x(void* buffer, MPI_Count count, MPI_Datatype datatype, int root, MPI_Comm comm, MPI_Request *request)

If both Tickets #80 and #85 are accepted together:
MPI_Ibcast_x(void* buffer, MPI_Count count, MPI_Datatype datatype, int root, MPI_Comm comm, MPI_Info info, MPI_Request *request)
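As a usage sketch of the combined form (MPI_Ibcast_x with an info argument is the proposal here, not an existing function, and the info key shown is purely hypothetical):

```c
#include <mpi.h>

/* Sketch only: MPI_Ibcast_x with an info argument is the API proposed
 * by this ticket combined with Ticket #80; it is not (yet) standard. */
void bcast_with_hint(void *buf, MPI_Count count, MPI_Datatype type,
                     int root, MPI_Comm comm)
{
    MPI_Info info;
    MPI_Request req;

    MPI_Info_create(&info);
    /* "mpi_assert_same_count" is a hypothetical key, shown only to
     * illustrate how a per-operation tuning hint might be passed. */
    MPI_Info_set(info, "mpi_assert_same_count", "true");

    MPI_Ibcast_x(buf, count, type, root, comm, info, &req);

    MPI_Info_free(&info);  /* the call may retain its own copy */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}
```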

Operations impacted:

  1. Collectives Chapter
    All collective operations in the chapter, as well as MPI_REDUCE_LOCAL

  2. Process Topologies Chapter
    None. This is addressed either by Tickets #78 or #84.

Note: No existing functionality is broken, nor is any API deprecated by this ticket.

Tickets #78 and #82 propose new, nonblocking operations. Those tickets will separately address the info argument for such new APIs where appropriate. Nothing described in those tickets is impacted by this ticket. Ticket #25 operations already include info arguments, so this ticket does not impact Ticket #25.

Also, note that this ticket could be folded into Ticket #80 if appropriate.

Changes to the Text

Here is a subset related to topologies (copied from Ticket #84, which is duplicative and is being closed):

The "non-dist" graph constructor should be deprecated in favour of the newer (and better) "dist" version(s). Every call to MPI_GRAPH_CREATE can be legally and 'simply' replaced with a call to MPI_DIST_GRAPH_CREATE.

MPI_GRAPH_CREATE requires that each process passes the full (global) communication graph to the call. This limits the scalability of this constructor. With the distributed graph interface, the communication graph is specified in a fully distributed fashion.
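For illustration, a ring topology specified in the fully distributed fashion, using the existing MPI_Dist_graph_create_adjacent (which already carries an MPI_Info argument); each rank names only its own two neighbors rather than the global graph:

```c
#include <mpi.h>

/* Each rank supplies only its own neighbors, instead of the full
 * (global) communication graph that MPI_GRAPH_CREATE requires.
 * Note the MPI_Info argument already present in this constructor. */
void make_ring(MPI_Comm comm, MPI_Comm *ring)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    int neighbors[2] = { (rank - 1 + size) % size, (rank + 1) % size };

    MPI_Dist_graph_create_adjacent(comm,
                                   2, neighbors, MPI_UNWEIGHTED,  /* in-edges  */
                                   2, neighbors, MPI_UNWEIGHTED,  /* out-edges */
                                   MPI_INFO_NULL, 0 /* no reorder */, ring);
}
```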

In the absence of a proposal to deprecate MPI_GRAPH_CREATE (and thereby obviate the need to maintain/improve it), adding an MPI_INFO argument is one of the advisable changes/improvements.

The proposed cartesian topology "with info" constructor seems like a good addition but should probably be part of a more general change to add an MPI_INFO argument to all object constructors that don't already have one. For example, MPI_COMM_SPLIT_WITH_INFO makes a lot of sense too.


This ticket can stand alone or be approved with Ticket #80. If approved on its own, the info argument would be added to a set of APIs with the suffix _WITH_INFO.

If Ticket #80 is approved, this ticket would be modified to combine with it, as noted above. Ticket #80's changes for "big MPI" would be augmented to add the info argument to each of the above-named operations.

Impact on Implementations

  1. Incrementally, the info argument of each collective operation would have to be interpreted. Ignoring the info argument is permitted for backward compatibility, so minimal compliance is nearly trivial. If this is folded into the Ticket #80 work, refactoring the API happens only once.
  2. Implementations will be free to use info arguments to offer users enhanced performance and/or predictability of blocking and nonblocking collective APIs. Persistent APIs defined in Ticket #25 already include this argument.
  3. All the impacts of Ticket #80 are assumed to be covered by that ticket [if approved].
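A minimal-compliance sketch of point 1, assuming a hypothetical MPI_Bcast_with_info entry point: the implementation may ignore every key and simply forward to the existing collective.

```c
#include <mpi.h>

/* Minimal compliance: MPI_Bcast_with_info is hypothetical; ignoring
 * all info keys is explicitly permitted, so the new entry point can
 * forward unchanged to the existing collective. */
int MPI_Bcast_with_info(void *buf, int count, MPI_Datatype type,
                        int root, MPI_Comm comm, MPI_Info info)
{
    (void)info;  /* no keys interpreted */
    return MPI_Bcast(buf, count, type, root, comm);
}
```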

Impact on Users

Users who want to supply info arguments in particular calls can incrementally modify their programs to use the new API where warranted.

Users opting for the new "big MPI" API already have to make appropriate changes to their code to use functionality added by Ticket #80. This would minimally add MPI_INFO_NULL into an argument slot during such refactoring.
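A sketch of that refactoring, assuming the combined _x form from this ticket (hypothetical until Tickets #80 and #85 are accepted):

```c
/* Before (existing standard API):
 *   MPI_Ibcast(buf, n, MPI_INT, 0, comm, &req);
 *
 * After moving to the proposed _x form, the count widens to MPI_Count
 * and MPI_INFO_NULL fills the new slot; no hints need be supplied: */
MPI_Ibcast_x(buf, (MPI_Count)n, MPI_INT, 0, comm, MPI_INFO_NULL, &req);
```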


See Tickets #25, #76, #78, #80, #83.


@dholmes-epcc-ed-ac-uk dholmes-epcc-ed-ac-uk commented Mar 8, 2018

This gives us the opportunity to use the form MPI_thing_WITH_INFO rather than MPI_thing_X for the BigMPI-and-info API. That is, the _WITH_INFO functions would have both MPI_INFO and MPI_COUNT parameters, whereas the existing API would stand as is with neither of these enhancements. This alleviates concerns over the ambiguity, and possible future use, of _X function names.


@tonyskjellum tonyskjellum commented Mar 8, 2018


@tonyskjellum tonyskjellum commented May 28, 2018

We have plenty of work for Austin, and we need to understand the plans and outcomes of Ticket #80 to pursue this ticket efficiently. So I am going to mark its goal as for Barcelona, not Austin. Let's discuss in the WG in Austin.


@tonyskjellum tonyskjellum commented Jun 14, 2018

We reviewed this in the working group time in Austin. We will discuss this further in the plenary in Austin ahead of presenting this as a reading for the Barcelona meeting.

@tonyskjellum tonyskjellum changed the title "WITH_INFO" review correlated to Tickets #80 and #84 for collective operations in Chapters "Collective Operations" and "Process Topologies" "WITH_INFO" review correlated for collective operations in Chapters "Collective Operations" and "Process Topologies" Sep 26, 2018
@wesbland wesbland added this to To Do in MPI 4.0 Ratification Oct 21, 2020
@wesbland wesbland moved this from To Do to Triage in MPI 4.0 Ratification Nov 16, 2020
@wesbland wesbland removed this from Triage in MPI 4.0 Ratification Nov 18, 2020