Deprecate MPI_Graph #89

Open
hjelmn opened this issue Mar 15, 2018 · 19 comments
Labels
mpi-5 For inclusion in the MPI 5.0 standard wg-hardware-topologies Hardware Topologies Working Group

Comments

@hjelmn

hjelmn commented Mar 15, 2018

Problem

The scalability of MPI_Graph_create() is limited due to the global nature of its arguments. We should deprecate it.

Proposal

Deprecate the following:

int MPI_Graph_create(MPI_Comm comm_old, int nnodes, const int index[],
                     const int edges[], int reorder, MPI_Comm *comm_graph);

int MPI_Graphdims_get(MPI_Comm comm, int *nnodes, int *nedges);

int MPI_Graph_get(MPI_Comm comm, int maxindex, int maxedges, int index[],
                  int edges[]);

int MPI_Graph_neighbors_count(MPI_Comm comm, int rank, int *nneighbors);

int MPI_Graph_neighbors(MPI_Comm comm, int rank, int maxneighbors, int neighbors[]);

int MPI_Graph_map(MPI_Comm comm, int nnodes, const int index[], const int edges[], int *newrank);

MPI_GRAPH

Changes to the Text

Add MPI_Graph_create() and associated functions to the deprecated interfaces. It would also be worthwhile to add examples of how to convert from using MPI_Graph to MPI_Dist_graph.

Impact on Implementations

None. Just marking them as deprecated for now.

Impact on Users

Users should update their code to use MPI_Dist_graph_create() and associated functions instead.

References

PR: https://github.com/mpi-forum/mpi-standard/pull/37

@hjelmn
Author

hjelmn commented Mar 15, 2018

Cross-reference to mpiwg-coll/coll-issues#2

@hjelmn hjelmn self-assigned this Mar 15, 2018
@hjelmn hjelmn added not ready wg-collectives Collectives Working Group labels Mar 15, 2018
@dholmes-epcc-ed-ac-uk dholmes-epcc-ed-ac-uk self-assigned this Mar 21, 2018
@schulzm

schulzm commented May 20, 2018

Same as in issue #16 - closed #16

@tonyskjellum

tonyskjellum commented May 20, 2018 via email

@dholmes-epcc-ed-ac-uk
Member

There is one already:
https://github.com/mpi-forum/mpi-standard/pull/37
I've added it to the issue description (under References) above and back-referenced from the PR to this issue.

@hjelmn
Author

hjelmn commented May 21, 2018

I will be working on the text for MPI_Dist_graph_map today. Should be ready to discuss on Wednesday.

@RolfRabenseifner

There is no real need to deprecate the MPI-1 virtual graph interface, because it is still an efficient interface for medium-size applications. Before deprecating it, there should be publications proving that implementations of the new dist-graph-create interface are really faster than the old interface. Without such proof, we should not burden users with this deprecation. Do such publications exist? For MPICH, Open MPI, ...?
I'm not sure whether I should comment on #2 or here on #89?

@jeffhammond
Member

It should not take more than an hour to write the tests that will produce the data you want. Feel free to include them here and then folks can test with their favorite implementation.

@dholmes-epcc-ed-ac-uk
Member

Per the suggestion by @jeffhammond, I spent an hour making a library implementation of MPI_GRAPH_CREATE that translates it into a call to MPI_DIST_GRAPH_CREATE_ADJACENT. It is available here: https://github.com/dholmes-epcc-ed-ac-uk/libmpigraph (comments and contributions are welcome).

I have not tested it at all other than making sure it compiles in at least one environment (my laptop). Possible additions include implementing the topology query functions, for example by using the distributed graph equivalents (for queries about neighbours) or by storing and regurgitating the input parameters (for all the other queries).

IMHO, this shows that MPI_GRAPH_CREATE is, and has always been, syntactic sugar that presents an inferior (ambiguous, non-scalable) interface to users. As such, it should be removed from the MPI Standard. This proposal only deprecates, as a first step toward eventual removal.

@tonyskjellum

Nathan, please finalize this for reading in Barcelona by the eve of Sep 3, US time.
Can you update it and follow up with @dholmes-epcc-ed-ac-uk and me ASAP?

@hjelmn
Author

hjelmn commented Aug 22, 2018

@tonyskjellum Yeah. I will have it done by then. Sorry I missed the call. 8:30am MDT can be a bit of a stretch to make.

@bosilca
Member

bosilca commented Aug 22, 2018

Similarly to collective communications, the graph API provides the MPI library with an invaluable global view of the communication graph that can be used for further optimizations. The MPI library can gather this information internally, but when the information is already available at the algorithm level we should not force duplication. The lack of scalability is evident, but if the application has the global information, why prevent it from providing that information to the MPI library? Instead of deprecating these functions, we could emphasize their lack of scalability and their memory requirements and let users decide whether to use them.

@hjelmn
Author

hjelmn commented Aug 22, 2018

@bosilca Yes, we could use the information to optimize, but I know we don't in Open MPI and I doubt that MPICH does. MPI Graph has been around for 21 years and the implementations are still unoptimized. I would prefer that we axe it now and focus on optimizing MPI distributed graph instead.

As for letting the user decide, I disagree. As far as MPI graph is concerned I know of three primary classes of users:

  1. Users that don't know about it (vast majority),
  2. users that have used it in toy codes, or
  3. users that tried it, discovered it didn't scale, and dropped it.

Will you be in Barcelona? The reason we are planning on a plenary of this there is to get the broader community's view on this. Your input would be useful as part of the discussion.

I wouldn't be surprised if we discuss this again in December.

@bosilca
Member

bosilca commented Aug 23, 2018

I did not claim this information is currently used, simply that it gives the MPI library the opportunity to optimize communications. I know of some moldable applications that tried to use it but dropped the idea due to a lack of benefit (they use topology and rank reordering instead). Let's talk it over in Barcelona.

@pavanbalaji

I agree with @bosilca that this is useful information to have. However, we could add an info hint to MPI_Dist_graph_create that lets users say they are giving the full graph on each process. That way, we can fully move to the distributed graph functions while still allowing users to provide the full graph if they want to.

@bosilca
Member

bosilca commented Aug 23, 2018

@pavanbalaji 👍

@RolfRabenseifner

RolfRabenseifner commented Aug 24, 2018 via email

@RolfRabenseifner

RolfRabenseifner commented Aug 24, 2018 via email

@tonyskjellum

Here is the latest source for reading in Barcelona...

mpi-report-ticket89-04sep18.pdf

@tonyskjellum tonyskjellum added scheduled reading Reading is scheduled for the next meeting and removed not ready labels Sep 5, 2018
@wesbland wesbland added this to To Do in MPI 4.1 Jun 9, 2021
@wesbland wesbland added wg-hardware-topologies Hardware Topologies Working Group and removed wg-collectives Collectives Working Group labels Jul 21, 2021
@wesbland wesbland added mpi-4.1 For inclusion in the MPI 4.1 standard and removed scheduled reading Reading is scheduled for the next meeting mpi <next> labels Jul 21, 2021
@wesbland
Member

I’m going to propose moving this to MPI 5.0. There’s more discussion to be had here. If someone objects and thinks we’ll be ready to read this soon, leave a comment and we can discuss bringing it back into MPI 4.1.

@wesbland wesbland added mpi-5 For inclusion in the MPI 5.0 standard and removed mpi-4.1 For inclusion in the MPI 4.1 standard labels Jul 14, 2022
@wesbland wesbland removed this from To Do in MPI 4.1 Jul 14, 2022
@wesbland wesbland added this to To Do in MPI 5.0 via automation Jul 14, 2022