
Improve ABI for identifying an MPI implementation #159

Open · simonbyrne opened this issue Jan 26, 2020 · 25 comments

Labels: mpi-4.2 · wg-abi ABI Working Group

@simonbyrne commented Jan 26, 2020

Problem

I realise that specifying an MPI ABI is a very complicated endeavour, but as a very small step in that direction, I would like to propose that there be an ABI for identifying a particular MPI implementation.

This is important when using MPI with other languages: I am one of the maintainers of the Julia language MPI bindings, and one of the biggest difficulties is identifying which implementation of MPI is being used. At the moment, we compile a small C program to identify the MPI implementation and the necessary constants, but this has many drawbacks in terms of usability (namely that it requires a C compiler, and makes it difficult to switch between versions).

MPI_Get_library_version can currently be used to identify an implementation, but it is not well specified as a C ABI, since it is not possible to determine the following without parsing a header file (see the sketch after this list):

  1. the size of the buffer to allocate (i.e. the value of MPI_MAX_LIBRARY_VERSION_STRING from the header file)
  2. if a non-default calling convention is used (e.g. Microsoft MPI uses stdcall)
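
For reference, here is the standard C idiom a binding has to replicate; a minimal sketch, where both the prototype and the buffer size come from mpi.h:

#include <mpi.h>
#include <stdio.h>

int main(void) {
    /* MPI_MAX_LIBRARY_VERSION_STRING is a compile-time constant from mpi.h */
    char version[MPI_MAX_LIBRARY_VERSION_STRING];
    int len;
    /* per MPI-3, this may be called even before MPI_Init */
    MPI_Get_library_version(version, &len);
    printf("%s\n", version);
    return 0;
}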

Proposal

Point 1 could easily be addressed by adding to the specification an upper bound on the value of MPI_MAX_LIBRARY_VERSION_STRING: the largest value I have seen is the 8192 used by MPICH.

Point 2 is more difficult to standardize, since it is much more platform-specific. However, it only affects one implementation on one platform, and on that platform it appears to be the dominant implementation, so this is less of an issue. If anyone has a suggestion for solving this, I would be grateful.
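
To illustrate why the bound matters: with a guaranteed maximum, a binding could call the function through dlopen/dlsym alone, with no header in sight. A minimal sketch, assuming the proposed 8192 bound (the library name is platform-dependent and illustrative):

#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    void *lib = dlopen("libmpi.so", RTLD_NOW | RTLD_GLOBAL);
    if (!lib) return 1;
    int (*get_version)(char *, int *) =
        (int (*)(char *, int *))dlsym(lib, "MPI_Get_library_version");
    if (!get_version) return 1;
    char version[8192];  /* the proposed standard-guaranteed upper bound */
    int len;
    get_version(version, &len);
    printf("%s\n", version);
    return 0;
}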

Changes to the Text

MPI_MAX_LIBRARY_VERSION_STRING must be a value less than or equal to 8192.

Impact on Implementation

As far as I know, this should not affect any implementations. If it does, the value can be chosen to be larger.

Impact on Users

This will make it easier to develop MPI bindings in languages other than C or Fortran, which in turn should improve user experience.

References

There is some discussion of this problem in JuliaParallel/MPI.jl#169.

@jsquyres (Member)

Greetings Simon; thanks for the suggestion. I wonder if you can clarify something for me, because I'm not entirely sure I understand the problem you're trying to solve.

You want to fix the length of the string that is returned by MPI_Get_library_version() to X (doesn't matter what the value of X is). Can you explain why a compile-time constant isn't sufficient for that?

Specifically: I'm trying to understand why it would be necessary to have this as a fixed length. Are you trying to ship a binary with a fixed-length buffer for calling MPI_Get_library_version()? If not -- i.e., if you're compiling from source -- is there a problem with using MPI_MAX_LIBRARY_VERSION_STRING to size the buffer properly?

@jeffhammond (Member)

Are you trying to ship a binary with a fixed-length buffer for calling MPI_Get_library_version()?

@jsquyres They want to get away from requiring a C compiler...

At the moment, we compile a small C program to identify the MPI implementation and the necessary constants, but this has many drawbacks in terms of usability (namely that it requires a C compiler, and makes it difficult to switch between versions).

@simonbyrne (Author) commented Jan 27, 2020

You want to fix the length of the string that is returned by MPI_Get_library_version() to X (doesn't matter what the value of X is).

To clarify: I don't want to fix the length, I just want to specify an upper bound, so that allocating a buffer of that length would be sufficient and guaranteed not to cause a buffer overflow. Since the C version of MPI_Get_library_version is required to return a null-terminated string, passing a buffer that is too big is not an issue. This should not require any changes to existing MPI implementations.

Can you explain why a compile-time constant isn't sufficient for that?

Specifically: I'm trying to understand why it would be necessary to have this as a fixed length. Are you trying to ship a binary with a fixed-length buffer for calling MPI_Get_library_version()? If not -- i.e., if you're compiling from source -- is there a problem with using MPI_MAX_LIBRARY_VERSION_STRING to size the buffer properly?

The constant MPI_MAX_LIBRARY_VERSION_STRING is specified in the header file, which requires a C compiler to parse. Julia, like many other recent languages, provides a foreign function interface (FFI) for calling C-compatible functions directly, without invoking a C compiler.

Unfortunately in the case of MPI this is complicated by the lack of a standard ABI (e.g. the size and alignment of types like MPI_Comm can vary), so at the moment we generate a small C program that in turn generates the necessary Julia code to define these types and values of constants. However this causes frequent problems when users want to switch MPI implementations, or have nonstandard MPI or compiler configurations.

Now that most MPI installations are ABI-compatible with one of MPICH, Open MPI, or Microsoft MPI, if we can identify the implementation without needing to invoke a C compiler, then we can provide pre-generated files specifying the necessary types and constants, and fall back to our current code-generation machinery only in the cases where the installation does not conform to one of those ABIs.
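
For example, a hypothetical classifier along these lines (the substrings are illustrative; real detection needs more cases, e.g. for MPICH-ABI-compatible derivatives such as Intel MPI):

#include <string.h>

/* hypothetical: map a version string from MPI_Get_library_version
   to a known ABI family, falling back to code generation otherwise */
const char *classify_abi(const char *version) {
    if (strstr(version, "Open MPI"))      return "OpenMPI";
    if (strstr(version, "Microsoft MPI")) return "MicrosoftMPI";
    if (strstr(version, "MPICH"))         return "MPICH";
    return "unknown";
}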

@omor1 (Member) commented Jan 27, 2020

An alternative could be a query function MPI_GET_MAX_LIBRARY_VERSION_STRING that returns the value of MPI_MAX_LIBRARY_VERSION_STRING. This inflates the number of functions that MPI implementations must support, but it is a trivial function to implement.

int MPI_Get_max_library_version_string(int *length)
MPI_Get_max_library_version_string(length, ierror)
    INTEGER, INTENT(OUT) :: length
    INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_GET_MAX_LIBRARY_VERSION_STRING(LENGTH, IERROR)
    INTEGER LENGTH, IERROR

@simonbyrne (Author)

I think if larger changes are under consideration, I would prefer something that could address point 2 as well: one example would be making the version string available via an extern char array.
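
A minimal sketch of that idea (the symbol name is hypothetical): a data symbol involves no calling convention at all, so looking it up via dlsym/GetProcAddress sidesteps point 2 entirely.

#include <dlfcn.h>

/* hypothetically exported by the implementation:
   extern const char MPI_LIBRARY_VERSION_STRING[];
   a binding reads it without calling any function: */
const char *library_version(void *lib) {
    return (const char *)dlsym(lib, "MPI_LIBRARY_VERSION_STRING");
}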

@jsquyres (Member)

Ok, that's a fair point: you want to be able to have some information that is available to C and Fortran at compile time, but is not necessarily available at run time (e.g., in Open MPI's case, MPI_MAX_LIBRARY_VERSION_STRING is a #define in C).

To be more multi-language friendly, I'd actually take @omor1's proposal and raise you: have a symbol that can be used to find all such compile-time values (not just MPI_MAX_LIBRARY_VERSION_STRING). Either some type of query function (that doesn't rely on enums or other compile-time constants itself so that it can unconditionally be invoked from other languages without a priori knowledge of the MPI implementation), or a struct full of members, or perhaps changing the requirements of the various compile-time constants to also be run-time values. Regardless of the mechanism, I'd propose fixing this for all the values that you need -- not just MPI_MAX_LIBRARY_VERSION_STRING.

  1. Query function: perhaps something like MPI_Query_constant(const char *name, int *value) (that's off the top of my head -- someone should think that through to make sure it covers all cases properly).
  2. Struct full of members. This is probably a bit unwieldy, but perhaps something like

struct MPI_Constants_t {
    int MAX_LIBRARY_VERSION_STRING;
    // ...others
} MPI_Constants = {
    .MAX_LIBRARY_VERSION_STRING = implementation_specific_value,
    // ...others
};

  3. Convert compile-time constants to also be run-time values. This is probably the least-desirable option.
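
A sketch of how a binding might use option 1 (the function name, signature, and return convention follow the suggestion above and are not standard MPI):

#include <stdlib.h>

/* hypothetical, per option 1 above -- not a standard MPI function */
int MPI_Query_constant(const char *name, int *value);

char *alloc_version_buffer(void) {
    int max_len = 0;
    /* look the constant up by name at run time instead of compile time */
    if (MPI_Query_constant("MPI_MAX_LIBRARY_VERSION_STRING", &max_len) != 0)
        return NULL;
    return malloc((size_t)max_len);
}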

However this causes frequent problems when users want to switch MPI implementations, or have nonstandard MPI or compiler configurations.

I hear you on this one: the situation just plain sucks. There's unfortunately too much historical baggage here -- MPI was not designed with an ABI in the beginning; it's hard to reverse-graft an ABI on to 25+ years of software and design. MPI v4.0 is all but in the bag, and ABI didn't make the cut. I think the next best hope is MPI-5.0, and the potential for more-than-C-and-Fortran native bindings.

@simonbyrne (Author)

Thanks @jsquyres, that would work for constants that are C ints, but not for, say, arbitrary handles (though you could provide the Fortran constants and call the F2C conversions, you would still need to know the types), or for pointers like MPI_BOTTOM.

@jsquyres (Member) commented Feb 2, 2020

I don't quite understand, probably because I don't know enough about how other languages (e.g., Julia) work.

The MPI handles (e.g., MPI_COMM_WORLD) are defined to be run-time values. Open MPI's handles are pointers to static objects, and are therefore effectively only retrievable at run time. MPICH's handles just happen to be integers. Regardless, this scheme was not designed such that applications should know or care what the values of these handles are.

My assumption is that you are looking for a way to get the values of these handles into Julia -- correct? If so, is there an interact-with-C mechanism in Julia such that you can effectively either bind a Julia value to a C value (like Fortran), or at least load a Julia value from a C value?

@simonbyrne (Author)

The MPI handles (e.g., MPI_COMM_WORLD) are defined to be run-time values. Open MPI's handles are pointers to static objects, and are therefore effectively only retrievable at run time. MPICH's handles just happen to be integers. Regardless, this scheme was not designed such that applications should know or care what the values of these handles are.

My assumption is that you are looking for a way to get the values of these handles into Julia -- correct? If so, is there an interact-with-C mechanism in Julia such that you can effectively either bind a Julia value to a C value (like Fortran), or at least load a Julia value from a C value?

We have figured out a reasonably robust way of getting the values of the handles: we determine the Fortran constants (which are C integers), then at MPI.Init we call MPI_XXXX_f2c to convert them to C handles (the Rust MPI bindings appear to do something similar). The bigger problem is figuring out the appropriate size and alignment of the handles themselves (which at the moment requires parsing the C headers).
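
In C terms, the conversion step amounts to the following sketch; the standard guarantees MPI_Fint matches the Fortran INTEGER type, so an FFI can pass it without knowing anything about the C handle layout:

#include <mpi.h>

/* sketch: recover a C handle from a Fortran integer handle at runtime */
MPI_Comm comm_from_fortran(MPI_Fint f_comm) {
    return MPI_Comm_f2c(f_comm);
}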

@omor1 (Member) commented Feb 3, 2020

which at the moment requires parsing the C headers

There was a previous discussion in #137 regarding whether C MPI programs that do not include the mpi.h header are valid, with the general consensus being that they are not.

@jsquyres (Member) commented Feb 3, 2020

I think that the Forum is amenable to keeping the door open to other languages. Indeed, there are groups working on opening the door wider in future revisions of the MPI spec. So let's keep having the conversation here... 😄

In Julia, is there no way to get the size of a C type? (again, forgive my ignorance here)

What do you need the alignment for? Julia shouldn't be creating new MPI handles -- you should always be getting them from C, right?

@simonbyrne (Author)

In Julia, is there no way to get the size of a C type? (again, forgive my ignorance here)

Julia knows the sizes of the builtin C types, and you can construct Julia types which match arbitrary C types in size and alignment, but there is no way to figure out how a C type is defined without parsing the header (which requires a C compiler).

What do you need the alignment for? Julia shouldn't be creating new MPI handles -- you should always be getting them from C, right?

Alignment is dependent on how the handle type is defined (e.g. a struct containing two 32-bit integers has different alignment than a 64-bit pointer).

My point was that if you want to make the ABI queryable, you would need to know both the size and alignment of the various handles. You could expose these via the proposed MPI_Query_constant by defining additional query names, e.g. MPI_COMM_size and MPI_COMM_alignment. I'm not sure how one would handle cases like MPI_BOTTOM: I suspect an additional function would be required.

That said, I think a more robust solution would be to define a mechanism by which implementations could identify their ABI, and then leave it up to them to specify it how they wish. MPI_Get_library_version seemed like the simplest mechanism to achieve this, but I could see the utility in another (e.g. ideally the various MPICH-compatible implementations would all identify themselves the same way).

@simonbyrne (Author)

On a related note: another recent pain point we have experienced is that there does not appear to be a consistent mechanism for determining whether an MPI implementation supports the CUDA-aware interface (Open MPI provides MPIX_Query_cuda_support, but I haven't seen anything similar in other implementations). Providing a mechanism to query such features would seem related to this issue.

@omor1 (Member) commented Feb 4, 2020

I think that the Forum is amenable to keeping the door open to other languages. Indeed, there's groups working on opening the door wider in future revisions of the MPI spec. So let's keep having the conversation here... 😄

Agreed that this is an important conversation—just pointing out that the current standard makes that somewhat difficult.

I know that when I was working on a binding for Apple's Swift language around three years ago, I basically resorted to wrapping all the functions and constants in inline functions so as to force a common ABI (the Swift–C interface had issues with function-like macros in Open MPI, and imported enums as the Swift type Integer rather than CInt, the former being a word-length integer and the latter the same as the C int type).

I don't know if a similar technique works for Julia, since it seems that its C integration uses a different mechanism than Swift.

there does not appear to be a consistent mechanism for determining whether an MPI implementation supports the CUDA-aware interface

Any GPU interoperability is incredibly implementation-specific and not specified by the standard. There are currently no proposals to add such support to the standard, as far as I know. Querying for extensions might actually be a useful addition, but that should definitely be discussed in a different issue.

A workaround might be to use dlsym to determine whether the MPIX_Query_cuda_support symbol exists and therefore can be called.
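
A sketch of that workaround; MPIX_Query_cuda_support is Open MPI's extension with signature int MPIX_Query_cuda_support(void), and RTLD_DEFAULT requires _GNU_SOURCE on glibc:

#define _GNU_SOURCE
#include <dlfcn.h>

/* probe for the extension symbol at run time instead of compile time */
int query_cuda_support(void) {
    int (*query)(void) =
        (int (*)(void))dlsym(RTLD_DEFAULT, "MPIX_Query_cuda_support");
    return query ? query() : 0;  /* treat a missing symbol as "no CUDA support" */
}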

@omor1 (Member) commented Feb 4, 2020

Regarding C MPI handles—the standard actually restricts how they can be implemented. On page 13 (MPI-3.1 §2.5.1) the following is stated:

The C types must support the use of the assignment and equality operators.

The only types in C that support these are pointers and arithmetic types. Now, I suppose an implementation could technically use long double _Complex and still be standards-compliant, but I'm fairly certain that all implementations use either pointers or integers.

@jsquyres (Member) commented Feb 4, 2020

I think that this is boiling down to two important questions:

  1. In the current generation of MPI implementations, what is needed to allow third-party bindings packages to discover what they need to know about the back-end C implementation?
    • Handles and constants are one set of things. Are there more?
    • We might want to have a separate discussion about this somewhere (another GitHub issue? a Webex? ...?), since it's not really a Forum-level issue.
  2. In the next generation standard (and by consequence, MPI implementations), what is needed to allow "pluggable" bindings packages to integrate with MPI implementations?
    • E.g., what standardized hooks are necessary to allow a third-party bindings package for language XYZ to integrate with a given MPI implementation?
    • The question may ultimately be relevant to things other than third-party bindings packages, but bindings packages seem like a good place to start.

I think that @dalcinl is also interested in such things.

@dalcinl commented Feb 4, 2020

@jsquyres A maximum level of ABI exposure should allow a third-party library to access ALL of MPI by just doing dlopen() and dlsym() (I actually have an alternative implementation of mpi4py based on cffi). For that to happen, many things should be exposed in [lib]mpi.{so|dylib|dll}:

  • dylib entry points for all, and by all I really mean ALL, functions and function-like macros in mpi.h. The standard allows a few things to be implemented as macros. That's fine, MPI implementations can continue providing them as macros, but they should add an entry point in the library. In my personal opinion, I would get rid of all function-like macros, just for the benefit of third-party profiling libraries. Is using macros really going to buy us any substantial performance?
  • dylib entry points for all the MPI_XXX integer/pointer/handle constants.
  • dylib entry points for new readonly values containing the values of sizeof(MPI_Xxx) for handles. In Open MPI, I think all handles match sizeof(void*), but in MPICH, IIRC, everything is sizeof(int) except MPI_File, which breaks the rule. These entry points should include the values of sizeof(...) for MPI_Aint, MPI_Offset, and MPI_Fint, and of course MPI_Status.
  • The API for MPI_Status should define setters/getters for source/tag/error. Moreover, maybe the MPI standard should deprecate the use of status.MPI_SOURCE = value; in favor of the new getters/setters. If these getters/setters are implemented as macros (please don't!), you should still generate dylib entry points for them.
  • Maybe a readonly dylib entry point pointing to an instance of an empty MPI_Status (empty defined as what you get from MPI_Wait() on an inactive persistent request)? This would be convenient, although I concede it is not strictly required, as we could just use the five MPI_Status setters (two of them already in the MPI standard, plus the three I'm proposing for setting the source/tag/error fields).
  • Finally, maybe the MPI standard could relax the rules about what routines can be called before/after MPI_{Init|Finalize}(), though this is low-hanging fruit.

I don't think this is really too much work for implementors. And as I said, I have a pure Python-based reimplementation of mpi4py that can be used to test ABI features (and run all of mpi4py's test suite, which, as you may remember, is quite picky and tests almost all of the MPI API). Anything missing in my list above would be relatively easy to catch using my Python stuff.

Right now, my alternative mpi4py implementation (not public yet) is not functional in ABI mode, simply because there is no MPI implementation exposing a full ABI interface. I'm using Python's cffi package, which has the option to generate and compile C code to access things in API mode, but switching to ABI mode (i.e. pure dlopen() and dlsym()) should be quite trivial.

Just my two cents...
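
From the binding's side, the scheme above might look like the following sketch; the MPI_ABI_sizeof_* symbol names are hypothetical, since no implementation exports anything like them today:

#include <dlfcn.h>
#include <stdio.h>

/* discover handle sizes from hypothetical exported data symbols */
void discover_sizes(void *lib) {
    const int *sz_comm   = (const int *)dlsym(lib, "MPI_ABI_sizeof_MPI_Comm");
    const int *sz_status = (const int *)dlsym(lib, "MPI_ABI_sizeof_MPI_Status");
    if (sz_comm && sz_status)
        printf("MPI_Comm: %d bytes, MPI_Status: %d bytes\n",
               *sz_comm, *sz_status);
}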

@jsquyres (Member) commented Feb 4, 2020

Thanks for the details.

  • dylib entry points for all, and by all I really mean ALL, functions and function-like macros in mpi.h. The standard allows a few things to be implemented as macros. That's fine, MPI implementations can continue providing them as macros, but they should add an entry point in the library. In my personal opinion, I would get rid of all function-like macros, just for the benefit of third-party profiling libraries. Is using macros really going to buy us any substantial performance?
  • dylib entry points for all the MPI_XXX integer/pointer/handle constants.
  • dylib entry points for new readonly values containing the values of sizeof(MPI_Xxx) for handles. In Open MPI, I think all handles match sizeof(void*), but in MPICH, IIRC, everything is sizeof(int) except MPI_File, which breaks the rule. These entry points should include the values of sizeof(...) for MPI_Aint, MPI_Offset, and MPI_Fint, and of course MPI_Status.

By "dylib entry point", do you mean "something you can find via a specific dlsym(...)"? (i.e., you have to call dlsym(...) a bunch of times) Or would having a one (or a small number) of top-level, ABI-friendly functions to query all of this information (assumedly as a bunch of pointers/sizes) be sufficient?

Also, in your comments, are you referring to "ABI" as "the guaranteed-to-be-the-same parts of an MPI implementation that can be used to discover what you need to know about the not-guaranteed-to-be-the-same parts of an MPI implementation"? If so, I like that idea quite a lot. 😄

  • The API for MPI_Status should define setters/getters for source/tag/error. Moreover, maybe the MPI standard should deprecate the use of status.MPI_SOURCE = value; in favor of the new getters/setters. If these getters/setters are implemented as macros (please don't!), you should still generate dylib entry points for them.

Let me ask a larger question: given that we're talking about breaking the 1:1 correspondence between C and <...any other language binding...>, is it necessary for the C usage of status.MPI_SOURCE=value to be deprecated? Or do we just not have to support that in other language bindings?

  • Maybe a readonly dylib entry point pointing to an instance of an empty MPI_Status (empty defined as what you get from MPI_Wait() on an inactive persistent request)? This would be convenient, although I concede it is not strictly required, as we could just use the five MPI_Status setters (two of them already in the MPI standard, plus the three I'm proposing for setting the source/tag/error fields).

Can you clarify: are you looking for a golden "empty" (in the MPI sense of the word) MPI_Status that you can use to compare to other statuses to see if they, too, are empty?

  • Finally, maybe the MPI standard could relax the rules about what routines can be called before/after MPI_{Init|Finalize}(), though this is low-hanging fruit.

Sessions is coming. 😄

MPI-4 isn't going to contain everything that was envisioned for Sessions, but work will proceed for Sessions beyond MPI-4.

@omor1 (Member) commented Feb 4, 2020

dylib entry points for all, and by all I really mean ALL, functions and function-like macros in mpi.h. The standard allows a few things to be implemented as macros. That's fine, MPI implementations can continue providing them as macros, but they should add an entry point in the library.

dylib entry points for all the MPI_XXX integer/pointer/handle constants.

Note that the integer constants can also be implemented as macros (or enums). I think that technically most or all MPI implementations actually violate the standard on this point (or at least my interpretation of it), as they implement the various handle constants via macros as well; i.e. most (or all?) implementations don't actually have a symbol named e.g. MPI_INT or MPI_COMM_WORLD.

I'm referring to MPI-3.1 §A.1.1, p. 669: "Constants with the type const int may also be implemented as literal integer constants substituted by the preprocessor." The implication is that other constants cannot be implemented as macros.

On a personal opinion, I would get rid of all function-like macros, just for the benefit of third-party profiling libraries. Is using macros really going to buy as any substantial performance?

I agree with this; using extern inline functions gives all the benefits of function-like macros while also providing symbol names. There are cases where function-like macros are useful, but their use in MPI is not really one of them.
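
For illustration, under C99 inline semantics the pattern would be as follows (MPI_Foo_query is a hypothetical function, not real MPI):

/* in mpi.h: callers can inline this just like a macro */
inline int MPI_Foo_query(const int *obj) { return *obj & 0x1; }

/* in exactly one .c file of the library: forces an external definition,
   so the symbol also exists for dlsym and PMPI-style interception */
extern inline int MPI_Foo_query(const int *obj);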

@dalcinl commented Feb 4, 2020

By "dylib entry point", do you mean "something you can find via a specific dlsym(...)"? (i.e., you have to call dlsym(...) a bunch of times)

Yes.

Or would having a one (or a small number) of top-level, ABI-friendly functions to query all of this information (assumedly as a bunch of pointers/sizes) be sufficient?

ABI-friendly functions might be enough. But is it worth the complication? Why not just export symbols? Also, this way, there is little room for MPI implementors to screw things up 😆. Note that mpi.h can still use compile-time constants (and it is probably a good idea to do so).

Note that my proposal is not really an ABI proposal. A real ABI proposal would involve making all implementations agree on the values of these constants. Good luck with that! And even then, the implementation of builtin handles as constants may be tricky for Open MPI.

Also, in your comments, are you referring to "ABI" as "the guaranteed-to-be-the-same parts of an MPI implementation that can be used to discover what you need to know about the not-guaranteed-to-be-the-same parts of an MPI implementation"? If so, I like that idea quite a lot.

I'm not sure I understood your question. My proposal is an ABI interface for dlopen(). If implemented properly, it would allow writing language wrappers and profiling libraries that work with ANY implementation they dlopen at runtime. Handles still pose problems; it would be great if everyone agreed to make them pointer-sized (I don't think this would be a huge issue for MPICH, aside from breaking its current ABI).

  • The API for MPI_Status should define setters/getters for source/tag/error.

Let me ask a larger question: given that we're talking about breaking the 1:1 correspondence between C and <...any other language binding...>, is it necessary for the C usage of status.MPI_SOURCE=value to be deprecated? Or do we just not have to support that in other language bindings?

Maybe not deprecate, but discourage its use in new code, such that profiling libraries may benefit from intercepting these calls.

  • Maybe a readonly dylib entry point pointing to an instance of an empty MPI_Status

Can you clarify: are you looking for a golden "empty" (in the MPI sense of the word) MPI_Status that you can use to compare to other statuses to see if they, too, are empty?

If C had "constructors" in the C++ sense, how would you code the default constructor for MPI_Status? I would make it empty, i.e. source=MPI_ANY_SOURCE, tag=MPI_ANY_TAG, error=MPI_SUCCESS, the cancelled state set to false, and calls to Get_count() returning 0 (see MPI-3.1, Section 3.7.3, Communication Completion). I'm doing just that in mpi4py, though the cancelled status and internal count work as I want just by accident (Python zeroes memory on allocation, and I do not want to call MPI routines in initializers, because of the MPI_Init/Finalize restrictions):

In [1]: from mpi4py import MPI                                                  
In [2]: s = MPI.Status()                                                        
In [3]: assert s.Get_source() == MPI.ANY_SOURCE                                 
In [4]: assert s.Get_tag() == MPI.ANY_TAG                                       
In [5]: assert s.Get_error() == MPI.SUCCESS                                     
In [6]: assert s.Is_cancelled() == False                                        
In [7]: assert s.Get_count(MPI.BYTE) == False                                   

@rawarren commented Feb 4, 2020

I have to agree with the idea of having MPI_Status setters/getters for each of the predefined fields {MPI_SOURCE, MPI_TAG, and MPI_ERROR}. Note that while the MPI spec defines these fields, it does NOT specify the order in which they appear in the structure, nor does it specify any additional information such as the structure size. One might assume that the structure size, for example, would be fairly standard, but you'd be wrong: HP-MPI is/was 32 bytes, MPICH 1.2 was 16 bytes, MPICH 3.0 is 20 bytes (as is Open MPI), and SGI's is 24 bytes. For purely interpreted wrappers that don't include "mpi.h" to understand the actual MPI structure layouts, having setter/getter functions for these "opaque" objects would go a long way toward enabling progress on MPI ABI development (in my opinion).
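
The kind of accessor API being suggested might look like this sketch; the names are hypothetical here, as nothing like them existed in the standard at the time of this discussion:

#include <mpi.h>

/* hypothetical accessors: callable through an FFI without knowing
   the size or layout of MPI_Status */
int MPI_Status_get_source(const MPI_Status *status, int *source);
int MPI_Status_set_source(MPI_Status *status, int source);
int MPI_Status_get_tag(const MPI_Status *status, int *tag);
int MPI_Status_get_error(const MPI_Status *status, int *error);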

@wgropp commented Feb 5, 2020 via email

@simonbyrne (Author) commented Jun 4, 2020

I appreciate that there is a lot more that could be done, but given the large amount of effort required, I would again request that my original proposal of specifying a maximum length be considered. We currently rely on this assumption in our codebase:
https://github.com/JuliaParallel/MPI.jl/blob/1971a5c1a4328c55b69058a112c012133ca49612/src/implementations.jl#L15-L27

@jeffhammond (Member)

Standardizing MPI_Status is one of the more trivial aspects of ABI standardization, from a technical perspective.

https://github.com/jeffhammond/blog/blob/main/MPI_Needs_ABI_Part_2.md#the-mpi_status-object

Forcing users to use new APIs to query a struct is absurd. We've defined the public fields for decades.

Yeah, it sucks that implementers will have to do some hard work here, but users outnumber implementers by orders-of-magnitude, and there is no excuse to keep pushing all the pain of not having an ABI onto thousands of users to placate 10 MPI maintainers.

@dalcinl commented Nov 10, 2022
