'Logically concurrent' isn't #117

Closed
jsquyres opened this issue Dec 8, 2018 · 390 comments
Labels: chap-p2p (Point to Point Communication Chapter Committee) · had reading (Completed the formal proposal reading) · mpi-4.1 (For inclusion in the MPI 4.1 standard) · passed final vote (Passed the final formal vote) · passed first vote (Passed the first formal vote) · wg-hybrid (Hybrid Working Group) · wg-p2p (Point-to-Point Working Group)

Comments


jsquyres commented Dec 8, 2018

@dholmes-epcc-ed-ac-uk and I were talking about an issue the other night at dinner, and I wanted to record it because it's a serious issue that needs to get fixed in MPI next.

MPI-3.1 section 3.5 p41:10-17 states:

If a process has a single thread of execution, then any two communications executed by this process are ordered. On the other hand, if the process is multithreaded, then the semantics of thread execution may not define a relative order between two send operations executed by two distinct threads. The operations are logically concurrent, even if one physically precedes the other. In such a case, the two messages sent can be received in any order. Similarly, if two receive operations that are logically concurrent receive two successively sent messages, then the two messages can match the two receives in either order.

The problematic text states that any two operations on different threads are "logically concurrent." Sometimes that is because the thread execution does not define an order. But even if there is a guaranteed order (which is perhaps what the phrase "physically precedes" means?), MPI still considers them to be "logically concurrent". For example, even if there is a thread synchronization between the operations, or perhaps an extremely long wall-clock time between the operations, MPI is still permitted to consider those operations "logically concurrent." This is bad because MPI is permitted to deliver "logically concurrent" messages in any order, which is going to astonish users (and implementors).

Here's an example:

MPI_Init_thread(NULL, NULL, MPI_THREAD_MULTIPLE, &provided);
assert(provided == MPI_THREAD_MULTIPLE);
char a = 111;
char b = 222;

// Thread 1                              // Thread 2
MPI_Send(&a, 1, MPI_CHAR, 1, 2, comm1);
sleep(60);
thread_barrier();                        thread_barrier();
                                         MPI_Send(&b, 1, MPI_CHAR, 1, 2, comm1);

According to MPI-3.1 3.5, these two sends are logically concurrent, and it is permitted for the b message to be received at the receiver before the a message.

Note: the sleep(60) is actually unnecessary in this example -- it's just insult-added-to-injury to drive home the point.

Here's another example (that was sent across the point-to-point working group list):

void test(int rank) {
   int msg = 0;
   if (rank == 0) {
#pragma omp parallel num_threads(2)
#pragma omp critical
       {
           MPI_Send(&msg, 1, MPI_INT, 1, 42, MPI_COMM_WORLD);
           msg++;
       }
   } else if (rank == 1) {
       MPI_Recv(&msg, 1, MPI_INT, 0, 42, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
       printf("Received %i\n", msg);
       MPI_Recv(&msg, 1, MPI_INT, 0, 42, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
       printf("Received %i\n", msg);
   }
}

What does this print: 0, 1, or 1, 0?

According to MPI-3.1 section 3.5, both are possible. 😲

Resources

PR: https://github.com/mpi-forum/mpi-standard/pull/748


hzhou commented Dec 8, 2018

This issue has a few layers of principles and definitions; without explicit consensus on them, discussions can easily devolve into cross-talk. May I suggest the following points for potential consensus?

  1. The least-astonishment principle. Concurrent programming is rarely intuitive; our minds do not work concurrently. The "astonishment score" is largely shaped by personal experience, so least astonishment is a very unreliable metric. With clear enough definitions and explanations, we can turn astonishment into understanding, or even expectation.

  2. The need to define "logical concurrency" and "logical in-concurrency". The word "logical" is only implicitly defined, and at different levels it can bear different meanings. For example, the programmer may have no requirement on the ordering of certain events; those events are then logically concurrent in the programmer's mind, even in serial code. On the other hand, at the library level all events are enqueued, so they all appear in some order and are never logically concurrent -- that is an implementation-level definition. Finally, at the physical level, concurrency is a matter of your measurement resolution, and I believe ultimately there is never physical concurrency. Therefore, just saying "logically concurrent" is very ambiguous.

  3. What does the current text say, and do we accept it? We need to define "logical concurrency" and its counterpart, "logical in-concurrency". The current text seems to say that two send operations executed by two distinct threads are "logically concurrent" by definition, and that operations executed in the same thread are "logically in-concurrent" (perhaps "logically serial"). Such definitions are fine in my opinion. The "astonishment" comes before one accepts them. We can add text to make this definition more explicit, and maybe add a rationale explaining why such a definition is necessary.

  4. Is the mention of "logical concurrency" necessary at all? Let's agree on the point of the examples. Both examples hinge on one question: do we want to require implementations to atomically enqueue a send or receive at function-invocation time, potentially with an implicit synchronization? -- Is this interpretation correct? That is a rather clear requirement on the implementation, and we can specify it without mentioning "logical concurrency". With the current text, both examples are "logically concurrent" and either behavior should be accepted as "non-astonishing". We use such examples to train our concurrent programmers and to correct their "astonishment" metrics all the time. [The second example touches on one more philosophy: #pragma is used for performance tuning, so it should not alter the logical meaning of the code. Edit: withdrawn -- "logical meaning" is ill-defined and off-topic.]

  5. Finally, we can discuss what the final text would be, assuming we agree on all of the above -- one way or the other.

Sorry that I mixed in my opinions within each point. Also, I edited heavily as I re-read my own comments.


jsquyres commented Dec 8, 2018

Defining that MPI operations separated by an infinite amount of wall clock time are "logically concurrent" (and therefore can be arbitrarily ordered in reality) adds no clarity to the standard, serves no practical purpose, and creates confusion about how app developers are supposed to write coherent multi-threaded MPI applications.


hzhou commented Dec 8, 2018

There is no "infinite" amount of wall-clock time. "Infinite" exists only logically, and the phrase "operations separated by an infinite time" is the same as "logically in-concurrent", which still requires a definition -- back to square one. Do you want to refine your "infinite" to something finite: 1 second, or 1 minute?

PS: of course the question is a trap. There is no logical distinction between 1 second, 1 minute, or 1 femtosecond. Logical concurrency, as its name says, is logical; physical time separation is irrelevant. We cannot really reduce logical concurrency to physical concurrency -- that is ultimately impossible -- which leaves only the choice of completely separating "logical concurrency" from the physical kind.


hzhou commented Dec 8, 2018

A different and more relevant question is: can we come up with a realistic example in which the order of sends from distinct threads is critical (for the application's purposes) and there is no cleaner alternative way to express that ordering?

In the first example, since the code is willing to call thread_barrier, does it make more sense to move both sends into the same thread? Similarly, in the second example, since the code is willing to use #pragma omp critical, why not issue both sends from the same thread?
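
For concreteness, a minimal sketch of that restructuring of the first example (comm1 is the same hypothetical communicator as above; values as in the original):

#include <mpi.h>

/* Both sends are issued from a single thread, so the ordinary
 * single-thread ordering rule applies and 'a' is matched before 'b'
 * at the receiver. No thread_barrier() is needed once the ordering is
 * expressed as program order within one thread. */
void send_both(MPI_Comm comm1)
{
    char a = 111;
    char b = 222;
    MPI_Send(&a, 1, MPI_CHAR, 1, 2, comm1);
    MPI_Send(&b, 1, MPI_CHAR, 1, 2, comm1);
}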


mhoemmen commented Dec 8, 2018

#pragma is used for performance tuning, it should not alter the logical meaning of the code.

If the parallel region actually happens, then #pragma omp critical very definitely does alter the meaning of the code. In theory, one could remove all the #pragma omp * directives and the code would still work (maybe-ish, if you ignore the OpenMP API and interactions with other pragmas), but you can't just remove some of the pragmas.


hzhou commented Dec 8, 2018

@mhoemmen I agree that #pragma omp critical does alter the results of the code. I am asking whether the meaning should. Code that depends on #pragma for its intended meaning is, in my opinion, bad code. Should we encourage such bad code -- note that it is not bad code if its meaning does not depend on the actual realization, and bad code may well be practical code -- by guaranteeing implementation-level behaviors?

PS: on further thought, #pragma omp critical does carry meaning rather than being purely about performance. I don't quite like it, but I suppose it is similar to the explicit thread_barrier calls -- it might as well be compiled that way. I withdraw my earlier comment on this particular #pragma.


jeffhammond commented Dec 8, 2018 via email


jeffhammond commented Dec 8, 2018 via email

@mhoemmen

@jeffhammond It's cool :D

@jsquyres

@hzhou The current MPI specification text allows an unbounded / infinite amount of time to pass between MPI operations in different threads, and yet still allows the implementation to re-order them. That is flat-out terrible.

It does not matter whether you agree with the style of the two examples that were provided. They are valid MPI applications, and demonstrate the issue clearly.

You can continue to argue that the text's current definition of "logically concurrent" is fine/good, but that's just academic. The definition in the standard adds no clarity, serves no practical purpose, and creates confusion about how app developers are supposed to write coherent multi-threaded MPI applications.

@dholmes-epcc-ed-ac-uk

@jsquyres

serves no purpose

If the two threads were assigned completely independent communication resources (HW & SW) and their receivers were also independent, then enforcing any ordering that spans both threads adds overhead that could, in theory, be avoided. Messages from really/properly concurrent threads would need sequence numbers to record the order at the sender MPI process; then, after the messages take independent non-deterministic routes (permitted in either case), the receiver must enforce the original ordering, using the sequence numbers to re-create the sender MPI process order in the matching queue.

The current text permits that "re-create ordering" overhead to be avoided by allowing logically concurrent messages to be delivered in any order. It has the side-effect of removing any guarantee of ordering, even in pathological cases, such as the examples given. My understanding is that this was known and intended (both the potential optimisation and the side-effect) by the original authors of this text (but I might be wrong). Congestion control could add an arbitrary holding delay to one network route - why should other (concurrent/independent) messages be delayed, even if they left the sender later than the held message?
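
As an illustration of that bookkeeping (purely hypothetical, not how any particular MPI library implements it), the sender side could stamp every send on a communicator with a sequence number taken atomically at call time, which the receiver would later use to re-create the sender's order in its matching queue:

#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical sender-side state: one counter per communicator. */
struct comm_seq {
    _Atomic uint64_t next_seq;
};

/* Called at the start of each send; the returned number records the
 * MPI-process order even when sends race on different threads. */
static uint64_t stamp_send(struct comm_seq *c)
{
    return atomic_fetch_add_explicit(&c->next_seq, 1, memory_order_relaxed);
}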

The API design decision here is a trade-off between easy-to-use but lower performance (enforce the order) and hard-to-use but better performance (don't enforce the order).

Easy/hard-to-use is subjective and based on a clear explanation of the intended/expected behaviour. Higher/lower performance ought to be objectively measurable.

@dholmes-epcc-ed-ac-uk

In the face-to-face meeting, @schulzm and I decided that:

  1. clarifying the text so that the permission to ignore ordering is more explicit/obvious (i.e. what some folk think it already says) would make it clear that existing programs that rely on ordering between threads are actually erroneous, even if they accidentally work with current MPI library implementations. No MPI library would need to change behaviour because it would be correct to honour the order or to ignore it. Some (erroneous) user programs may (technically) need to change.

  2. modifying the text so that the MPI library is required to record and re-create the order of "logically concurrent" messages (i.e. what other folk think it already says) would make it clear that existing programs are fine irrespective of whether they rely on ordering or not but that some (possibly hypothetical) MPI implementations (those that deliver logically concurrent messages in arbitrary order) would become incorrect unless they introduced some mechanism to enforce this new extension of the ordering rule, probably at the expense of reducing performance.

Thus, I agree with two thirds of @jsquyres' summary:

The definition in the standard adds no clarity, serves no practical purpose, and creates confusion about how app developers are supposed to write coherent multi-threaded MPI applications.

@dholmes-epcc-ed-ac-uk dholmes-epcc-ed-ac-uk self-assigned this Dec 10, 2018
@dholmes-epcc-ed-ac-uk dholmes-epcc-ed-ac-uk added not ready wg-p2p Point-to-Point Working Group labels Dec 10, 2018

hzhou commented Dec 10, 2018

Despite completely different wording, I believe @dholmes-epcc-ed-ac-uk and I have the same understanding (of the problem). Given that understanding, my personal opinion leans heavily toward his first option, but my earlier post was mostly trying to clarify that understanding.

@dholmes-epcc-ed-ac-uk

@hzhou

Both examples hinge on one question: do we want to restrict implementations to enqueue atomically send or recv at function invocation time, potentially with an implicit synchronization call?

Is this interpretation correct?

No, it is not a correct interpretation. (Perhaps it is and I have simply misinterpreted your question.)

  1. there is no enqueue because the communication subsystem is not assumed to be a queue - it might be multi-rail, it might be multi-route, it might be non-deterministic.

  2. all MPI function calls are "atomic" in a thread-compliant MPI implementation, in as much as "the outcome will be as if the calls executed in some order" (see MPI-4.0-SC18-Draft, section 12.4.1, page 511, point 1)

  3. despite these points, the ordering guarantee (if extended) would go further and require that the matching order at the receiver MPI process respect/enforce the function call ordering recorded by the sender MPI process, even if the messages were injected into the network in a different order (always permitted), took longer/shorter routes in the network (always permitted), and/or arrived at the receiver in a different order (always permitted).


hzhou commented Dec 10, 2018

@dholmes-epcc-ed-ac-uk I meant the same thing -- your sentence may be clearer. I meant that the option for the MPI Standard text is to specify an implementation detail rather than a conceptual description (with logical concurrency or infinite wall time). By "enqueue" I meant recording the order at send time. It could be a literal queue, a simple sequence number, or some other mechanism that preserves the ability to restore the order -- if there is ordering, there is a meta-queue. However, I don't see any way to get away from the synchronization. The best we can do is make the global sequence number or queue atomic, which is stricter than simple thread safety. This is the first time I have heard that all MPI function calls are "atomic"; I suppose I have to accept it, or that is another discussion.

EDIT: I checked the referenced text and it says thread-safe, not atomic. Correct me if I am wrong: thread-safe calls can be interleaved; atomic calls strictly cannot. This matters when we talk about ordering: with atomic calls the ordering is well defined, whereas in an interleaved situation the ordering still needs to be defined. Whether we need to define such an ordering and, if so, how to define it is the current point of discussion.

@dholmes-epcc-ed-ac-uk

@hzhou

I believe ultimately there is never physical concurrency.

This seems intuitively false: consider a multi-core socket, a multi-socket node, a multi-rail NIC, a multi-route fabric - it's turtles all the way down.

Also, even if the two sender threads are multiplexed through a single bottleneck at some point during transmission, this misses the point of the ordering rule, which talks about message matching order rather than injection, transmission, or delivery order.


dholmes-epcc-ed-ac-uk commented Dec 10, 2018

if there is ordering, there is a meta-queue

Personally, I think the ordering rule leads inexorably towards channels or streams, which are unastonishingly ordered by definition. However, that is another story.


hzhou commented Dec 10, 2018

I believe ultimately there is never physical concurrency.

This seems intuitively false, ... it's turtles all the way down.

"Turtles all the way down" leads to "ultimately there is never physical concurrency" right?

If we discuss at the abstract level -- turtles all the way down -- then the discussion will never end or reach agreement, depending on which turtle we pause at. If we drop the philosophical discussion altogether and simply define our terms at a technical level, it becomes definite. Defining "concurrency" based on whether the calls are made from distinct threads is one such approach. Defining the behavior by requiring MPI to record the ordering at send-invocation time (the starting point of the function call) is another option -- it still leaves some ambiguity about exactly which point, but it is enough for our example cases.

@dholmes-epcc-ed-ac-uk

meta-queue

At the moment, point-to-point send and receive in MPI are half-channel operations. A channel is a FIFO queue, so send and receive operations contribute to meta-half-queues.

Each {send-thread, receive-thread} pairing constitutes a different meta-queue. For each thread, all of its meta-half-queues (its contributions to all meta-queues formed with all other threads) when taken together form a meta-queue.

@dholmes-epcc-ed-ac-uk

"Turtles all the way down" leads to "ultimately there is never physical concurrency" right?

Quite the opposite?

Two different processes on two different hardware threads, on two different cores, in two different sockets, on two different nodes, in two different cities, ... eventually you must admit that these could actually execute at the same time, i.e. physically concurrently.


hjelmn commented Dec 10, 2018

Honestly, any app that relies on the ordering of messages sent/received from different threads without explicitly enforcing the order is erroneous code and should be rewritten. So, I would go with option 1. But then, I am explicitly against the current non-overtaking semantic in MPI anyway.


hzhou commented Dec 10, 2018

"Turtles all the way down" leads to "ultimately there is never physical concurrency" right?

Quite the opposite?

Two different processes on two different hardware threads, on two different cores, in two different sockets, on two different nodes, in two different cities, ... eventually you must admit that these could actually execute at the same time, i.e. physically concurrently.

Because you will never reach the "eventual" or the "last turtle", you will never reach the ultimate "concurrency" -- which equates to ultimate in-concurrency. ... But this is a recurring philosophical discussion. Shall we agree that we understand its never-ending, pointless nature and avoid such philosophical discussions?

@dholmes-epcc-ed-ac-uk

@hjelmn the problem is that there is no way to explicitly enforce the order - as shown by the examples. The only options the user has are to marshal all point-to-point calls onto a single thread and rely on the ordering rule, or to use different tags/ranks/communicators.
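
As a sketch of the different-tags route, applied to the second example above (hypothetical code; tags 42 and 43 are arbitrary), giving each thread its own tag lets the receiver impose the order explicitly:

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

void test_tagged(int rank)
{
    if (rank == 0) {
#pragma omp parallel num_threads(2)
        {
            int msg = omp_get_thread_num();   /* thread 0 sends 0, thread 1 sends 1 */
            MPI_Send(&msg, 1, MPI_INT, 1, 42 + msg, MPI_COMM_WORLD);
        }
    } else if (rank == 1) {
        int msg;
        MPI_Recv(&msg, 1, MPI_INT, 0, 42, MPI_COMM_WORLD, MPI_STATUS_IGNORE);  /* tag 42 first */
        printf("Received %i\n", msg);
        MPI_Recv(&msg, 1, MPI_INT, 0, 43, MPI_COMM_WORLD, MPI_STATUS_IGNORE);  /* then tag 43 */
        printf("Received %i\n", msg);
    }
}

This always prints 0 then 1, regardless of which thread's send physically happens first.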


hjelmn commented Dec 10, 2018

@dholmes-epcc-ed-ac-uk True. Though they could force the order using out-of-band methods (which would be extremely ugly).

@dholmes-epcc-ed-ac-uk

@hzhou we have different meanings for the word concurrency.

I'm using this one:
https://dictionary.cambridge.org/dictionary/english/concurrent

"at the same time"

What do you mean by the word?

@dholmes-epcc-ed-ac-uk

@hjelmn

Though they could force the order using out-of-band methods (which would be extremely ugly).

How does that work? I'm not sure I agree - please provide an example.


hzhou commented Dec 7, 2022

To my knowledge, programmers need to mark the data as atomic or use explicit atomic instructions to tell the compiler. The memory model is all about educating programmers so they can tell the compiler precisely what they mean. I think we should do the same in MPI.
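
For comparison, a minimal C11 sketch of what "telling the compiler" looks like on the shared-memory side (a standard release/acquire publication pattern, purely for illustration):

#include <stdatomic.h>

int payload;                 /* ordinary data */
_Atomic int ready = 0;       /* explicitly marked atomic */

void publisher(void)         /* thread 1 */
{
    payload = 42;
    atomic_store_explicit(&ready, 1, memory_order_release);
}

void consumer(void)          /* thread 2 */
{
    while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
        ;                    /* spin until published */
    /* payload is now guaranteed to be 42 */
}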


devreal commented Dec 7, 2022

To my knowledge, programmers need to mark the data as atomic or use explicit atomic instructions to tell the compiler.

That is what MPI_THREAD_MULTIPLE does.


hzhou commented Dec 7, 2022

To my knowledge, programmers need to mark the data as atomic or use explicit atomic instructions to tell the compiler.

That is what MPI_THREAD_MULTIPLE does.

That's what this thread is about -- we do not agree on that and are debating it. To me, MPI_THREAD_MULTIPLE says that users are going to call MPI from different threads, potentially concurrently.


garzaran commented Dec 7, 2022

This whole discussion is about performance, and if you care about performance, then things are not sequential. So, @jeffhammond, I do not think I buy your arguments.

@jeffhammond

My argument is that people need to use proper computer science terminology for discussing shared-memory concurrency. Do you object to that?

Once we start using C11 memory model language to discuss things, then we can talk about the consequences of them on MPI.

@jeffhammond

And no, performance is irrelevant at this point. You cannot just casually break the MPI-1 semantics for message ordering just because you want MPI_THREAD_MULTIPLE to go faster in some specific use case.

If you want to change the semantics of Send-Recv, please submit a ticket to do that.

@jeffhammond

There is nothing wrong with the following text. The problem exists only in how people are reading it, because we only address the unordered multithread case.

If a process has a single thread of execution, then any two communications executed by this process are ordered. On the other hand, if the process is multithreaded, then the semantics of thread execution may not define a relative order between two send operations executed by two distinct threads. The operations are logically concurrent, even if one physically precedes the other. In such a case, the two messages sent can be received in any order. Similarly, if two receive operations that are logically concurrent receive two successively sent messages, then the two messages can match the two receives in either order.

This is the logical complement of the above text:

On the other hand, if the process is multithreaded, then the semantics of thread execution may define a relative order between two send operations executed by two distinct threads. The operations are logically ordered even though they occur on different threads. In such a case, the two messages sent will be received in the order they are sent, as defined by the ordering established between two threads. Similarly, if two receive operations that are logically ordered on two threads receive two successively sent messages, then the two messages will match the two receives in the order defined by their order of execution on the two threads.

I am not adding anything here. The standard currently describes unordered multithreaded execution. We didn't bother defining logically ordered multithreaded execution, probably because it was obvious that it was degenerate with the single-threaded case, but it seems that was a bad assumption.
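
A sketch of a program that would fall under this complementary case (hypothetical pthread-based structure; the create/join pair is what makes the thread semantics define an order between the two sends):

#include <mpi.h>
#include <pthread.h>

static char a = 1, b = 2;

static void *send_a(void *arg) { MPI_Send(&a, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD); return NULL; }
static void *send_b(void *arg) { MPI_Send(&b, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD); return NULL; }

void ordered_sends(void)
{
    pthread_t t;
    pthread_create(&t, NULL, send_a, NULL);
    pthread_join(t, NULL);                    /* send of 'a' completes here ... */
    pthread_create(&t, NULL, send_b, NULL);   /* ... and happens-before the send of 'b' */
    pthread_join(t, NULL);
}

Under the complementary text, 'a' must match before 'b' at rank 1, because the thread semantics define a relative order between the two sends.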


hzhou commented Dec 7, 2022

There is nothing wrong with the following text. The problem exists only in how people are reading it, because we only address the unordered multithread case.

The problem lies precisely in the statement that you want to dismiss -- the order in a multithreaded case is an intention, not a result. Without being told, and if it does not assume it, MPI cannot tell whether two calls from two threads are ordered or not, even when the two calls are seconds apart in real time. To observe an order, MPI needs to actively order across threads, and then all calls will be ordered -- not as a result, but as an assumed intention. Note that in my experience, when programmers write parallel code, they usually intend calls to be concurrent by default rather than ordered.

I am not adding anything here. The standard currently describes unordered multithreaded execution. We didn't bother defining logically ordered multithreaded execution, probably because it was obvious that it was degenerate with the single-threaded case, but it seems that was a bad assumption.

But you are adding something in your previous comments. You are asserting that all MPI calls from multiple threads are logically ordered. Again, "logical" here is an intention. The standard has no text on how to determine this "logical" order, so the text about its consequences is ambiguous at best and misleading at worst.

@jeffhammond

I'm not saying that but it's clear at this point that you have no intention to engage in good faith reading so I'm going to ignore you from now on.


jeffhammond commented Dec 8, 2022

What I am saying is what Dan said 4 years ago (#117 (comment)), which is that calls to MPI behave as if they begin with an atomic operation. That's what it means to have some order: atomicity. They are not logically concurrent, any more than atomic_store to the same memory location is, and the relevant location here is the object associated with the MPI_Comm handle.
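
The atomic_store analogy as a C11 sketch (illustrative only): even though the two stores race, the C memory model still gives each atomic object a single total modification order, the analogue of "the calls executed in some order".

#include <stdatomic.h>

_Atomic int x;

/* Neither thread is ordered with respect to the other, yet C11 guarantees
 * one total modification order for x: one of these stores is first and the
 * other is second. */
void thread_a(void) { atomic_store(&x, 1); }
void thread_b(void) { atomic_store(&x, 2); }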

The camp B people are welcome to propose modifications to 11.6, but they cannot ignore it. For example, they can propose MPI_THREAD_CONCURRENT that says that MPI calls are logically concurrent, but the current standard does not allow this, because it is inconsistent with the meaning of the words "in some order".

I suppose this debate was already had in the 331 comments I haven't read yet, but since there has not been a pull request to amend the following...

two concurrently running threads may make MPI calls and the outcome will be as if the calls executed in some order, even if their execution is interleaved.

...we can just stop talking about how the calls themselves are logically concurrent. The standard already says otherwise.

It is true that in a multithreaded program where threads are not synchronized, the programmer must reason about MPI calls as if they are logically concurrent, but that does not mean they are actually logically concurrent in the implementation. What programmers can infer from code that uses MPI is not the same thing as what implementations are allowed to do.


jaegerj commented Dec 8, 2022 via email


jprotze commented Dec 8, 2022

What I am saying is what Dan said 4 years ago (#117 (comment)), which is that calls to MPI behave as if they begin with an atomic operation. That's what it means to have some order: atomicity. They are not logically concurrent, any more than atomic_store to the same memory location is, and the relevant location here is the object associated with the MPI_Comm handle.

This statement is more restrictive than what the current standard text defines. The interesting part of "no ordering is given if the calls are concurrent" is that it does not matter at which point during the execution of the MPI function this atomic operation is placed. If the calls are logically ordered, the placement does not matter - it will reflect the ordering in any case. If the calls are not logically ordered, the placement does not matter - no ordering is given anyway.

Following up on the "enabled" discussion yesterday, I think that the enabling of the operation should actually define the ordering. I think this would make things clearer for the case of non-blocking communication.
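
A hypothetical non-blocking variant of the first example shows why: if the ordering point is the call that enables (starts) each operation, the answer is unambiguous even though completion happens later. (thread_barrier and comm1 are the same hypothetical names as in the first example.)

#include <mpi.h>

extern void thread_barrier(void);
extern MPI_Comm comm1;

static char a = 111, b = 222;
static MPI_Request r1, r2;

void thread1(void)
{
    MPI_Isend(&a, 1, MPI_CHAR, 1, 2, comm1, &r1);  /* operation enabled here */
    thread_barrier();
    MPI_Wait(&r1, MPI_STATUS_IGNORE);              /* completion may come later, in any order */
}

void thread2(void)
{
    thread_barrier();                              /* passes only after thread 1's MPI_Isend */
    MPI_Isend(&b, 1, MPI_CHAR, 1, 2, comm1, &r2);
    MPI_Wait(&r2, MPI_STATUS_IGNORE);
}

If enabling defines the ordering, the message carrying 'a' is ordered before the one carrying 'b', regardless of when either MPI_Wait completes.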


jprotze commented Dec 8, 2022

Do you mean the non-overtaking rule? Because to me, the sentence "The operations are logically concurrent, even if one physically precedes the other. In such a case, the two messages sent can be received in any order." states that the non-overtaking rule is not applicable to MPI_Sends that were issued in different threads. Julien J.

The first sentence in your quote just states that logically concurrent is more relaxed than physically concurrent. For people with a multi-threading background, this should be an axiomatic statement. The second sentence then becomes "If two send operations are logically concurrent, the two messages sent can be received in any order." This does not say that ordering between threads is given away. It is only given away if the operations are logically concurrent.

@wgropp wgropp closed this as completed Jan 9, 2023

devreal commented Jan 9, 2023

I believe https://github.com/mpi-forum/mpi-standard/pull/777 wasn't meant to close this issue, reopening.

@devreal devreal reopened this Jan 9, 2023

wesbland commented Jan 9, 2023

I believe mpi-forum/mpi-standard#777 wasn't meant to close this issue, reopening.

Oops. Sorry I missed that comment. You're right.

@wesbland wesbland added the passed first vote Passed the first formal vote label Feb 8, 2023

wesbland commented Feb 8, 2023

This passed a first vote on 2023-02-08.

Yes: 26 · No: 1 · Abstain: 4

@Wee-Free-Scot Wee-Free-Scot removed the scheduled reading Reading is scheduled for the next meeting label Feb 27, 2023
@wesbland wesbland modified the milestones: February 2023, March 2023 Mar 1, 2023
@mpiforumbot

This passed a 2nd vote.

Yes: 29 · No: 1 · Abstain: 1

@mpiforumbot mpiforumbot added passed final vote Passed the final formal vote and removed scheduled vote labels Mar 21, 2023
@wgropp wgropp closed this as completed Mar 21, 2023
@Wee-Free-Scot Wee-Free-Scot added the chap-p2p Point to Point Communication Chapter Committee label Jul 13, 2023