The VM's `ProcessSend` instruction takes a `wait` flag. When set to `true`, the sender waits for the message's result, without using a future. When set to `false`, the message is scheduled, but the sender continues as usual and the receiver doesn't reschedule the sender. This is used for scheduling process destructors, as the process dropping the last handle to another process shouldn't have to wait for said process to finish running its destructor.
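For illustration, here's a minimal Rust sketch of what such a `wait` flag could look like in an interpreter loop. All names here (`Instruction`, `Process`, `waiting_for_reply`, and so on) are hypothetical and heavily simplified, not the VM's actual types:

```rust
// Hypothetical, simplified types; not the VM's real instruction set.
struct Process {
    mailbox: Vec<u64>,
    // Set while the process is suspended, waiting for a message's result.
    waiting_for_reply: bool,
}

enum Instruction {
    ProcessSend { receiver: usize, message: u64, wait: bool },
}

fn execute(ins: &Instruction, sender: usize, procs: &mut Vec<Process>) {
    match ins {
        Instruction::ProcessSend { receiver, message, wait } => {
            // Deliver the message to the receiver's mailbox.
            procs[*receiver].mailbox.push(*message);

            if *wait {
                // wait=true: the sender suspends until the receiver runs
                // the message and reschedules it with the result. No
                // future is involved.
                procs[sender].waiting_for_reply = true;
            }
            // wait=false: the sender keeps running and the receiver never
            // reschedules it. This is how process destructors are
            // scheduled: dropping the last handle to a process shouldn't
            // block on that process finishing its destructor.
        }
    }
}
```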
There are certain cases where you want to schedule a message asynchronously but don't care about the result. For example, if we have a distributed counter we may not care about the result of individual `increment` calls, as we await the final result when obtaining the final value. The current way of dealing with this is to use `async proc.message()` and just ignore the future it returns. While this works, it's a bit wasteful, as the allocated future is never used.
To handle this better, the compiler should optimise `async` calls such that if the resulting future isn't used, the call translates to a `ProcessSend` instruction with `wait=false`, removing the need for allocating the future.
Thinking out loud, the way we'd implement this is roughly as follows:
The expression `async x.y` returns an initial register `R1`. If this expression is assigned, `R1` is moved into `R2` (variable assignments always introduce new registers instead of reusing their input registers). If the expression is moved (e.g. into an array), `R1` is marked as moved as usual.
If at some point we drop `R1`, we know for a fact it's not used anywhere. In this case we'd have to somehow "mark" the register such that code generation can switch to `ProcessSend(wait=false)`.
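As a rough sketch of what that marking could look like (all names here are hypothetical, and the real register handling is more involved):

```rust
// Toy register-state tracking, assuming registers are identified by an
// index. None of these names come from the actual compiler.
#[derive(Clone, Copy, PartialEq)]
enum RegisterState {
    Written, // produced, but not yet consumed by anything
    Moved,   // moved into a variable, array, argument, etc.
    Dropped, // dropped while still Written: the value was never used
}

struct Registers {
    states: Vec<RegisterState>,
}

impl Registers {
    fn mark_moved(&mut self, reg: usize) {
        self.states[reg] = RegisterState::Moved;
    }

    fn drop_register(&mut self, reg: usize) {
        // Dropping a register that was never moved tells us its value
        // (here: the future) was never used anywhere.
        if self.states[reg] == RegisterState::Written {
            self.states[reg] = RegisterState::Dropped;
        }
    }

    // Code generation would consult this to decide between allocating a
    // future and emitting ProcessSend(wait=false).
    fn is_unused(&self, reg: usize) -> bool {
        self.states[reg] == RegisterState::Dropped
    }
}
```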
Another approach is to do this using a separate MIR pass where we rewrite the instruction at the MIR level, meaning code generation stays "dumb". I think I prefer this approach, but it does require that we somehow persist register states between MIR passes, instead of throwing them away after the first pass.
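A sketch of what such a pass could look like, using made-up MIR instruction names and a single basic block for simplicity:

```rust
use std::collections::HashSet;

// Made-up MIR instructions, reduced to what the example needs.
#[derive(Clone, Copy)]
enum MirInstruction {
    // `async x.y`: schedule the message and put a future for its result
    // into the `result` register.
    SendAsync { receiver: u32, message: u32, result: u32 },
    // A fire-and-forget send, i.e. ProcessSend with wait=false.
    SendDetached { receiver: u32, message: u32 },
    MoveRegister { source: u32, target: u32 },
    Drop { register: u32 },
}

fn optimise_sends(block: &mut Vec<MirInstruction>) {
    // First pass: collect registers that are actually read somewhere. A
    // bare Drop destroys the value but doesn't count as a use.
    let mut used = HashSet::new();

    for ins in block.iter() {
        if let MirInstruction::MoveRegister { source, .. } = ins {
            used.insert(*source);
        }
    }

    // Second pass: rewrite async sends whose future is never read.
    let mut dead_futures = HashSet::new();

    for ins in block.iter_mut() {
        if let MirInstruction::SendAsync { receiver, message, result } = *ins {
            if !used.contains(&result) {
                dead_futures.insert(result);
                *ins = MirInstruction::SendDetached { receiver, message };
            }
        }
    }

    // The Drop of a future that's no longer allocated must go as well.
    block.retain(|ins| {
        !matches!(ins, MirInstruction::Drop { register } if dead_futures.contains(register))
    });
}
```

Whether the `used` set comes from a fresh scan like this, or from register states persisted from the first MIR pass, is exactly the open question mentioned above.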
I'll put this on the backlog for now. While I want to handle this sooner rather than later, I think I first need to come up with a more general idea of how to apply optimisations on MIR. I don't want to end up with different ad-hoc ways of going about it, and then having to standardise all of them later.