Comparing changes

base fork: jepst/pub
base: b58db9b2a7
...
head fork: jepst/pub
compare: 12ca4d87c6
  • 3 commits
  • 1 file changed
  • 0 commit comments
  • 1 contributor
Showing with 396 additions and 209 deletions.
  1. +396 −209 remote.tex
605 remote.tex
@@ -147,11 +147,16 @@ \section{Introduction}
With the age of steadily improving processor performance behind us, the way forward is to compute with more, rather than faster, processors. A data center that makes available a large number of processors for storing and processing users' data, or running users' programs, is termed a \emph{cloud}. We'll use this term to mean specifically a network of computers that have independent failure modes and separate memories.
How should we program the cloud? One approach is to simulate a familiar shared-memory multiprocessor, and then to program the simulated computer using conventional shared-memory concurrency primitives, such as locks and transactions.
-We have two objections to this approach. The first is that the preponderance of the evidence is that shared-memory concurrency is \emph{just too hard}. For example,
+We have two objections to this approach. The first is that the preponderance of the evidence is that shared-memory concurrency is \emph{just too hard}.
+\spj{Not a strong argument. Message passing is always available for shared-memory machines and people don't use it much. It's just
+too inconvenient, or its cost model doesn't fit. I'd nuke this objection.}
+For example,
$<<$Automatically classifying benign and harmful data races using replay analysis?$>>$
-The second objection is that, to be effective, a programming model must be accompanied by a cost model: it must give programmers tools for reasoning about the cost of computation. In a distributed memory system, one of the most significant costs is data movement; this is true whether one measures cost in terms of energy or time. A programmer trying to reduce these costs needs a model in which they are explicit, not one that denies that data movement is even taking place\,---\,which is exactly the premise of a simulated shared memory.
+The second objection is that, to be effective, a programming model must be accompanied by a cost model: it must give programmers tools for reasoning about the cost of computation. In a distributed memory system, one of the most significant costs is data movement; this is true whether one measures cost in terms of energy or time. A programmer trying to reduce these costs needs a model in which they are explicit, not one that denies that data movement is even taking place\,---\,which is exactly the premise of a simulated shared memory. \spj{Not just data movement, but
+cost of synchronisation too. Eg distributed STM would be a disaster.}
-Instead, we turn to a solution, popularized by MPI \cite{mpi99} and Erlang \cite{Erlang93}: {\em message passing}. The message passing model stipulates that the concurrent processes have no access to each other's data: any data that needs to be communicated from one process to another is explicitly copied by sending and receiving {\em messages}. Not only does this make the costs of communication apparent; it also eliminates many of the classic concurrency pitfalls, such as race conditions.
+Instead, we turn to a solution, popularized by MPI \cite{mpi99} and Erlang \cite{Erlang93}: {\em message passing}. The message passing model stipulates that the concurrent processes have no access to each other's data: any data that needs to be communicated from one process to another is explicitly copied by sending and receiving {\em messages}. Not only does this make the costs of communication apparent; it also eliminates many of the classic concurrency pitfalls, such as race conditions. \spj{Again, I'd drop the race condition argument, and substitute the
+good failure model supported by message passing.}
Developing for the cloud presents other challenges. In a network of dozens or hundreds of computers, some of them are likely to fail during the course of an extended computation; a programming system for the cloud must therefore be able to tolerate partial failure. Here again, Erlang has a solution that has stood the test of time; the highest-reliability programs on the planet are written in Erlang, and achieve nine nines of reliability. We don't innovate in this area, but adopt Erlang's solution (summarized in Section \ref{FaultTolerance}).
@@ -163,7 +168,7 @@ \section{Introduction}
The contributions of this paper are:
\begin{itemize}
-\item An interface for distributed programming in Haskell (Section \ref{Processes}). Following the Erlang model, our framework provides a system for exchanging messages between concurrent processes, regardless of whether those threads are running on one computer or on many. Besides mechanisms for sending and receiving data, we provide functions for starting new threads remotely, and for fault tolerance, which closely follow the widely-respected Erlang model. Unlike Erlang, our framework does not prohibit the use of explicit shared memory concurrency mechanisms \emph{within} one of our concurrent processes.
+\item An interface for distributed programming in Haskell (Section \ref{Processes}). Following the Erlang model, our framework provides a system for exchanging messages between concurrent processes, regardless of whether those threads are running on one computer or on many. Besides mechanisms for sending and receiving data, we provide functions for starting new threads remotely, and for fault tolerance, which closely follow the widely-respected Erlang model. Unlike Erlang, our framework supports the use of explicit shared-memory concurrency mechanisms \emph{within} one of our concurrent processes.
\item A method for serializing function closures to enable higher-order functions to work in a distributed environment (Section \ref{Closures}). Starting a remote process demands a representation of code objects and their environment. Our approach to closures requires an \emph{explicit} indication of which parts of the function's environment will be serialized, and thus gives the programmer control over the cost of data movement.
@@ -184,12 +189,13 @@ \subsection{Processes}
}
\end{figure}
-In most respects, our framework follows Erlang by favoring message-passing as the primary means of communication between processes. Our framework differs from Erlang, though, in that it does not prohibit shared-memory concurrency. The existing elements of Concurrent Haskell, such as \textt{MVar} for shared mutable variables and \textt{forkIO} for creating lightweight threads, are still available to programmers who wish to combine message passing with the more traditional approach. This is illustrated in Figure \ref{fig:ProcessBubbles}. Our framework ensures that mechanisms specific to shared memory concurrency cannot be inadvertently used between remote systems.
+In most respects, our framework follows Erlang by favoring message-passing as the primary means of communication between processes. Our framework differs from Erlang, though, in that it also supports shared-memory concurrency within a single process. The existing elements of Concurrent Haskell, such as \textt{MVar} for shared mutable variables and \textt{forkIO} for creating lightweight threads, are still available to programmers who wish to combine message passing with the more traditional approach. This is illustrated in Figure \ref{fig:ProcessBubbles}. Our framework ensures that mechanisms specific to shared-memory concurrency cannot be inadvertently used between remote systems. \spj{Explain how!! This is very far from obvious and it's a key advantage.}
\subsection{Messages to processes}
Any process can send and receive messages. Our messages are asynchronous, reliable, and buffered. All the state associated with messaging (most especially, the message queue) is wrapped in the \textt{ProcessM} monad, which is updated with each messaging action. Thus, any code participating in messaging must be in the \textt{ProcessM} monad. The basic primitives are \textt{send} and \textt{expect}:
+\spj{I thought we were just going to use send and receive, not receiveWait and match?}
\begin{code}
send :: (Serializable a) => ProcessId -> a -> ProcessM ()
expect :: (Serializable a) => ProcessM a
@@ -227,6 +233,9 @@ \subsection{Messages to processes}
If this example looks familiar, it should: it's very close to the first distributed programming example given in {\em Getting Started with Erlang}. Note that in the Haskell version, unlike in the Erlang version, \textt{Ping} and \textt{Pong} are types rather than atoms, and so they need to be declared explicitly. As given, the type declarations are incomplete; they need to be declared to be instances of the class \textt{Serializable}; we will discuss this in Section~\ref{s:serialization}.
+\spj{Check for consistent spelling of serialise/serialize}
+\spj{Somewhere we need to talk about Binary, and the relation of encode/decode to get/put.}
+
In general, to send a message we use \textt{send}, which packages up a chunk of (serializable) data and transmits it (possibly over the network) to a particular process, given by its unique \textt{ProcessId}. Upon receipt, the incoming message will be placed in a message queue associated with the destination process.
The \textt{send} function corresponds to Erlang's \texttt{!} operator.
@@ -235,12 +244,15 @@ \subsection{Messages to processes}
While all of Haskell's primitive data types and most of the common higher-level data structures are instances of \textt{Serializable}, and therefore can be part of a message, some data types are emphatically not serializable. One example is \textt{MVar}, the type of Haskell's mutable concurrent variables. Since \textt{MVar} allows communication between threads on the assumption of shared memory, it isn't helpful to send it to a remote process that may not share memory with the current process. Although one can imagine a synchronous distributed variable that mimics the semantics of an \textt{MVar}, such a variable would have a vastly different cost model from a normal \textt{MVar}. Since neither \textt{MVar}'s cost model nor its implementation could be preserved in an environment requiring communication between remote systems, we felt it best to prohibit programmers from trying to use \textt{MVar}s in that way. Notice, however, that we do not attempt to stop the programmer from using \textt{MVar}s within a single process: processes are allowed to use Haskell's \textt{forkIO} function to create \emph{local} threads that can share memory using \textt{MVar}.
+\spj{What about receive? Point out that parsing can fail, which makes use block, yes?}
+
At the far end of the channel, the simplest way of receiving a message is with \textt{expect}, which
examines the message queue associated with the current process and extracts the first message whose type matches the (inferred) type of \textt{expect}\,---\,a \textt{Ping} message in the example.
\textt{expect} dequeues the message, unpacks the transmitted data and returns it.
%Together, they correspond to Erlang's \textt{receive} construct. Since our framework is packaged as a library rather than as a language extension, we use the \textt{MatchM} type to approximate Erlang's specialized syntax. \textt{receiveWait}'s first parameter is a list of \textt{match} invocations, where the lambda function argument to each \textt{match} potentially accepts a different type of message. Thus, the programmer can selectively dequeue messages of particular types. As in Erlang, incoming messages are tested in the order that the matching patterns appear. If no message in the queue is of any of the acceptable types, \textt{receiveWait} will block until such a message is received. % maybe mention matchIf, receiveTimeout, etc
+
%In the ping example above, we use \textt{receiveWait} and \textt{match} to accept messages only of type \textt{Pong}. The type of message to accept is specified through Haskell's type inference: the lambda function given as the first parameter to \textt{match} has type \lstinline!Pong -> ProcessM ()!, and so that invocation of \textt{match} will accept messages only of type \textt{Pong}.
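The type-directed dequeue that \textt{expect} performs can be modelled in ordinary Haskell with \textt{Data.Dynamic}. The following is only an illustrative sketch under our own assumptions (the names \textt{Queue} and \textt{expectFrom} are ours, and a pure list stands in for the mailbox that really lives in the \textt{ProcessM} monad):

```haskell
import Data.Dynamic (Dynamic, toDyn, fromDynamic)
import Data.Typeable (Typeable)

-- Toy model of a process mailbox: a pure list of Dynamic values.
type Queue = [Dynamic]

-- Take the first message whose type matches the caller's requested
-- type, returning it together with the remaining queue.
expectFrom :: Typeable a => Queue -> Maybe (a, Queue)
expectFrom = go []
  where
    go _    []     = Nothing
    go seen (d:ds) = case fromDynamic d of
      Just x  -> Just (x, reverse seen ++ ds)
      Nothing -> go (d:seen) ds

main :: IO ()
main = do
  let q = [toDyn "hello", toDyn (7 :: Int)]
  -- Asking for an Int skips the String and finds the 7.
  case expectFrom q :: Maybe (Int, Queue) of
    Just (n, _) -> print n
    Nothing     -> putStrLn "no matching message"
```

Messages of the wrong type are left in place, which mirrors the selective-receive behaviour described above.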
\subsection{Messages through channels}
@@ -249,12 +261,13 @@ \subsection{Messages through channels}
Thus, an alternative to sending messages by process identifier is to use {\em typed channels}. Each distributed channel consists of two ends, which we call the {\em send port} and {\em receive port}. Messages are inserted via the send port, and extracted in FIFO order from the receive port. Unlike process identifiers, channels are associated with a particular type and the send port will emit messages only of that type; likewise, the receive port will accept messages only of that type, so the sender has a guarantee that its receiver is of the right type.
The central functions of the channel API are:
-
+\par{\small
\begin{code}
-newChannel :: (Serializable a) => ProcessM (SendPort a, ReceivePort a)
-sendChannel :: (Serializable a) => SendPort a -> a -> ProcessM ()
-receiveChannel :: (Serializable a) => ReceivePort a -> ProcessM a
-\end{code}
+newChan :: Serializable a
+ => ProcessM (SendPort a, ReceivePort a)
+sendChan :: Serializable a => SendPort a -> a -> ProcessM ()
+receiveChan :: Serializable a => ReceivePort a -> ProcessM a
+\end{code}}
A critical point is that although \textt{SendPort} can be serialized and copied to other nodes, allowing the channel to accept data from multiple sources, the \textt{ReceivePort} cannot be moved from the node on which it was created. We decided that allowing a movable and copyable message destination would introduce too much complexity. This restriction is enforced by making \textt{SendPort} an instance of \textt{Serializable}, but not \textt{ReceivePort}.
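On a single node, the typed-channel API can be sketched with Concurrent Haskell's \textt{Chan}: both ports wrap the same underlying channel, so the type parameter fixed at creation is what gives the sender its guarantee about the receiver's type. This is a stand-in of our own devising, not the real implementation (which must marshal messages across the network); \textt{newChan'} is named to avoid clashing with \textt{Control.Concurrent.Chan.newChan}:

```haskell
import qualified Control.Concurrent.Chan as C

-- Single-node stand-ins for the two port types. The shared type
-- parameter a is fixed when the channel is created, so the two
-- ends can never disagree about the message type.
newtype SendPort a    = SendPort (C.Chan a)
newtype ReceivePort a = ReceivePort (C.Chan a)

newChan' :: IO (SendPort a, ReceivePort a)
newChan' = do
  c <- C.newChan
  return (SendPort c, ReceivePort c)

sendChan :: SendPort a -> a -> IO ()
sendChan (SendPort c) = C.writeChan c

receiveChan :: ReceivePort a -> IO a
receiveChan (ReceivePort c) = C.readChan c

main :: IO ()
main = do
  (s, r) <- newChan'
  sendChan s (42 :: Int)
  v <- receiveChan r
  print v
```

In this sketch the serializability distinction between the two ports is invisible; in the real API it is enforced by giving a \textt{Serializable} instance to \textt{SendPort} only.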
@@ -264,44 +277,59 @@ \subsection{Messages through channels}
\begin{code}
ping2 :: SendPort Ping -> ReceivePort Pong -> ProcessM ()
ping2 pingout pongin =
- do { (Pong partner) <- receiveChannel pongin
- ; sendChannel partner (Ping pongin)
+ do { (Pong partner) <- receiveChan pongin
+ ; sendChan partner (Ping pongin)
  ; ping2 pingout pongin }
\end{code}
How do we start the exchange? Clearly we need to create two channels and call \textt{ping2} and \textt{pong2} (not shown, but substantially similar to \textt{ping2}) as new processes. But how do we start a new process?
+\spj{Somewhere we need to say that send-ports are serialisable and read-ports are not. It's a key property!
+Indeed that is why we distinguish the end-points.}
+
\subsection{Starting processes}
+\spj{This whole section needs a rewrite, and probably dramatic shortening, in the light of the closures section}
%\setlength{\parindent}{-3in}
\begin{figure}[t!]
+\small
\textbf{Basic messaging}
\begin{code}
-send :: (Serializable a) => ProcessId -> a -> ProcessM ()
-expect :: (Serializable a) => ProcessM a
+send :: Serializable a => ProcessId -> a
+ -> ProcessM ()
+expect :: Serializable a => ProcessM a
\end{code}
\textbf{Channels}
\begin{code}
-newChannel :: Serializable a => ProcessM (SendPort a, ReceivePort a)
-sendChannel :: Serializable a => SendPort a -> a -> ProcessM ()
-receiveChannel :: Serializable a => ReceivePort a -> ProcessM a
-mergePortsBiased :: Serializable a => [ReceivePort a] -> ProcessM (ReceivePort a)
-mergePortsRR :: Serializable a => [ReceivePort a] -> ProcessM (ReceivePort a)
+newChan :: Serializable a
+ => ProcessM (SendPort a, ReceivePort a)
+sendChan :: Serializable a
+ => SendPort a -> a -> ProcessM ()
+receiveChan :: Serializable a => ReceivePort a
+ -> ProcessM a
+mergePortsBiased :: Serializable a => [ReceivePort a]
+ -> ProcessM (ReceivePort a)
+mergePortsRR :: Serializable a => [ReceivePort a]
+ -> ProcessM (ReceivePort a)
\end{code}
\textbf{Advanced messaging}
\begin{code}
-receiveWait :: [MatchM q ()] -> ProcessM q
-receiveTimeout :: Int -> [MatchM q ()] -> ProcessM (Maybe q)
-match :: Serializable a => (a -> ProcessM q) -> MatchM q ()
-matchIf :: Serializable a => (a -> Bool) -> (a -> ProcessM q) -> MatchM q ()
+receiveWait :: [MatchM q ()] -> ProcessM q
+receiveTimeout :: Int -> [MatchM q ()]
+ -> ProcessM (Maybe q)
+match :: Serializable a => (a -> ProcessM q)
+ -> MatchM q ()
+matchIf :: Serializable a => (a -> Bool)
+ -> (a -> ProcessM q) -> MatchM q ()
matchUnknown :: ProcessM q -> MatchM q ()
\end{code}
\textbf{Process management}
\begin{code}
-spawn :: ProcessM () -> ProcessM ProcessId
-call :: (Serializable a) => NodeId -> Closure (ProcessM a) -> ProcessM a
+spawn :: NodeId -> Closure (ProcessM ())
+ -> ProcessM ProcessId
+% call :: (Serializable a) => NodeId -> Closure (ProcessM a) -> ProcessM a
terminate :: ProcessM a
getSelfPid :: ProcessM ProcessId
getSelfNode :: ProcessM NodeId
@@ -310,18 +338,22 @@ \subsection{Starting processes}
\textbf{Process monitoring}
\begin{code}
linkProcess :: ProcessId -> ProcessM ()
-monitorProcess :: ProcessId -> ProcessId -> MonitorAction -> ProcessM ()
+monitorProcess :: ProcessId -> ProcessId
+ -> MonitorAction -> ProcessM ()
\end{code}
\textbf{Initialization}
\begin{code}
-remoteInit :: Maybe FilePath -> [RemoteCallMetaData] -> (String -> ProcessM ()) -> IO ()
-getPeers :: ProcessM PeerInfo
+type RemoteTable = [(String,Dynamic)]
+runRemote :: Maybe FilePath -> [RemoteTable]
+ -> (String -> ProcessM ()) -> IO ()
+getPeers :: ProcessM PeerInfo
findPeerByRole :: PeerInfo -> String -> [NodeId]
\end{code}
\textbf{Syntactic sugar}
\begin{code}
+mkClo :: Name -> Q Exp
remotable :: [Name] -> Q [Dec]
\end{code}
@@ -392,8 +424,9 @@ \subsection{Fault tolerance}
Here are the functions for setting up process monitoring:
\begin{code}
-monitorProcess :: ProcessId -> ProcessId -> MonitorAction -> ProcessM ()
-linkProcess :: ProcessId -> ProcessM ()
+monitorProcess :: ProcessId -> ProcessId
+ -> MonitorAction -> ProcessM ()
+linkProcess :: ProcessId -> ProcessM ()
\end{code}
\lstinline!monitorProcess a b ma! establishes unidirectional process monitoring. That is, process \textt{a} will be notified if process \textt{b} terminates. The third argument determines whether the monitoring process will be notified by exception or by message.
@@ -502,7 +535,7 @@ \section{Closures}
one node to another. For example, consider:
\begin{code}
sf :: SendPort (Int -> Int) -> Int -> ProcessM ()
- sf p x = send p (\y -> x+y)
+ sf p x = send p (\y -> x+y+1)
\end{code}
\texttt{sf} is a function that creates an anonymous function and sends it on port \texttt{p}. The function that it sends, \texttt{($\lambda$y -> x+y+1)},
is a closure that captures its free variables,
@@ -517,22 +550,23 @@ \section{Closures}
instance ???? => Serialisable (a->b) where ????
\end{code}
One ``solution'' would be to say that functions are simply not
-serialisable --- but exactly the same issue arises with \textt{spawn}
-which is clearly essential. Consider
+serialisable --- but exactly the same issue arises with \textt{spawn}:
\begin{code}
nc :: ProcessM ()
nc = do { (s,r) <- newChan
- ; spawn node (do { v <- receive r
- ; ... })
+ ; spawn node (do { ...ans...
+ ; send s ans })
; ... }
\end{code}
The \textt{ProcessM} argument of \textt{spawn} is a value closed over
its free variables, \textt{r} in this case, so exactly the same issue
-arises.
+arises. We can hardly solve the problem by outlawing \textt{spawn} -- we
+need \emph{some} way to start a remote computation and that willy-nilly
+involves serialising a closure of some kind.
\subsection{The standard solution}
-The standard approach to this problem is to build in serialisation of
+The standard approach to this problem is to bake in serialisation of
function values --- and indeed of all values --- as a primitive operation
implemented directly by the runtime system. That is, the runtime system
allows one to serialise \emph{any value at all}, and transport it to the
@@ -543,8 +577,7 @@ \subsection{The standard solution}
\apb{I don't think that this is true; I think that the standard solution is reflection.
This does not have the first two disadvantages that you list, but it does have the third.}
-
-But making serialisability built-in
+Making serialisability built-in
has multiple disadvantages:
\begin{itemize}
\item It relies on a single built-in notion of serialisability.
@@ -574,7 +607,7 @@ \subsection{Static values}
transmitted to the other end of the wire, namely ones that have no
free variables. For the present we make a simplifying assumption,
that every node is running the same code. (We return to the question
-of code that varies between nodes in Section~\ref{s:further-work}.)
+of code that varies between nodes in Section~\ref{s:code-update}.)
Under this assumption, a closure without free variables can
readily be serialised to a single label, or even (in the limit) a machine
address.
@@ -631,12 +664,13 @@ \subsection{Static values}
The type environment $\Gamma$ is a set of variable bindings, each of form $x :_{\delta} \sigma$.
The subscript $\delta$ is a static-ness flag, which takes the values \textt{S} (static) or
\textt{D} (dynamic). The idea is that top-level (static) variables have bindings
-of the form $f :_{\text{\tt S}}$, while all other variables have dynamic bindings $x :_{\text{\tt D}}$.
+of the form $f\! :_{\text{\tt S}}\! \sigma$, while all other variables have dynamic bindings
+$x\! :_{\text{\tt D}}\!\sigma$.
(It is straightforward to formalise this idea in the typing judgements for top-level
bindings and for terms; we omit the details.)
-The operation $\Gamma \downarrow$ filters $\Gamma$ to leave only the static bindings,
-thereby checking that a term $\text{\tt static}\;e$ is well typed only if all its free
-variables are static.
+The operation $\Gamma \downarrow$ filters $\Gamma$ to leave only the
+static (top-level) bindings, thereby checking that a term
+$\text{\tt static}\;e$ is well typed only if all its free variables are static.
Although simple, these rules have interesting consequences:
\begin{itemize}
@@ -648,8 +682,8 @@ \subsection{Static values}
\end{code}
The function \textt{id} is by definition static (top-level). Its binding
in $\Gamma$ will have $\delta=\text{\tt S}$, but its type is the ordinary
-polymoprhic type.
-\apb{However, (Static id) has static type, right?}
+polymorphic type. However, \textt{(Static id)} has type \textt{(Static (a -> a))}.
+
\item A non-static variable may have a \textt{Static} type. For example
\begin{code}
@@ -662,9 +696,11 @@ \subsection{Static values}
\item The free variables of a term $(\text{\tt static}\;e)$ need not have
\text{\tt Static} types. For example, this term is well-typed:
+\par{\small
\begin{code}
- static (length . filter id) :: Static ([Bool] -> Int)
+static (length . filter id) :: Static ([Bool] -> Int)
\end{code}
+}
because all its free variables (\textt{length}, \textt{(.)},
\textt{filter}, \textt{id}) are bound at top-level and hence are
static. However, all these functions have their usual types.
@@ -675,16 +711,29 @@ \subsection{From static values to closures}
In the examples \textt{sf} and
\textt{nc} (at the start of this Section) we wanted to transmit closures
that certainly did have free variables. How do static values help us?
-They help us by making closure conversion possible. A closure
+They help us by making \emph{closure conversion} possible. A closure
is just a pair of a code pointer and an environment. With the aid of
\textt{Static} values we can now represent a closure directly in Haskell:
\begin{code}
data Closure a where -- Wrong
- MkClo :: Static (env -> a)
- -> env -> Closure a
+ MkClo :: Static (e -> a) -> e -> Closure a
\end{code}
-As is conventional, we capture the environment in an existential.
+As is conventional, we capture the environment in an existential\footnote{
+Existential because \textt{MkClo}'s type is isomorphic
+to $\forall a. (\exists e. (e \to a) \times e) \to \text{\tt Closure}\;a$.
+}.
\apb{More explanation needed. Since there are no existential quantifiers here, this is hard to follow as written. You have encoded the existential as an (unwritten) universal on the LHS of an arrow; this is not obvious!}
+Different closures of the same type may thereby capture environments
+of different type. For example,
+\begin{code}
+ cs :: [Closure Int]
+ cs = [MkClo (static negate) 3,
+ MkClo (static ord) 'x']
+\end{code}
+Both closures in the list \textt{cs} have the same type \textt{Closure Int},
+but the first captures an \textt{Int} as its environment, while the second
+captures a \textt{Char}. (The function \textt{ord} has type \textt{Char->Int}.)
+
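If we drop the \textt{Static} wrapper, the existential packaging itself can be tried out in GHC today; a minimal sketch, assuming only the \textt{GADTs} extension:

```haskell
{-# LANGUAGE GADTs #-}
import Data.Char (ord)

-- The environment type e appears in MkClo's argument types but not
-- in the result type Closure a, so it is existentially quantified:
-- each closure may capture an environment of a different type.
data Closure a where
  MkClo :: (e -> a) -> e -> Closure a

unClo :: Closure a -> a
unClo (MkClo f x) = f x

cs :: [Closure Int]
cs = [ MkClo negate 3      -- environment is an Int
     , MkClo ord    'x'    -- environment is a Char
     ]

main :: IO ()
main = print (map unClo cs)
```

Pattern matching on \textt{MkClo} brings the hidden environment type back into scope just long enough to apply the captured function to it.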
The trouble is that this closure type is not serialisable: precisely
because the environment is existentially quantified, there is no information
for how to serialise it! This is apparently easy to solve, by asking
@@ -718,7 +767,22 @@ \subsection{From static values to closures}
=> Static (ByteString -> a)
-> ByteString -> Closure a
\end{code}
-To see this in action, here is our earlier \textt{sf} example,
+Now the correct deserialiser becomes part of the static code pointer
+in the closure. Simple.
+
+It is easy to un-closure-convert:
+\begin{code}
+ unClo :: Closure a -> a
+ unClo (MkClo f x) = unstatic f x
+\end{code}
+This is when the deserialisation of the environment takes place. For a
+function-valued closure it makes sense to apply \textt{unClo} once, and
+apply the resulting function many times, so that the deserialisation is
+done just once.
+
+\subsection{Closures in practice} \label{s:closures-in-practice}
+
+To see closures in action, here is our earlier \textt{sf} example,
expressed using closures:
\begin{code}
sf :: SendPort (Closure (Int -> Int))
@@ -727,28 +791,154 @@ \subsection{From static values to closures}
where
clo = MkClo (static sfun) (encode x)
- sfun :: Static (ByteString -> Int -> Int)
- sfun = static (\bs -> let x = decode bs
- in \y -> x + y)
+ sfun :: ByteString -> Int -> Int
+ sfun = \bs -> let x = decode bs
+               in \y -> x + y + 1
\end{code}
The closure contains the pre-serialised environment \textt{encode x},
-and the static function \textt{sfun}. The latter de-serialises its
+and the static function \textt{sfun}. The latter deserialises its
argument \textt{bs} to get the real argument \textt{x} that it expects.
-It is easy to un-closure-convert:
+As a second example, consider \textt{nc} from the beginning of this section.
+Using closures we would rewrite it like this:
\begin{code}
- unClo :: Closure a -> a
- unClo (MkClo f x) = unstatic f x
+ nc :: ProcessM ()
+ nc = do { (s,r) <- newChan
+ ; spawn node (MkClo (static child) (encode s))
+ ; ... }
+
+ child :: ByteString -> ProcessM ()
+ child = \bs -> let s = decode bs
+ in do { ...ans...
+                      ; sendChan s ans }
\end{code}
-This is when the deserialisation of the environment takes place. For a
-function-valued closure it makes sense to apply \textt{unClo} once, and
-apply the resulting function many times, so that the deserialisation is
-one just once.
+The type of \texttt{spawn} is given in Figure~\ref{fig:api}; it takes
+a closure as its second argument.
+
+\subsection{Summary}
+In this section we introduced a rather simple set of language primitives:
+\begin{itemize}
+\item A new type constructor \textt{Static}, with built-in serialisation.
+\item A new term form $(\text{\tt static}~e)$.
+\item A new primitive function \textt{unstatic :: Static a -> a}.
+\end{itemize}
+Building on these primitives we can manually construct closures and
+control exactly how and when they are serialised.
Performing manual closure conversion is tiresome for the programmer,
-and we describe some Template Haskell support in
-Section~\ref{sect:th}. But it has the great merit that it makes
-crystal clear exactly what is serialised, and when.
+and one might wonder about adding some syntactic sugar.
+We have not yet explored this option very much, preferring to work out the
+foundations first.
+However in the next section we describe some simple Template Haskell support.
+
+% A more intrusive drawback of our approach is that serialization cannot
+% handle existential data types or generalized abstract data types
+% (GADTs) at all. That is, no remotely invoked function can accept an
+% existential or GADT parameter or return such a type. The problem is
+% that these extended Haskell types can hide constituent data types that
+% are not reflected in the signature of the enclosing type. As a result,
+% the type-based dispatch for serializing and deserializing them has no
+% way to know which concrete serializer and deserializer to invoke. As
+% far as we know, serializing existentials and GADTs would require some
+% form of run-time introspection, and Haskell does not currently provide
+% that.
+
+
+\section{Faking it}
+
+We have not yet implemented statics in GHC, but we have implemented
+some simple workarounds that allow us (and you, gentle reader) to experiment
+with them without changing GHC. We describe these workarounds in this section.
+
+\subsection{Example}
+As a running example, here is the code for \textt{sf} using the workarounds:
+\begin{code}
+ sf :: SendPort (Closure (Int -> Int))
+ -> Int -> ProcessM ()
+ sf ch x = send ch ($(mkClo 'add1) x)
+
+ add1 :: Int -> Int -> Int
+ add1 x y = x + y + 1
+
+ $(remotable ['add1])
+\end{code}
+The programmer still has to do manual closure conversion, by defining
+a top-level function (\textt{add1} in this case) whose first argument is
+the environment. However, the code is otherwise significantly more
+straightforward than in Section~\ref{s:closures-in-practice}.
+
+The Template Haskell splice \textt{\$(mkClo 'add1)}
+is run at compile time. Its argument \textt{'add1} is Template Haskell notation
+for the (quoted) name of the \textt{add1} function.
+\begin{code}
+ mkClo :: Name -> Q Exp
+\end{code}
+The splice expands to a call to \textt{add1\_\_closure},
+so the net result is just as if we had written
+\begin{code}
+ sf ch x = send ch (add1__closure x)
+\end{code}
+What is \textt{add1\_\_closure}? It is a new top-level definition
+added by the Template Haskell splice \textt{\$(remotable ['add1])}.
+\begin{code}
+ remotable :: [Name] -> Q [Dec]
+\end{code}
+This splice expands to the following definitions
+\begin{code}
+  add1__closure :: Int -> Closure (Int -> Int)
+ add1__closure x = MkClo (MkS "M.add1") (encode x)
+
+ add1__dec :: ByteString -> Int -> Int
+ add1__dec bs = add1 (decode bs)
+
+  __remoteTable :: [(String, Dynamic)]
+  __remoteTable = [("M.add1", toDyn add1__dec)]
+\end{code}
+We will see how these definitions work next.
+
+\subsection{How it works}
+
+We fake the \textt{Static} type by a simple string, which will serve as the
+label of the function to call at the other end
+\begin{code}
+ newtype Static a = MkS String
+\end{code}
+We maintain a table in the \textt{ProcessM} monad that maps these strings
+to the appropriate implementation composed with the environment deserialiser,
+\texttt{add1\_\_dec} in our example.
+This table is initialised by the call to \textt{runRemote}, which initialises the
+\textt{ProcessM} monad, and the table may be consulted from within the monad:
+\begin{code}
+ runRemote :: Maybe FilePath
+            -> [[(String,Dynamic)]]
+            -> (String -> ProcessM ()) -> IO ()
+ lookupStatic :: Typeable a => String -> ProcessM a
+\end{code}
+The \texttt{lookupStatic} function looks up the named function in the table,
+and performs a run-time typecheck to ensure that the value returned
+has the type expected by the caller. Our fake implementation of
+statics is therefore still type-safe; it is just that the checks happen
+at runtime. If either the lookup or the typecheck fails, the entire
+process crashes, consistent with Erlang's philosophy of crash-and-recover.
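To make the run-time typecheck concrete, here is a minimal pure sketch of the lookup step using \texttt{Data.Dynamic}. The table and names are illustrative only; the real \texttt{lookupStatic} reads the table from the \texttt{ProcessM} state and crashes rather than returning \texttt{Maybe}.

```haskell
import Data.Dynamic (Dynamic, fromDynamic, toDyn)
import Data.Typeable (Typeable)

-- Pure sketch: look up a label and typecheck the stored value
-- at the type the caller expects (via fromDynamic).
lookupStatic' :: Typeable a => [(String, Dynamic)] -> String -> Maybe a
lookupStatic' table s = lookup s table >>= fromDynamic

demoTable :: [(String, Dynamic)]
demoTable = [("M.double", toDyn ((* 2) :: Int -> Int))]

main :: IO ()
main = do
  -- right label, right type: the typecheck succeeds
  print (fmap ($ 21) (lookupStatic' demoTable "M.double" :: Maybe (Int -> Int)))
  -- right label, wrong type: fromDynamic rejects it
  print (fmap ($ "x") (lookupStatic' demoTable "M.double" :: Maybe (String -> String)))
```

The second lookup fails at runtime even though the label exists, which is exactly the behaviour the paragraph above describes.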
+
+Tiresomely, the programmer has the following obligations:
+\begin{itemize}
+\item In each module, write one call \texttt{\$(remotable [...])},
+passing a list of all the functions passed to \texttt{mkClo}.
+\item In the call to \texttt{runRemote}, pass a list
+of all the \texttt{\_\_remoteTable} definitions, imported from
+each module that has a call to \texttt{remotable}.
+\end{itemize}
+
+Finally, the closure un-wrapping process becomes monadic, which is
+a little less convenient for the programmer:
+\begin{code}
+ unClosure :: Typeable a => Closure a -> ProcessM a
+ unClosure (MkClo (MkS s) env)
+ = do { f <- lookupStatic s
+ ; return (f env) }
+\end{code}
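Putting the pieces together, the following self-contained sketch mirrors the definitions above, but passes the table explicitly instead of threading it through \texttt{ProcessM}, and assumes \texttt{Data.Binary} for \texttt{encode}/\texttt{decode} (the framework's serialiser may differ).

```haskell
import Data.Binary (decode, encode)
import Data.ByteString.Lazy (ByteString)
import Data.Dynamic (Dynamic, fromDynamic, toDyn)

newtype Static a = MkS String
data Closure a = MkClo (Static (ByteString -> a)) ByteString

add1 :: Int -> Int
add1 x = x + 1

-- What $(remotable ['add1]) generates:
add1__closure :: Int -> Closure Int
add1__closure x = MkClo (MkS "M.add1") (encode x)

add1__dec :: ByteString -> Int
add1__dec bs = add1 (decode bs)

remoteTable :: [(String, Dynamic)]
remoteTable = [("M.add1", toDyn add1__dec)]

-- unClosure, with the table passed explicitly rather than via ProcessM:
unClosure :: Closure Int -> Int
unClosure (MkClo (MkS s) env) =
  case lookup s remoteTable >>= fromDynamic of
    Just f  -> f env
    Nothing -> error ("bad static label: " ++ s)

main :: IO ()
main = print (unClosure (add1__closure 41))  -- prints 42
```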
+
% ----------------------------------------------- BEGIN ANDREW
%
@@ -862,126 +1052,124 @@ \subsection{From static values to closures}
%
%
-\section{Mapping statics and closures onto GHC}
-
-Unfortunately, our consideration of the \textt{Static} type
-requires extensions to the underlying language, and Haskell does not currently provide these extensions. How, then, to proceed?
-
-Our definition in the previous section allows a serializable representation of any staticable function, where a staticable function is a top-level function or a function whose free variables are staticable. A reasonable approximation of the set of staticable functions is the set of top-level functions, and as it turns out, providing serializable representations of any top-level function is a much easier goal than providing serializable representations of any staticable function; in fact, with this minor restriction, we are able to implement this feature, and the closures that depend on it, without extending the language. Furthermore, we feel that limiting serializability to top-level functions isn't a major additional restriction, since any staticable function can be trivially made top-level. Finally, top-level functions are all automatically staticable and cannot capture non-staticable names from their environment, and so are guaranteed to be safe to serialize.
-
-The difficult part of function representation is that whatever identifier we use for a particular function must be unique and recognizable on both sides of the communication. Function pointers, while useful for representing functions on a single system, aren't a valid option in a distributed environment, since the remote system might have a different pointer width, and even if it doesn't variations in compiler and operating system render function pointers incomparable between computers. If we could select a unique identifier for each serializable function and transmit that identifier in place of the function itself, the receiving end would only need to map the identifier back to the original function, and we will have achieved our goal of transmitting a function representation. Such an identifier could take the place of the \textt{Static} type in \textt{Closure}s.
-
-Fortunately, since we've restricted the set of serializable functions to top-level functions, we already have a convenient, globally unique identifier for each such function: its fully-qualified name. Here, a fully-qualified name consists of the module in which the function is defined and the name of the function, separated by a period. We can now consider yet another version of the \textt{Closure} data structure:
-
-\begin{code}
--- correct!
-data Closure a = Closure String ByteString
-\end{code}
-
-We've replaced the theoretical \textt{Static} type wrapper with a string, which stores the fully-qualified name of the represented function. Its parameters remain encoded as a \textt{ByteString}. Note that the function named by the string must have type \textt{ByteString -> a}. Before we can make this version of \textt{Closure} work, though, we need a way of mapping functions to their names, and those names back to the original function. This is not trivial, since Haskell does not provide run-time name lookup services.
-
-We must provide a name lookup service ourselves. We want a table that maps function names to functions. This means that in order to execute a closure and get its value, \textt{unClosure} needs access to this table. For convenience, we've put this table of function names in the \textt{ProcessM} monad, and so it see it, we need to revise \textt{unClosure}'s type:
-
-\begin{code}
--- correct!
-unClosure :: Closure a -> ProcessM a
-\end{code}
-
-How would we construct such a lookup table? It would look something like this:
-
-We might construct such a table for the \textt{greet} and \textt{add} functions like this:
-
-\begin{code}
--- wrong
-let lookupTable =
- putReg "Main.add" add
- (putReg "Main.greet" greet LEnd)
-\end{code}
-
-Here we see that the functions are of the wrong type. Since the closure stores the function's environment as a \textt{ByteString}, \textt{unClosure} expects that the function named in the closure will have type \textt{ByteString -> a}, but here \textt{add} has type \textt{Int -> Int -> Int} and \textt{greet} has type \textt{String -> ProcessM ()}. We will need to write a wrapper function to decode the environment and call the original function:
-
-\begin{code}
--- correct, but inconvenient
-
-addWrapper :: ByteString -> Int
-addWrapper bs =
- let (i1, i2) = decode bs
- in add i1 i2
-greetWrapper :: ByteString -> ProcessM ()
-greetWrapper bs =
- let s = decode bs
- in greet s
-
-let lookupTable =
- putReg "Main.add" addWrapper
- (putReg "Main.greet" greetWrapper LEnd)
-\end{code}
-
-What can happen if something goes wrong? What, for example, will happen if we have different code on the two sides of communication? If the function named in the closure doesn't exist on the other side, looking it up in the function table will fail, and \textt{spawn} can report the error to the programmer. More insidiously, what if a function of the same name exists, but with a different environment? In this case, depending on how the types differs, it's possible for the environment's deserialization to not fail, but to succeed by extracting incorrect values from the \textt{ByteString}, which is worse than failing. We can eliminate this risk by including a representation of the environment type in the closure, and checking this type against the expected type on the remote end. In our implementation, we package the \textt{ByteString} of the function's environment along with a string representation of the environment's type.
-
-
-\subsection{Closures, with sugar}
-
-The method given above for remotely invoking a closure seems prohibitively cumbersome. First, the programmer has to write a parameter-decoding wrapper function for each function that can be invoked remotely. Then, he or she needs to add a corresponding entry to the function lookup table, so that the closure can be invoked. Finally, he or she needs to manually create a closure and give it to \textt{spawn} or \textt{call}. It is certainly a far cry from our idealized (and wrong) notion of just writing \textt{spawn someNode (add 2 3)}.
-
-Fortunately, the Template Haskell facility lets us generate some sugar for all this which simplifies the procedure greatly. Template Haskell provides compile-time rewriting facilities that can automagically generate appropriate wrapping functions and lookup tables. Our framework includes a compile-time \textt{remotable} function that operates on lists of function names and automatically produces wrapper functions and closure-generators that can be used with \textt{spawn} and similar functions. Let's revisit the example with \textt{add}. The programmer can request generation of the requisite stub functions using this syntax:
-
-\begin{code}
-$( remotable ['add] )
-\end{code}
-% $
-
-Here, the special brackets \textt{\$( )} demarcate code to be executed at compile time. The \textt{remotable} function is given a list of function names, each quoted with a single apostrophe to prevent its evaluation. The above \textt{remotable} call will produce the following code:
-
-\begin{code}
--- a deserializing wrapper
-add__impl :: ByteString -> Int
-add__impl bs =
- let (a1, a2) = decode bs
- in add a1 a2
-
--- a closure maker
-add__closure :: Int -> Int -> Closure Int
-add__closure a2 a2 =
- let bs = encode (a1, a2)
- in Closure "Main.add__impl" bs
-
--- the lookup table
-__remoteCallMetaData =
- putReg "Main.add__impl" add__impl LEnd
-\end{code}
-
-\textt{remotable} has generated the boilerplate code necessary for invoking \textt{add} remotely, while leaving the original unchanged. Because \textt{remotable} is run at compile time, it has access to the abstract syntax tree of the module we're compiling, and so can examine the \textt{add}'s name and type. Equivalent functionality couldn't be achieved at run-time. \textt{remotable} first gives us \textt{add__impl}, the decoding wrapper, which is identical to the hand-written \textt{addWrapper} above. \textt{add__closure} is a convenience function that creates a closure of the wrapping function, serializing its arguments along the way. And \textt{__remoteCallMetaData} is the lookup table, which will be given to the framework's startup code and referred to by \textt{unClosure}. \textt{add} itself is not in the lookup table, nor should it be: \textt{unClosure} expects that functions in the table will be of type \textt{ByteString-> t}, and the only function of that type here is \textt{add__impl}. Notice that the types of the arguments to \textt{add} are not explicitly given above. Instead, the compiler can infer them from the definition.
-
-Now, when the programmer wants to remotely invoke \textt{add}, all that's necessary is to use \textt{add__closure} in place of \textt{add}:
-
-\begin{code}
--- correct!
-res <- call aNode (add__closure 5 12)
-\end{code}
-
-If we add \textt{greet} to \textt{remotable}'s parameter list, then corresponding support code will be generated for it, as well, allowing us to call it like so:
-
-\begin{code}
--- correct!
-spawn aNode (greet__closure "Zoltan")
-\end{code}
-
-Notice that this is much more convenient than constructing closures manually, and the syntax to invoke code remotely is comfortingly similar to what we initially wanted to write, back at the beginning of this section. We can now even implement the ping-pong example that motivated our consideration of process spawning:
-
-\begin{code}
--- correct!
-do { pingProc <- spawn someNode ping__closure
- ; pongProc <- spawn otherNode pong__closure
- ; send pingProc (Pong pongProc) }
-\end{code}
-
-\subsection{Limitations}
-
-There are some limitations in our approach to remote function invocation. First, since functions are looked up by name, only top-level functions can be called remotely. Also, since the wrapper function needs to know the type of the parameters to the function in order to deserialize them, this approach won't work with polymorphic functions.
-
-A more intrusive drawback of our approach is that serialization cannot handle existential data types or generalized abstract data types (GADTs) at all. That is, no remotely invoked function can accept an existential or GADT parameter or return such a type. The problem is that these extended Haskell types can hide constituent data types that are not reflected in the signature of the enclosing type. As a result, the type-based dispatch for serializing and deserializing them has no way to know which concrete serializer and deserializer to invoke. As far as we know, serializing existentials and GADTs would require some form of run-time introspection, and Haskell does not currently provide that.
+% Unfortunately, our consideration of the \textt{Static} type
+% requires extensions to the underlying language, and Haskell does not currently provide these extensions. How, then, to proceed?
+%
+% Our definition in the previous section allows a serializable representation of any staticable function, where a staticable function is a top-level function or a function whose free variables are staticable. A reasonable approximation of the set of staticable functions is the set of top-level functions, and as it turns out, providing serializable representations of any top-level function is a much easier goal than providing serializable representations of any staticable function; in fact, with this minor restriction, we are able to implement this feature, and the closures that depend on it, without extending the language. Furthermore, we feel that limiting serializability to top-level functions isn't a major additional restriction, since any staticable function can be trivially made top-level. Finally, top-level functions are all automatically staticable and cannot capture non-staticable names from their environment, and so are guaranteed to be safe to serialize.
+%
+% The difficult part of function representation is that whatever identifier we use for a particular function must be unique and recognizable on both sides of the communication. Function pointers, while useful for representing functions on a single system, aren't a valid option in a distributed environment, since the remote system might have a different pointer width, and even if it doesn't variations in compiler and operating system render function pointers incomparable between computers. If we could select a unique identifier for each serializable function and transmit that identifier in place of the function itself, the receiving end would only need to map the identifier back to the original function, and we will have achieved our goal of transmitting a function representation. Such an identifier could take the place of the \textt{Static} type in \textt{Closure}s.
+%
+% Fortunately, since we've restricted the set of serializable functions to top-level functions, we already have a convenient, globally unique identifier for each such function: its fully-qualified name. Here, a fully-qualified name consists of the module in which the function is defined and the name of the function, separated by a period. We can now consider yet another version of the \textt{Closure} data structure:
+%
+% \begin{code}
+% -- correct!
+% data Closure a = Closure String ByteString
+% \end{code}
+%
+% We've replaced the theoretical \textt{Static} type wrapper with a string, which stores the fully-qualified name of the represented function. Its parameters remain encoded as a \textt{ByteString}. Note that the function named by the string must have type \textt{ByteString -> a}. Before we can make this version of \textt{Closure} work, though, we need a way of mapping functions to their names, and those names back to the original function. This is not trivial, since Haskell does not provide run-time name lookup services.
+%
+% We must provide a name lookup service ourselves. We want a table that maps function names to functions. This means that in order to execute a closure and get its value, \textt{unClosure} needs access to this table. For convenience, we've put this table of function names in the \textt{ProcessM} monad, and so it see it, we need to revise \textt{unClosure}'s type:
+%
+% \begin{code}
+% -- correct!
+% unClosure :: Closure a -> ProcessM a
+% \end{code}
+%
+% How would we construct such a lookup table? It would look something like this:
+%
+% We might construct such a table for the \textt{greet} and \textt{add} functions like this:
+%
+% \begin{code}
+% -- wrong
+% let lookupTable =
+% putReg "Main.add" add
+% (putReg "Main.greet" greet LEnd)
+% \end{code}
+%
+% Here we see that the functions are of the wrong type. Since the closure stores the function's environment as a \textt{ByteString}, \textt{unClosure} expects that the function named in the closure will have type \textt{ByteString -> a}, but here \textt{add} has type \textt{Int -> Int -> Int} and \textt{greet} has type \textt{String -> ProcessM ()}. We will need to write a wrapper function to decode the environment and call the original function:
+%
+% \begin{code}
+% -- correct, but inconvenient
+%
+% addWrapper :: ByteString -> Int
+% addWrapper bs =
+% let (i1, i2) = decode bs
+% in add i1 i2
+%
+% greetWrapper :: ByteString -> ProcessM ()
+% greetWrapper bs =
+% let s = decode bs
+% in greet s
+%
+% let lookupTable =
+% putReg "Main.add" addWrapper
+% (putReg "Main.greet" greetWrapper LEnd)
+% \end{code}
+%
+% What can happen if something goes wrong? What, for example, will happen if we have different code on the two sides of communication? If the function named in the closure doesn't exist on the other side, looking it up in the function table will fail, and \textt{spawn} can report the error to the programmer. More insidiously, what if a function of the same name exists, but with a different environment? In this case, depending on how the types differs, it's possible for the environment's deserialization to not fail, but to succeed by extracting incorrect values from the \textt{ByteString}, which is worse than failing. We can eliminate this risk by including a representation of the environment type in the closure, and checking this type against the expected type on the remote end. In our implementation, we package the \textt{ByteString} of the function's environment along with a string representation of the environment's type.
+%
+%
+% \subsection{Closures, with sugar}
+%
+% The method given above for remotely invoking a closure seems prohibitively cumbersome. First, the programmer has to write a parameter-decoding wrapper function for each function that can be invoked remotely. Then, he or she needs to add a corresponding entry to the function lookup table, so that the closure can be invoked. Finally, he or she needs to manually create a closure and give it to \textt{spawn} or \textt{call}. It is certainly a far cry from our idealized (and wrong) notion of just writing \textt{spawn someNode (add 2 3)}.
+%
+% Fortunately, the Template Haskell facility lets us generate some sugar for all this which simplifies the procedure greatly. Template Haskell provides compile-time rewriting facilities that can automagically generate appropriate wrapping functions and lookup tables. Our framework includes a compile-time \textt{remotable} function that operates on lists of function names and automatically produces wrapper functions and closure-generators that can be used with \textt{spawn} and similar functions. Let's revisit the example with \textt{add}. The programmer can request generation of the requisite stub functions using this syntax:
+%
+% \begin{code}
+% $( remotable ['add] )
+% \end{code}
+% % $
+%
+% Here, the special brackets \textt{\$( )} demarcate code to be executed at compile time. The \textt{remotable} function is given a list of function names, each quoted with a single apostrophe to prevent its evaluation. The above \textt{remotable} call will produce the following code:
+%
+% \begin{code}
+% -- a deserializing wrapper
+% add__impl :: ByteString -> Int
+% add__impl bs =
+% let (a1, a2) = decode bs
+% in add a1 a2
+%
+% -- a closure maker
+% add__closure :: Int -> Int -> Closure Int
+% add__closure a2 a2 =
+% let bs = encode (a1, a2)
+% in Closure "Main.add__impl" bs
+%
+% -- the lookup table
+% __remoteCallMetaData =
+% putReg "Main.add__impl" add__impl LEnd
+% \end{code}
+%
+% \textt{remotable} has generated the boilerplate code necessary for invoking \textt{add} remotely, while leaving the original unchanged. Because \textt{remotable} is run at compile time, it has access to the abstract syntax tree of the module we're compiling, and so can examine the \textt{add}'s name and type. Equivalent functionality couldn't be achieved at run-time. \textt{remotable} first gives us \textt{add__impl}, the decoding wrapper, which is identical to the hand-written \textt{addWrapper} above. \textt{add__closure} is a convenience function that creates a closure of the wrapping function, serializing its arguments along the way. And \textt{__remoteCallMetaData} is the lookup table, which will be given to the framework's startup code and referred to by \textt{unClosure}. \textt{add} itself is not in the lookup table, nor should it be: \textt{unClosure} expects that functions in the table will be of type \textt{ByteString-> t}, and the only function of that type here is \textt{add__impl}. Notice that the types of the arguments to \textt{add} are not explicitly given above. Instead, the compiler can infer them from the definition.
+%
+% Now, when the programmer wants to remotely invoke \textt{add}, all that's necessary is to use \textt{add__closure} in place of \textt{add}:
+%
+% \begin{code}
+% -- correct!
+% res <- call aNode (add__closure 5 12)
+% \end{code}
+%
+% If we add \textt{greet} to \textt{remotable}'s parameter list, then corresponding support code will be generated for it, as well, allowing us to call it like so:
+%
+% \begin{code}
+% -- correct!
+% spawn aNode (greet__closure "Zoltan")
+% \end{code}
+%
+% Notice that this is much more convenient than constructing closures manually, and the syntax to invoke code remotely is comfortingly similar to what we initially wanted to write, back at the beginning of this section. We can now even implement the ping-pong example that motivated our consideration of process spawning:
+%
+% \begin{code}
+% -- correct!
+% do { pingProc <- spawn someNode ping__closure
+% ; pongProc <- spawn otherNode pong__closure
+% ; send pingProc (Pong pongProc) }
+% \end{code}
+%
+% \subsection{Limitations}
+%
+% There are some limitations in our approach to remote function invocation. First, since functions are looked up by name, only top-level functions can be called remotely. Also, since the wrapper function needs to know the type of the parameters to the function in order to deserialize them, this approach won't work with polymorphic functions.
+%
% other features? channel combining, peer discovery, multiple nodes per machine
@@ -996,6 +1184,8 @@ \section{Implementation}
\item The compile-time Template Haskell facility was used to write \texttt{remotable}, which automatically generates the code necessary to invoke remote functions.
\end{itemize}
+\subsection{Dynamic code update} \label{s:code-update}
+
Erlang has a nice feature that allows program modules to be updated over the wire: when a new version of the code is released, it can be transmitted to every host in the network, where it replaces the old version without the application even having to restart. We decided not to go in this direction with our framework, partly because code update is a problem that can be separated from the other aspects of building a distributed computing framework, and partly because solving it is hard. It is especially hard for Haskell, whose programs are compiled to machine code and loaded by the operating system, whereas Erlang's bytecode interpreter retains far more control over the loading and execution of programs.
A disadvantage of forgoing dynamic update is that code must be distributed to remote hosts out of band; in our development environment this was usually done with \texttt{scp} and similar tools. It also makes the programmer responsible for ensuring that all hosts run the same version of the compiled executable: because we make no framework-level provision for reconciling incompatible message types, sending a message between executables whose shared message types have different structure would most likely crash the deserializing process.
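One lightweight mitigation, analogous to what our implementation already does for closure environments, is to tag each serialised message with a textual rendering of its type and compare the tags on receipt. This is a sketch only, not part of the framework; it assumes \texttt{Data.Binary} and \texttt{Data.Typeable}.

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Data.Binary (Binary, decode, encode)
import Data.ByteString.Lazy (ByteString)
import Data.Typeable (Typeable, typeOf)

-- Pair each payload with a rendering of its type; on receipt,
-- refuse to decode unless the tags agree.
encodeTagged :: (Binary a, Typeable a) => a -> ByteString
encodeTagged x = encode (show (typeOf x), encode x)

decodeTagged :: forall a. (Binary a, Typeable a) => ByteString -> Maybe a
decodeTagged bs
  | tag == show (typeOf (undefined :: a)) = Just (decode payload)
  | otherwise                             = Nothing
  where
    (tag, payload) = decode bs :: (String, ByteString)

main :: IO ()
main = do
  print (decodeTagged (encodeTagged (42 :: Int)) :: Maybe Int)   -- Just 42
  print (decodeTagged (encodeTagged (42 :: Int)) :: Maybe Bool)  -- Nothing
```

A mismatched tag yields \texttt{Nothing} instead of a garbage value or a crash in the deserializing process.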
@@ -1019,50 +1209,47 @@ \section{Example}
-- omitted: Serializable instance of CounterMessage
counterLoop :: Int -> ProcessM ()
-counterLoop value =
- let
- counterCommand (CounterQuery pid) =
- do { send pid value
- ; return value }
- counterCommand CounterIncrement =
- return (value+1)
- counterCommand CounterShutdown =
- terminate
- in receiveWait [match counterCommand]
- >>= counterLoop
+counterLoop val
+ = do { val' <- receiveWait [match counterCommand]
+ ; counterLoop val' }
+ where
+ counterCommand (CounterQuery pid)
+ = do { send pid val
+ ; return val }
+ counterCommand CounterIncrement
+ = return (val+1)
+ counterCommand CounterShutdown
+ = terminate
$( remotable ['counterLoop] )
increment :: ProcessId -> ProcessM ()
-increment counterpid = send counterpid msg
- where msg = CounterIncrement
+increment cpid = send cpid CounterIncrement
shutdown :: ProcessId -> ProcessM ()
-shutdown counterpid = send counterpid msg
- where msg = CounterShutdown
+shutdown cpid = send cpid CounterShutdown
query :: ProcessId -> ProcessM Int
query counterpid =
do { mypid <- getSelfPid
- ; let msg = CounterQuery mypid
- ; send counterpid msg
+ ; send counterpid (CounterQuery mypid)
; receiveWait [match return] }
go "MASTER" =
do { aNode <- liftM (head . flip
findPeerByRole "WORKER") getPeers
- ; counterpid <- spawn aNode (counterLoop__closure 0)
- ; increment counterpid
- ; increment counterpid
- ; newVal <- query counterpid
+ ; cpid <- spawn aNode ($(mkClo 'counterLoop) 0)
+ ; increment cpid
+ ; increment cpid
+ ; newVal <- query cpid
; say (show newVal) -- prints out 2
- ; shutdown counterpid }
+ ; shutdown cpid }
go "WORKER" =
receiveWait []
-main = remoteInit (Just "config")
- [Main.__remoteCallMetaData] go
+main = runRemote (Just "config")
+ [Main.__remoteTable] go
\end{code}
% $
