update connection/message docs for newparallel

1 parent b484660 commit 10b1790cee5fa713d4bdd2dd8403283420de249a by @minrk, committed Jan 31, 2011
BIN docs/source/development/figs/allconnections.png
7,137 docs/source/development/figs/allconnections.svg
3,765 additions, 3,372 deletions not shown because the diff is too large.
BIN docs/source/development/figs/clientfade.png
BIN docs/source/development/figs/hbfade.png
BIN docs/source/development/figs/notiffade.png
BIN docs/source/development/figs/queryfade.png
BIN docs/source/development/figs/queuefade.png
BIN docs/source/development/figs/regfade.png
156 docs/source/development/parallel_connections.txt
@@ -10,80 +10,91 @@ IPython cluster for parallel computing.
All Connections
===============
-The Parallel Computing code is currently under development in Min RK's IPython fork_ on GitHub.
+The Parallel Computing code is currently under development in IPython's newparallel_
+branch on GitHub.
-.. _fork: http://github.com/minrk/ipython/tree/newparallel
+.. _newparallel: http://github.com/ipython/ipython/tree/newparallel
-The IPython cluster consists of a Controller and one or more clients and engines. The goal
-of the Controller is to manage and monitor the connections and communications between the
-clients and the engines.
+The IPython cluster consists of a Controller and one or more each of clients and engines.
+The goal of the Controller is to manage and monitor the connections and communications
+between the clients and the engines. The Controller is no longer a single-process
+entity, but rather a collection of processes: one Hub and three (or more) Schedulers.
It is important for security/practicality reasons that all connections be inbound to the
-controller process. The arrows in the figures indicate the direction of the connection.
+controller processes. The arrows in the figures indicate the direction of the
+connection.
.. figure:: figs/allconnections.png
- :width: 432px
- :alt: IPython cluster connections
- :align: center
-
- All the connections involved in connecting one client to one engine.
+ :width: 432px
+ :alt: IPython cluster connections
+ :align: center
-The Controller consists of two ZMQ Devices - both MonitoredQueues, one for Tasks (load
-balanced, engine agnostic), one for Multiplexing (explicit targets), a Python device for
-monitoring (the Heartbeat Monitor).
+ All the connections involved in connecting one client to one engine.
+
+The Controller consists of 1-4 processes. Central to the cluster is the **Hub**, which
+monitors engine state, execution traffic, and handles registration and notification. The
+Hub includes a Heartbeat Monitor for keeping track of engines that are alive. Outside the
+Hub are 3 **Schedulers**. The MUX queue and Control queue are MonitoredQueue ØMQ
+devices which relay explicitly addressed messages. The Task queue performs
+load-balanced, destination-agnostic scheduling. It may be a MonitoredQueue device, or a
+Python Scheduler that behaves externally just like an MQ device, but with additional
+internal logic.
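
As a rough illustration (not the actual controller code), a MUX-style queue can be
built with pyzmq's ``MonitoredQueue`` device; the addresses here are placeholders::

    import zmq
    from zmq.devices import MonitoredQueue

    # XREP on both sides: messages carry explicit destination identities.
    # Every message relayed in either direction is also published on mon.
    mux = MonitoredQueue(zmq.XREP, zmq.XREP, zmq.PUB, b'in', b'out')
    mux.bind_in("tcp://127.0.0.1:10101")    # client-facing side
    mux.bind_out("tcp://127.0.0.1:10102")   # engine-facing side
    mux.bind_mon("tcp://127.0.0.1:10103")   # the Hub subscribes here
    mux.start()                             # runs in a background thread
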
Registration
------------
.. figure:: figs/regfade.png
- :width: 432px
- :alt: IPython Registration connections
- :align: center
-
- Engines and Clients only need to know where the Registrar ``XREP`` is located to start connecting.
+ :width: 432px
+ :alt: IPython Registration connections
+ :align: center
+
+ Engines and Clients only need to know where the Registrar ``XREP`` is located to start
+ connecting.
Once a controller is launched, the only information needed for connecting clients and/or
-engines to the controller is the IP/port of the ``XREP`` socket called the Registrar. This
-socket handles connections from both clients and engines, and replies with the remaining
+engines is the IP/port of the Hub's ``XREP`` socket called the Registrar. This socket
+handles connections from both clients and engines, and replies with the remaining
information necessary to establish the remaining connections.
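
In pyzmq terms, that bootstrap step is a single connection; the address is a
placeholder for whatever the controller advertises::

    import zmq

    ctx = zmq.Context()
    reg = ctx.socket(zmq.XREQ)
    reg.connect("tcp://127.0.0.1:10001")  # the Registrar XREP
    # registration/connection requests and their replies travel over this socket
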
Heartbeat
---------
.. figure:: figs/hbfade.png
- :width: 432px
- :alt: IPython Registration connections
- :align: center
-
- The heartbeat sockets.
-
-The heartbeat process has been described elsewhere. To summarize: the controller publishes
-a distinct message periodically via a ``PUB`` socket. Each engine has a ``zmq.FORWARDER``
-device with a ``SUB`` socket for input, and ``XREQ`` socket for output. The ``SUB`` socket
-is connected to the ``PUB`` socket labeled *HB(ping)*, and the ``XREQ`` is connected to
-the ``XREP`` labeled *HB(pong)*. This results in the same message being relayed back to
-the Heartbeat Monitor with the addition of the ``XREQ`` prefix. The Heartbeat Monitor
-receives all the replies via an ``XREP`` socket, and identifies which hearts are still
-beating by the ``zmq.IDENTITY`` prefix of the ``XREQ`` sockets.
-
-Queues
-------
+ :width: 432px
+ :alt: IPython Heartbeat connections
+ :align: center
+
+ The heartbeat sockets.
+
+The heartbeat process has been described elsewhere. To summarize: the Heartbeat Monitor
+publishes a distinct message periodically via a ``PUB`` socket. Each engine has a
+``zmq.FORWARDER`` device with a ``SUB`` socket for input, and ``XREQ`` socket for output.
+The ``SUB`` socket is connected to the ``PUB`` socket labeled *ping*, and the ``XREQ`` is
+connected to the ``XREP`` labeled *pong*. This results in the same message being relayed
+back to the Heartbeat Monitor with the addition of the ``XREQ`` prefix. The Heartbeat
+Monitor receives all the replies via an ``XREP`` socket, and identifies which hearts are
+still beating by the ``zmq.IDENTITY`` prefix of the ``XREQ`` sockets; the Hub uses this
+information to notify clients of any changes in the available engines.
+
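A sketch of the engine-side relay in pyzmq follows; the addresses and the identity
are placeholders, and the blocking device call would normally run in its own thread::

    import zmq

    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.setsockopt(zmq.SUBSCRIBE, b'')             # accept every ping
    sub.connect("tcp://127.0.0.1:10201")           # the PUB labeled *ping*

    xreq = ctx.socket(zmq.XREQ)
    xreq.setsockopt(zmq.IDENTITY, b'engine-uuid')  # identifies this heart
    xreq.connect("tcp://127.0.0.1:10202")          # the XREP labeled *pong*

    # relay each ping back as a pong; the XREQ prefix tags the reply
    zmq.device(zmq.FORWARDER, sub, xreq)
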
+Schedulers
+----------
.. figure:: figs/queuefade.png
:width: 432px
:alt: IPython Queue connections
:align: center
- Load balanced Task queue on the left, explicitly multiplexed queue on the right.
+ Load balanced Task scheduler on the left, explicitly multiplexed schedulers on the
+ right.
-The controller has two MonitoredQueue devices. These devices are primarily for relaying
-messages between clients and engines, but the controller needs to see those messages for
-its own purposes. Since no Python code may exist between the two sockets in a queue, all
-messages sent through these queues (both directions) are also sent via a ``PUB`` socket to
-a monitor, which allows the Controller to monitor queue traffic without interfering with
-it.
+The controller has at least three Schedulers. These devices are primarily for
+relaying messages between clients and engines, but the controller needs to see those
+messages for its own purposes. Since no Python code may exist between the two sockets in a
+queue, all messages sent through these queues (both directions) are also sent via a
+``PUB`` socket to a monitor, which allows the Hub to monitor queue traffic without
+interfering with it.
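
For illustration, the Hub's end of such a monitor might look like the following in
pyzmq (the address is a placeholder)::

    import zmq

    ctx = zmq.Context()
    mon = ctx.socket(zmq.SUB)
    mon.setsockopt(zmq.SUBSCRIBE, b'')     # all traffic, both directions
    mon.connect("tcp://127.0.0.1:10103")   # a queue's mon socket

    while True:
        frames = mon.recv_multipart()
        direction, msg = frames[0], frames[1:]  # b'in'/b'out', then the message
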
For tasks, the engine need not be specified. Messages sent to the ``XREP`` socket from the
client side are assigned to an engine via ZMQ's ``XREQ`` round-robin load balancing.
@@ -96,28 +107,53 @@ the downstream end of the device.
At the Kernel level, both of these PAIR sockets are treated in the same way as the ``REP``
socket in the serial version (except using ZMQStreams instead of explicit sockets).
-
+
+IOPub
+-----
+
+.. figure:: figs/iopubfade.png
+ :width: 432px
+ :alt: IOPub connections
+ :align: center
+
+ stdin/out/err are published via a ``PUB/SUB`` relay
+
+.. note::
+
+ This isn't actually hooked up yet.
+
+
+On the kernels, stdin/stdout/stderr are captured and published via a ``PUB`` socket. These
+``PUB`` sockets all connect to a ``SUB`` socket on the Hub, which subscribes to all
+messages. They are then republished via another ``PUB`` socket in the Hub, to which
+clients can subscribe.
+
+.. note::
+
+ Once implemented, this will likely be another MonitoredQueue.
+
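Since the relay is not hooked up yet, the following is only a sketch of the behavior
described above (pyzmq; the ports are placeholders)::

    import zmq

    ctx = zmq.Context()
    incoming = ctx.socket(zmq.SUB)
    incoming.setsockopt(zmq.SUBSCRIBE, b'')  # everything the kernels publish
    incoming.bind("tcp://127.0.0.1:10301")   # kernels connect their PUBs here

    outgoing = ctx.socket(zmq.PUB)
    outgoing.bind("tcp://127.0.0.1:10302")   # clients connect SUBs here

    while True:                              # republish verbatim
        outgoing.send_multipart(incoming.recv_multipart())
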
+
Client connections
------------------
.. figure:: figs/queryfade.png
- :width: 432px
- :alt: IPython client query connections
- :align: center
-
- Clients connect to an ``XREP`` socket to query the controller
-
-The controller listens on an ``XREP`` socket for queries from clients as to queue status,
-and control instructions. Clients can connect to this via a PAIR socket or ``XREQ``.
+ :width: 432px
+ :alt: IPython client query connections
+ :align: center
+
+ Clients connect to an ``XREP`` socket to query the hub
+
+The hub listens on an ``XREP`` socket for queries from clients as to queue status,
+and control instructions. Clients can connect to this via a ``PAIR`` socket or ``XREQ``.
.. figure:: figs/notiffade.png
- :width: 432px
- :alt: IPython Registration connections
- :align: center
-
- Engine registration events are published via a ``PUB`` socket.
+ :width: 432px
+ :alt: IPython registration event connections
+ :align: center
+
+ Engine registration events are published via a ``PUB`` socket.
-The controller publishes all registration/unregistration events via a ``PUB`` socket. This
+The Hub publishes all registration/unregistration events via a ``PUB`` socket. This
allows clients to stay up to date with what engines are available by subscribing to the
feed with a ``SUB`` socket. Other processes could selectively subscribe to just
registration or unregistration events.
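
A client-side sketch of that subscription (pyzmq; the address is a placeholder)::

    import zmq

    ctx = zmq.Context()
    notif = ctx.socket(zmq.SUB)
    notif.setsockopt(zmq.SUBSCRIBE, b'')    # all (un)registration events
    notif.connect("tcp://127.0.0.1:10401")  # the Hub's notification PUB

    while True:
        parts = notif.recv_multipart()
        # unpack each event and update the local set of known engines
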
199 docs/source/development/parallel_messages.txt
@@ -15,25 +15,24 @@ results for future use.
The Controller
--------------
-The controller is the central process of the IPython parallel computing model. It has 3
-Devices:
-
- * Heartbeater
- * Multiplexed Queue
- * Task Queue
-
-and 3 sockets:
-
- * ``XREP`` for both engine and client registration
- * ``PUB`` for notification of engine changes
- * ``XREP`` for client requests
-
-
+The controller is the central collection of processes in the IPython parallel computing
+model. It has two major components:
+
+ * The Hub
+ * A collection of Schedulers
+
+The Hub
+-------
+
+The Hub is the central process for monitoring the state of the engines, and all task
+requests and results. It has no role in execution and relays no messages, so large
+blocking requests or database actions in the Hub cannot impede job submission and
+results.
Registration (``XREP``)
***********************
-The first function of the Controller is to facilitate and monitor connections of clients
+The first function of the Hub is to facilitate and monitor connections of clients
and engines. Both client and engine registration are handled by the same socket, so only
one ip/port pair is needed to connect any number of engines and clients.
@@ -44,10 +43,15 @@ monitor the survival of the Engine process.
Message type: ``registration_request``::
content = {
- 'queue' : 'abcd-1234-...', # the queue XREQ id
- 'heartbeat' : '1234-abcd-...' # the heartbeat XREQ id
+ 'queue' : 'abcd-1234-...', # the MUX queue zmq.IDENTITY
+ 'control' : 'abcd-1234-...', # the control queue zmq.IDENTITY
+ 'heartbeat' : 'abcd-1234-...' # the heartbeat zmq.IDENTITY
}
+.. note::
+
+ these are always the same, at least for now.
+
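For illustration only, an engine could therefore build this content from a single
uuid::

    import uuid

    ident = str(uuid.uuid4())
    content = {
        'queue'     : ident,   # MUX queue zmq.IDENTITY
        'control'   : ident,   # control queue zmq.IDENTITY
        'heartbeat' : ident,   # heartbeat zmq.IDENTITY
    }
    # wrapped in header/parent/content parts (see Split Sends) and sent
    # to the Registrar XREP socket
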
The Controller replies to an Engine's registration request with the engine's integer ID,
and all the remaining connection information for connecting the heartbeat process, and
kernel queue socket(s). The message status will be an error if the Engine requests IDs that
@@ -61,7 +65,7 @@ Message type: ``registration_reply``::
'id' : 0, # int, the engine id
'queue' : 'tcp://127.0.0.1:12345', # connection for engine side of the queue
'control' : 'tcp://...', # addr for control queue
- 'heartbeat' : (a,b), # tuple containing two interfaces needed for heartbeat
+ 'heartbeat' : ('tcp://...','tcp://...'), # tuple containing two interfaces needed for heartbeat
'task' : 'tcp://...', # addr for task queue, or None if no task queue running
}
@@ -73,7 +77,7 @@ Message type: ``connection_request``::
content = {}
The reply to a Client registration request contains the connection information for the
-multiplexer and load balanced queues, as well as the address for direct controller
+multiplexer and load balanced queues, as well as the address for direct hub
queries. If any of these addresses is `None`, that functionality is not available.
Message type: ``connection_reply``::
@@ -83,24 +87,24 @@ Message type: ``connection_reply``::
# if ok:
'queue' : 'tcp://127.0.0.1:12345', # connection for client side of the MUX queue
'task' : 'tcp...', # addr for task queue, or None if no task queue running
- 'query' : 'tcp...' # addr for methods to query the controller, like queue_request, etc.
- 'control' : 'tcp...' # addr for control methods, like abort, etc.
+ 'query' : 'tcp...', # addr for methods to query the hub, like queue_request, etc.
+ 'control' : 'tcp...', # addr for control methods, like abort, etc.
}
Heartbeat
*********
-The controller uses a heartbeat system to monitor engines, and track when they become
-unresponsive. As described in :ref:`messages <messages>`, and shown in :ref:`connections
+The hub uses a heartbeat system to monitor engines and track when they become
+unresponsive, as described in :ref:`messaging <messaging>` and shown in :ref:`connections
<parallel_connections>`.
Notification (``PUB``)
**********************
-The controller published all engine registration/unregistration events on a PUB socket.
+The hub publishes all engine registration/unregistration events on a ``PUB`` socket.
This allows clients to have up-to-date engine ID sets without polling. Registration
notifications contain both the integer engine ID and the queue ID, which is necessary for
-sending messages via the Multiplexer Queue.
+sending messages via the Multiplexer Queue and Control Queues.
Message type: ``registration_notification``::
@@ -119,7 +123,7 @@ Message type : ``unregistration_notification``::
Client Queries (``XREP``)
*************************
-The controller monitors and logs all queue traffic, so that clients can retrieve past
+The hub monitors and logs all queue traffic, so that clients can retrieve past
results or monitor pending tasks. Currently, this information resides in memory on the
Controller, but will ultimately be offloaded to a database over an additional ZMQ
connection. The interface should remain the same or at least similar.
@@ -135,47 +139,64 @@ Message type: ``queue_request``::
'targets' : [0,3,1] # list of ints
}
-The content of a reply to a :func:queue_request request is a dict, keyed by the engine
+The content of a reply to a :func:`queue_request` request is a dict, keyed by the engine
IDs. Note that they will be the string representation of the integer keys, since JSON
-cannot handle number keys.
+cannot handle number keys. The three keys of each dict are::
+
+ 'completed' : messages submitted via any queue that ran on the engine
+ 'queue' : jobs submitted via MUX queue, whose results have not been received
+ 'tasks' : tasks that are known to have been submitted to the engine, but
+ have not completed. Note that with the pure zmq scheduler, this will
+ always be 0/[].
Message type: ``queue_reply``::
content = {
- '0' : {'completed' : 1, 'queue' : 7},
- '1' : {'completed' : 10, 'queue' : 1}
+ 'status' : 'ok', # or 'error'
+ # if verbose=False:
+ '0' : {'completed' : 1, 'queue' : 7, 'tasks' : 0},
+ # if verbose=True:
+ '1' : {'completed' : ['abcd-...','1234-...'], 'queue' : ['58008-'], 'tasks' : []},
}
-Clients can request individual results directly from the controller. This is primarily for
-use gathering results of executions not submitted by the particular client, as the client
+Clients can request individual results directly from the hub. This is primarily for
+gathering results of executions not submitted by the requesting client, as the client
will have all its own results already. Requests are made by msg_id, and can contain one or
-more msg_id.
+more msg_id. An additional boolean key 'statusonly' can be used to skip requesting the
+results and simply poll the status of the jobs.
Message type: ``result_request``::
content = {
- 'msg_ids' : ['uuid','...'] # list of strs
+ 'msg_ids' : ['uuid','...'], # list of strs
+ 'targets' : [1,2,3], # list of int ids or uuids
+ 'statusonly' : False, # bool
}
The :func:`result_request` reply contains the content objects of the actual execution
-reply messages
+reply messages. If `statusonly=True`, then there will be only the 'pending' and
+'completed' lists.
Message type: ``result_reply``::
content = {
'status' : 'ok', # else error
# if ok:
- msg_id : msg, # the content dict is keyed by msg_ids,
+ 'abcd-...' : msg, # the content dict is keyed by msg_ids,
# values are the result messages
+ # there will be none of these if `statusonly=True`
'pending' : ['msg_id','...'], # msg_ids still pending
'completed' : ['msg_id','...'], # list of completed msg_ids
}
+ buffers = ['bufs','...'] # the buffers that contained the results of the objects.
+ # this will be empty if no messages are complete, or if
+ # statusonly is True.
-For memory management purposes, Clients can also instruct the controller to forget the
+For memory management purposes, Clients can also instruct the hub to forget the
results of messages. This can be done by message ID or engine ID. Individual messages are
-dropped by msg_id, and all messages completed on an engine are dropped by engine ID. This will likely no longer
-be necessary once we move to a DB-based message logging backend.
+dropped by msg_id, and all messages completed on an engine are dropped by engine ID. This
+may no longer be necessary with the mongodb-based message logging backend.
If the msg_ids element is the string ``'all'`` instead of a list, then all completed
results are forgotten.
@@ -197,9 +218,45 @@ Message type: ``purge_reply``::
'status' : 'ok', # or 'error'
}
+
+Schedulers
+----------
+
+There are three basic schedulers:
+
+ * Task Scheduler
+ * MUX Scheduler
+ * Control Scheduler
+
+The MUX and Control schedulers are simple MonitoredQueue ØMQ devices, with ``XREP``
+sockets on either side. This allows the queue to relay individual messages to particular
+targets via ``zmq.IDENTITY`` routing. The Task scheduler may be a MonitoredQueue ØMQ
+device, in which case the client-facing socket is ``XREP``, and the engine-facing socket
+is ``XREQ``. The result of this is that client-submitted messages are load-balanced via
+the ``XREQ`` socket, but the engine's replies to each message go to the requesting client.
+
+Raw ``XREQ`` scheduling is quite primitive, and doesn't allow message introspection, so
+there are also Python Schedulers that can be used. These Schedulers behave in much the
+same way as a MonitoredQueue does from the outside, but have rich internal logic to
+determine destinations, as well as handle dependency graphs. Their sockets are always
+``XREP`` on both sides.
+
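To illustrate the explicit addressing (pyzmq; the address, identity, and frame
contents are placeholders): a client steers a message through an ``XREP``/``XREP``
queue by prefixing the destination engine's identity, and the MonitoredQueue's
ROUTER-ROUTER handling swaps the sender and destination prefixes so the reply can
route back the same way::

    import zmq

    ctx = zmq.Context()
    client = ctx.socket(zmq.XREQ)
    client.connect("tcp://127.0.0.1:10101")  # MUX queue, client-facing XREP

    engine = b'abcd-1234-...'                # from a registration notification
    client.send_multipart([engine, b'<header>', b'<parent>', b'<content>'])
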
+The Python task schedulers have an additional message type, which informs the Hub of
+the destination of a task as soon as that destination is known.
+
+Message type: ``task_destination``::
+
+ content = {
+ 'msg_id' : 'abcd-1234-...', # the msg's uuid
+ 'engine_id' : '1234-abcd-...', # the destination engine's zmq.IDENTITY
+ }
+
:func:`apply` and :func:`apply_bound`
*************************************
+In terms of message classes, the MUX scheduler and Task scheduler relay the exact same
+message types. Their only difference lies in how the destination is selected.
+
The `Namespace <http://gist.github.com/483294>`_ model suggests that execution be able to
use the model::
@@ -220,14 +277,17 @@ Message type: ``apply_request``::
content = {
'bound' : True, # whether to execute in the engine's namespace or unbound
- 'after' : [msg_ids,...], # list of msg_ids or output of Dependency.as_dict()
- 'follow' : [msg_ids,...], # list of msg_ids or output of Dependency.as_dict()
+ 'after' : ['msg_id',...], # list of msg_ids or output of Dependency.as_dict()
+ 'follow' : ['msg_id',...], # list of msg_ids or output of Dependency.as_dict()
}
buffers = ['...'] # at least 3 in length
# as built by build_apply_message(f,args,kwargs)
-after/follow represent task dependencies
+after/follow represent task dependencies. 'after' corresponds to a time dependency. The
+request will not arrive at an engine until the 'after' dependency tasks have completed.
+'follow' corresponds to a location dependency. The task will be submitted to the same
+engine as these msg_ids (see :class:`Dependency` docs for details).
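
An illustrative content dict (the msg_ids are placeholders)::

    content = {
        'bound'  : False,
        'after'  : ['1111-...', '2222-...'],  # run only once these finish
        'follow' : ['abcd-...'],              # run wherever these ran
    }
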
Message type: ``apply_reply``::
@@ -239,14 +299,65 @@ Message type: ``apply_reply``::
# a serialization of the return value of f(*args,**kwargs)
# only populated if status is 'ok'
+All engine execution and data movement is performed via apply messages.
+
+Control Messages
+----------------
+
+Messages that interact with the engines, but are not meant to execute code, are submitted
+via the Control queue. These messages have high priority, and are thus received and
+handled before any execution requests.
+
+Clients may want to clear the namespace on the engine. There are no arguments or
+information involved in this request, so the content is empty.
+
+Message type: ``clear_request``::
+
+    content = {}
+
+Message type: ``clear_reply``::
+
+ content = {
+ 'status' : 'ok' # 'ok' or 'error'
+ # other error info here, as in other messages
+ }
+
+Clients may want to abort tasks that have not yet run. This can be done by message id, or
+all enqueued messages can be aborted if None is specified.
+
+Message type: ``abort_request``::
+
+ content = {
+ 'msg_ids' : ['1234-...', '...'] # list of msg_ids or None
+ }
+
+Message type: ``abort_reply``::
+
+ content = {
+ 'status' : 'ok' # 'ok' or 'error'
+ # other error info here, as in other messages
+ }
+
+The last action a client may want to take is to shut down the kernel. If a kernel
+receives a shutdown request, it aborts all queued messages, replies to the request, and
+exits.
+
+Message type: ``shutdown_request``::
+
+ content = {}
+
+Message type: ``shutdown_reply``::
+
+ content = {
+ 'status' : 'ok' # 'ok' or 'error'
+ # other error info here, as in other messages
+ }
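
A sketch of submitting one of these over the Control queue (pyzmq plus json; the
header fields, address, and identity shown are placeholders, and a real client wraps
its messages via :class:`StreamSession` instead)::

    import json
    import zmq

    ctx = zmq.Context()
    control = ctx.socket(zmq.XREQ)
    control.connect("tcp://127.0.0.1:10111")  # Control queue, client-facing XREP

    engine  = b'abcd-1234-...'                # destination engine's IDENTITY
    header  = json.dumps({'msg_type': 'abort_request'}).encode()
    parent  = json.dumps({}).encode()
    content = json.dumps({'msg_ids': None}).encode()  # None aborts everything

    # engine identity first, then the split message parts (see Split Sends)
    control.send_multipart([engine, header, parent, content])
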
Implementation
--------------
There are a few differences in implementation between the `StreamSession` object used in
-the parallel computing fork and the `Session` object, the main one being that messages are
+the newparallel branch and the `Session` object, the main one being that messages are
sent in parts, rather than as a single serialized object. `StreamSession` objects also
take pack/unpack functions, which are to be used when serializing/deserializing objects.
These can be any functions that translate to/from formats that ZMQ sockets can send
@@ -256,7 +367,7 @@ Split Sends
***********
Previously, messages were bundled as a single json object and one call to
-:func:`socket.send_json`. Since the controller inspects all messages, and doesn't need to
+:func:`socket.send_json`. Since the hub inspects all messages, and doesn't need to
see the content of the messages, which can be large, messages are now serialized and sent in
pieces. All messages are sent in at least 3 parts: the header, the parent header, and the
content. This allows the controller to unpack and inspect the (always small) header,
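
A sketch of such a split send (pyzmq plus json; the exact header fields are defined
by the messaging spec, not this example)::

    import json

    def send_message(socket, header, parent, content, buffers=()):
        """Send a message in parts so routers can inspect just the header."""
        parts = [json.dumps(header).encode(),
                 json.dumps(parent).encode(),
                 json.dumps(content).encode()]
        socket.send_multipart(parts + list(buffers))
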
4 docs/source/parallelz/parallel_demos.txt
@@ -2,6 +2,10 @@
Parallel examples
=================
+.. note::
+
+ Not adapted to zmq yet
+
In this section we describe two more involved examples of using an IPython
cluster to perform a parallel computation. In these examples, we will be using
IPython's "pylab" mode, which enables interactive plotting using the
51 docs/source/parallelz/parallel_intro.txt
@@ -54,7 +54,6 @@ The IPython architecture consists of four components:
* The IPython engine.
* The IPython controller.
-* The IPython scheduler.
* The controller client.
These components live in the :mod:`IPython.zmq.parallel` package and are
@@ -80,13 +79,11 @@ to the user.
IPython controller
------------------
-The IPython controller provides an interface for working with a set of
-engines. At an general level, the controller is a collection of processes to
-which IPython engines can connect. For each connected engine, the controller
-manages two queues. All actions that can be performed on the engine go through
-this queue. While the engines themselves block when user code is run, the
-controller hides that from the user to provide a fully asynchronous interface
-to a set of engines.
+The IPython controller provides an interface for working with a set of engines. At a
+general level, the controller is a collection of processes to which IPython engines and
+clients can connect. The controller is composed of a :class:`Hub` and a collection of
+:class:`Schedulers`. These Schedulers typically run in separate processes on the same
+machine as the Hub, but they can run anywhere, from local threads to remote machines.
The controller also provides a single point of contact for users who wish to
utilize the engines connected to the controller. There are different ways of
@@ -107,11 +104,31 @@ styles of parallelism.
A single controller and set of engines can be used with multiple models
simultaneously. This opens the door for lots of interesting things.
-Controller client
------------------
-There is one primary object, the :class:`~.parallel.client.Client`, for connecting to a controller. For each model, there is a corresponding view. These views allow users to interact with a set of engines through the
-interface. Here are the two default views:
+The Hub
+*******
+
+The center of an IPython cluster is the Controller Hub. This is the process that keeps
+track of engine connections, schedulers, clients, as well as all task requests and
+results. The primary role of the Hub is to facilitate queries of the cluster state, and
+minimize the necessary information required to establish the many connections involved in
+connecting new clients and engines.
+
+
+Schedulers
+**********
+
+All actions that can be performed on the engine go through a Scheduler. While the engines
+themselves block when user code is run, the schedulers hide that from the user to provide
+a fully asynchronous interface to a set of engines.
+
+
+IPython client
+--------------
+
+There is one primary object, the :class:`~.parallel.client.Client`, for connecting to a
+controller. For each model, there is a corresponding view. These views allow users to
+interact with a set of engines through the interface. Here are the two default views:
* The :class:`DirectView` class for explicit addressing.
* The :class:`LoadBalancedView` class for destination-agnostic scheduling.
@@ -175,16 +192,16 @@ everything is working correctly, try the following commands:
Out[5]: {0: 'Hello, World', 1: 'Hello, World', 2: 'Hello, World', 3:
'Hello, World'}
-Remember, a client needs to be able to see the Controller. So if the
-controller is on a different machine, and you have ssh access to that machine,
+Remember, a client needs to be able to see the Hub. So if the Hub
+is on a different machine, and you have ssh access to that machine,
then you would connect to it with::
.. sourcecode:: ipython
- In [2]: c = client.Client(sshserver='mycontroller.example.com')
+ In [2]: c = client.Client(sshserver='myhub.example.com')
-Where 'mycontroller.example.com' is the url or IP address of the machine on
-which the Controller is running.
+Where 'myhub.example.com' is the url or IP address of the machine on
+which the Hub process is running.
You are now ready to learn more about the :ref:`MUX
<parallelmultiengine>` and :ref:`Task <paralleltask>` interfaces to the
6 docs/source/parallelz/parallel_multiengine.txt
@@ -129,8 +129,8 @@ apply
The main method for doing remote execution (in fact, all methods that
communicate with the engines are built on top of it), is :meth:`Client.apply`.
-Ideally, :meth:`apply` would have the signature :meth:`apply(f,*args,**kwargs)`,
-which would call f(*args,**kwargs) remotely. However, since :class:`Clients`
+Ideally, :meth:`apply` would have the signature ``apply(f,*args,**kwargs)``,
+which would call ``f(*args,**kwargs)`` remotely. However, since :class:`Clients`
require some more options, they cannot reasonably provide this interface.
Instead, they provide the signature::
@@ -613,7 +613,7 @@ basic effect using :meth:`scatter` and :meth:`gather`:
Out[66]: [None,None,None,None]
In [67]: px y = [i**10 for i in x]
- Executing command on Controller
+ Parallel execution on engines: [0, 1, 2, 3]
Out[67]:
In [68]: y = rc.gather('y')
