move old parallel figures into newparallel dir

commit 24641d17451d170d3759215cc6c0ee270a6fd254 1 parent 45e272d
@minrk authored
Showing with 165 additions and 34 deletions.
  1. BIN  docs/source/parallelz/asian_call.pdf
  2. BIN  docs/source/parallelz/asian_call.png
  3. BIN  docs/source/parallelz/asian_put.pdf
  4. BIN  docs/source/parallelz/asian_put.png
  5. BIN  docs/source/parallelz/hpc_job_manager.pdf
  6. BIN  docs/source/parallelz/hpc_job_manager.png
  7. BIN  docs/source/parallelz/ipcluster_create.pdf
  8. BIN  docs/source/parallelz/ipcluster_create.png
  9. BIN  docs/source/parallelz/ipcluster_start.pdf
  10. BIN  docs/source/parallelz/ipcluster_start.png
  11. BIN  docs/source/parallelz/ipython_shell.pdf
  12. BIN  docs/source/parallelz/ipython_shell.png
  13. BIN  docs/source/parallelz/mec_simple.pdf
  14. BIN  docs/source/parallelz/mec_simple.png
  15. +4 −4 docs/source/parallelz/parallel_demos.txt
  16. +147 −3 docs/source/parallelz/parallel_details.txt
  17. BIN  docs/source/parallelz/parallel_pi.pdf
  18. BIN  docs/source/parallelz/parallel_pi.png
  19. +6 −16 docs/source/parallelz/parallel_transition.txt
  20. +8 −11 docs/source/parallelz/parallel_winhpc.txt
  21. BIN  docs/source/parallelz/single_digits.pdf
  22. BIN  docs/source/parallelz/single_digits.png
  23. BIN  docs/source/parallelz/two_digit_counts.pdf
  24. BIN  docs/source/parallelz/two_digit_counts.png
8 docs/source/parallelz/parallel_demos.txt
@@ -75,7 +75,7 @@ The resulting plot of the single digit counts shows that each digit occurs
approximately 1,000 times, but that with only 10,000 digits the
statistical fluctuations are still rather large:
-.. image:: ../parallel/single_digits.*
+.. image:: single_digits.*
It is clear that to reduce the relative fluctuations in the counts, we need
to look at many more digits of pi. That brings us to the parallel calculation.
@@ -188,7 +188,7 @@ most likely and that "06" and "07" are least likely. Further analysis would
show that the relative size of the statistical fluctuations has decreased
compared to the 10,000 digit calculation.
-.. image:: ../parallel/two_digit_counts.*
+.. image:: two_digit_counts.*
Parallel options pricing
@@ -257,9 +257,9 @@ entire calculation (10 strike prices, 10 volatilities, 100,000 paths for each)
took 30 seconds in parallel, giving a speedup of 7.7x, which is comparable
to the speedup observed in our previous example.
-.. image:: ../parallel/asian_call.*
+.. image:: asian_call.*
-.. image:: ../parallel/asian_put.*
+.. image:: asian_put.*
Conclusion
==========
150 docs/source/parallelz/parallel_details.txt
@@ -363,11 +363,151 @@ Reference
Results
=======
-AsyncResults are the primary class
+AsyncResults
+------------
-get_result
+Our primary representation is the AsyncResult object, based on the object of the same name in
+the built-in :mod:`multiprocessing.pool` module. Our version provides a superset of that
+interface.
-results, metadata
+An AsyncResult encapsulates one or more results that have not yet completed. All execution
+methods (including data movement, such as push/pull) return AsyncResults when `block=False`.
+
+The mp.pool.AsyncResult interface
+---------------------------------
+
+The basic interface of the AsyncResult is exactly that of the AsyncResult in :mod:`multiprocessing.pool`, and consists of four methods:
+
+.. AsyncResult spec directly from docs.python.org
+
+.. class:: AsyncResult
+
+ The stdlib AsyncResult spec
+
+ .. method:: wait([timeout])
+
+ Wait until the result is available or until *timeout* seconds pass. This
+ method always returns ``None``.
+
+ .. method:: ready()
+
+ Return whether the call has completed.
+
+ .. method:: successful()
+
+ Return whether the call completed without raising an exception. Will
+ raise :exc:`AssertionError` if the result is not ready.
+
+ .. method:: get([timeout])
+
+ Return the result when it arrives. If *timeout* is not ``None`` and the
+ result does not arrive within *timeout* seconds then
+ :exc:`TimeoutError` is raised. If the remote call raised
+ an exception then that exception will be reraised as a :exc:`RemoteError`
+ by :meth:`get`.
+
+
+You can check on an AsyncResult at any time with its :meth:`ready` method, which returns whether
+the call has completed. You can also block until the result arrives with its :meth:`wait` method;
+if you don't want to wait forever, pass a timeout (in seconds) as an argument. :meth:`wait`
+*always returns None*, and should never raise an error.
+
+:meth:`ready` and :meth:`wait` are insensitive to the success or failure of the call. After a
+result is done, :meth:`successful` will tell you whether the call completed without raising an
+exception.
+
+If you actually want the result of the call, you can use :meth:`get`. Initially, :meth:`get`
+behaves just like :meth:`wait`, in that it will block until the result is ready, or until a
+timeout is met. However, unlike :meth:`wait`, :meth:`get` will raise a :exc:`TimeoutError` if
+the timeout is reached and the result is still not ready. If the result arrives before the
+timeout is reached, then :meth:`get` will return the result itself if no exception was raised,
+and will raise an exception if there was.
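+
+Since our AsyncResult is a superset of the one in :mod:`multiprocessing.pool`, the four
+methods above can be tried with the stdlib alone, no cluster required. A minimal sketch
+(``pow`` stands in for any remote call):

```python
# Minimal sketch of the four-method AsyncResult interface that
# IPython's AsyncResult extends, using only the stdlib pool.
from multiprocessing import Pool

if __name__ == '__main__':
    pool = Pool(2)
    ar = pool.apply_async(pow, (7, 2))   # compute 7**2 in a worker process

    ar.wait(timeout=10)       # blocks until done (or timeout); always returns None
    assert ar.ready()         # the call has completed
    assert ar.successful()    # ...and raised no exception remotely
    print(ar.get(timeout=1))  # prints 49

    pool.close()
    pool.join()
```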
+
+Here is where we start to expand on the multiprocessing interface. Rather than raising the
+original exception, a :exc:`RemoteError` is raised, encapsulating the remote exception along with
+some metadata. If the AsyncResult represents multiple calls (e.g. any time `targets` is plural),
+then a :exc:`CompositeError`, a subclass of :exc:`RemoteError`, is raised instead.
+
+.. seealso::
+
+ For more information on remote exceptions, see :ref:`the section in the Direct Interface
+ <Parallel_exceptions>`.
+
+Extended interface
+******************
+
+
+Other extensions of the AsyncResult interface include convenience wrappers for :meth:`get`.
+AsyncResults have a property, :attr:`result`, with the short alias :attr:`r`, both of which
+simply call :meth:`get`. Since our object is designed for representing *parallel* results, many
+calls (any submitted via a DirectView) map results to engine IDs. For these we provide
+:meth:`get_dict`, another wrapper on :meth:`get`, which returns a dictionary of the individual
+results, keyed by engine ID.
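+
+A hypothetical session (assumes a running cluster where every engine holds ``a = 5``, and
+``dview`` is a DirectView on engines 0-3):
+
+.. sourcecode:: ipython
+
+    In [1]: ar = dview.pull('a', block=False)
+
+    In [2]: ar.get()
+    Out[2]: [5, 5, 5, 5]
+
+    In [3]: ar.get_dict()
+    Out[3]: {0: 5, 1: 5, 2: 5, 3: 5}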
+
+You can also prevent a submitted job from executing, via the AsyncResult's :meth:`abort` method,
+which instructs the engines not to run the job when it arrives.
+
+The larger extension of the AsyncResult API is the :attr:`metadata` attribute. The metadata
+is a dictionary (with attribute access) that contains, logically enough, metadata about the
+execution.
+
+Metadata keys:
+
+timestamps
+
+submitted
+ When the task left the Client
+started
+ When the task started execution on the engine
+completed
+ When execution finished on the engine
+received
+ When the result arrived on the Client
+
+ Note that the time the result arrived in 0MQ on the client is not recorded, only when it
+ arrived in Python via :meth:`Client.spin`, so in interactive use this timestamp may lag
+ the actual arrival.
+
+Information about the engine
+
+engine_id
+ The integer id
+engine_uuid
+ The UUID of the engine
+
+output of the call
+
+pyerr
+ Python exception, if there was one
+pyout
+ Python output
+stderr
+ stderr stream
+stdout
+ stdout (e.g. print) stream
+
+And some extended information
+
+status
+ either 'ok' or 'error'
+msg_id
+ The UUID of the message
+after
+ For tasks: the time-based msg_id dependencies
+follow
+ For tasks: the location-based msg_id dependencies
+
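+As a sketch (hypothetical values; assumes ``ar`` is a completed single-engine AsyncResult),
+the metadata dictionary supports both key and attribute access:
+
+.. sourcecode:: ipython
+
+    In [1]: ar.metadata['engine_id']
+    Out[1]: 0
+
+    In [2]: ar.metadata.status       # same dict, attribute access
+    Out[2]: 'ok'
+
+    In [3]: ar.metadata.completed - ar.metadata.started   # time spent executing
+    Out[3]: datetime.timedelta(0, 0, 1035)
+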
+While in most cases the Client that submitted a request will be the one using its results,
+other Clients can also request results directly from the Hub, via the Client's
+:meth:`get_result` method. This method *always* returns an AsyncResult object. If the call
+was not submitted by this client, the returned object will be a subclass called
+:class:`AsyncHubResult`. These behave the same way as AsyncResults, but if the result is not
+ready, waiting on an AsyncHubResult polls the Hub, which is much more expensive than the
+passive polling used by regular AsyncResults.
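+
+For example (hypothetical; ``rc2`` is a second Client connected to the same cluster, and
+``msg_id`` identifies a task submitted by some other client):
+
+.. sourcecode:: ipython
+
+    In [1]: ahr = rc2.get_result(msg_id)   # not submitted by rc2
+
+    In [2]: ahr.get()                      # polls the Hub until the result is ready
+    Out[2]: [5, 5]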
+
+
+The Client keeps track of all results it receives: the :attr:`history` list of msg_ids, and
+the :attr:`results` and :attr:`metadata` dicts, keyed by msg_id.
Querying the Hub
================
@@ -382,8 +522,12 @@ queue_status
result_status
+ check on results
+
purge_results
+ forget results (conserve resources)
+
Controlling the Engines
=======================
22 docs/source/parallelz/parallel_transition.txt
@@ -80,8 +80,8 @@ arrays and buffers, there is also a `track` flag, which instructs PyZMQ to produ
The result of a non-blocking call to `apply` is now an AsyncResult_ object, described below.
-MultiEngine
-===========
+MultiEngine to DirectView
+=========================
The multiplexing interface previously provided by the MultiEngineClient is now provided by the
DirectView. Once you have a Client connected, you can create a DirectView with index-access
@@ -131,8 +131,8 @@ the natural return value is the actual Python objects. It is no longer recommended
to use stdout as your results, due to stream decoupling and the asynchronous nature of how the
stdout streams are handled in the new system.
-Task
-====
+Task to LoadBalancedView
+========================
Load-Balancing has changed more than Multiplexing. This is because there is no longer a notion
of a StringTask or a MapTask, there are simply Python functions to call. Tasks are now
@@ -203,18 +203,8 @@ the engine beyond the duration of the task.
LoadBalancedView.
-.. _AsyncResult:
-PendingResults
-==============
-
-Since we no longer use Twisted, we also lose the use of Deferred objects. The results of
-non-blocking calls were represented as PendingDeferred or PendingResult objects. The object used
-for this in the new code is an AsyncResult object. The AsyncResult object is based on the object
-of the same name in the built-in :py-mod:`multiprocessing.pool` module. Our version provides a
-superset of that interface.
-
-Some things that behave the same:
+There are still some things that behave the same as IPython.kernel:
.. sourcecode:: ipython
@@ -224,7 +214,7 @@ Some things that behave the same:
Out[6]: [5, 5]
# new
- In [5]: ar = rc[0,1].pull('a', block=False)
+ In [5]: ar = dview.pull('a', targets=[0,1], block=False)
In [6]: ar.r
Out[6]: [5, 5]
19 docs/source/parallelz/parallel_winhpc.txt
@@ -80,19 +80,16 @@ These packages provide a powerful and cost-effective approach to numerical and
scientific computing on Windows. The following dependencies are needed to run
IPython on Windows:
-* Python 2.5 or 2.6 (http://www.python.org)
+* Python 2.6 or 2.7 (http://www.python.org)
* pywin32 (http://sourceforge.net/projects/pywin32/)
* PyReadline (https://launchpad.net/pyreadline)
-* zope.interface and Twisted (http://twistedmatrix.com)
-* Foolcap (http://foolscap.lothar.com/trac)
-* pyOpenSSL (https://launchpad.net/pyopenssl)
+* pyzmq (http://github.com/zeromq/pyzmq/downloads)
* IPython (http://ipython.scipy.org)
In addition, the following dependencies are needed to run the demos described
in this document.
* NumPy and SciPy (http://www.scipy.org)
-* wxPython (http://www.wxpython.org)
* Matplotlib (http://matplotlib.sourceforge.net/)
The easiest way of obtaining these dependencies is through the Enthought
@@ -109,7 +106,7 @@ need to follow:
1. Install all of the packages listed above, either individually or using EPD
on the head node, compute nodes and user workstations.
-2. Make sure that :file:`C:\\Python25` and :file:`C:\\Python25\\Scripts` are
+2. Make sure that :file:`C:\\Python27` and :file:`C:\\Python27\\Scripts` are
in the system :envvar:`%PATH%` variable on each node.
3. Install the latest development version of IPython. This can be done by
@@ -123,7 +120,7 @@ opening a Windows Command Prompt and typing ``ipython``. This will
start IPython's interactive shell and you should see something like the
following screenshot:
-.. image:: ../parallel/ipython_shell.*
+.. image:: ipython_shell.*
Starting an IPython cluster
===========================
@@ -171,7 +168,7 @@ You should see a number of messages printed to the screen, ending with
"IPython cluster: started". The result should look something like the following
screenshot:
-.. image:: ../parallel/ipcluster_start.*
+.. image:: ipcluster_start.*
At this point, the controller and two engines are running on your local host.
This configuration is useful for testing and for situations where you want to
@@ -213,7 +210,7 @@ The output of this command is shown in the screenshot below. Notice how
:command:`ipclusterz` prints out the location of the newly created cluster
directory.
-.. image:: ../parallel/ipcluster_create.*
+.. image:: ipcluster_create.*
Configuring a cluster profile
-----------------------------
@@ -282,7 +279,7 @@ must be run again to regenerate the XML job description files. The
following screenshot shows what the HPC Job Manager interface looks like
with a running IPython cluster.
-.. image:: ../parallel/hpc_job_manager.*
+.. image:: hpc_job_manager.*
Performing a simple interactive parallel computation
====================================================
@@ -333,5 +330,5 @@ The :meth:`map` method has the same signature as Python's builtin :func:`map`
function, but runs the calculation in parallel. More involved examples of using
:class:`MultiEngineClient` are provided in the examples that follow.
-.. image:: ../parallel/mec_simple.*
+.. image:: mec_simple.*
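
A minimal sketch of that correspondence (the parallel half assumes a running cluster behind
the hypothetical ``mec`` handle):

.. sourcecode:: ipython

    In [1]: map(lambda x: x**10, range(4))        # builtin, serial
    Out[1]: [0, 1, 1024, 59049]

    In [2]: mec.map(lambda x: x**10, range(4))    # same signature, run in parallel
    Out[2]: [0, 1, 1024, 59049]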