newparallel branch (add zmq.parallel submodule) #254

Merged
merged 137 commits
Commits on Apr 8, 2011
  1. @minrk

    control channel progress

    minrk authored
  2. @minrk

    prep newparallel for rebase

    minrk authored
    This mainly involves checking out files @ 568f2f4, to allow for cleaner application of changes after that point, where there are
    no longer name conflicts.
  3. @minrk

    use new stream.flush()

    minrk authored
  4. @minrk

    heartmonitor uses flush

    minrk authored
  5. @minrk
  6. @minrk

    whitespace

    minrk authored
  7. @minrk
  8. @minrk

    added dependency decorator

    minrk authored
  9. @minrk

    dependency cleanup

    minrk authored
  10. @minrk
  11. @minrk
  12. @minrk
  13. @minrk

    scheduler progress

    minrk authored
  14. @minrk

    added simple cluster entry point

    minrk authored
  15. @minrk

    use print_function

    minrk authored
  16. @minrk
  17. @minrk
  18. @minrk

    general parallel code cleanup

    minrk authored
  19. @minrk
  20. @minrk

    removed unicode from error dict

    minrk authored
  21. @minrk
  22. @minrk
  23. @minrk

    added zmq.parallel to setupbase

    minrk authored
  24. @minrk
  25. @minrk
  26. @minrk
  27. @minrk
  28. @minrk

    reorganized a few files

    minrk authored
  29. @minrk

    ignore docs/build,_build

    minrk authored
  30. @minrk

    util files into utils dir

    minrk authored
  31. @minrk
  32. @minrk
  33. @minrk
  34. @minrk
  35. @minrk

    Moved parallel test files to parallel subpackages

    minrk authored
    and tweaks related to tests
  36. @minrk

    mostly docstrings

    minrk authored
  37. @minrk
  38. @fperez @minrk
  39. @fperez @minrk

    Fix small bug when no key given to client

    fperez authored minrk committed
  40. @minrk
  41. @minrk
  42. @minrk
  43. @minrk
  44. @minrk

    str(etype)

    minrk authored
  45. @minrk
  46. @minrk

    some docstring cleanup

    minrk authored
  47. @minrk
  48. @minrk

    clone parallel docs to parallelz

    minrk authored
  49. @minrk
  50. @minrk
  51. @minrk
  52. @minrk
  53. @minrk
  54. @minrk
  55. @minrk
  56. @minrk

    docs include 'apply'

    minrk authored
  57. @minrk
  58. @minrk
  59. @minrk

    parallelz updates

    minrk authored
  60. @minrk
  61. @minrk
  62. @minrk
  63. @minrk

    add rich AsyncResult behavior

    minrk authored
  64. @minrk

    propagate iopub to clients

    minrk authored
  65. @minrk
  66. @fperez @minrk

    Add little soma workflow example

    fperez authored minrk committed
  67. @minrk

    Refactor newparallel to use Config system

    minrk authored
    This is working, but incomplete.
  68. @minrk
  69. @minrk
  70. @minrk

    Improvements to dependency handling

    minrk authored
    Specifically:
      * add 'success_only' switch to Dependencies
      * Scheduler handles some cases where Dependencies are impossible to meet.
  71. @minrk
  72. @minrk

    rework logging connections

    minrk authored
  73. @minrk
  74. @minrk

    tasks on engines when they die fail instead of hang

    minrk authored
    This is only true in the Python scheduler, and
    not for any ZMQ scheduler (MUX,control,pure)
  75. @minrk
  76. @minrk
  77. @minrk
  78. @minrk
  79. @minrk
  80. @minrk
  81. @minrk
  82. @minrk
  83. @minrk
  84. @minrk

    newparallel tweaks, fixes

    minrk authored
    * warning on unregistered engine
    * Python LRU scheduler is now default
    * Client registration includes task scheme
    * Fix typos associated with some renaming
    * fix demo links
    * warning typo
  85. @minrk
  86. @minrk
  87. @minrk
  88. @minrk
  89. @minrk
  90. @minrk

    small bugs

    minrk authored
  91. @minrk
  92. @minrk

    add default ip<x>z_config files

    minrk authored
  93. @minrk
  94. @minrk

    initial loglevel back to INFO

    minrk authored
  95. @minrk
  96. @minrk
  97. @minrk
  98. @minrk
  99. @minrk
  100. @minrk

    fix/test pushed function globals

    minrk authored
  101. @minrk
  102. @minrk

    add zmq checking in iptest

    minrk authored
  103. @minrk

    testing fixes

    minrk authored
  104. @minrk

    eliminate relative imports

    minrk authored
  105. @minrk

    add Reference object

    minrk authored
  106. @minrk

    cleanup pass

    minrk authored
  107. @minrk

    launcher updates for PBS

    minrk authored
  108. @minrk

    Add SQLite backend, DB backends are Configurable

    minrk authored
    also fix small numpy typo in newserialized
  109. @minrk
  110. @minrk

    pickle length-0 arrays.

    minrk authored
  111. @minrk

    update mpi doc

    minrk authored
  112. @minrk

    fix small client bugs + tests

    minrk authored
  113. @minrk
  114. @minrk

    add sqlitedb backend

    minrk authored
  115. @minrk
  116. @minrk
  117. @minrk
  118. @minrk

    Add wave2D example

    minrk authored
  119. @minrk
  120. @minrk
  121. @minrk
  122. @minrk

    copyright statements

    minrk authored
  123. @minrk
  124. @minrk
  125. @minrk

    Doc tweaks and updates

    minrk authored
  126. @minrk

    update API after sagedays29

    minrk authored
    tests, docs updated to match
    
    * Client no longer has high-level methods (only in Views)
    * module functions can be pushed
    * clients can have a connection timeout
    * dependencies have separate switches for success/failure, not just success_only
    * add `with view.temp_flags(**flags):` for temporary flags
    
    Also updated some docs and examples (see the sketch after the commit list)
  127. @minrk

    add DirectView.importer contextmanager, demote targets to mutable flag

    minrk authored
    * @require now also takes modules, and will import them (see the sketch after the commit list)
    * IPython.zmq.parallel is the new entrypoint, not client
  128. @minrk
  129. @minrk

    add shutdown to Views

    minrk authored
  130. @minrk

    SGE test related fixes

    minrk authored
    * allow iopub messages to arrive first on Hub
    * SQLite no longer commits immediately
    * parallelwave example
    * typos in launcher.py
  131. @minrk
  132. @minrk

    updates to docs and examples

    minrk authored
  133. @minrk
  134. @minrk

    rebase IPython.parallel after removal of IPython.kernel

    minrk authored
    This commit removes all '*z' suffixes from scripts and docs,
    as there is no longer conflict with IPython.kernel.
  135. @minrk
  136. @minrk
  137. @minrk
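
To make the API changes described in commits 126 and 127 concrete, here is a hedged usage sketch (it assumes a running cluster started with ipclusterz; the functions, flag values, and the numpy module are illustrative, not taken from the commits themselves):

    import numpy

    from IPython.parallel import Client, require

    rc = Client(timeout=10)             # clients can now take a connection timeout
    dview = rc[:]                       # high-level methods live on Views, not Client

    with dview.temp_flags(block=True):  # temporary flags, restored when the block exits
        squares = dview.apply(lambda x: x ** 2, 7)

    @require(numpy)                     # @require now accepts modules and imports
    def norm(a):                        # them on the engine before the call runs
        return numpy.linalg.norm(a)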
Showing with 23,864 additions and 68,256 deletions.
  1. +2 −1  .gitignore
  2. +90 −33 IPython/config/default/ipcluster_config.py
  3. +122 −78 IPython/config/default/ipcontroller_config.py
  4. +5 −10 IPython/config/default/ipengine_config.py
  5. 0  IPython/external/ssh/__init__.py
  6. +90 −0 IPython/external/ssh/forward.py
  7. +295 −0 IPython/external/ssh/tunnel.py
  8. +26 −0 IPython/parallel/__init__.py
  9. 0  IPython/parallel/apps/__init__.py
  10. +537 −0 IPython/parallel/apps/clusterdir.py
  11. +592 −0 IPython/parallel/apps/ipclusterapp.py
  12. +432 −0 IPython/parallel/apps/ipcontrollerapp.py
  13. +295 −0 IPython/parallel/apps/ipengineapp.py
  14. +132 −0 IPython/parallel/apps/iploggerapp.py
  15. +971 −0 IPython/parallel/apps/launcher.py
  16. +98 −0 IPython/parallel/apps/logwatcher.py
  17. +316 −0 IPython/parallel/apps/winhpcjob.py
  18. 0  IPython/parallel/client/__init__.py
  19. +340 −0 IPython/parallel/client/asyncresult.py
  20. +1,279 −0 IPython/parallel/client/client.py
  21. +158 −0 IPython/parallel/client/map.py
  22. +200 −0 IPython/parallel/client/remotefunction.py
  23. +1,033 −0 IPython/parallel/client/view.py
  24. 0  IPython/parallel/controller/__init__.py
  25. +117 −0 IPython/parallel/controller/controller.py
  26. +196 −0 IPython/parallel/controller/dependency.py
  27. +155 −0 IPython/parallel/controller/dictdb.py
  28. +163 −0 IPython/parallel/controller/heartmonitor.py
  29. +1,089 −0 IPython/parallel/controller/hub.py
  30. +80 −0 IPython/parallel/controller/mongodb.py
  31. +592 −0 IPython/parallel/controller/scheduler.py
  32. +284 −0 IPython/parallel/controller/sqlitedb.py
  33. 0  IPython/parallel/engine/__init__.py
  34. +156 −0 IPython/parallel/engine/engine.py
  35. +225 −0 IPython/parallel/engine/kernelstarter.py
  36. +423 −0 IPython/parallel/engine/streamkernel.py
  37. +313 −0 IPython/parallel/error.py
  38. +152 −0 IPython/parallel/factory.py
  39. 0  IPython/parallel/scheduler/__init__.py
  40. +16 −0 IPython/parallel/scripts/__init__.py
  41. +18 −0 IPython/parallel/scripts/ipcluster
  42. +18 −0 IPython/parallel/scripts/ipcontroller
  43. +20 −0 IPython/parallel/scripts/ipengine
  44. +20 −0 IPython/parallel/scripts/iplogger
  45. +418 −0 IPython/parallel/streamsession.py
  46. +69 −0 IPython/parallel/tests/__init__.py
  47. +115 −0 IPython/parallel/tests/clienttest.py
  48. +69 −0 IPython/parallel/tests/test_asyncresult.py
  49. +147 −0 IPython/parallel/tests/test_client.py
  50. +101 −0 IPython/parallel/tests/test_dependency.py
  51. +108 −0 IPython/parallel/tests/test_newserialized.py
  52. +111 −0 IPython/parallel/tests/test_streamsession.py
  53. +301 −0 IPython/parallel/tests/test_view.py
  54. +462 −0 IPython/parallel/util.py
  55. +18 −3 IPython/testing/iptest.py
  56. +39 −0 IPython/utils/codeutil.py
  57. +169 −0 IPython/utils/newserialized.py
  58. +153 −0 IPython/utils/pickleutil.py
  59. +22 −2 IPython/utils/traitlets.py
  60. +3 −1 IPython/zmq/displayhook.py
  61. +5 −4 IPython/zmq/iostream.py
  62. +23 −0 IPython/zmq/log.py
  63. BIN  docs/examples/kernel/HISTORY.gz
  64. +0 −32,118 docs/examples/kernel/davinci.txt
  65. +0 −8,050 docs/examples/kernel/davinci0.txt
  66. +0 −8,050 docs/examples/kernel/davinci1.txt
  67. +0 −8,050 docs/examples/kernel/davinci2.txt
  68. +0 −7,968 docs/examples/kernel/davinci3.txt
  69. +0 −14 docs/examples/kernel/helloworld.py
  70. +0 −18 docs/examples/kernel/multienginemap.py
  71. +0 −33 docs/examples/kernel/phistogram.py
  72. +0 −46 docs/examples/kernel/pwordfreq.py
  73. +0 −35 docs/examples/kernel/pwordfreq_skel.py
  74. +0 −18 docs/examples/kernel/wordfreq_skel.py
  75. +120 −0 docs/examples/newparallel/dagdeps.py
  76. +73 −0 docs/examples/newparallel/davinci/pwordfreq.py
  77. +4 −2 docs/examples/{kernel → newparallel/davinci}/wordfreq.py
  78. +128 −0 docs/examples/newparallel/demo/dependencies.py
  79. +37 −0 docs/examples/newparallel/demo/map.py
  80. +44 −0 docs/examples/newparallel/demo/noncopying.py
  81. +86 −0 docs/examples/newparallel/demo/throughput.py
  82. +15 −0 docs/examples/newparallel/demo/views.py
  83. +38 −31 docs/examples/{kernel → newparallel}/fetchparse.py
  84. +19 −0 docs/examples/newparallel/helloworld.py
  85. +77 −0 docs/examples/newparallel/interengine/communicator.py
  86. +43 −0 docs/examples/newparallel/interengine/interengine.py
  87. +14 −18 docs/examples/{kernel → newparallel}/mcdriver.py
  88. +3 −3 docs/examples/{kernel → newparallel}/mcpricer.py
  89. +17 −0 docs/examples/newparallel/multienginemap.py
  90. +19 −19 docs/examples/{kernel → newparallel}/nwmerge.py
  91. +23 −13 docs/examples/{kernel → newparallel}/parallelpi.py
  92. +40 −0 docs/examples/newparallel/phistogram.py
  93. +14 −1 docs/examples/{kernel → newparallel}/pidigits.py
  94. +3 −3 docs/examples/{kernel → newparallel/plotting}/plotting_backend.py
  95. +23 −15 docs/examples/{kernel → newparallel/plotting}/plotting_frontend.py
  96. +12 −13 docs/examples/{kernel → newparallel/rmt}/rmt.ipy
  97. 0  docs/examples/{kernel → newparallel/rmt}/rmtkernel.py
  98. +13 −20 docs/examples/{kernel → newparallel}/task_profiler.py
  99. +491 −0 docs/examples/newparallel/wave2D/RectPartitioner.py
  100. +59 −0 docs/examples/newparallel/wave2D/communicator.py
  101. +205 −0 docs/examples/newparallel/wave2D/parallelwave-mpi.py
  102. +209 −0 docs/examples/newparallel/wave2D/parallelwave.py
  103. +267 −0 docs/examples/newparallel/wave2D/wavesolver.py
  104. +3 −0  docs/examples/newparallel/workflow/client.py
  105. +25 −0 docs/examples/newparallel/workflow/job_wrapper.py
  106. +44 −0 docs/examples/newparallel/workflow/wmanager.py
  107. BIN  docs/source/development/figs/allconnections.png
  108. +3,707 −3,379 docs/source/development/figs/allconnections.svg
  109. BIN  docs/source/development/figs/clientfade.png
  110. BIN  docs/source/development/figs/hbfade.png
  111. BIN  docs/source/development/figs/notiffade.png
  112. BIN  docs/source/development/figs/queryfade.png
  113. BIN  docs/source/development/figs/queuefade.png
  114. BIN  docs/source/development/figs/regfade.png
  115. +1 −0  docs/source/development/index.txt
  116. +115 −50 docs/source/development/parallel_connections.txt
  117. +224 −55 docs/source/development/parallel_messages.txt
  118. +45 −74 docs/source/install/install.txt
  119. BIN  docs/source/parallel/asian_call.pdf
  120. BIN  docs/source/parallel/asian_call.png
  121. BIN  docs/source/parallel/asian_put.pdf
  122. BIN  docs/source/parallel/asian_put.png
  123. +173 −0 docs/source/parallel/dag_dependencies.txt
  124. BIN  docs/source/parallel/dagdeps.pdf
  125. BIN  docs/source/parallel/dagdeps.png
  126. BIN  docs/source/parallel/hpc_job_manager.pdf
  127. BIN  docs/source/parallel/hpc_job_manager.png
  128. +15 −5 docs/source/parallel/index.txt
  129. BIN  docs/source/parallel/ipcluster_create.pdf
  130. BIN  docs/source/parallel/ipcluster_create.png
  131. BIN  docs/source/parallel/ipcluster_start.pdf
  132. BIN  docs/source/parallel/ipcluster_start.png
  133. BIN  docs/source/parallel/ipython_shell.pdf
  134. BIN  docs/source/parallel/ipython_shell.png
  135. BIN  docs/source/parallel/mec_simple.pdf
  136. BIN  docs/source/parallel/mec_simple.png
  137. +284 −0 docs/source/parallel/parallel_demos.txt
  138. +621 −0 docs/source/parallel/parallel_details.txt
  139. +253 −0 docs/source/parallel/parallel_intro.txt
  140. +156 −0 docs/source/parallel/parallel_mpi.txt
  141. +843 −0 docs/source/parallel/parallel_multiengine.txt
  142. BIN  docs/source/parallel/parallel_pi.pdf
  143. BIN  docs/source/parallel/parallel_pi.png
  144. +506 −0 docs/source/parallel/parallel_process.txt
  145. +324 −0 docs/source/parallel/parallel_security.txt
  146. +418 −0 docs/source/parallel/parallel_task.txt
  147. +245 −0 docs/source/parallel/parallel_transition.txt
  148. +334 −0 docs/source/parallel/parallel_winhpc.txt
  149. BIN  docs/source/parallel/simpledag.pdf
  150. BIN  docs/source/parallel/simpledag.png
  151. BIN  docs/source/parallel/single_digits.pdf
  152. BIN  docs/source/parallel/single_digits.png
  153. BIN  docs/source/parallel/two_digit_counts.pdf
  154. BIN  docs/source/parallel/two_digit_counts.png
  155. +14 −0 docs/source/parallel/winhpc_index.txt
  156. +3 −3 docs/source/whatsnew/development.txt
  157. +6 −4 setup.py
  158. +12 −5 setupbase.py
  159. +14 −11 setupext/setupext.py
3  .gitignore
@@ -1,7 +1,8 @@
build
./dist
docs/dist
-docs/build/*
+docs/build
+docs/_build
docs/source/api/generated
docs/gh-pages
*.py[co]
123 IPython/config/default/ipcluster_config.py
@@ -11,8 +11,8 @@
# - Start as a regular process on localhost.
# - Start using mpiexec.
# - Start using the Windows HPC Server 2008 scheduler
-# - Start using PBS
-# - Start using SSH (currently broken)
+# - Start using PBS/SGE
+# - Start using SSH
# The selected launchers can be configured below.
@@ -21,15 +21,18 @@
# - LocalControllerLauncher
# - MPIExecControllerLauncher
# - PBSControllerLauncher
+# - SGEControllerLauncher
# - WindowsHPCControllerLauncher
-# c.Global.controller_launcher = 'IPython.kernel.launcher.LocalControllerLauncher'
+# c.Global.controller_launcher = 'IPython.parallel.apps.launcher.LocalControllerLauncher'
+# c.Global.controller_launcher = 'IPython.parallel.apps.launcher.PBSControllerLauncher'
# Options are:
# - LocalEngineSetLauncher
# - MPIExecEngineSetLauncher
# - PBSEngineSetLauncher
+# - SGEEngineSetLauncher
# - WindowsHPCEngineSetLauncher
-# c.Global.engine_launcher = 'IPython.kernel.launcher.LocalEngineSetLauncher'
+# c.Global.engine_launcher = 'IPython.parallel.apps.launcher.LocalEngineSetLauncher'
#-----------------------------------------------------------------------------
# Global configuration
@@ -68,23 +71,23 @@
# MPIExec launchers
#-----------------------------------------------------------------------------
-# The mpiexec/mpirun command to use in started the controller.
-# c.MPIExecControllerLauncher.mpi_cmd = ['mpiexec']
+# The mpiexec/mpirun command to use in both the controller and engines.
+# c.MPIExecLauncher.mpi_cmd = ['mpiexec']
# Additional arguments to pass to the actual mpiexec command.
+# c.MPIExecLauncher.mpi_args = []
+
+# The mpiexec/mpirun command and args can be overridden if they should be different
+# for controller and engines.
+# c.MPIExecControllerLauncher.mpi_cmd = ['mpiexec']
# c.MPIExecControllerLauncher.mpi_args = []
+# c.MPIExecEngineSetLauncher.mpi_cmd = ['mpiexec']
+# c.MPIExecEngineSetLauncher.mpi_args = []
# The command line argument to call the controller with.
# c.MPIExecControllerLauncher.controller_args = \
# ['--log-to-file','--log-level', '40']
-
-# The mpiexec/mpirun command to use in started the controller.
-# c.MPIExecEngineSetLauncher.mpi_cmd = ['mpiexec']
-
-# Additional arguments to pass to the actual mpiexec command.
-# c.MPIExecEngineSetLauncher.mpi_args = []
-
# Command line argument passed to the engines.
# c.MPIExecEngineSetLauncher.engine_args = ['--log-to-file','--log-level', '40']
@@ -95,51 +98,105 @@
# SSH launchers
#-----------------------------------------------------------------------------
-# Todo
+# ipclusterz can be used to launch controller and engines remotely via ssh.
+# Note that currently ipclusterz does not do any file distribution, so if
+# machines are not on a shared filesystem, config and json files must be
+# distributed. For this reason, the reuse_files defaults to True on an
+# ssh-launched Controller. This flag can be overridden by the program_args
+# attribute of c.SSHControllerLauncher.
+
+# set the ssh cmd for launching remote commands. The default is ['ssh']
+# c.SSHLauncher.ssh_cmd = ['ssh']
+
+# set the args to pass to the ssh command.
+# c.SSHLauncher.ssh_args = ['tt']
+
+# Set the user and hostname for the controller
+# c.SSHControllerLauncher.hostname = 'controller.example.com'
+# c.SSHControllerLauncher.user = os.environ.get('USER','username')
+
+# Set the arguments to be passed to ipcontrollerz
+# note that remotely launched ipcontrollerz will not get the contents of
+# the local ipcontrollerz_config.py unless it resides on the *remote host*
+# in the location specified by the --cluster_dir argument.
+# c.SSHControllerLauncher.program_args = ['-r', '-ip', '0.0.0.0', '--cluster_dir', '/path/to/cd']
+
+# Set the default args passed to ipenginez for SSH launched engines
+# c.SSHEngineSetLauncher.engine_args = ['--mpi', 'mpi4py']
+# SSH engines are launched as a dict of locations/n-engines.
+# if a value is a tuple instead of an int, it is assumed to be of the form
+# (n, [args]), setting the arguments passed to ipenginez on `host`.
+# otherwise, c.SSHEngineSetLauncher.engine_args will be used as the default.
+
+# In this case, there will be 3 engines at my.example.com, and
+# 2 at you@ipython.scipy.org with a special json connector location.
+# c.SSHEngineSetLauncher.engines = {'my.example.com' : 3,
+# 'you@ipython.scipy.org' : (2, ['-f', '/path/to/ipcontroller-engine.json'])}
#-----------------------------------------------------------------------------
# Unix batch (PBS) schedulers launchers
#-----------------------------------------------------------------------------
+# SGE and PBS are very similar. All configurables in this section called 'PBS*'
+# also exist as 'SGE*'.
+
# The command line program to use to submit a PBS job.
-# c.PBSControllerLauncher.submit_command = 'qsub'
+# c.PBSLauncher.submit_command = ['qsub']
# The command line program to use to delete a PBS job.
-# c.PBSControllerLauncher.delete_command = 'qdel'
+# c.PBSLauncher.delete_command = ['qdel']
+
+# The PBS queue in which the job should run
+# c.PBSLauncher.queue = 'myqueue'
# A regular expression that takes the output of qsub and finds the job id.
-# c.PBSControllerLauncher.job_id_regexp = r'\d+'
+# c.PBSLauncher.job_id_regexp = r'\d+'
+
+# If for some reason the Controller and Engines have different options above, they
+# can be set as c.PBSControllerLauncher.<option> etc.
+
+# PBS and SGE have default templates, but you can specify your own, either as strings
+# or from files, as described here:
# The batch submission script used to start the controller. This is where
-# environment variables would be setup, etc. This string is interpolated using
+# environment variables would be set up, etc. This string is interpreted using
# the Itpl module in IPython.external. Basically, you can use ${n} for the
# number of engines and ${cluster_dir} for the cluster_dir.
-# c.PBSControllerLauncher.batch_template = """"""
+# c.PBSControllerLauncher.batch_template = """
+# #PBS -N ipcontroller
+# #PBS -q $queue
+#
+# ipcontrollerz --cluster-dir $cluster_dir
+# """
+
+# You can also load this template from a file
+# c.PBSControllerLauncher.batch_template_file = u"/path/to/my/template.sh"
# The name of the instantiated batch script that will actually be used to
# submit the job. This will be written to the cluster directory.
-# c.PBSControllerLauncher.batch_file_name = u'pbs_batch_script_controller'
-
-
-# The command line program to use to submit a PBS job.
-# c.PBSEngineSetLauncher.submit_command = 'qsub'
-
-# The command line program to use to delete a PBS job.
-# c.PBSEngineSetLauncher.delete_command = 'qdel'
-
-# A regular expression that takes the output of qsub and find the job id.
-# c.PBSEngineSetLauncher.job_id_regexp = r'\d+'
+# c.PBSControllerLauncher.batch_file_name = u'pbs_controller'
# The batch submission script used to start the engines. This is where
-# environment variables would be setup, etc. This string is interpolated using
+# environment variables would be set up, etc. This string is interpreted using
# the Itpl module in IPython.external. Basically, you can use ${n} for the
# number of engines and ${cluster_dir} for the cluster_dir.
-# c.PBSEngineSetLauncher.batch_template = """"""
+# c.PBSEngineSetLauncher.batch_template = """
+# #PBS -N ipengine
+# #PBS -l nprocs=$n
+#
+# ipenginez --cluster-dir $cluster_dir
+# """
+
+# You can also load this template from a file
+# c.PBSEngineSetLauncher.batch_template_file = u"/path/to/my/template.sh"
# The name of the instantiated batch script that will actually be used to
# submit the job. This will be written to the cluster directory.
-# c.PBSEngineSetLauncher.batch_file_name = u'pbs_batch_script_engines'
+# c.PBSEngineSetLauncher.batch_file_name = u'pbs_engines'
+
+
#-----------------------------------------------------------------------------
# Windows HPC Server 2008 launcher configuration
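
As a worked example of the PBS/SGE options above, a minimal ipcluster_config.py might read as follows (the queue name and resource line are illustrative assumptions, not defaults from this file):

    c = get_config()

    c.Global.controller_launcher = 'IPython.parallel.apps.launcher.PBSControllerLauncher'
    c.Global.engine_launcher = 'IPython.parallel.apps.launcher.PBSEngineSetLauncher'

    c.PBSLauncher.queue = 'batch'            # illustrative queue name
    c.PBSEngineSetLauncher.batch_template = """
    #PBS -N ipengine
    #PBS -l nprocs=$n

    ipenginez --cluster-dir $cluster_dir
    """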
200 IPython/config/default/ipcontroller_config.py
@@ -25,112 +25,156 @@
# be imported in the controller for pickling to work.
# c.Global.import_statements = ['import math']
-# Reuse the controller's FURL files. If False, FURL files are regenerated
+# Reuse the controller's JSON files. If False, JSON files are regenerated
# each time the controller is run. If True, they will be reused, *but*, you
# also must set the network ports by hand. If set, this will override the
# values set for the client and engine connections below.
-# c.Global.reuse_furls = True
+# c.Global.reuse_files = True
-# Enable SSL encryption on all connections to the controller. If set, this
-# will override the values set for the client and engine connections below.
+# Enable exec_key authentication on all messages. Default is True
# c.Global.secure = True
# The working directory for the process. The application will use os.chdir
# to change to this directory before starting.
# c.Global.work_dir = os.getcwd()
+# The log url for logging to an `iploggerz` application. This will override
+# log-to-file.
+# c.Global.log_url = 'tcp://127.0.0.1:20202'
+
+# The specific external IP that is used to disambiguate multi-interface URLs.
+# The default behavior is to guess from external IPs gleaned from `socket`.
+# c.Global.location = '192.168.1.123'
+
+# The ssh server remote clients should use to connect to this controller.
+# It must be a machine that can see the interface specified in client_ip.
+# The default for client_ip is localhost, in which case the sshserver must
+# be an external IP of the controller machine.
+# c.Global.sshserver = 'controller.example.com'
+
+# the url to use for registration. If set, this overrides engine-ip,
+# engine-transport client-ip,client-transport, and regport.
+# c.RegistrationFactory.url = 'tcp://*:12345'
+
+# the port to use for registration. Clients and Engines both use this
+# port for registration.
+# c.RegistrationFactory.regport = 10101
+
#-----------------------------------------------------------------------------
-# Configure the client services
+# Configure the Task Scheduler
#-----------------------------------------------------------------------------
-# Basic client service config attributes
+# The routing scheme. 'pure' will use the pure-ZMQ scheduler. Any other
+# value will use a Python scheduler with various routing schemes.
+# python schemes are: lru, weighted, random, twobin. Default is 'weighted'.
+# Note that the pure ZMQ scheduler does not support many features, such as
+# dying engines, dependencies, or engine-subset load-balancing.
+# c.ControllerFactory.scheme = 'pure'
-# The network interface the controller will listen on for client connections.
-# This should be an IP address or hostname of the controller's host. The empty
-# string means listen on all interfaces.
-# c.FCClientServiceFactory.ip = ''
+# The pure ZMQ scheduler can limit the number of outstanding tasks per engine
+# by using the ZMQ HWM option. This allows engines with long-running tasks
+# to not steal too many tasks from other engines. The default is 0, which
+# means aggressively distribute messages, never waiting for them to finish.
+# c.ControllerFactory.hwm = 1
-# The TCP/IP port the controller will listen on for client connections. If 0
-# a random port will be used. If the controller's host has a firewall running
-# it must allow incoming traffic on this port.
-# c.FCClientServiceFactory.port = 0
+# Whether to use Threads or Processes to start the Schedulers. Threads will
+# use less resources, but potentially reduce throughput. Default is to
+# use processes. Note that a Python scheduler will always be in a Process.
+# c.ControllerFactory.usethreads
-# The client learns how to connect to the controller by looking at the
-# location field embedded in the FURL. If this field is empty, all network
-# interfaces that the controller is listening on will be listed. To have the
-# client connect on a particular interface, list it here.
-# c.FCClientServiceFactory.location = ''
+#-----------------------------------------------------------------------------
+# Configure the Hub
+#-----------------------------------------------------------------------------
+
+# Which class to use for the db backend. Currently supported are DictDB (the
+# default), and MongoDB. Uncomment this line to enable MongoDB, which will
+# slow down the Hub's responsiveness, but also reduce its memory footprint.
+# c.HubFactory.db_class = 'IPython.parallel.controller.mongodb.MongoDB'
-# Use SSL encryption for the client connection.
-# c.FCClientServiceFactory.secure = True
+# The heartbeat ping frequency. This is the frequency (in ms) at which the
+# Hub pings engines for heartbeats. This determines how quickly the Hub
+# will react to engines coming and going. A lower number means faster response
+# time, but more network activity. The default is 100ms
+# c.HubFactory.ping = 100
-# Reuse the client FURL each time the controller is started. If set, you must
-# also pick a specific network port above (FCClientServiceFactory.port).
-# c.FCClientServiceFactory.reuse_furls = False
+# HubFactory queue port pairs, to set by name: mux, iopub, control, task. Set
+# each as a tuple of length 2 of ints. The default is to find random
+# available ports
+# c.HubFactory.mux = (10102,10112)
#-----------------------------------------------------------------------------
-# Configure the engine services
+# Configure the client connections
#-----------------------------------------------------------------------------
-# Basic config attributes for the engine services.
+# Basic client connection config attributes
-# The network interface the controller will listen on for engine connections.
-# This should be an IP address or hostname of the controller's host. The empty
-# string means listen on all interfaces.
-# c.FCEngineServiceFactory.ip = ''
+# The network interface the controller will listen on for client connections.
+# This should be an IP address or interface on the controller. An asterisk
+# means listen on all interfaces. The transport can be any transport
+# supported by zeromq (tcp,epgm,pgm,ib,ipc):
+# c.HubFactory.client_ip = '*'
+# c.HubFactory.client_transport = 'tcp'
-# The TCP/IP port the controller will listen on for engine connections. If 0
-# a random port will be used. If the controller's host has a firewall running
-# it must allow incoming traffic on this port.
-# c.FCEngineServiceFactory.port = 0
+# individual client ports to configure by name: query_port, notifier_port
+# c.HubFactory.query_port = 12345
-# The engine learns how to connect to the controller by looking at the
-# location field embedded in the FURL. If this field is empty, all network
-# interfaces that the controller is listening on will be listed. To have the
-# client connect on a particular interface, list it here.
-# c.FCEngineServiceFactory.location = ''
+#-----------------------------------------------------------------------------
+# Configure the engine connections
+#-----------------------------------------------------------------------------
-# Use SSL encryption for the engine connection.
-# c.FCEngineServiceFactory.secure = True
+# Basic config attributes for the engine connections.
-# Reuse the client FURL each time the controller is started. If set, you must
-# also pick a specific network port above (FCClientServiceFactory.port).
-# c.FCEngineServiceFactory.reuse_furls = False
+# The network interface the controller will listen on for engine connections.
+# This should be an IP address or interface on the controller. An asterisk
+# means listen on all interfaces. The transport can be any transport
+# supported by zeromq (tcp,epgm,pgm,ib,ipc):
+# c.HubFactory.engine_ip = '*'
+# c.HubFactory.engine_transport = 'tcp'
+
+# set the engine heartbeat ports to use:
+# c.HubFactory.hb = (10303,10313)
#-----------------------------------------------------------------------------
-# Developer level configuration attributes
+# Configure the TaskRecord database backend
#-----------------------------------------------------------------------------
-# You shouldn't have to modify anything in this section. These attributes
-# are more for developers who want to change the behavior of the controller
-# at a fundamental level.
-
-# c.FCClientServiceFactory.cert_file = u'ipcontroller-client.pem'
-
-# default_client_interfaces = Config()
-# default_client_interfaces.Task.interface_chain = [
-# 'IPython.kernel.task.ITaskController',
-# 'IPython.kernel.taskfc.IFCTaskController'
-# ]
-#
-# default_client_interfaces.Task.furl_file = u'ipcontroller-tc.furl'
-#
-# default_client_interfaces.MultiEngine.interface_chain = [
-# 'IPython.kernel.multiengine.IMultiEngine',
-# 'IPython.kernel.multienginefc.IFCSynchronousMultiEngine'
-# ]
-#
-# default_client_interfaces.MultiEngine.furl_file = u'ipcontroller-mec.furl'
-#
-# c.FCEngineServiceFactory.interfaces = default_client_interfaces
-
-# c.FCEngineServiceFactory.cert_file = u'ipcontroller-engine.pem'
-
-# default_engine_interfaces = Config()
-# default_engine_interfaces.Default.interface_chain = [
-# 'IPython.kernel.enginefc.IFCControllerBase'
-# ]
-#
-# default_engine_interfaces.Default.furl_file = u'ipcontroller-engine.furl'
-#
-# c.FCEngineServiceFactory.interfaces = default_engine_interfaces
+# For memory/persistence reasons, tasks can be stored out-of-memory in a database.
+# Currently, only sqlite and mongodb are supported as backends, but the interface
+# is fairly simple, so advanced developers could write their own backend.
+
+# ----- in-memory configuration --------
+# this line restores the default behavior: in-memory storage of all results.
+# c.HubFactory.db_class = 'IPython.parallel.controller.dictdb.DictDB'
+
+# ----- sqlite configuration --------
+# use this line to activate sqlite:
+# c.HubFactory.db_class = 'IPython.parallel.controller.sqlitedb.SQLiteDB'
+
+# You can specify the name of the db-file. By default, this will be located
+# in the active cluster_dir, e.g. ~/.ipython/clusterz_default/tasks.db
+# c.SQLiteDB.filename = 'tasks.db'
+
+# You can also specify the location of the db-file, if you want it to be somewhere
+# other than the cluster_dir.
+# c.SQLiteDB.location = '/scratch/'
+
+# This will specify the name of the table for the controller to use. The default
+# behavior is to use the session ID of the SessionFactory object (a uuid). Overriding
+# this will result in results persisting for multiple sessions.
+# c.SQLiteDB.table = 'results'
+
+# ----- mongodb configuration --------
+# use this line to activate mongodb:
+# c.HubFactory.db_class = 'IPython.parallel.controller.mongodb.MongoDB'
+
+# You can specify the args and kwargs pymongo will use when creating the Connection.
+# For more information on what these options might be, see pymongo documentation.
+# c.MongoDB.connection_kwargs = {}
+# c.MongoDB.connection_args = []
+
+# This will specify the name of the mongo database for the controller to use. The default
+# behavior is to use the session ID of the SessionFactory object (a uuid). Overriding
+# this will result in task results persisting through multiple sessions.
+# c.MongoDB.database = 'ipythondb'
+
+
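
Pulling the new Hub and scheduler options together, a minimal ipcontroller_config.py might look like this sketch (the scheme, db file name, and port are illustrative choices):

    c = get_config()

    c.ControllerFactory.scheme = 'lru'       # Python scheduler with LRU routing
    c.HubFactory.db_class = 'IPython.parallel.controller.sqlitedb.SQLiteDB'
    c.SQLiteDB.filename = 'tasks.db'         # created in the active cluster_dir
    c.HubFactory.client_ip = '*'             # listen on all interfaces for clients
    c.RegistrationFactory.regport = 10101    # fixed registration port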
15 IPython/config/default/ipengine_config.py
@@ -29,10 +29,10 @@
# c.Global.connect_delay = 0.1
# c.Global.connect_max_tries = 15
-# By default, the engine will look for the controller's FURL file in its own
-# cluster directory. Sometimes, the FURL file will be elsewhere and this
-# attribute can be set to the full path of the FURL file.
-# c.Global.furl_file = u''
+# By default, the engine will look for the controller's JSON file in its own
+# cluster directory. Sometimes, the JSON file will be elsewhere and this
+# attribute can be set to the full path of the JSON file.
+# c.Global.url_file = u'/path/to/my/ipcontroller-engine.json'
# The working directory for the process. The application will use os.chdir
# to change to this directory before starting.
@@ -78,12 +78,7 @@
# You should not have to change these attributes.
-# c.Global.shell_class = 'IPython.kernel.core.interpreter.Interpreter'
-
-# c.Global.furl_file_name = u'ipcontroller-engine.furl'
-
-
-
+# c.Global.url_file_name = u'ipcontroller-engine.json'
0  IPython/external/ssh/__init__.py
No changes.
90 IPython/external/ssh/forward.py
@@ -0,0 +1,90 @@
+#!/usr/bin/env python
+
+#
+# This file is adapted from a paramiko demo, and thus licensed under LGPL 2.1.
+# Original Copyright (C) 2003-2007 Robey Pointer <robeypointer@gmail.com>
+# Edits Copyright (C) 2010 The IPython Team
+#
+# Paramiko is free software; you can redistribute it and/or modify it under the
+# terms of the GNU Lesser General Public License as published by the Free
+# Software Foundation; either version 2.1 of the License, or (at your option)
+# any later version.
+#
+# Paramiko is distributed in the hope that it will be useful, but WITHOUT ANY
+# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
+# A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more
+# details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with Paramiko; if not, write to the Free Software Foundation, Inc.,
+# 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
+
+"""
+Sample script showing how to do local port forwarding over paramiko.
+
+This script connects to the requested SSH server and sets up local port
+forwarding (the openssh -L option) from a local port through a tunneled
+connection to a destination reachable from the SSH server machine.
+"""
+
+from __future__ import print_function
+
+import logging
+import select
+import SocketServer
+
+logger = logging.getLogger('ssh')
+
+class ForwardServer (SocketServer.ThreadingTCPServer):
+ daemon_threads = True
+ allow_reuse_address = True
+
+
+class Handler (SocketServer.BaseRequestHandler):
+
+ def handle(self):
+ try:
+ chan = self.ssh_transport.open_channel('direct-tcpip',
+ (self.chain_host, self.chain_port),
+ self.request.getpeername())
+ except Exception, e:
+ logger.debug('Incoming request to %s:%d failed: %s' % (self.chain_host,
+ self.chain_port,
+ repr(e)))
+ return
+ if chan is None:
+ logger.debug('Incoming request to %s:%d was rejected by the SSH server.' %
+ (self.chain_host, self.chain_port))
+ return
+
+ logger.debug('Connected! Tunnel open %r -> %r -> %r' % (self.request.getpeername(),
+ chan.getpeername(), (self.chain_host, self.chain_port)))
+ while True:
+ r, w, x = select.select([self.request, chan], [], [])
+ if self.request in r:
+ data = self.request.recv(1024)
+ if len(data) == 0:
+ break
+ chan.send(data)
+ if chan in r:
+ data = chan.recv(1024)
+ if len(data) == 0:
+ break
+ self.request.send(data)
+ chan.close()
+ self.request.close()
+ logger.debug('Tunnel closed ')
+
+
+def forward_tunnel(local_port, remote_host, remote_port, transport):
+ # this is a little convoluted, but lets me configure things for the Handler
+ # object. (SocketServer doesn't give Handlers any way to access the outer
+ # server normally.)
+ class SubHandler (Handler):
+ chain_host = remote_host
+ chain_port = remote_port
+ ssh_transport = transport
+ ForwardServer(('127.0.0.1', local_port), SubHandler).serve_forever()
+
+
+__all__ = ['forward_tunnel']
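
A hypothetical usage sketch for forward_tunnel (the hostnames, ports, and username are illustrative; note that the call blocks, serving the tunnel until interrupted):

    import paramiko
    from IPython.external.ssh.forward import forward_tunnel

    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.connect('gateway.example.com', username='user')  # assumes key-based login
    # forward local port 2022 through the gateway to internal.example.com:22
    forward_tunnel(2022, 'internal.example.com', 22, client.get_transport())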
295 IPython/external/ssh/tunnel.py
@@ -0,0 +1,295 @@
+"""Basic ssh tunneling utilities."""
+
+#-----------------------------------------------------------------------------
+# Copyright (C) 2008-2010 The IPython Development Team
+#
+# Distributed under the terms of the BSD License. The full license is in
+# the file COPYING, distributed as part of this software.
+#-----------------------------------------------------------------------------
+
+
+
+#-----------------------------------------------------------------------------
+# Imports
+#-----------------------------------------------------------------------------
+
+from __future__ import print_function
+
+import os,sys, atexit
+from multiprocessing import Process
+from getpass import getpass, getuser
+import warnings
+
+try:
+ with warnings.catch_warnings():
+ warnings.simplefilter('ignore', DeprecationWarning)
+ import paramiko
+except ImportError:
+ paramiko = None
+else:
+ from forward import forward_tunnel
+
+try:
+ from IPython.external import pexpect
+except ImportError:
+ pexpect = None
+
+from IPython.parallel.util import select_random_ports
+
+#-----------------------------------------------------------------------------
+# Code
+#-----------------------------------------------------------------------------
+
+#-----------------------------------------------------------------------------
+# Check for passwordless login
+#-----------------------------------------------------------------------------
+
+def try_passwordless_ssh(server, keyfile, paramiko=None):
+ """Attempt to make an ssh connection without a password.
+ This is mainly used for requiring password input only once
+ when many tunnels may be connected to the same server.
+
+ If paramiko is None, the default for the platform is chosen.
+ """
+ if paramiko is None:
+ paramiko = sys.platform == 'win32'
+ if not paramiko:
+ f = _try_passwordless_openssh
+ else:
+ f = _try_passwordless_paramiko
+ return f(server, keyfile)
+
+def _try_passwordless_openssh(server, keyfile):
+ """Try passwordless login with shell ssh command."""
+ if pexpect is None:
+ raise ImportError("pexpect unavailable, use paramiko")
+ cmd = 'ssh -f '+ server
+ if keyfile:
+ cmd += ' -i ' + keyfile
+ cmd += ' exit'
+ p = pexpect.spawn(cmd)
+ while True:
+ try:
+ p.expect('[Pp]assword:', timeout=.1)
+ except pexpect.TIMEOUT:
+ continue
+ except pexpect.EOF:
+ return True
+ else:
+ return False
+
+def _try_passwordless_paramiko(server, keyfile):
+ """Try passwordless login with paramiko."""
+ if paramiko is None:
+ raise ImportError("paramiko unavailable, use openssh")
+ username, server, port = _split_server(server)
+ client = paramiko.SSHClient()
+ client.load_system_host_keys()
+ client.set_missing_host_key_policy(paramiko.WarningPolicy())
+ try:
+ client.connect(server, port, username=username, key_filename=keyfile,
+ look_for_keys=True)
+ except paramiko.AuthenticationException:
+ return False
+ else:
+ client.close()
+ return True
+
+
+def tunnel_connection(socket, addr, server, keyfile=None, password=None, paramiko=None):
+ """Connect a socket to an address via an ssh tunnel.
+
+ This is a wrapper for socket.connect(addr), when addr is not accessible
+ from the local machine. It simply creates an ssh tunnel using the remaining args,
+ and calls socket.connect('tcp://localhost:lport') where lport is the randomly
+ selected local port of the tunnel.
+
+ """
+ lport = select_random_ports(1)[0]
+ transport, addr = addr.split('://')
+ ip,rport = addr.split(':')
+ rport = int(rport)
+ if paramiko is None:
+ paramiko = sys.platform == 'win32'
+ if paramiko:
+ tunnelf = paramiko_tunnel
+ else:
+ tunnelf = openssh_tunnel
+ tunnel = tunnelf(lport, rport, server, remoteip=ip, keyfile=keyfile, password=password)
+ socket.connect('tcp://127.0.0.1:%i'%lport)
+ return tunnel
+
+def openssh_tunnel(lport, rport, server, remoteip='127.0.0.1', keyfile=None, password=None, timeout=15):
+ """Create an ssh tunnel using command-line ssh that connects port lport
+ on this machine to localhost:rport on server. The tunnel
+ will automatically close when not in use, remaining open
+ for a minimum of timeout seconds for an initial connection.
+
+ This creates a tunnel redirecting `localhost:lport` to `remoteip:rport`,
+ as seen from `server`.
+
+ keyfile and password may be specified, but ssh config is checked for defaults.
+
+ Parameters
+ ----------
+
+ lport : int
+ local port for connecting to the tunnel from this machine.
+ rport : int
+ port on the remote machine to connect to.
+ server : str
+ The ssh server to connect to. The full ssh server string will be parsed.
+ user@server:port
+ remoteip : str [Default: 127.0.0.1]
+ The remote ip, specifying the destination of the tunnel.
+ Default is localhost, which means that the tunnel would redirect
+ localhost:lport on this machine to localhost:rport on the *server*.
+
+ keyfile : str; path to public key file
+ This specifies a key to be used in ssh login, default None.
+ Regular default ssh keys will be used without specifying this argument.
+ password : str;
+ Your ssh password to the ssh server. Note that if this is left None,
+ you will be prompted for it if passwordless key based login is unavailable.
+
+ """
+ if pexpect is None:
+ raise ImportError("pexpect unavailable, use paramiko_tunnel")
+ ssh="ssh "
+ if keyfile:
+ ssh += "-i " + keyfile
+ cmd = ssh + " -f -L 127.0.0.1:%i:%s:%i %s sleep %i"%(lport, remoteip, rport, server, timeout)
+ tunnel = pexpect.spawn(cmd)
+ failed = False
+ while True:
+ try:
+ tunnel.expect('[Pp]assword:', timeout=.1)
+ except pexpect.TIMEOUT:
+ continue
+ except pexpect.EOF:
+ if tunnel.exitstatus:
+ print (tunnel.exitstatus)
+ print (tunnel.before)
+ print (tunnel.after)
+ raise RuntimeError("tunnel '%s' failed to start"%(cmd))
+ else:
+ return tunnel.pid
+ else:
+ if failed:
+ print("Password rejected, try again")
+ password=None
+ if password is None:
+ password = getpass("%s's password: "%(server))
+ tunnel.sendline(password)
+ failed = True
+
+def _split_server(server):
+ if '@' in server:
+ username,server = server.split('@', 1)
+ else:
+ username = getuser()
+ if ':' in server:
+ server, port = server.split(':')
+ port = int(port)
+ else:
+ port = 22
+ return username, server, port
+
+def paramiko_tunnel(lport, rport, server, remoteip='127.0.0.1', keyfile=None, password=None, timeout=15):
+ """launch a tunner with paramiko in a subprocess. This should only be used
+ when shell ssh is unavailable (e.g. Windows).
+
+ This creates a tunnel redirecting `localhost:lport` to `remoteip:rport`,
+ as seen from `server`.
+
+ If you are familiar with ssh tunnels, this creates the tunnel:
+
+ ssh server -L localhost:lport:remoteip:rport
+
+ keyfile and password may be specified, but ssh config is checked for defaults.
+
+
+ Parameters
+ ----------
+
+ lport : int
+ local port for connecting to the tunnel from this machine.
+ rport : int
+ port on the remote machine to connect to.
+ server : str
+ The ssh server to connect to. The full ssh server string will be parsed.
+ user@server:port
+ remoteip : str [Default: 127.0.0.1]
+ The remote ip, specifying the destination of the tunnel.
+ Default is localhost, which means that the tunnel would redirect
+ localhost:lport on this machine to localhost:rport on the *server*.
+
+ keyfile : str; path to public key file
+ This specifies a key to be used in ssh login, default None.
+ Regular default ssh keys will be used without specifying this argument.
+ password : str;
+ Your ssh password to the ssh server. Note that if this is left None,
+ you will be prompted for it if passwordless key based login is unavailable.
+
+ """
+ if paramiko is None:
+ raise ImportError("Paramiko not available")
+
+ if password is None:
+ if not _check_passwordless_paramiko(server, keyfile):
+ password = getpass("%s's password: "%(server))
+
+ p = Process(target=_paramiko_tunnel,
+ args=(lport, rport, server, remoteip),
+ kwargs=dict(keyfile=keyfile, password=password))
+ p.daemon=False
+ p.start()
+ atexit.register(_shutdown_process, p)
+ return p
+
+def _shutdown_process(p):
+ if p.isalive():
+ p.terminate()
+
+def _paramiko_tunnel(lport, rport, server, remoteip, keyfile=None, password=None):
+ """Function for actually starting a paramiko tunnel, to be passed
+ to multiprocessing.Process(target=this), and not called directly.
+ """
+ username, server, port = _split_server(server)
+ client = paramiko.SSHClient()
+ client.load_system_host_keys()
+ client.set_missing_host_key_policy(paramiko.WarningPolicy())
+
+ try:
+ client.connect(server, port, username=username, key_filename=keyfile,
+ look_for_keys=True, password=password)
+# except paramiko.AuthenticationException:
+# if password is None:
+# password = getpass("%s@%s's password: "%(username, server))
+# client.connect(server, port, username=username, password=password)
+# else:
+# raise
+ except Exception as e:
+ print ('*** Failed to connect to %s:%d: %r' % (server, port, e))
+ sys.exit(1)
+
+ # print ('Now forwarding port %d to %s:%d ...' % (lport, server, rport))
+
+ try:
+ forward_tunnel(lport, remoteip, rport, client.get_transport())
+ except KeyboardInterrupt:
+ print ('SIGINT: Port forwarding stopped cleanly')
+ sys.exit(0)
+ except Exception as e:
+ print ("Port forwarding stopped uncleanly: %s"%e)
+ sys.exit(255)
+
+if sys.platform == 'win32':
+ ssh_tunnel = paramiko_tunnel
+else:
+ ssh_tunnel = openssh_tunnel
+
+
+__all__ = ['tunnel_connection', 'ssh_tunnel', 'openssh_tunnel', 'paramiko_tunnel', 'try_passwordless_ssh']
+
+
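
A hedged sketch of tunnel_connection with a pyzmq socket (the controller address and ssh server below are illustrative):

    import zmq
    from IPython.external.ssh import tunnel

    ctx = zmq.Context()
    sock = ctx.socket(zmq.REQ)
    # connect to tcp://10.0.0.5:10101 as seen from login.example.com, via a
    # locally forwarded port that tunnel_connection selects at random:
    tunnel.tunnel_connection(sock, 'tcp://10.0.0.5:10101', 'user@login.example.com')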
26 IPython/parallel/__init__.py
@@ -0,0 +1,26 @@
+"""The IPython ZMQ-based parallel computing interface."""
+#-----------------------------------------------------------------------------
+# Copyright (C) 2011 The IPython Development Team
+#
+# Distributed under the terms of the BSD License. The full license is in
+# the file COPYING, distributed as part of this software.
+#-----------------------------------------------------------------------------
+
+#-----------------------------------------------------------------------------
+# Imports
+#-----------------------------------------------------------------------------
+
+import zmq
+
+if zmq.__version__ < '2.1.4':
+ raise ImportError("IPython.parallel requires pyzmq/0MQ >= 2.1.4, you appear to have %s"%zmq.__version__)
+
+from IPython.utils.pickleutil import Reference
+
+from .client.asyncresult import *
+from .client.client import Client
+from .client.remotefunction import *
+from .client.view import *
+from .controller.dependency import *
+
+
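
With the package importable, basic use of the new top-level API looks roughly like this (a sketch assuming a cluster already started with ipclusterz; Python 2 print syntax to match the code above):

    from IPython.parallel import Client

    rc = Client()                 # reads the default JSON connection file
    dview = rc[:]                 # a DirectView on all registered engines
    ar = dview.apply_async(lambda x: x * 2, 21)
    print ar.get()                # e.g. [42, 42, 42, 42] with four engines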
0  IPython/parallel/apps/__init__.py
No changes.
537 IPython/parallel/apps/clusterdir.py
@@ -0,0 +1,537 @@
+#!/usr/bin/env python
+# encoding: utf-8
+"""
+The IPython cluster directory
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (C) 2008-2009 The IPython Development Team
+#
+# Distributed under the terms of the BSD License. The full license is in
+# the file COPYING, distributed as part of this software.
+#-----------------------------------------------------------------------------
+
+#-----------------------------------------------------------------------------
+# Imports
+#-----------------------------------------------------------------------------
+
+from __future__ import with_statement
+
+import os
+import logging
+import re
+import shutil
+import sys
+
+from IPython.config.loader import PyFileConfigLoader
+from IPython.config.configurable import Configurable
+from IPython.core.application import Application, BaseAppConfigLoader
+from IPython.core.crashhandler import CrashHandler
+from IPython.core import release
+from IPython.utils.path import (
+ get_ipython_package_dir,
+ expand_path
+)
+from IPython.utils.traitlets import Unicode
+
+#-----------------------------------------------------------------------------
+# Module errors
+#-----------------------------------------------------------------------------
+
+class ClusterDirError(Exception):
+ pass
+
+
+class PIDFileError(Exception):
+ pass
+
+
+#-----------------------------------------------------------------------------
+# Class for managing cluster directories
+#-----------------------------------------------------------------------------
+
+class ClusterDir(Configurable):
+ """An object to manage the cluster directory and its resources.
+
+ The cluster directory is used by :command:`ipengine`,
+ :command:`ipcontroller` and :command:`ipcluster` to manage the
+ configuration, logging and security of these applications.
+
+ This object knows how to find, create and manage these directories. This
+ should be used by any code that wants to handle cluster directories.
+ """
+
+ security_dir_name = Unicode('security')
+ log_dir_name = Unicode('log')
+ pid_dir_name = Unicode('pid')
+ security_dir = Unicode(u'')
+ log_dir = Unicode(u'')
+ pid_dir = Unicode(u'')
+ location = Unicode(u'')
+
+ def __init__(self, location=u''):
+ super(ClusterDir, self).__init__(location=location)
+
+ def _location_changed(self, name, old, new):
+ if not os.path.isdir(new):
+ os.makedirs(new)
+ self.security_dir = os.path.join(new, self.security_dir_name)
+ self.log_dir = os.path.join(new, self.log_dir_name)
+ self.pid_dir = os.path.join(new, self.pid_dir_name)
+ self.check_dirs()
+
+ def _log_dir_changed(self, name, old, new):
+ self.check_log_dir()
+
+ def check_log_dir(self):
+ if not os.path.isdir(self.log_dir):
+ os.mkdir(self.log_dir)
+
+ def _security_dir_changed(self, name, old, new):
+ self.check_security_dir()
+
+ def check_security_dir(self):
+ if not os.path.isdir(self.security_dir):
+ os.mkdir(self.security_dir, 0700)
+ os.chmod(self.security_dir, 0700)
+
+ def _pid_dir_changed(self, name, old, new):
+ self.check_pid_dir()
+
+ def check_pid_dir(self):
+ if not os.path.isdir(self.pid_dir):
+ os.mkdir(self.pid_dir, 0700)
+ os.chmod(self.pid_dir, 0700)
+
+ def check_dirs(self):
+ self.check_security_dir()
+ self.check_log_dir()
+ self.check_pid_dir()
+
+ def load_config_file(self, filename):
+ """Load a config file from the top level of the cluster dir.
+
+ Parameters
+ ----------
+ filename : unicode or str
+ The filename only of the config file that must be located in
+ the top-level of the cluster directory.
+ """
+ loader = PyFileConfigLoader(filename, self.location)
+ return loader.load_config()
+
+ def copy_config_file(self, config_file, path=None, overwrite=False):
+ """Copy a default config file into the active cluster directory.
+
+ Default configuration files are kept in :mod:`IPython.config.default`.
+ This function moves these from that location to the working cluster
+ directory.
+ """
+ if path is None:
+ import IPython.config.default
+ path = IPython.config.default.__file__.split(os.path.sep)[:-1]
+ path = os.path.sep.join(path)
+ src = os.path.join(path, config_file)
+ dst = os.path.join(self.location, config_file)
+ if not os.path.isfile(dst) or overwrite:
+ shutil.copy(src, dst)
+
+ def copy_all_config_files(self, path=None, overwrite=False):
+ """Copy all config files into the active cluster directory."""
+ for f in [u'ipcontroller_config.py', u'ipengine_config.py',
+ u'ipcluster_config.py']:
+ self.copy_config_file(f, path=path, overwrite=overwrite)
+
+ @classmethod
+ def create_cluster_dir(cls, cluster_dir):
+ """Create a new cluster directory given a full path.
+
+ Parameters
+ ----------
+ cluster_dir : str
+ The full path to the cluster directory. If it does exist, it will
+ be used. If not, it will be created.
+ """
+ return ClusterDir(location=cluster_dir)
+
+ @classmethod
+ def create_cluster_dir_by_profile(cls, path, profile=u'default'):
+ """Create a cluster dir by profile name and path.
+
+ Parameters
+ ----------
+ path : str
+ The path (directory) to put the cluster directory in.
+ profile : str
+ The name of the profile. The name of the cluster directory will
+ be "cluster_<profile>".
+ """
+ if not os.path.isdir(path):
+ raise ClusterDirError('Directory not found: %s' % path)
+ cluster_dir = os.path.join(path, u'cluster_' + profile)
+ return ClusterDir(location=cluster_dir)
+
+ @classmethod
+ def find_cluster_dir_by_profile(cls, ipython_dir, profile=u'default'):
+ """Find an existing cluster dir by profile name, return its ClusterDir.
+
+ This searches through a sequence of paths for a cluster dir. If it
+ is not found, a :class:`ClusterDirError` exception will be raised.
+
+ The search path algorithm is:
+ 1. ``os.getcwd()``
+ 2. ``ipython_dir``
+ 3. The directories found in the ":" separated
+ :env:`IPCLUSTER_DIR_PATH` environment variable.
+
+ Parameters
+ ----------
+ ipython_dir : unicode or str
+ The IPython directory to use.
+ profile : unicode or str
+ The name of the profile. The name of the cluster directory
+ will be "cluster_<profile>".
+ """
+ dirname = u'cluster_' + profile
+ cluster_dir_paths = os.environ.get('IPCLUSTER_DIR_PATH','')
+ if cluster_dir_paths:
+ cluster_dir_paths = cluster_dir_paths.split(':')
+ else:
+ cluster_dir_paths = []
+ paths = [os.getcwd(), ipython_dir] + cluster_dir_paths
+ for p in paths:
+ cluster_dir = os.path.join(p, dirname)
+ if os.path.isdir(cluster_dir):
+ return ClusterDir(location=cluster_dir)
+ else:
+ raise ClusterDirError('Cluster directory not found in paths: %s' % dirname)
+
+ @classmethod
+ def find_cluster_dir(cls, cluster_dir):
+ """Find/create a cluster dir and return its ClusterDir.
+
+ This will create the cluster directory if it doesn't exist.
+
+ Parameters
+ ----------
+ cluster_dir : unicode or str
+ The path of the cluster directory. This is expanded using
+ :func:`IPython.utils.genutils.expand_path`.
+ """
+ cluster_dir = expand_path(cluster_dir)
+ if not os.path.isdir(cluster_dir):
+ raise ClusterDirError('Cluster directory not found: %s' % cluster_dir)
+ return ClusterDir(location=cluster_dir)
+
+
+#-----------------------------------------------------------------------------
+# Command line options
+#-----------------------------------------------------------------------------
+
+class ClusterDirConfigLoader(BaseAppConfigLoader):
+
+ def _add_cluster_profile(self, parser):
+ paa = parser.add_argument
+ paa('-p', '--profile',
+ dest='Global.profile',type=unicode,
+ help=
+ """The string name of the profile to be used. This determines the name
+ of the cluster dir as: cluster_<profile>. The default profile is named
+ 'default'. The cluster directory is resolved this way if the
+ --cluster-dir option is not used.""",
+ metavar='Global.profile')
+
+ def _add_cluster_dir(self, parser):
+ paa = parser.add_argument
+ paa('--cluster-dir',
+ dest='Global.cluster_dir',type=unicode,
+ help="""Set the cluster dir. This overrides the logic used by the
+ --profile option.""",
+ metavar='Global.cluster_dir')
+
+ def _add_work_dir(self, parser):
+ paa = parser.add_argument
+ paa('--work-dir',
+ dest='Global.work_dir',type=unicode,
+ help='Set the working dir for the process.',
+ metavar='Global.work_dir')
+
+ def _add_clean_logs(self, parser):
+ paa = parser.add_argument
+ paa('--clean-logs',
+ dest='Global.clean_logs', action='store_true',
+ help='Delete old log files before starting.')
+
+ def _add_no_clean_logs(self, parser):
+ paa = parser.add_argument
+ paa('--no-clean-logs',
+ dest='Global.clean_logs', action='store_false',
+ help="Don't Delete old log flies before starting.")
+
+ def _add_arguments(self):
+ super(ClusterDirConfigLoader, self)._add_arguments()
+ self._add_cluster_profile(self.parser)
+ self._add_cluster_dir(self.parser)
+ self._add_work_dir(self.parser)
+ self._add_clean_logs(self.parser)
+ self._add_no_clean_logs(self.parser)
+
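All of the options wired up above land in the Global config section; the invocations below are illustrative (not taken from this diff) and show roughly how each flag maps onto config:

# Illustrative invocations for the options added by ClusterDirConfigLoader:
#   ipcluster -p mycluster            # Global.profile     -> cluster_mycluster
#   ipcluster --cluster-dir=/tmp/cd   # Global.cluster_dir, overrides --profile
#   ipcluster --work-dir=/scratch     # Global.work_dir for the process
#   ipcluster --clean-logs            # Global.clean_logs = True
#   ipcluster --no-clean-logs         # Global.clean_logs = False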
+
+#-----------------------------------------------------------------------------
+# Crash handler for this application
+#-----------------------------------------------------------------------------
+
+
+_message_template = """\
+Oops, $self.app_name crashed. We do our best to make it stable, but...
+
+A crash report was automatically generated with the following information:
+ - A verbatim copy of the crash traceback.
+ - Data on your current $self.app_name configuration.
+
+It was left in the file named:
+\t'$self.crash_report_fname'
+If you can email this file to the developers, the information in it will help
+them understand and correct the problem.
+
+You can mail it to: $self.contact_name at $self.contact_email
+with the subject '$self.app_name Crash Report'.
+
+If you want to do it now, the following command will work (under Unix):
+mail -s '$self.app_name Crash Report' $self.contact_email < $self.crash_report_fname
+
+To ensure accurate tracking of this issue, please file a report about it at:
+$self.bug_tracker
+"""
+
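The $self.* placeholders are filled in by the CrashHandler machinery, which is not part of this diff; as a stand-in, the sketch below approximates the substitution with the standard library's string.Template, flattening the attribute paths to plain keys (all values hypothetical):

from string import Template

# Stand-in only: the real CrashHandler interpolates $self.attr itself.
flat = _message_template.replace('$self.', '$')
print(Template(flat).safe_substitute(
    app_name='ipcluster',
    crash_report_fname='/tmp/Crash_report_ipcluster.txt',
    contact_name='(maintainer name)',
    contact_email='dev@example.com',
    bug_tracker='http://github.com/ipython/ipython/issues',
))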
+class ClusterDirCrashHandler(CrashHandler):
+ """sys.excepthook for IPython itself, leaves a detailed report on disk."""
+
+ message_template = _message_template
+
+ def __init__(self, app):
+ contact_name = release.authors['Brian'][0]
+ contact_email = release.authors['Brian'][1]
+ bug_tracker = 'http://github.com/ipython/ipython/issues'
+ super(ClusterDirCrashHandler,self).__init__(
+ app, contact_name, contact_email, bug_tracker
+ )
+
+
+#-----------------------------------------------------------------------------
+# Main application
+#-----------------------------------------------------------------------------
+
+class ApplicationWithClusterDir(Application):
+ """An application that puts everything into a cluster directory.
+
+ Instead of looking for things in the ipython_dir, this type of application
+ will use its own private directory called the "cluster directory"
+ for things like config files, log files, etc.
+
+ The cluster directory is resolved as follows:
+
+ * If the ``--cluster-dir`` option is given, it is used.
+ * If ``--cluster-dir`` is not given, the application directory is
+ resolved using the profile name as ``cluster_<profile>``. The search
+ path for this directory is then i) the cwd, if it is found there,
+ and ii) ipython_dir otherwise.
+
+ The config file for the application is to be put in the cluster
+ dir and named the value of the ``config_file_name`` class attribute.
+ """
+
+ command_line_loader = ClusterDirConfigLoader
+ crash_handler_class = ClusterDirCrashHandler
+ auto_create_cluster_dir = True
+ # temporarily override default_log_level to INFO
+ default_log_level = logging.INFO
+
+ def create_default_config(self):
+ super(ApplicationWithClusterDir, self).create_default_config()
+ self.default_config.Global.profile = u'default'
+ self.default_config.Global.cluster_dir = u''
+ self.default_config.Global.work_dir = os.getcwd()
+ self.default_config.Global.log_to_file = False
+ self.default_config.Global.log_url = None
+ self.default_config.Global.clean_logs = False
+
+ def find_resources(self):
+ """This resolves the cluster directory.
+
+ This tries to find the cluster directory and, if successful, will
+ have:
+ * Set ``self.cluster_dir_obj`` to the :class:`ClusterDir` object for
+ the application.
+ * Set the ``cluster_dir`` attribute of the application and config
+ objects.
+
+ The algorithm used for this is as follows:
+ 1. Try ``Global.cluster_dir``.
+ 2. Try using ``Global.profile``.
+ 3. If both of these fail and ``self.auto_create_cluster_dir`` is
+ ``True``, create a new cluster dir in the IPython directory.
+ 4. If all of these fail, raise :class:`ClusterDirError`.
+ """
+
+ try:
+ cluster_dir = self.command_line_config.Global.cluster_dir
+ except AttributeError:
+ cluster_dir = self.default_config.Global.cluster_dir
+ cluster_dir = expand_path(cluster_dir)
+ try:
+ self.cluster_dir_obj = ClusterDir.find_cluster_dir(cluster_dir)
+ except ClusterDirError:
+ pass
+ else:
+ self.log.info('Using existing cluster dir: %s' % self.cluster_dir_obj.location)
+ self.finish_cluster_dir()
+ return
+
+ try:
+ self.profile = self.command_line_config.Global.profile
+ except AttributeError:
+ self.profile = self.default_config.Global.profile
+ try:
+ self.cluster_dir_obj = ClusterDir.find_cluster_dir_by_profile(
+ self.ipython_dir, self.profile)
+ except ClusterDirError:
+ pass
+ else:
+ self.log.info('Using existing cluster dir: %s' % self.cluster_dir_obj.location)
+ self.finish_cluster_dir()
+ return
+
+ if self.auto_create_cluster_dir:
+ self.cluster_dir_obj = ClusterDir.create_cluster_dir_by_profile(
+ self.ipython_dir, self.profile
+ )
+ self.log.info('Creating new cluster dir: %s' % self.cluster_dir_obj.location)
+ self.finish_cluster_dir()
+ else:
+ raise ClusterDirError('Could not find a valid cluster directory.')
+
+ def finish_cluster_dir(self):
+ # Set the cluster directory
+ self.cluster_dir = self.cluster_dir_obj.location
+
+ # These have to be set because they could be different from the one
+ # that we just computed. Because the command line has the highest
+ # priority, this will always end up in the master_config.
+ self.default_config.Global.cluster_dir = self.cluster_dir
+ self.command_line_config.Global.cluster_dir = self.cluster_dir
+
+ def find_config_file_name(self):
+ """Find the config file name for this application."""
+ # For this type of Application it should be set as a class attribute.
+ if not hasattr(self, 'default_config_file_name'):
+ self.log.critical("No config filename found")
+ else:
+ self.config_file_name = self.default_config_file_name
+
+ def find_config_file_paths(self):
+ # Set the search path to the cluster directory. We should NOT
+ # include IPython.config.default here, as the default config files
+ # are ALWAYS automatically copied to the cluster directory.
+ self.config_file_paths = (self.cluster_dir,)
+
+ def pre_construct(self):
+ # The log and security dirs were set earlier, but here we put them
+ # into the config and log them.
+ config = self.master_config
+ sdir = self.cluster_dir_obj.security_dir
+ self.security_dir = config.Global.security_dir = sdir
+ ldir = self.cluster_dir_obj.log_dir
+ self.log_dir = config.Global.log_dir = ldir
+ pdir = self.cluster_dir_obj.pid_dir
+ self.pid_dir = config.Global.pid_dir = pdir
+ self.log.info("Cluster directory set to: %s" % self.cluster_dir)
+ config.Global.work_dir = unicode(expand_path(config.Global.work_dir))
+ # Change to the working directory. We do this just before construct
+ # is called so all the components there have the right working dir.
+ self.to_work_dir()
+
+ def to_work_dir(self):
+ wd = self.master_config.Global.work_dir
+ if unicode(wd) != unicode(os.getcwd()):
+ os.chdir(wd)
+ self.log.info("Changing to working dir: %s" % wd)
+
+ def start_logging(self):
+ # Remove old log files
+ if self.master_config.Global.clean_logs:
+ log_dir = self.master_config.Global.log_dir
+ for f in os.listdir(log_dir):
+ if re.match(r'%s-\d+\.(log|err|out)' % self.name, f):
+ os.remove(os.path.join(log_dir, f))
+ # Start logging to the new log file
+ if self.master_config.Global.log_to_file:
+ log_filename = self.name + u'-' + str(os.getpid()) + u'.log'
+ logfile = os.path.join(self.log_dir, log_filename)
+ open_log_file = open(logfile, 'w')
+ elif self.master_config.Global.log_url:
+ open_log_file = None
+ else:
+ open_log_file = sys.stdout
+ if open_log_file is not None:
+ self.log.removeHandler(self._log_handler)
+ self._log_handler = logging.StreamHandler(open_log_file)
+ self._log_formatter = logging.Formatter("[%(name)s] %(message)s")
+ self._log_handler.setFormatter(self._log_formatter)
+ self.log.addHandler(self._log_handler)
+
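The clean-logs branch and the log-file writer above must agree on the file-name pattern; a quick self-check with a hypothetical app name and pid:

import re

# The name written by start_logging must match the clean-logs regex above.
name, pid = u'ipcluster', 12345                  # hypothetical values
log_filename = name + u'-' + str(pid) + u'.log'  # what start_logging writes
assert re.match(r'%s-\d+\.(log|err|out)' % name, log_filename)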
+ def write_pid_file(self, overwrite=False):
+ """Create a .pid file in the pid_dir with my pid.
+
+ This must be called after pre_construct, which sets `self.pid_dir`.
+ This raises :exc:`PIDFileError` if the pid file already exists,
+ unless `overwrite` is True.
+ """
+ pid_file = os.path.join(self.pid_dir, self.name + u'.pid')
+ if os.path.isfile(pid_file):
+ pid = self.get_pid_from_file()
+ if not overwrite:
+ raise PIDFileError(
+ 'The pid file [%s] already exists. \nThis could mean that this '
+ 'server is already running with [pid=%s].' % (pid_file, pid)
+ )
+ with open(pid_file, 'w') as f:
+ self.log.info("Creating pid file: %s" % pid_file)
+ f.write(repr(os.getpid())+'\n')
+
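The pid-file methods here are designed to be used together; a hedged round-trip sketch, where `app` stands for a hypothetical fully constructed subclass instance (so `pid_dir` and `name` are already set):

import os

app.write_pid_file(overwrite=True)  # writes <pid_dir>/<name>.pid
pid = app.get_pid_from_file()       # reads the integer pid back
assert pid == os.getpid()
app.remove_pid_file()               # best-effort removal at shutdown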
+ def remove_pid_file(self):
+ """Remove the pid file.
+
+ This should be called at shutdown by registering it as a shutdown
+ callback (e.g. with :mod:`atexit`). This needs to return ``None``.
+ """
+ pid_file = os.path.join(self.pid_dir, self.name + u'.pid')
+ if os.path.isfile(pid_file):
+ try:
+ self.log.info("Removing pid file: %s" % pid_file)
+ os.remove(pid_file)
+ except OSError:
+ self.log.warn("Error removing the pid file: %s" % pid_file)
+
+ def get_pid_from_file(self):
+ """Get the pid from the pid file.
+
+ If the pid file doesn't exist a :exc:`PIDFileError` is raised.
+ """
+ pid_file = os.path.join(self.pid_dir, self.name + u'.pid')
+ if os.path.isfile(pid_file):
+ with open(pid_file, 'r') as f:
+ pid = int(f.read().strip())
+ return pid
+ else:
+ raise PIDFileError('pid file not found: %s' % pid_file)
+
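Concrete applications are expected to subclass ApplicationWithClusterDir and supply the attributes it reads (`name` feeds the log and pid file names, `default_config_file_name` drives config lookup); a minimal hypothetical subclass:

class MyClusterApp(ApplicationWithClusterDir):
    # All names below are illustrative, not taken from this diff.
    name = u'myclusterapp'  # used for log and pid file names
    description = 'Example application using a cluster directory.'
    default_config_file_name = u'myclusterapp_config.py'
    auto_create_cluster_dir = True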
592 IPython/parallel/apps/ipclusterapp.py
@@ -0,0 +1,592 @@
+#!/usr/bin/env python
+# encoding: utf-8
+"""
+The ipcluster application.
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (C) 2008-2009 The IPython Development Team
+#
+# Distributed under the terms of the BSD License. The full license is in
+# the file COPYING, distributed as part of this software.
+#-----------------------------------------------------------------------------
+
+#-----------------------------------------------------------------------------
+# Imports
+#-----------------------------------------------------------------------------
+
+import errno
</