rename MPIExecLaunchers to MPILaunchers
MPIExec is unwieldy, and unnecessarily specific, since these launchers can be
configured to use mpirun, aprun, etc.

This means you can now start an MPI cluster with just:

    $> ipcluster start --engines=MPI

The old names still work, logging deprecation warnings.

closes ipython#1137
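
For illustration, a sketch of the deprecation path in practice (warning text
taken from the DeprecatedMPILauncher format string in the launcher.py diff
below; the log prefix is elided):

    $> ipcluster start --engines=MPIExec
    ... WARNING: MPIExecEngineSetLauncher name is deprecated, use MPIEngineSetLauncher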
minrk committed Dec 12, 2011
1 parent 75fa782 commit 5b49695
Showing 5 changed files with 90 additions and 45 deletions.
51 changes: 27 additions & 24 deletions IPython/parallel/apps/ipclusterapp.py
@@ -242,25 +242,25 @@ def _engine_launcher_changed(self, name, old, new):
engine_launcher_class = DottedObjectName('LocalEngineSetLauncher',
config=True,
help="""The class for launching a set of Engines. Change this value
-to use various batch systems to launch your engines, such as PBS,SGE,MPIExec,etc.
+to use various batch systems to launch your engines, such as PBS,SGE,MPI,etc.
Each launcher class has its own set of configuration options, for making sure
it will work in your environment.
You can also write your own launcher, and specify its absolute import path,
as in 'mymodule.launcher.FTLEnginesLauncher'.
-Examples include:
+IPython's bundled examples include:
-LocalEngineSetLauncher : start engines locally as subprocesses [default]
-MPIExecEngineSetLauncher : use mpiexec to launch in an MPI environment
-PBSEngineSetLauncher : use PBS (qsub) to submit engines to a batch queue
-SGEEngineSetLauncher : use SGE (qsub) to submit engines to a batch queue
-LSFEngineSetLauncher : use LSF (bsub) to submit engines to a batch queue
-SSHEngineSetLauncher : use SSH to start the controller
-Note that SSH does *not* move the connection files
-around, so you will likely have to do this manually
-unless the machines are on a shared file system.
-WindowsHPCEngineSetLauncher : use Windows HPC
+Local : start engines locally as subprocesses [default]
+MPI : use mpiexec to launch engines in an MPI environment
+PBS : use PBS (qsub) to submit engines to a batch queue
+SGE : use SGE (qsub) to submit engines to a batch queue
+LSF : use LSF (bsub) to submit engines to a batch queue
+SSH : use SSH to start the controller
+Note that SSH does *not* move the connection files
+around, so you will likely have to do this manually
+unless the machines are on a shared file system.
+WindowsHPC : use Windows HPC
If you are using one of IPython's builtin launchers, you can specify just the
prefix, e.g:
@@ -269,7 +269,7 @@ def _engine_launcher_changed(self, name, old, new):
or:
-ipcluster start --engines 'MPIExec'
+ipcluster start --engines=MPI
"""
)
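
For illustration, the three spellings this help text allows, as they would
appear in a profile's ipcluster_config.py (a minimal sketch; the prefix form
relies on the expansion in build_launcher below):

    c = get_config()
    # full class name
    c.IPClusterEngines.engine_launcher_class = 'MPIEngineSetLauncher'
    # builtin prefix, expanded to the same class
    c.IPClusterEngines.engine_launcher_class = 'MPI'
    # absolute import path, for a custom launcher (hypothetical module)
    c.IPClusterEngines.engine_launcher_class = 'mymodule.launcher.FTLEnginesLauncher'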
@@ -307,7 +307,7 @@ def build_launcher(self, clsname, kind=None):
# not a module, presume it's the raw name in apps.launcher
if kind and kind not in clsname:
# doesn't match necessary full class name, assume it's
-# just 'PBS' or 'MPIExec' prefix:
+# just 'PBS' or 'MPI' prefix:
clsname = clsname + kind + 'Launcher'
clsname = 'IPython.parallel.apps.launcher.'+clsname
try:
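
A standalone paraphrase of the prefix expansion above (not the actual method;
the surrounding module check is folded out of this hunk):

    def expand_launcher_name(clsname, kind):
        """Expand e.g. ('MPI', 'EngineSet') to a full dotted class name."""
        if '.' not in clsname:
            # not a module path; presume a raw name in apps.launcher
            if kind and kind not in clsname:
                clsname = clsname + kind + 'Launcher'
            clsname = 'IPython.parallel.apps.launcher.' + clsname
        return clsname

    # expand_launcher_name('MPI', 'EngineSet')
    # -> 'IPython.parallel.apps.launcher.MPIEngineSetLauncher'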
@@ -451,20 +451,23 @@ def _controller_launcher_changed(self, name, old, new):
controller_launcher_class = DottedObjectName('LocalControllerLauncher',
config=True,
help="""The class for launching a Controller. Change this value if you want
-your controller to also be launched by a batch system, such as PBS,SGE,MPIExec,etc.
+your controller to also be launched by a batch system, such as PBS,SGE,MPI,etc.
Each launcher class has its own set of configuration options, for making sure
it will work in your environment.
Note that using a batch launcher for the controller *does not* put it
in the same batch job as the engines, so they will still start separately.
-Examples include:
+IPython's bundled examples include:
-LocalControllerLauncher : start engines locally as subprocesses
-MPIExecControllerLauncher : use mpiexec to launch engines in an MPI universe
-PBSControllerLauncher : use PBS (qsub) to submit engines to a batch queue
-SGEControllerLauncher : use SGE (qsub) to submit engines to a batch queue
-LSFControllerLauncher : use LSF (bsub) to submit engines to a batch queue
-SSHControllerLauncher : use SSH to start the controller
-WindowsHPCControllerLauncher : use Windows HPC
+Local : start engines locally as subprocesses
+MPI : use mpiexec to launch the controller in an MPI universe
+PBS : use PBS (qsub) to submit the controller to a batch queue
+SGE : use SGE (qsub) to submit the controller to a batch queue
+LSF : use LSF (bsub) to submit the controller to a batch queue
+SSH : use SSH to start the controller
+WindowsHPC : use Windows HPC
If you are using one of IPython's builtin launchers, you can specify just the
prefix, e.g:
@@ -473,7 +476,7 @@ def _controller_launcher_changed(self, name, old, new):
or:
-ipcluster start --controller 'MPIExec'
+ipcluster start --controller=MPI
"""
)
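
A minimal sketch of pairing batch launchers for both roles in
ipcluster_config.py (per the note above, the controller still starts as a
separate submission from the engines):

    c = get_config()
    c.IPClusterStart.controller_launcher_class = 'MPI'
    c.IPClusterEngines.engine_launcher_class = 'MPI'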
62 changes: 50 additions & 12 deletions IPython/parallel/apps/launcher.py
@@ -440,11 +440,11 @@ def _notice_engine_stopped(self, data):


#-----------------------------------------------------------------------------
-# MPIExec launchers
+# MPI launchers
#-----------------------------------------------------------------------------


-class MPIExecLauncher(LocalProcessLauncher):
+class MPILauncher(LocalProcessLauncher):
"""Launch an external process using mpiexec."""

mpi_cmd = List(['mpiexec'], config=True,
@@ -459,6 +459,18 @@ class MPIExecLauncher(LocalProcessLauncher):
help="The command line argument to the program."
)
n = Integer(1)

+    def __init__(self, *args, **kwargs):
+        # deprecation for old MPIExec names:
+        config = kwargs.get('config', {})
+        for oldname in ('MPIExecLauncher', 'MPIExecControllerLauncher', 'MPIExecEngineSetLauncher'):
+            deprecated = config.get(oldname)
+            if deprecated:
+                newname = oldname.replace('MPIExec', 'MPI')
+                config[newname].update(deprecated)
+                self.log.warn("WARNING: %s name has been deprecated, use %s", oldname, newname)
+
+        super(MPILauncher, self).__init__(*args, **kwargs)

def find_args(self):
"""Build self.args using all the fields."""
@@ -468,10 +480,10 @@ def find_args(self):
def start(self, n):
"""Start n instances of the program using mpiexec."""
self.n = n
-return super(MPIExecLauncher, self).start()
+return super(MPILauncher, self).start()


-class MPIExecControllerLauncher(MPIExecLauncher, ControllerMixin):
+class MPIControllerLauncher(MPILauncher, ControllerMixin):
"""Launch a controller using mpiexec."""

# alias back to *non-configurable* program[_args] for use in find_args()
@@ -487,11 +499,11 @@ def program_args(self):

def start(self):
"""Start the controller by profile_dir."""
self.log.info("Starting MPIExecControllerLauncher: %r" % self.args)
return super(MPIExecControllerLauncher, self).start(1)
self.log.info("Starting MPIControllerLauncher: %r", self.args)
return super(MPIControllerLauncher, self).start(1)


-class MPIExecEngineSetLauncher(MPIExecLauncher, EngineMixin):
+class MPIEngineSetLauncher(MPILauncher, EngineMixin):
"""Launch engines using mpiexec"""

# alias back to *non-configurable* program[_args] for use in find_args()
@@ -508,8 +520,34 @@ def program_args(self):
def start(self, n):
"""Start n engines by profile or profile_dir."""
self.n = n
-self.log.info('Starting MPIExecEngineSetLauncher: %r' % self.args)
-return super(MPIExecEngineSetLauncher, self).start(n)
+self.log.info('Starting MPIEngineSetLauncher: %r', self.args)
+return super(MPIEngineSetLauncher, self).start(n)

+# deprecated MPIExec names
+class DeprecatedMPILauncher(object):
+    def warn(self):
+        oldname = self.__class__.__name__
+        newname = oldname.replace('MPIExec', 'MPI')
+        self.log.warn("WARNING: %s name is deprecated, use %s", oldname, newname)
+
+class MPIExecLauncher(MPILauncher, DeprecatedMPILauncher):
+    """Deprecated, use MPILauncher"""
+    def __init__(self, *args, **kwargs):
+        super(MPIExecLauncher, self).__init__(*args, **kwargs)
+        self.warn()
+
+class MPIExecControllerLauncher(MPIControllerLauncher, DeprecatedMPILauncher):
+    """Deprecated, use MPIControllerLauncher"""
+    def __init__(self, *args, **kwargs):
+        super(MPIExecControllerLauncher, self).__init__(*args, **kwargs)
+        self.warn()
+
+class MPIExecEngineSetLauncher(MPIEngineSetLauncher, DeprecatedMPILauncher):
+    """Deprecated, use MPIEngineSetLauncher"""
+    def __init__(self, *args, **kwargs):
+        super(MPIExecEngineSetLauncher, self).__init__(*args, **kwargs)
+        self.warn()

#-----------------------------------------------------------------------------
# SSH launchers
@@ -1149,9 +1187,9 @@ def start(self):
LocalEngineSetLauncher,
]
mpi_launchers = [
-MPIExecLauncher,
-MPIExecControllerLauncher,
-MPIExecEngineSetLauncher,
+MPILauncher,
+MPIControllerLauncher,
+MPIEngineSetLauncher,
]
ssh_launchers = [
SSHLauncher,
8 changes: 4 additions & 4 deletions docs/source/parallel/parallel_mpi.txt
@@ -48,11 +48,11 @@ these things to happen.
Automatic starting using :command:`mpiexec` and :command:`ipcluster`
--------------------------------------------------------------------

-The easiest approach is to use the `MPIExec` Launchers in :command:`ipcluster`,
+The easiest approach is to use the `MPI` Launchers in :command:`ipcluster`,
which will first start a controller and then a set of engines using
:command:`mpiexec`::

-$ ipcluster start -n 4 --elauncher=MPIExecEngineSetLauncher
+$ ipcluster start -n 4 --engines=MPIEngineSetLauncher

This approach is best as interrupting :command:`ipcluster` will automatically
stop and clean up the controller and engines.
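
A quick way to confirm the engines really share one MPI world, assuming
mpi4py is installed and the cluster above is running (a sketch, not from the
docs):

    from IPython.parallel import Client
    rc = Client()
    view = rc[:]
    view.execute('from mpi4py import MPI; rank = MPI.COMM_WORLD.Get_rank()')
    print view['rank']   # e.g. [1, 0, 3, 2] -- one MPI rank per engine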
@@ -63,14 +63,14 @@ Manual starting using :command:`mpiexec`
If you want to start the IPython engines using the :command:`mpiexec`, just
do::

-$ mpiexec n=4 ipengine --mpi=mpi4py
+$ mpiexec -n 4 ipengine --mpi=mpi4py

This requires that you already have a controller running and that the FURL
files for the engines are in place. We also have built-in support for
PyTrilinos [PyTrilinos]_, which can be used (assuming it is installed) by
starting the engines with::

-$ mpiexec n=4 ipengine --mpi=pytrilinos
+$ mpiexec -n 4 ipengine --mpi=pytrilinos
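
Before involving IPython at all, it can help to confirm that plain
:command:`mpiexec` works, using a throwaway mpi4py script (hypothetical
filename):

    # mpitest.py
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    print "rank %d of %d" % (comm.Get_rank(), comm.Get_size())

run with::

    $ mpiexec -n 4 python mpitest.py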

Automatic starting using PBS and :command:`ipcluster`
------------------------------------------------------
8 changes: 4 additions & 4 deletions docs/source/parallel/parallel_process.txt
@@ -163,7 +163,7 @@ get an IPython cluster running with engines started with MPI is:

.. sourcecode:: bash

-$> ipcluster start --engines=MPIExec
+$> ipcluster start --engines=MPI

Assuming that the default MPI config is sufficient.

@@ -196,11 +196,11 @@ If these are satisfied, you can create a new profile::

and edit the file :file:`IPYTHONDIR/profile_mpi/ipcluster_config.py`.

-There, instruct ipcluster to use the MPIExec launchers by adding the lines:
+There, instruct ipcluster to use the MPI launchers by adding the lines:

.. sourcecode:: python

-c.IPClusterEngines.engine_launcher_class = 'MPIExecEngineSetLauncher'
+c.IPClusterEngines.engine_launcher_class = 'MPIEngineSetLauncher'

If the default MPI configuration is correct, then you can now start your cluster, with::

@@ -215,7 +215,7 @@ If you have a reason to also start the Controller with mpi, you can specify:

.. sourcecode:: python

-c.IPClusterStart.controller_launcher_class = 'MPIExecControllerLauncher'
+c.IPClusterStart.controller_launcher_class = 'MPIControllerLauncher'

.. note::

6 changes: 5 additions & 1 deletion docs/source/whatsnew/development.txt
@@ -140,11 +140,15 @@ Backwards incompatible changes

would now be specified as::

-IPClusterEngines.engine_launcher_class = 'MPIExec'
+IPClusterEngines.engine_launcher_class = 'MPI'
IPClusterStart.controller_launcher_class = 'SSH'

The full path will still work, and is necessary for using custom launchers not in
IPython's launcher module.

+Further, MPIExec launcher names are now prefixed with just MPI, to better match
+other batch launchers, and be generally more intuitive. The MPIExec names are
+deprecated, but continue to work.
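
For reference, the complete rename mapping (taken from the launcher.py diff
above):

    MPIExecLauncher           -> MPILauncher
    MPIExecControllerLauncher -> MPIControllerLauncher
    MPIExecEngineSetLauncher  -> MPIEngineSetLauncher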

* For embedding a shell, note that the parameter ``user_global_ns`` has been
replaced by ``user_module``, and expects a module-like object, rather than
Expand Down
