Scalable cluster administration Python framework — Manage node sets, node groups and execute commands on cluster nodes in parallel.

Worker: add new StreamWorker class

The StreamWorker manages a single group of streams (using one
EngineClient internally). These streams can be configured in read or
write modes. StreamWorker doesn't execute any commands, the streams have
to be already established.
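The idea of managing already-established streams without spawning a command can be illustrated outside of ClusterShell with a small select() loop over open file descriptors. This is a standalone sketch only (the `pump_streams` helper is hypothetical), not the StreamWorker API itself:

```python
# Minimal sketch of the stream-handling idea behind StreamWorker:
# multiplex several already-established read streams with select(),
# without executing any command. Hypothetical helper, for illustration.
import os
import select

def pump_streams(read_fds):
    """Read from established file descriptors until all reach EOF."""
    buffers = {fd: b"" for fd in read_fds}
    open_fds = set(read_fds)
    while open_fds:
        ready, _, _ = select.select(list(open_fds), [], [])
        for fd in ready:
            chunk = os.read(fd, 4096)
            if chunk:
                buffers[fd] += chunk
            else:                     # EOF: peer closed the stream
                open_fds.discard(fd)
    return buffers

# Demo with two pipes standing in for gateway stdin/stdout streams.
r1, w1 = os.pipe()
r2, w2 = os.pipe()
os.write(w1, b"hello")
os.write(w2, b"world")
os.close(w1)
os.close(w2)
result = pump_streams([r1, r2])
```

A gateway process works on the same principle: it never launches the command itself, it only pumps bytes between streams that the parent ssh process established.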

StreamWorker is now an important class for Tree mode support as it is
used directly in the module. The gateway acts as a
StreamWorker reading from stdin (from the parent ssh process) and
writing to stdout (back to the parent ssh process).

It is now also used as a base class for WorkerSimple/WorkerPopen to
factor out common code.

This commit also renames previously introduced Task default option
'worker' to 'distant_worker'.

Change-Id: Icfd6613e06e01cdc250510560acbbc39c5554035
Latest commit f7aea2f25d, authored by @thiell


 ClusterShell 1.6 Python Library and Tools

ClusterShell is an event-driven open source Python library, designed to run
local or distant commands in parallel on server farms or on large Linux
clusters. It will take care of common issues encountered on HPC clusters, such
as operating on groups of nodes, running distributed commands using optimized
execution algorithms, as well as gathering results and merging identical
outputs, or retrieving return codes. ClusterShell takes advantage of existing
remote shell facilities already installed on your systems, like SSH.

ClusterShell's primary goal is to improve the administration of high-
performance clusters by providing a lightweight but scalable Python API for
developers. It also provides clush, clubak and nodeset, three convenient
command-line tools that allow traditional shell scripts to benefit from some
of the library features.

 Requirements (v1.6)

 * GNU/Linux, *BSD, Mac OS X, etc.

 * OpenSSH (ssh/scp)

 * Python 2.x (x >= 4)


ClusterShell is distributed under the CeCILL-C license, a French transposition
of the GNU LGPL, and is fully LGPL-compatible (see Licence_CeCILL-C_V1-en.txt).


When possible, please use the RPM/deb package distribution:

Otherwise in the source directory, use:

    # python setup.py install
    # cp -r conf /etc/clustershell

For installation on Mac OS X, please see:

 Test Suite

Regression testing scripts are available in the 'tests' directory:

    $ cd tests
    $ nosetests -sv <test_file>.py
    $ nosetests -sv --all-modules

Passwordless 'ssh localhost' must work without any warning for the "remote" tests to run.


Local API documentation is available, just type:

    $ pydoc ClusterShell

Online API documentation (epydoc) is available here:

 ClusterShell interactively

Python 2.7.1 (r271:86832, Apr 12 2011, 16:15:16)
[GCC 4.6.0 20110331 (Red Hat 4.6.0-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from ClusterShell.Task import task_self
>>> from ClusterShell.NodeSet import NodeSet
>>> task = task_self()
>>> task.run("/bin/uname -r", nodes="linux[4-6,32-39]")
<ClusterShell.Worker.Ssh.WorkerSsh object at 0x20a5e90>
>>> for buf, key in task.iter_buffers():
...     print NodeSet.fromlist(key), buf

linux[4-6] 2.6.32-71.el6.x86_64
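The merging of identical outputs shown above can be sketched in plain Python. `merge_outputs` and the sample data are hypothetical, for illustration only; this is not the actual Task.iter_buffers() implementation:

```python
# Sketch of the output-merging idea: group nodes whose command output
# is identical, keyed by the output buffer. Illustrative data only.
from collections import defaultdict

def merge_outputs(results):
    """Map each distinct output buffer to the sorted nodes producing it."""
    merged = defaultdict(list)
    for node, buf in results.items():
        merged[buf].append(node)
    return {buf: sorted(nodes) for buf, nodes in merged.items()}

# Hypothetical per-node results, standing in for gathered command output.
results = {
    "linux4": "2.6.32-71.el6.x86_64",
    "linux5": "2.6.32-71.el6.x86_64",
    "linux6": "2.6.32-71.el6.x86_64",
}
merged = merge_outputs(results)
```

Grouping by output buffer rather than by node is what lets identical lines from many nodes collapse into a single labeled block of output.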

 ClusterShell Tools

Powerful tools are provided: clush, nodeset and clubak.

* clush is a friendly and full-featured parallel shell (see: man clush).
  If in doubt, just check if your other parallel tools can do things like:
  # tar -czf - dir | clush -w node[10-44] tar -C /tmp -xzvf -

* nodeset is used to deal with cluster node sets; it can be bound to
  external groups (see: man nodeset and man groups.conf).

* clubak formats output from clush/pdsh-like commands (a feature already
  included in clush with -b); see man clubak.
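As a rough illustration of the bracketed node-range syntax these tools understand, here is a simplified, hypothetical `expand` function; the real NodeSet class additionally supports zero-padding, stepped ranges, set operations, and multi-dimensional patterns:

```python
# Simplified sketch of node-set expansion: turn "node[1-3,7]" into
# node names. Hypothetical helper; not the ClusterShell NodeSet class.
import re

def expand(pattern):
    """Expand a single bracketed range pattern into a list of node names."""
    m = re.fullmatch(r"(\w+)\[([\d,\-]+)\](\w*)", pattern)
    if not m:
        return [pattern]              # no range part: a single node name
    prefix, ranges, suffix = m.groups()
    nodes = []
    for part in ranges.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            nodes.extend("%s%d%s" % (prefix, i, suffix)
                         for i in range(int(lo), int(hi) + 1))
        else:
            nodes.append(prefix + part + suffix)
    return nodes
```

For example, expand("node[1-3,7]") yields the four names node1, node2, node3, node7, while a plain name like "admin" passes through unchanged.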


Main web site:

Github source repository:

Github Wiki:

Github issue tracking system:

Project page:

Python Package Index (PyPI) link:

ClusterShell was born along with Shine, a scalable Lustre FS admin tool:


Stephane Thiell    <>
Aurelien Degremont <>
Henri Doreau       <>

CEA/DAM 2010, 2011, 2012 -