
Commit

docs: fixed some typos.
This also fixes a bug in delete_root_volume().
Kashif Rasul committed Dec 4, 2011
1 parent 13fbcb3 commit 509590e
Showing 35 changed files with 70 additions and 70 deletions.
2 changes: 1 addition & 1 deletion distribute_setup.py
@@ -256,7 +256,7 @@ def _rename_path(path):

def _remove_flat_installation(placeholder):
if not os.path.isdir(placeholder):
-log.warn('Unkown installation at %s', placeholder)
+log.warn('Unknown installation at %s', placeholder)
return False
found = False
for file in os.listdir(placeholder):
4 changes: 2 additions & 2 deletions docs/sphinx/contribute.rst
@@ -39,7 +39,7 @@ Setup a virtualenv for StarCluster development
When developing a Python project it's useful to work inside an isolated Python
environment that lives inside your *$HOME* folder. This helps to avoid
dependency version mismatches between projects and also removes the need to
-obtain root priviliges to install Python modules/packages for development.
+obtain root privileges to install Python modules/packages for development.

Fortunately there exists a couple of projects that make creating and managing
isolated Python environments quick and easy:
@@ -96,7 +96,7 @@ will also modify your current shell's environment to work with the StarCluster
virtual environment. As you can see from the *echo $PATH* command above your
PATH environment variable has been modified to include the virtual
environment's *bin* directory at the front of the path. This means when you
-type *python* or other Python-related commands (e.g. easy_install, pip, etc)
+type *python* or other Python-related commands (e.g. easy_install, pip, etc.)
you will be using the virtual environment's isolated Python installation.

To see a list of your virtual environments:
4 changes: 2 additions & 2 deletions docs/sphinx/features.rst
@@ -6,7 +6,7 @@ Features
clusters on EC2
* Support for attaching and NFS-sharing Amazon Elastic Block Storage (EBS)
volumes for persistent storage across a cluster
-* Comes with a publicly avilable Amazon Machine Image (AMI) configured for
+* Comes with a publicly available Amazon Machine Image (AMI) configured for
scientific computing
* AMI includes OpenMPI, ATLAS, Lapack, NumPy, SciPy, and other useful
libraries
@@ -38,7 +38,7 @@ Features
started.
* Multiple Instance Types - Added support for specifying instance types on a
per-node basis. Thanks to Dan Yamins for his contributions.
-* Unpartitioned Volumes - StarCluster now supportsboth partitioned and
+* Unpartitioned Volumes - StarCluster now supports both partitioned and
unpartitioned EBS volumes.
* New Plugin Hooks - Plugins can now play a part when adding/removing a node
as well as when restarting/shutting down the entire cluster by implementing
8 changes: 4 additions & 4 deletions docs/sphinx/guides/sge.rst
@@ -160,11 +160,11 @@ external programs/scripts::
echo "finishing job :D"

As you can see, this script simply executes a few commands (such as echo, date,
-cat, etc) and exits. Anything printed to the screen will be put in the job's
+cat, etc.) and exits. Anything printed to the screen will be put in the job's
stdout file by Sun Grid Engine.

Since this is just a bash script, you can put any form of logic necessary in
-the job script (i.e. if statements, while loops, for loops, etc) and you may
+the job script (i.e. if statements, while loops, for loops, etc.) and you may
call any number of external programs needed to complete the job.

Let's see how you run this new job script. Save the script above to
@@ -331,7 +331,7 @@ Submitting OpenMPI Jobs using a Parallel Environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The general workflow for running MPI code is:

-1. Compile the code using mpicc, mpicxx, mpif77, mpif90, etc
+1. Compile the code using mpicc, mpicxx, mpif77, mpif90, etc.
2. Copy the resulting executable to the same path on all nodes or to an
NFS-shared location on the master node

@@ -354,7 +354,7 @@ where the hostfile looks something like::
node003 slots=2

However, when using an SGE parallel environment with OpenMPI **you no longer
-have to specify the -np, -hostfile, -host, etc options to mpirun**. This is
+have to specify the -np, -hostfile, -host, etc. options to mpirun**. This is
because SGE will *automatically* assign hosts and processors to be used by
OpenMPI for your job. You also do not need to pass the --byslot and --bynode
options to mpirun given that these mechanisms are now handled by the *fill_up*
2 changes: 1 addition & 1 deletion docs/sphinx/installation.rst
@@ -1,7 +1,7 @@
**********************
Installing StarCluster
**********************
-StarCluster is available via the Python Package Index (PyPI) and comes with two
+StarCluster is available via the PYthon Package Index (PyPI) and comes with two
public Amazon EC2 AMIs (i386 and x86_64). Below are instructions for
installing the latest stable release of StarCluster via PyPI (**recommended**).
There are also instructions for installing the latest development version from
2 changes: 1 addition & 1 deletion docs/sphinx/manual/addremovenode.rst
@@ -82,7 +82,7 @@ tag* representing the cluster you want to remove nodes from and a node *alias*::
>>> Removing node001 from known_hosts files
>>> Removing node001 from /etc/hosts
>>> Removing node001 from NFS
->>> Cancelling spot request sir-3567ba14
+>>> Canceling spot request sir-3567ba14
>>> Terminating node: node001 (i-8bec7ce5)

The above command takes care to properly remove the node from the cluster by
6 changes: 3 additions & 3 deletions docs/sphinx/manual/configuration.rst
@@ -441,7 +441,7 @@ world for the ``smallcluster`` template:
A permission section specifies a port range to open to a given network range
(cidr_ip). By default, the network range is set to ``0.0.0.0/0`` which
-represents any ip address (ie the "world"). In the above example, we created a
+represents any ip address (i.e. the "world"). In the above example, we created a
permission section called ``www`` that opens port 80 to the "world" by setting
the from_port and to_port both to be 80. You can restrict the ip addresses
that the rule applies to by specifying the proper cidr_ip setting. In the above
@@ -624,13 +624,13 @@ Finally, to (optionally) create new EBS volumes in the target region::
Given that a *cluster template* references these region-specific items you must
either override the relevant settings at the command line using the *start*
command's option flags or create separate *cluster templates* configured for
-each region you use. To override the releveant settings at the command line::
+each region you use. To override the relevant settings at the command line::

$ starcluster -r us-west-1 start -k myuswestkey -n ami-99999999

If you often use multiple regions you will most likely want to create separate
*cluster templates* for each region by extending a common template,
-*smallcluster* for examle, and overriding the relevant settings:
+*smallcluster* for example, and overriding the relevant settings:

.. code-block:: ini
2 changes: 1 addition & 1 deletion docs/sphinx/manual/getting_started.rst
@@ -133,7 +133,7 @@ the CLUSTER_USER you specified.

To test this out, let's login to the master node and attempt to run the
hostname command via SSH on node001 without a password for both root and
-sgeadmin (ie CLUSTER_USER)::
+sgeadmin (i.e. CLUSTER_USER)::

$ starcluster sshmaster mycluster
root@master:~# ssh node001 hostname
6 changes: 3 additions & 3 deletions docs/sphinx/manual/load_balancer.rst
@@ -86,7 +86,7 @@ Load Balancer Statistics
========================
The *loadbalance* command supports outputting various load balancing stats over
time such as the number of nodes, number of running jobs, number of queued
-jobs, etc while it's running:
+jobs, etc. while it's running:

.. image:: ../_static/balancer_visualizer.png

@@ -205,7 +205,7 @@ complete the maximum workload from the queue, and use 75% of the hour you have
already paid for.

Leaving a node up for this amount of time also increases the stability of the
-cluster. It is detrimental to the cluster and wasteful to be continuosly adding
+cluster. It is detrimental to the cluster and wasteful to be continuously adding
and removing nodes.

The Process of Adding a Node
@@ -223,7 +223,7 @@ Adding a new node is a multi-stage process:
the master, and then exportfs so the shares are open to the slave nodes.
#. Mount the NFS shares on the new node.
#. Configure SGE: inform the master of the new host's address, and inform the
-new host of the master, and excute the sge commands to establish
+new host of the master, and execute the sge commands to establish
communications.

The Process of Removing a Node
2 changes: 1 addition & 1 deletion docs/sphinx/manual/shell_completion.rst
@@ -77,7 +77,7 @@ list of suggestions::
-i mediumcluster -x
-I molsim

-In the example above, *smallcluster*, *mediumcluster*, *largecluster*, etc are
+In the example above, *smallcluster*, *mediumcluster*, *largecluster*, etc. are
all cluster templates defined in ~/.starcluster/config. Typing an **s**
character after the *start* action will autocomplete the first argument to
*smallcluster*
2 changes: 1 addition & 1 deletion docs/sphinx/manual/volumes.rst
@@ -67,7 +67,7 @@ volume.
The **createvolume** command simply formats the *entire volume* using all
of the space on the device rather than creating partitions. This makes it
easier to resize the volume and expand the filesystem later on if you run
-out of diskspace.
+out of disk space.

To create and format a new volume simply specify a volume size in GB and the
availability zone to create the volume in::
2 changes: 1 addition & 1 deletion docs/sphinx/overview.rst
@@ -69,7 +69,7 @@ This means you can simply login to the master node of a cluster::

$ starcluster sshmaster mycluster

-and connect to any of the nodes (e.g. node001, node002, etc) simply by
+and connect to any of the nodes (e.g. node001, node002, etc.) simply by
running::

$ ssh node001
4 changes: 2 additions & 2 deletions starcluster/awsutils.py
@@ -703,7 +703,7 @@ def get_zones(self, filters=None):

def get_zone(self, zone):
"""
-Return zone object respresenting an EC2 availability zone
+Return zone object representing an EC2 availability zone
Raises exception.ZoneDoesNotExist if not successful
"""
try:
@@ -716,7 +716,7 @@ def get_zone_or_none(self, zone):

def get_zone_or_none(self, zone):
"""
-Return zone object respresenting an EC2 availability zone
+Return zone object representing an EC2 availability zone
Returns None if unsuccessful
"""
try:
6 changes: 3 additions & 3 deletions starcluster/balancers/sge/__init__.py
@@ -87,7 +87,7 @@ def parse_qstat(self, string, fields=None):

def job_multiply(self, hash):
"""
-this function deals with sge jobs with a task range, ie qsub -t 1-20:1
+this function deals with sge jobs with a task range, i.e. qsub -t 1-20:1
makes 20 jobs. self.jobs needs to represent that it is 20 jobs instead
of just 1.
"""
@@ -494,7 +494,7 @@ def get_stats(self):
it will feed these stats to SGEStats, which parses the XML.
it will return two arrays: one of hosts, each host has a hash with its
host information inside. The job array contains a hash for every job,
-containing statistics about the job name, priority, etc
+containing statistics about the job name, priority, etc.
"""
log.debug("starting get_stats")
master = self._cluster.master_node
@@ -518,7 +518,7 @@ def get_stats(self):
ignore_exit_status=True,
source_profile=True))
except Exception, e:
-log.error("Error occured getting SGE stats via ssh. "
+log.error("Error occurred getting SGE stats via ssh. "
"Cluster terminated?")
log.error(e)
return -1
14 changes: 7 additions & 7 deletions starcluster/cluster.py
@@ -377,7 +377,7 @@ def zone(self):
availability zone between those volumes. If an availability zone
is explicitly specified in the config and does not match the common
availability zone of the volumes, an exception is raised. If all
-volumes are not in the same availabilty zone an exception is raised.
+volumes are not in the same availability zone an exception is raised.
If no volumes are specified, returns the user specified availability
zone if it exists.
"""
@@ -408,7 +408,7 @@ def load_volumes(self, vols):
not specified.
This method assigns the first volume to /dev/sdz, second to /dev/sdy,
-etc for all volumes that do not include a device/partition setting
+etc. for all volumes that do not include a device/partition setting
"""
devices = ['/dev/sd%s' % s for s in string.lowercase]
devmap = {}
@@ -574,7 +574,7 @@ def load_receipt(self, load_plugins=True):
self.plugins = self.load_plugins(self._plugins)
except exception.PluginError, e:
log.warn(e)
-log.warn("An error occured while loading plugins")
+log.warn("An error occurred while loading plugins")
log.warn("Not running any plugins")
except Exception, e:
raise exception.ClusterReceiptError(
@@ -849,7 +849,7 @@ def remove_nodes(self, nodes, terminate=True):
if not terminate:
continue
if node.spot_id:
-log.info("Cancelling spot request %s" % node.spot_id)
+log.info("Canceling spot request %s" % node.spot_id)
node.get_spot_request().cancel()
node.terminate()

@@ -1325,7 +1325,7 @@ def stop_cluster(self, terminate_unstoppable=False):
def terminate_cluster(self):
"""
Destroy this cluster by first detaching all volumes, shutting down all
-instances, cancelling all spot requests (if any), removing its
+instances, canceling all spot requests (if any), removing its
placement group (if any), and removing its security group.
"""
try:
@@ -1338,7 +1338,7 @@ def terminate_cluster(self):
node.terminate()
for spot in self.spot_requests:
if spot.state not in ['cancelled', 'closed']:
-log.info("Cancelling spot instance request: %s" % spot.id)
+log.info("Canceling spot instance request: %s" % spot.id)
spot.cancel()
sg = self.ec2.get_group_or_none(self._security_group)
pg = self.ec2.get_placement_group_or_none(self._security_group)
@@ -1481,7 +1481,7 @@ def run_plugin(self, plugin, name='', method_name='run', node=None):
except exception.MasterDoesNotExist:
raise
except Exception, e:
-log.error("Error occured while running plugin '%s':" % plugin_name)
+log.error("Error occurred while running plugin '%s':" % plugin_name)
if isinstance(e, exception.ThreadPoolException):
e.print_excs()
log.debug(e.format_excs())
2 changes: 1 addition & 1 deletion starcluster/clustersetup.py
@@ -253,7 +253,7 @@ def _setup_ebs_volumes(self):
log.error(
"volume has more than one partition, please specify "
"which partition to use (e.g. partition=0, "
-"partition=1, etc) in the volume's config")
+"partition=1, etc.) in the volume's config")
continue
elif not master.ssh.path_exists(volume_partition):
log.warn("Cannot find partition %s on volume %s" % \
2 changes: 1 addition & 1 deletion starcluster/commands/addnode.py
@@ -55,7 +55,7 @@ def addopts(self, parser):
parser.add_option("-a", "--alias", dest="alias",
action="append", type="string", default=[],
help=("alias to give to the new node " + \
-"(e.g. node007, mynode, etc)"))
+"(e.g. node007, mynode, etc.)"))
parser.add_option("-n", "--num-nodes", dest="num_nodes",
action="store", type="int", default=1,
help=("number of new nodes to launch"))
2 changes: 1 addition & 1 deletion starcluster/commands/removeimage.py
@@ -37,7 +37,7 @@ class CmdRemoveImage(ImageCompleter):
def addopts(self, parser):
parser.add_option("-p", "--pretend", dest="pretend",
action="store_true", default=False,
-help="pretend run, dont actually remove anything")
+help="pretend run, do not actually remove anything")
parser.add_option("-c", "--confirm", dest="confirm",
action="store_true", default=False,
help="do not prompt for confirmation, "
2 changes: 1 addition & 1 deletion starcluster/commands/runplugin.py
@@ -5,7 +5,7 @@ class CmdRunPlugin(CmdBase):
"""
runplugin <plugin_name> <cluster_tag>
-Run a StarCluster plugin on a runnning cluster
+Run a StarCluster plugin on a running cluster
plugin_name - name of plugin section defined in the config
cluster_tag - tag name of a running StarCluster
4 changes: 2 additions & 2 deletions starcluster/config.py
@@ -311,7 +311,7 @@ def _load_extends_settings(self, section_name, store):
step is a group of settings for a section in the form of a dictionary.
A 'master' dictionary is updated with the settings at each step. This
causes the next group of settings to override the previous, and so on.
-The 'section_name' settings are at the top of the dep tree.
+The 'section_name' settings are at the top of the dependency tree.
"""
section = store[section_name]
extends = section.get('extends')
@@ -522,7 +522,7 @@ def _load_sections(self, section_prefix, section_settings,
def _load_cluster_sections(self, cluster_sections):
"""
Loads all cluster sections. Similar to _load_sections but also handles
-populating specified keypair,volume,plugins,permissions,etc settings
+populating specified keypair, volume, plugins, permissions, etc. settings
"""
clusters = cluster_sections
cluster_store = AttributeDict()
4 changes: 2 additions & 2 deletions starcluster/exception.py
@@ -117,7 +117,7 @@ def __init__(self, snap_id):
class BucketAlreadyExists(AWSError):
def __init__(self, bucket_name):
self.msg = "bucket with name '%s' already exists on S3\n" % bucket_name
-self.msg += "(NOTE: S3's bucket namepsace is shared by all AWS users)"
+self.msg += "(NOTE: S3's bucket namespace is shared by all AWS users)"


class BucketDoesNotExist(AWSError):
@@ -471,7 +471,7 @@ def format_excs(self):
excs = []
for exception in self.exceptions:
e, tb_msg, jobid = exception
-excs.append('error occured in job (id=%s): %s' % (jobid, str(e)))
+excs.append('error occurred in job (id=%s): %s' % (jobid, str(e)))
excs.append(tb_msg)
return '\n'.join(excs)

2 changes: 1 addition & 1 deletion starcluster/image.py
@@ -212,7 +212,7 @@ def create_image(self, size=15):
return self._create_image_from_ebs(size)
return self._create_image_from_instance_store(size)
except:
-log.error("Error occured while creating image")
+log.error("Error occurred while creating image")
if self._snap:
log.error("Removing generated snapshot '%s'" % self._snap)
self._snap.delete()
6 changes: 3 additions & 3 deletions starcluster/iptools.py
@@ -22,7 +22,7 @@
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
-"""Utitlities for dealing with ip addresses.
+"""Utilities for dealing with ip addresses.
Functions:
- validate_ip: Validate a dotted-quad ip address.
@@ -89,7 +89,7 @@ def validate_ip(s):
"""Validate a dotted-quad ip address.
The string is considered a valid dotted-quad address if it consists of
-one to four octets (0-255) seperated by periods (.).
+one to four octets (0-255) separated by periods (.).
>>> validate_ip('127.0.0.1')
@@ -128,7 +128,7 @@ def validate_cidr(s):
"""Validate a CIDR notation ip address.
The string is considered a valid CIDR address if it consists of one to
-four octets (0-255) seperated by periods (.) followed by a forward slash
+four octets (0-255) separated by periods (.) followed by a forward slash
(/) and a bit mask length (1-32).