docs: fixed some typos.

This also fixes a bug in delete_root_volume().
commit 509590e2abfc89e3d55c149f165f0f3291588371 (1 parent: 13fbcb3)
@kashif authored
Showing with 70 additions and 70 deletions.
  1. +1 −1  distribute_setup.py
  2. +2 −2 docs/sphinx/contribute.rst
  3. +2 −2 docs/sphinx/features.rst
  4. +4 −4 docs/sphinx/guides/sge.rst
  5. +1 −1  docs/sphinx/installation.rst
  6. +1 −1  docs/sphinx/manual/addremovenode.rst
  7. +3 −3 docs/sphinx/manual/configuration.rst
  8. +1 −1  docs/sphinx/manual/getting_started.rst
  9. +3 −3 docs/sphinx/manual/load_balancer.rst
  10. +1 −1  docs/sphinx/manual/shell_completion.rst
  11. +1 −1  docs/sphinx/manual/volumes.rst
  12. +1 −1  docs/sphinx/overview.rst
  13. +2 −2 starcluster/awsutils.py
  14. +3 −3 starcluster/balancers/sge/__init__.py
  15. +7 −7 starcluster/cluster.py
  16. +1 −1  starcluster/clustersetup.py
  17. +1 −1  starcluster/commands/addnode.py
  18. +1 −1  starcluster/commands/removeimage.py
  19. +1 −1  starcluster/commands/runplugin.py
  20. +2 −2 starcluster/config.py
  21. +2 −2 starcluster/exception.py
  22. +1 −1  starcluster/image.py
  23. +3 −3 starcluster/iptools.py
  24. +1 −1  starcluster/logger.py
  25. +3 −3 starcluster/node.py
  26. +2 −2 starcluster/optcomplete.py
  27. +1 −1  starcluster/plugins/mysql.py
  28. +1 −1  starcluster/plugins/pkginstaller.py
  29. +5 −5 starcluster/progressbar.py
  30. +6 −6 starcluster/scp.py
  31. +1 −1  starcluster/tests/test_config.py
  32. +1 −1  starcluster/threadpool.py
  33. +1 −1  starcluster/utils.py
  34. +2 −2 starcluster/volume.py
  35. +1 −1  starcluster/webtools.py
2  distribute_setup.py
@@ -256,7 +256,7 @@ def _rename_path(path):
def _remove_flat_installation(placeholder):
if not os.path.isdir(placeholder):
- log.warn('Unkown installation at %s', placeholder)
+ log.warn('Unknown installation at %s', placeholder)
return False
found = False
for file in os.listdir(placeholder):
4 docs/sphinx/contribute.rst
@@ -39,7 +39,7 @@ Setup a virtualenv for StarCluster development
When developing a Python project it's useful to work inside an isolated Python
environment that lives inside your *$HOME* folder. This helps to avoid
dependency version mismatches between projects and also removes the need to
-obtain root priviliges to install Python modules/packages for development.
+obtain root privileges to install Python modules/packages for development.
Fortunately there exists a couple of projects that make creating and managing
isolated Python environments quick and easy:
@@ -96,7 +96,7 @@ will also modify your current shell's environment to work with the StarCluster
virtual environment. As you can see from the *echo $PATH* command above your
PATH environment variable has been modified to include the virtual
environment's *bin* directory at the front of the path. This means when you
-type *python* or other Python-related commands (e.g. easy_install, pip, etc)
+type *python* or other Python-related commands (e.g. easy_install, pip, etc.)
you will be using the virtual environment's isolated Python installation.
To see a list of your virtual environments:
4 docs/sphinx/features.rst
@@ -6,7 +6,7 @@ Features
clusters on EC2
* Support for attaching and NFS-sharing Amazon Elastic Block Storage (EBS)
volumes for persistent storage across a cluster
- * Comes with a publicly avilable Amazon Machine Image (AMI) configured for
+ * Comes with a publicly available Amazon Machine Image (AMI) configured for
scientific computing
* AMI includes OpenMPI, ATLAS, Lapack, NumPy, SciPy, and other useful
libraries
@@ -38,7 +38,7 @@ Features
started.
* Multiple Instance Types - Added support for specifying instance types on a
per-node basis. Thanks to Dan Yamins for his contributions.
- * Unpartitioned Volumes - StarCluster now supportsboth partitioned and
+ * Unpartitioned Volumes - StarCluster now supports both partitioned and
unpartitioned EBS volumes.
* New Plugin Hooks - Plugins can now play a part when adding/removing a node
as well as when restarting/shutting down the entire cluster by implementing
8 docs/sphinx/guides/sge.rst
@@ -160,11 +160,11 @@ external programs/scripts::
echo "finishing job :D"
As you can see, this script simply executes a few commands (such as echo, date,
-cat, etc) and exits. Anything printed to the screen will be put in the job's
+cat, etc.) and exits. Anything printed to the screen will be put in the job's
stdout file by Sun Grid Engine.
Since this is just a bash script, you can put any form of logic necessary in
-the job script (i.e. if statements, while loops, for loops, etc) and you may
+the job script (i.e. if statements, while loops, for loops, etc.) and you may
call any number of external programs needed to complete the job.
Let's see how you run this new job script. Save the script above to
@@ -331,7 +331,7 @@ Submitting OpenMPI Jobs using a Parallel Environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The general workflow for running MPI code is:
-1. Compile the code using mpicc, mpicxx, mpif77, mpif90, etc
+1. Compile the code using mpicc, mpicxx, mpif77, mpif90, etc.
2. Copy the resulting executable to the same path on all nodes or to an
NFS-shared location on the master node
@@ -354,7 +354,7 @@ where the hostfile looks something like::
node003 slots=2
However, when using an SGE parallel environment with OpenMPI **you no longer
-have to specify the -np, -hostfile, -host, etc options to mpirun**. This is
+have to specify the -np, -hostfile, -host, etc. options to mpirun**. This is
because SGE will *automatically* assign hosts and processors to be used by
OpenMPI for your job. You also do not need to pass the --byslot and --bynode
options to mpirun given that these mechanisms are now handled by the *fill_up*
2  docs/sphinx/installation.rst
@@ -1,7 +1,7 @@
**********************
Installing StarCluster
**********************
-StarCluster is available via the PYthon Package Index (PyPI) and comes with two
+StarCluster is available via the Python Package Index (PyPI) and comes with two
public Amazon EC2 AMIs (i386 and x86_64). Below are instructions for
installing the latest stable release of StarCluster via PyPI (**recommended**).
There are also instructions for installing the latest development version from
2  docs/sphinx/manual/addremovenode.rst
@@ -82,7 +82,7 @@ tag* representing the cluster you want to remove nodes from and a node *alias*::
>>> Removing node001 from known_hosts files
>>> Removing node001 from /etc/hosts
>>> Removing node001 from NFS
- >>> Cancelling spot request sir-3567ba14
+ >>> Canceling spot request sir-3567ba14
>>> Terminating node: node001 (i-8bec7ce5)
The above command takes care to properly remove the node from the cluster by
6 docs/sphinx/manual/configuration.rst
@@ -441,7 +441,7 @@ world for the ``smallcluster`` template:
A permission section specifies a port range to open to a given network range
(cidr_ip). By default, the network range is set to ``0.0.0.0/0`` which
-represents any ip address (ie the "world"). In the above example, we created a
+represents any ip address (i.e. the "world"). In the above example, we created a
permission section called ``www`` that opens port 80 to the "world" by setting
the from_port and to_port both to be 80. You can restrict the ip addresses
that the rule applies to by specifying the proper cidr_ip setting. In the above
@@ -624,13 +624,13 @@ Finally, to (optionally) create new EBS volumes in the target region::
Given that a *cluster template* references these region-specific items you must
either override the relevant settings at the command line using the *start*
command's option flags or create separate *cluster templates* configured for
-each region you use. To override the releveant settings at the command line::
+each region you use. To override the relevant settings at the command line::
$ starcluster -r us-west-1 start -k myuswestkey -n ami-99999999
If you often use multiple regions you will most likely want to create separate
*cluster templates* for each region by extending a common template,
-*smallcluster* for examle, and overriding the relevant settings:
+*smallcluster* for example, and overriding the relevant settings:
.. code-block:: ini
2  docs/sphinx/manual/getting_started.rst
@@ -133,7 +133,7 @@ the CLUSTER_USER you specified.
To test this out, let's login to the master node and attempt to run the
hostname command via SSH on node001 without a password for both root and
-sgeadmin (ie CLUSTER_USER)::
+sgeadmin (i.e. CLUSTER_USER)::
$ starcluster sshmaster mycluster
root@master:~# ssh node001 hostname
6 docs/sphinx/manual/load_balancer.rst
@@ -86,7 +86,7 @@ Load Balancer Statistics
========================
The *loadbalance* command supports outputting various load balancing stats over
time such as the number of nodes, number of running jobs, number of queued
-jobs, etc while it's running:
+jobs, etc. while it's running:
.. image:: ../_static/balancer_visualizer.png
@@ -205,7 +205,7 @@ complete the maximum workload from the queue, and use 75% of the hour you have
already paid for.
Leaving a node up for this amount of time also increases the stability of the
-cluster. It is detrimental to the cluster and wasteful to be continuosly adding
+cluster. It is detrimental to the cluster and wasteful to be continuously adding
and removing nodes.
The Process of Adding a Node
@@ -223,7 +223,7 @@ Adding a new node is a multi-stage process:
the master, and then exportfs so the shares are open to the slave nodes.
#. Mount the NFS shares on the new node.
#. Configure SGE: inform the master of the new host's address, and inform the
- new host of the master, and excute the sge commands to establish
+ new host of the master, and execute the sge commands to establish
communications.
The Process of Removing a Node
2  docs/sphinx/manual/shell_completion.rst
@@ -77,7 +77,7 @@ list of suggestions::
-i mediumcluster -x
-I molsim
-In the example above, *smallcluster*, *mediumcluster*, *largecluster*, etc are
+In the example above, *smallcluster*, *mediumcluster*, *largecluster*, etc. are
all cluster templates defined in ~/.starcluster/config. Typing an **s**
character after the *start* action will autocomplete the first argument to
*smallcluster*
2  docs/sphinx/manual/volumes.rst
@@ -67,7 +67,7 @@ volume.
The **createvolume** command simply formats the *entire volume* using all
of the space on the device rather than creating partitions. This makes it
easier to resize the volume and expand the filesystem later on if you run
- out of diskspace.
+ out of disk space.
To create and format a new volume simply specify a volume size in GB and the
availability zone to create the volume in::
2  docs/sphinx/overview.rst
@@ -69,7 +69,7 @@ This means you can simply login to the master node of a cluster::
$ starcluster sshmaster mycluster
-and connect to any of the nodes (e.g. node001, node002, etc) simply by
+and connect to any of the nodes (e.g. node001, node002, etc.) simply by
running::
$ ssh node001
4 starcluster/awsutils.py
@@ -703,7 +703,7 @@ def get_zones(self, filters=None):
def get_zone(self, zone):
"""
- Return zone object respresenting an EC2 availability zone
+ Return zone object representing an EC2 availability zone
Raises exception.ZoneDoesNotExist if not successful
"""
try:
@@ -716,7 +716,7 @@ def get_zone(self, zone):
def get_zone_or_none(self, zone):
"""
- Return zone object respresenting an EC2 availability zone
+ Return zone object representing an EC2 availability zone
Returns None if unsuccessful
"""
try:
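The two accessors differ only in failure mode. A minimal usage sketch (the ``ec2`` connection object and the zone name are assumed for illustration)::

    from starcluster import exception

    # get_zone raises on a bad zone; get_zone_or_none returns None
    try:
        zone = ec2.get_zone('us-east-1a')
    except exception.ZoneDoesNotExist:
        zone = None
    # equivalent shortcut using the second accessor
    zone = ec2.get_zone_or_none('us-east-1a')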
6 starcluster/balancers/sge/__init__.py
@@ -87,7 +87,7 @@ def parse_qstat(self, string, fields=None):
def job_multiply(self, hash):
"""
- this function deals with sge jobs with a task range, ie qsub -t 1-20:1
+ this function deals with sge jobs with a task range, i.e. qsub -t 1-20:1
makes 20 jobs. self.jobs needs to represent that it is 20 jobs instead
of just 1.
"""
@@ -494,7 +494,7 @@ def get_stats(self):
it will feed these stats to SGEStats, which parses the XML.
it will return two arrays: one of hosts, each host has a hash with its
host information inside. The job array contains a hash for every job,
- containing statistics about the job name, priority, etc
+ containing statistics about the job name, priority, etc.
"""
log.debug("starting get_stats")
master = self._cluster.master_node
@@ -518,7 +518,7 @@ def get_stats(self):
ignore_exit_status=True,
source_profile=True))
except Exception, e:
- log.error("Error occured getting SGE stats via ssh. "
+ log.error("Error occurred getting SGE stats via ssh. "
"Cluster terminated?")
log.error(e)
return -1
14 starcluster/cluster.py
@@ -377,7 +377,7 @@ def zone(self):
availability zone between those volumes. If an availability zone
is explicitly specified in the config and does not match the common
availability zone of the volumes, an exception is raised. If all
- volumes are not in the same availabilty zone an exception is raised.
+ volumes are not in the same availability zone an exception is raised.
If no volumes are specified, returns the user specified availability
zone if it exists.
"""
@@ -408,7 +408,7 @@ def load_volumes(self, vols):
not specified.
This method assigns the first volume to /dev/sdz, second to /dev/sdy,
- etc for all volumes that do not include a device/partition setting
+ etc. for all volumes that do not include a device/partition setting
"""
devices = ['/dev/sd%s' % s for s in string.lowercase]
devmap = {}
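The assignment order described in the docstring follows from popping devices off the end of that list. A short sketch under the same convention (volume ids are hypothetical)::

    import string

    devices = ['/dev/sd%s' % s for s in string.lowercase]
    for vol_id in ['vol-aaaa1111', 'vol-bbbb2222']:
        print '%s -> %s' % (vol_id, devices.pop())
    # vol-aaaa1111 -> /dev/sdz
    # vol-bbbb2222 -> /dev/sdy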
@@ -574,7 +574,7 @@ def load_receipt(self, load_plugins=True):
self.plugins = self.load_plugins(self._plugins)
except exception.PluginError, e:
log.warn(e)
- log.warn("An error occured while loading plugins")
+ log.warn("An error occurred while loading plugins")
log.warn("Not running any plugins")
except Exception, e:
raise exception.ClusterReceiptError(
@@ -849,7 +849,7 @@ def remove_nodes(self, nodes, terminate=True):
if not terminate:
continue
if node.spot_id:
- log.info("Cancelling spot request %s" % node.spot_id)
+ log.info("Canceling spot request %s" % node.spot_id)
node.get_spot_request().cancel()
node.terminate()
@@ -1325,7 +1325,7 @@ def stop_cluster(self, terminate_unstoppable=False):
def terminate_cluster(self):
"""
Destroy this cluster by first detaching all volumes, shutting down all
- instances, cancelling all spot requests (if any), removing its
+ instances, canceling all spot requests (if any), removing its
placement group (if any), and removing its security group.
"""
try:
@@ -1338,7 +1338,7 @@ def terminate_cluster(self):
node.terminate()
for spot in self.spot_requests:
if spot.state not in ['cancelled', 'closed']:
- log.info("Cancelling spot instance request: %s" % spot.id)
+ log.info("Canceling spot instance request: %s" % spot.id)
spot.cancel()
sg = self.ec2.get_group_or_none(self._security_group)
pg = self.ec2.get_placement_group_or_none(self._security_group)
@@ -1481,7 +1481,7 @@ def run_plugin(self, plugin, name='', method_name='run', node=None):
except exception.MasterDoesNotExist:
raise
except Exception, e:
- log.error("Error occured while running plugin '%s':" % plugin_name)
+ log.error("Error occurred while running plugin '%s':" % plugin_name)
if isinstance(e, exception.ThreadPoolException):
e.print_excs()
log.debug(e.format_excs())
2  starcluster/clustersetup.py
@@ -253,7 +253,7 @@ def _setup_ebs_volumes(self):
log.error(
"volume has more than one partition, please specify "
"which partition to use (e.g. partition=0, "
- "partition=1, etc) in the volume's config")
+ "partition=1, etc.) in the volume's config")
continue
elif not master.ssh.path_exists(volume_partition):
log.warn("Cannot find partition %s on volume %s" % \
2  starcluster/commands/addnode.py
@@ -55,7 +55,7 @@ def addopts(self, parser):
parser.add_option("-a", "--alias", dest="alias",
action="append", type="string", default=[],
help=("alias to give to the new node " + \
- "(e.g. node007, mynode, etc)"))
+ "(e.g. node007, mynode, etc.)"))
parser.add_option("-n", "--num-nodes", dest="num_nodes",
action="store", type="int", default=1,
help=("number of new nodes to launch"))
2  starcluster/commands/removeimage.py
@@ -37,7 +37,7 @@ class CmdRemoveImage(ImageCompleter):
def addopts(self, parser):
parser.add_option("-p", "--pretend", dest="pretend",
action="store_true", default=False,
- help="pretend run, dont actually remove anything")
+ help="pretend run, do not actually remove anything")
parser.add_option("-c", "--confirm", dest="confirm",
action="store_true", default=False,
help="do not prompt for confirmation, "
2  starcluster/commands/runplugin.py
@@ -5,7 +5,7 @@ class CmdRunPlugin(CmdBase):
"""
runplugin <plugin_name> <cluster_tag>
- Run a StarCluster plugin on a runnning cluster
+ Run a StarCluster plugin on a running cluster
plugin_name - name of plugin section defined in the config
cluster_tag - tag name of a running StarCluster
4 starcluster/config.py
@@ -311,7 +311,7 @@ def _load_extends_settings(self, section_name, store):
step is a group of settings for a section in the form of a dictionary.
A 'master' dictionary is updated with the settings at each step. This
causes the next group of settings to override the previous, and so on.
- The 'section_name' settings are at the top of the dep tree.
+ The 'section_name' settings are at the top of the dependency tree.
"""
section = store[section_name]
extends = section.get('extends')
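A minimal sketch of the override chain that docstring describes (simplified; the real method also handles missing sections and deeper nesting)::

    def resolve_extends(store, section_name):
        chain = []
        name = section_name
        while name:
            chain.insert(0, store[name])        # base settings first
            name = store[name].get('extends')
        master = {}
        for settings in chain:
            master.update(settings)             # each step overrides the last
        return master

    store = {'smallcluster': {'cluster_size': 2, 'node_image_id': 'ami-1'},
             'largecluster': {'extends': 'smallcluster', 'cluster_size': 8}}
    print resolve_extends(store, 'largecluster')['cluster_size']  # 8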
@@ -522,7 +522,7 @@ def _load_sections(self, section_prefix, section_settings,
def _load_cluster_sections(self, cluster_sections):
"""
Loads all cluster sections. Similar to _load_sections but also handles
- populating specified keypair,volume,plugins,permissions,etc settings
+ populating specified keypair, volume, plugins, permissions, etc. settings
"""
clusters = cluster_sections
cluster_store = AttributeDict()
4 starcluster/exception.py
@@ -117,7 +117,7 @@ def __init__(self, snap_id):
class BucketAlreadyExists(AWSError):
def __init__(self, bucket_name):
self.msg = "bucket with name '%s' already exists on S3\n" % bucket_name
- self.msg += "(NOTE: S3's bucket namepsace is shared by all AWS users)"
+ self.msg += "(NOTE: S3's bucket namespace is shared by all AWS users)"
class BucketDoesNotExist(AWSError):
@@ -471,7 +471,7 @@ def format_excs(self):
excs = []
for exception in self.exceptions:
e, tb_msg, jobid = exception
- excs.append('error occured in job (id=%s): %s' % (jobid, str(e)))
+ excs.append('error occurred in job (id=%s): %s' % (jobid, str(e)))
excs.append(tb_msg)
return '\n'.join(excs)
2  starcluster/image.py
@@ -212,7 +212,7 @@ def create_image(self, size=15):
return self._create_image_from_ebs(size)
return self._create_image_from_instance_store(size)
except:
- log.error("Error occured while creating image")
+ log.error("Error occurred while creating image")
if self._snap:
log.error("Removing generated snapshot '%s'" % self._snap)
self._snap.delete()
6 starcluster/iptools.py
@@ -22,7 +22,7 @@
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
-"""Utitlities for dealing with ip addresses.
+"""Utilities for dealing with ip addresses.
Functions:
- validate_ip: Validate a dotted-quad ip address.
@@ -89,7 +89,7 @@ def validate_ip(s):
"""Validate a dotted-quad ip address.
The string is considered a valid dotted-quad address if it consists of
- one to four octets (0-255) seperated by periods (.).
+ one to four octets (0-255) separated by periods (.).
>>> validate_ip('127.0.0.1')
@@ -128,7 +128,7 @@ def validate_cidr(s):
"""Validate a CIDR notation ip address.
The string is considered a valid CIDR address if it consists of one to
- four octets (0-255) seperated by periods (.) followed by a forward slash
+ four octets (0-255) separated by periods (.) followed by a forward slash
(/) and a bit mask length (1-32).
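Those two rules are easy to restate directly. A hedged re-implementation of just the checks quoted above (iptools' real functions cover more edge cases)::

    def validate_ip(s):
        octets = s.split('.')
        if not 1 <= len(octets) <= 4:
            return False
        return all(o.isdigit() and 0 <= int(o) <= 255 for o in octets)

    def validate_cidr(s):
        if s.count('/') != 1:
            return False
        addr, bits = s.split('/')
        return validate_ip(addr) and bits.isdigit() and 1 <= int(bits) <= 32

    print validate_ip('127.0.0.1')       # True
    print validate_cidr('127.0.0.1/32')  # True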
2  starcluster/logger.py
@@ -111,7 +111,7 @@ def configure_sc_logging(use_syslog=False):
By default StarCluster's logger has no formatters and a NullHandler so that
other developers using StarCluster as a library can configure logging as
- they see fit. This method is used in StarCluster's application code (ie the
+ they see fit. This method is used in StarCluster's application code (i.e. the
'starcluster' command) to toggle StarCluster's application specific
formatters/handlers
6 starcluster/node.py
@@ -53,7 +53,7 @@ class Node(object):
This class represents a single compute node in a StarCluster.
It contains all useful metadata for the node such as the internal/external
- hostnames, ips, etc as well as a paramiko ssh object for executing
+ hostnames, ips, etc. as well as a paramiko ssh object for executing
commands, creating/modifying files on the node.
'instance' arg must be an instance of boto.ec2.instance.Instance
@@ -712,7 +712,7 @@ def delete_root_volume(self):
vol_id = root_vol.volume_id
vol = self.ec2.get_volume(vol_id)
vol.detach()
- while vol.update() != 'availabile':
+ while vol.update() != 'available':
time.sleep(5)
log.info("Deleting node %s's root volume" % self.alias)
root_vol.delete()
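This hunk is the bug named in the commit message: boto reports a detached volume's status as ``'available'``, so comparing against the misspelled ``'availabile'`` made the loop spin forever. The corrected polling pattern, sketched standalone (``ec2`` is the connection wrapper used throughout this module; the volume id is hypothetical)::

    import time

    vol = ec2.get_volume('vol-12345678')
    vol.detach()
    while vol.update() != 'available':  # update() refreshes and returns status
        time.sleep(5)
    vol.delete()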
@@ -800,7 +800,7 @@ def shutdown(self):
"""
Shutdown this instance. This method will terminate traditional
instance-store instances and stop EBS-backed instances
- (ie not destroy EBS root dev)
+ (i.e. not destroy EBS root dev)
"""
if self.is_stoppable():
self.stop()
4 starcluster/optcomplete.py
@@ -35,14 +35,14 @@
This module provide automatic bash completion support for programs that use the
optparse module. The premise is that the optparse options parser specifies
enough information (and more) for us to be able to generate completion strings
-esily. Another advantage of this over traditional completion schemes where the
+easily. Another advantage of this over traditional completion schemes where the
completion strings are hard-coded in a separate bash source file, is that the
same code that parses the options is used to generate the completions, so the
completions is always up-to-date with the program itself.
In addition, we allow you specify a list of regular expressions or code that
define what kinds of files should be proposed as completions to this file if
-needed. If you want to implement more complex behaviour, you can instead
+needed. If you want to implement more complex behavior, you can instead
specify a function, which will be called with the current directory as an
argument.
2  starcluster/plugins/mysql.py
@@ -45,7 +45,7 @@
# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
-# escpecially if they contain "#" chars...
+# especially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.
[client]
port = 3306
2  starcluster/plugins/pkginstaller.py
@@ -112,6 +112,6 @@ def run(self, nodes, master, user, user_shell, volumes):
done
# Record packages on Master
-# assuming that /home/ is a persistant EBS volume
+# assuming that /home/ is a persistent EBS volume
dpkg --get-selections > $PACKAGE_FILE || die "Failed to save packages"
"""
10 starcluster/progressbar.py
@@ -27,10 +27,10 @@
The ProgressBar class manages the progress, and the format of the line
is given by a number of widgets. A widget is an object that may
-display diferently depending on the state of the progress. There are
+display differently depending on the state of the progress. There are
three types of widget:
- a string, which always shows itself;
-- a ProgressBarWidget, which may return a diferent value every time
+- a ProgressBarWidget, which may return a different value every time
it's update method is called; and
- a ProgressBarWidgetHFill, which is like ProgressBarWidget, except it
expands to fill the remaining width of the line.
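A usage sketch combining the three widget types described above (the ``Percentage`` widget and the ``maxval``/``start``/``update``/``finish`` API are assumed from the conventional progressbar interface this module is based on)::

    import time
    from starcluster.progressbar import ProgressBar, Percentage, Bar

    # a string, a ProgressBarWidget (Percentage), and an HFill widget (Bar)
    widgets = ['copying: ', Percentage(), ' ', Bar(marker='#')]
    pbar = ProgressBar(widgets=widgets, maxval=100).start()
    for i in range(100):
        time.sleep(0.01)
        pbar.update(i + 1)
    pbar.finish()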
@@ -78,7 +78,7 @@ def update(self, pbar):
where one can access attributes of the class for knowing how
the update must be made.
- At least this function must be overriden."""
+ At least this function must be overridden."""
pass
@@ -99,7 +99,7 @@ def update(self, pbar, width):
the update must be made. The parameter width is the total
horizontal width the widget must have.
- At least this function must be overriden."""
+ At least this function must be overridden."""
pass
@@ -164,7 +164,7 @@ def update(self, pbar):
class Bar(ProgressBarWidgetHFill):
- "The bar of progress. It will strech to fill the line."
+ "The bar of progress. It will stretch to fill the line."
def __init__(self, marker='#', left='|', right='|'):
self.marker = marker
self.left = left
12 starcluster/scp.py
@@ -38,7 +38,7 @@ class SCPClient(object):
halted after too many levels of symlinks are detected.
The put method uses os.walk for recursion, and sends files accordingly.
Since scp doesn't support symlinks, we send file symlinks as the file
- (matching scp behaviour), but we make no attempt at symlinked directories.
+ (matching scp behavior), but we make no attempt at symlinked directories.
"""
def __init__(self, transport, buff_size=16384, socket_timeout=5.0,
progress=None):
@@ -72,7 +72,7 @@ def put(self, files, remote_path='.',
"""
Transfer files to remote host.
- @param files: A single path, or a list of paths to be transfered.
+ @param files: A single path, or a list of paths to be transferred.
recursive must be True to transfer directories.
@type files: string OR list of strings
@param remote_path: path in which to receive the files on the remote
@@ -80,7 +80,7 @@ def put(self, files, remote_path='.',
@type remote_path: str
@param recursive: transfer files and directories recursively
@type recursive: bool
- @param preserve_times: preserve mtime and atime of transfered files
+ @param preserve_times: preserve mtime and atime of transferred files
and directories.
@type preserve_times: bool
"""
@@ -107,7 +107,7 @@ def get(self, remote_path, local_path='',
"""
Transfer files from remote host to localhost
- @param remote_path: path to retreive from remote host. since this is
+ @param remote_path: path to retrieve from remote host. since this is
evaluated by scp on the remote host, shell wildcards and
environment variables may be used.
@type remote_path: str
@@ -115,7 +115,7 @@ def get(self, remote_path, local_path='',
@type local_path: str
@param recursive: transfer files and directories recursively
@type recursive: bool
- @param preserve_times: preserve mtime and atime of transfered files
+ @param preserve_times: preserve mtime and atime of transferred files
and directories.
@type preserve_times: bool
"""
@@ -216,7 +216,7 @@ def _recv_confirm(self):
try:
msg = self.channel.recv(512)
except SocketTimeout:
- raise exception.SCPException('Timout waiting for scp response')
+ raise exception.SCPException('Timeout waiting for scp response')
if msg and msg[0] == '\x00':
return
elif msg and msg[0] == '\x01':
2  starcluster/tests/test_config.py
@@ -139,7 +139,7 @@ def test_extends(self):
def test_order_invariance(self):
"""
Loads all cluster sections in the test config in all possible orders
- (ie c1,c2,c3, c3,c1,c2, etc) and test that the results are the same
+ (i.e. c1,c2,c3, c3,c1,c2, etc.) and test that the results are the same
"""
cfg = self.config
orig = cfg.clusters
2  starcluster/threadpool.py
@@ -139,7 +139,7 @@ def wait(self, numtasks=None, return_results=True):
self.join()
if self._exception_queue.qsize() > 0:
raise exception.ThreadPoolException(
- "An error occured in ThreadPool", self._exception_queue.queue)
+ "An error occurred in ThreadPool", self._exception_queue.queue)
if return_results:
return self.get_results()
2  starcluster/utils.py
@@ -371,7 +371,7 @@ def version_to_float(v):
# and is placed in public domain.
"""
Convert a Mozilla-style version string into a floating-point number
- 1.2.3.4, 1.2a5, 2.3.4b1pre, 3.0rc2, etc
+ 1.2.3.4, 1.2a5, 2.3.4b1pre, 3.0rc2, etc.
"""
version = [
0, 0, 0, 0, # 4-part numerical revision
4 starcluster/volume.py
@@ -260,7 +260,7 @@ def create(self, volume_size, volume_zone, name=None, tags=None):
self.log.error("failed to create new volume")
if self._volume:
log.error(
- "Error occured. Detaching, and deleting volume: %s" % \
+ "Error occurred. Detaching, and deleting volume: %s" % \
self._volume.id)
self._volume.detach(force=True)
time.sleep(5)
@@ -279,7 +279,7 @@ def resize(self, vol, size, dest_zone=None):
Resize EBS volume
vol - boto volume object
- size - new volume sze
+ size - new volume size
dest_zone - zone to create the new resized volume in. this must be
within the original volume's region otherwise a manual copy (rsync)
is required. this is currently not implemented.
2  starcluster/webtools.py
@@ -93,7 +93,7 @@ class TemplateHandler(DocrootHandler):
under the starcluster.templates package. You can set the _root_template_pkg
attribute on this class before passing to BaseHTTPServer to specify a
starcluster.templates subpackage to render templates from. Defaults to
- rendering starcluster.templates (ie '/')
+ rendering starcluster.templates (i.e. '/')
"""
_root_template_pkg = '/'
_tmpl_context = {}