
Commit 7d7a814
Fix: fix typos (#595)
mmduyzend authored and jafreck committed Jun 7, 2018
1 parent 88d0419 commit 7d7a814
Showing 40 changed files with 90 additions and 90 deletions.
6 changes: 3 additions & 3 deletions CHANGELOG.md
@@ -20,7 +20,7 @@ This release includes a number of breaking changes. [Please follow the migration
- Docker images have been refactored and moved to a different Dockerhub repository. The new supported images are not backwards compatible. See [the documentation on configuration files.](https://aztk.readthedocs.io/en/v0.7.0/13-configuration.html#cluster-yaml)

**Deprecated Features**
-- Custom scripts have been removed in favor of Plugins, which are more robust. See, [the documenation on Plugins.](https://aztk.readthedocs.io/en/v0.7.0/15-plugins.html)
+- Custom scripts have been removed in favor of Plugins, which are more robust. See, [the documentation on Plugins.](https://aztk.readthedocs.io/en/v0.7.0/15-plugins.html)

**Added Features**
* add internal flag to node commands (#482) ([1eaa1b6](https://github.com/Azure/aztk/commit/1eaa1b6)), closes [#482](https://github.com/Azure/aztk/issues/482)
@@ -33,7 +33,7 @@ This release includes a number of breaking changes. [Please follow the migration
* match cluster submit exit code in cli (#478) ([8889059](https://github.com/Azure/aztk/commit/8889059)), closes [#478](https://github.com/Azure/aztk/issues/478)
* Plugin V2: Running plugin on host (#461) ([de78983](https://github.com/Azure/aztk/commit/de78983)), closes [#461](https://github.com/Azure/aztk/issues/461)
* Plugins (#387) ([c724d94](https://github.com/Azure/aztk/commit/c724d94)), closes [#387](https://github.com/Azure/aztk/issues/387)
-* Pypi auto deployement (#428) ([c237501](https://github.com/Azure/aztk/commit/c237501)), closes [#428](https://github.com/Azure/aztk/issues/428)
+* Pypi auto deployment (#428) ([c237501](https://github.com/Azure/aztk/commit/c237501)), closes [#428](https://github.com/Azure/aztk/issues/428)
* Readthedocs support (#497) ([e361c3b](https://github.com/Azure/aztk/commit/e361c3b)), closes [#497](https://github.com/Azure/aztk/issues/497)
* refactor docker images (#510) ([779bffb](https://github.com/Azure/aztk/commit/779bffb)), closes [#510](https://github.com/Azure/aztk/issues/510)
* Spark add output logs flag (#468) ([32de752](https://github.com/Azure/aztk/commit/32de752)), closes [#468](https://github.com/Azure/aztk/issues/468)
@@ -93,7 +93,7 @@ This release includes a number of breaking changes. [Please follow the migration
**Bug Fixes:**
- load jars in `.aztk/jars/` in job submission mode
- replace outdated error in cluster_create
-- fix type error crash if not jars are specificed in job submission
+- fix type error crash if no jars are specified in job submission
- stop using mutable default parameters
- print job application code if exit_code is 0
- job submission crash if executor or driver cores specified
6 changes: 3 additions & 3 deletions README.md
@@ -32,7 +32,7 @@ chmod 755 account_setup.sh &&
/bin/bash account_setup.sh
```

-4. Follow the on screen prompts to create the necessary Azure resources and copy the output into your `.aztk/secrets.yaml` file. For more infomration see [Getting Started Scripts](./01-Getting-Started-Script).
+4. Follow the on screen prompts to create the necessary Azure resources and copy the output into your `.aztk/secrets.yaml` file. For more information see [Getting Started Scripts](./01-Getting-Started-Script).


## Quickstart Guide
@@ -98,8 +98,8 @@ aztk spark cluster submit \
path\to\pi.py 1000
```
- The `aztk spark cluster submit` command takes the same parameters as the standard [`spark-submit` command](https://spark.apache.org/docs/latest/submitting-applications.html), except instead of specifying `--master`, AZTK requires that you specify your cluster `--id` and a unique job `--name`
-- The job name, `--name`, argument must be atleast 3 characters long
-- It can only contain alphanumeric characters including hypens but excluding underscores
+- The job name, `--name`, argument must be at least 3 characters long
+- It can only contain alphanumeric characters including hyphens but excluding underscores
- It cannot contain uppercase letters
- Each job you submit **must** have a unique name
- Use the `--no-wait` option for your command to return immediately
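For illustration, a complete submission that satisfies the naming rules above might look like the following sketch, where the cluster id `my-cluster` and job name `pi-calculation` are hypothetical placeholder values:

```sh
# Hypothetical example: 'my-cluster' and 'pi-calculation' are placeholders.
# The job name is lowercase, at least 3 characters long, and uses a hyphen
# rather than an underscore, per the rules above.
aztk spark cluster submit \
    --id my-cluster \
    --name pi-calculation \
    path\to\pi.py 1000
```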
8 changes: 4 additions & 4 deletions account_setup.py
@@ -273,7 +273,7 @@ def format_secrets(**kwargs):
The following form is returned:
service_principal:
-tenant_id: <AAD Diretory ID>
+tenant_id: <AAD Directory ID>
client_id: <AAD App Application ID>
credential: <AAD App Password>
batch_account_resource_id: </batch/account/resource/id>
@@ -409,16 +409,16 @@ def stop(self):
# create AAD application and service principal
with Spinner():
profile = credentials.get_cli_profile()
-aad_cred, subscirption_id, tenant_id = profile.get_login_credentials(
+aad_cred, subscription_id, tenant_id = profile.get_login_credentials(
resource=AZURE_PUBLIC_CLOUD.endpoints.active_directory_graph_resource_id
)
application_id, service_principal_object_id, application_credential = create_aad_user(aad_cred, tenant_id, **kwargs)

print("Created Azure Active Directory service principal.")

with Spinner():
create_role_assignment(creds, subscription_id, resource_group_id, service_principal_object_id)
print("Configured permsisions.")
print("Configured permissions.")

secrets = format_secrets(
**{
2 changes: 1 addition & 1 deletion aztk/__init__.py
@@ -3,5 +3,5 @@
# Azure storage is logging error in the console which make the CLI quite confusing
logging.getLogger("azure.storage").setLevel(logging.CRITICAL)

-# msrestazure logs warnring for keyring
+# msrestazure logs warning for keyring
logging.getLogger("msrestazure").setLevel(logging.CRITICAL)
10 changes: 5 additions & 5 deletions aztk/client.py
@@ -94,7 +94,7 @@ def __create_pool_and_job(self, cluster_conf: models.ClusterConfiguration, softw
auto_scale_formula = "$TargetDedicatedNodes={0}; $TargetLowPriorityNodes={1}".format(
cluster_conf.size, cluster_conf.size_low_priority)

-# Confiure the pool
+# Configure the pool
pool = batch_models.PoolAddParameter(
id=pool_id,
virtual_machine_configuration=batch_models.VirtualMachineConfiguration(
@@ -225,7 +225,7 @@ def __generate_user_on_pool(self, pool_id, nodes):
node.id,
ssh_pub_key): node for node in nodes}
concurrent.futures.wait(futures)

return generated_username, ssh_key

def __create_user_on_pool(self, username, pool_id, nodes, ssh_pub_key=None, password=None):
@@ -239,8 +239,8 @@ def __create_user_on_pool(self, username, pool_id, nodes, ssh_pub_key=None, pass
concurrent.futures.wait(futures)

def __delete_user_on_pool(self, username, pool_id, nodes):
-with concurrent.futures.ThreadPoolExecutor() as exector:
-    futures = [exector.submit(self.__delete_user, pool_id, node.id, username) for node in nodes]
+with concurrent.futures.ThreadPoolExecutor() as executor:
+    futures = [executor.submit(self.__delete_user, pool_id, node.id, username) for node in nodes]
concurrent.futures.wait(futures)

def __node_run(self, cluster_id, node_id, command, internal, container_name=None, timeout=None):
@@ -355,7 +355,7 @@ def __submit_job(self,
:param job_configuration -> aztk_sdk.spark.models.JobConfiguration
:param start_task -> batch_models.StartTask
:param job_manager_task -> batch_models.TaskAddParameter
-:param autoscale forumula -> str
+:param autoscale_formula -> str
:param software_metadata_key -> str
:param vm_image_model -> aztk_sdk.models.VmImage
:returns None
4 changes: 2 additions & 2 deletions aztk/core/models/fields.py
@@ -61,8 +61,8 @@ def __set__(self, instance, value):

def merge(self, instance, value):
"""
-Method called when merging 2 model together.
-This is overriden in some of the fields where merge can be handled differently
+Method called when merging 2 models together.
+This is overridden in some of the fields where merge can be handled differently
"""
if value is not None:
instance._data[self] = value
2 changes: 1 addition & 1 deletion aztk/core/models/validators.py
@@ -19,7 +19,7 @@ def validate(self, value):

class Required(Validator):
"""
-Validate the field valiue is not `None`
+Validate the field value is not `None`
"""

def validate(self, value):
2 changes: 1 addition & 1 deletion aztk/models/scheduling_target.py
@@ -18,5 +18,5 @@ class SchedulingTarget(Enum):

Any = "any"
"""
-Any node(Not reconmmended if using low pri)
+Any node(Not recommended if using low pri)
"""
4 changes: 2 additions & 2 deletions aztk/models/toolkit.py
Original file line number Diff line number Diff line change
Expand Up @@ -83,7 +83,7 @@ def get_docker_repo(self, gpu: bool):

def _get_docker_tag(self, gpu: bool):
environment = self.environment or "base"
-environment_def = self._get_environent_definition()
+environment_def = self._get_environment_definition()
environment_version = self.environment_version or (environment_def and environment_def.default)

array = [
@@ -98,7 +98,7 @@ def _get_docker_tag(self, gpu: bool):
return '-'.join(array)


-def _get_environent_definition(self) -> ToolkitEnvironmentDefinition:
+def _get_environment_definition(self) -> ToolkitEnvironmentDefinition:
toolkit = TOOLKIT_MAP.get(self.software)

if toolkit:
4 changes: 2 additions & 2 deletions aztk/node_scripts/install/node_scheduling.py
@@ -51,8 +51,8 @@ def setup_node_scheduling(
enable = True

if enable:
log.info("Scheduling will be enabled on this node as it satifies the right conditions")
log.info("Scheduling will be enabled on this node as it satisfies the right conditions")
enable_scheduling(batch_client)
else:
log.info("Scheduling will be disabled on this node as it does NOT satifies the right conditions")
log.info("Scheduling will be disabled on this node as it does NOT satisfy the right conditions")
disable_scheduling(batch_client)
2 changes: 1 addition & 1 deletion aztk/node_scripts/install/pick_master.py
@@ -67,7 +67,7 @@ def find_master(client: batch.BatchServiceClient) -> bool:
result = try_assign_self_as_master(client, pool)

if result:
print("Assignment was successfull! Node {0} is the new master.".format(config.node_id))
print("Assignment was successful! Node {0} is the new master.".format(config.node_id))
return True

raise CannotAllocateMasterError("Unable to assign node as a master in 5 tries")
4 changes: 2 additions & 2 deletions aztk/node_scripts/install/spark.py
@@ -122,7 +122,7 @@ def copyfile(src, dest):

def setup_conf():
"""
-Copy spark conf files to spark_home if they were uplaoded
+Copy spark conf files to spark_home if they were uploaded
"""
copy_spark_env()
copy_core_site()
@@ -220,7 +220,7 @@ def configure_history_server_log_path(path_to_log_file):
if os.path.exists(directory):
print('Skipping. Directory {} already exists.'.format(directory))
else:
-print('Create direcotory {}.'.format(directory))
+print('Create directory {}.'.format(directory))
os.makedirs(directory)

# Make sure the directory can be accessed by all users
2 changes: 1 addition & 1 deletion aztk/node_scripts/setup_host.sh
@@ -40,7 +40,7 @@ install_prerequisites () {
}

install_docker_compose () {
echo "Installing Docker-Componse"
echo "Installing Docker-Compose"
sudo curl -L https://github.com/docker/compose/releases/download/1.19.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
echo "Finished installing Docker-Compose"
6 changes: 3 additions & 3 deletions aztk/node_scripts/submit.py
@@ -114,7 +114,7 @@ def __app_submit_cmd(
spark_submit_cmd.add_option('--executor-cores', str(executor_cores))

spark_submit_cmd.add_argument(
-os.path.expandvars(app) + ' ' +
+os.path.expandvars(app) + ' ' +
' '.join(['\'' + str(app_arg) + '\'' for app_arg in (app_args or [])]))

with open("spark-submit.txt", mode="w", encoding="UTF-8") as stream:
@@ -145,7 +145,7 @@ def upload_log(blob_client, application):
use_full_path=False)


-def recieve_submit_request(application_file_path):
+def receive_submit_request(application_file_path):

'''
Handle the request to submit a task
@@ -195,7 +195,7 @@ def upload_error_log(error, application_file_path):
if __name__ == "__main__":
return_code = 1
try:
-return_code = recieve_submit_request(os.path.join(os.environ['AZ_BATCH_TASK_WORKING_DIR'], 'application.yaml'))
+return_code = receive_submit_request(os.path.join(os.environ['AZ_BATCH_TASK_WORKING_DIR'], 'application.yaml'))
except Exception as e:
upload_error_log(str(e), os.path.join(os.environ['AZ_BATCH_TASK_WORKING_DIR'], 'application.yaml'))

2 changes: 1 addition & 1 deletion aztk/node_scripts/wait_until_setup_complete.py
@@ -5,5 +5,5 @@
while not os.path.exists('/tmp/setup_complete'):
time.sleep(1)

print("SETUP FINSIHED")
print("SETUP FINISHED")
os.remove('/tmp/setup_complete')
8 changes: 4 additions & 4 deletions aztk/spark/models/plugins/resource_monitor/readme.md
@@ -1,8 +1,8 @@
-# Using the Resrouce Monitor Plugin
+# Using the Resource Monitor Plugin

The resource monitor plugin is useful for tracking performance counters on the cluster. These include counters such as Percent CPU used per core, Disk Read, Disk Write, Network In, Network out, and several others. Simply enabling the plugin in your cluster.yaml will deploy all the necessary components to start tracking metrics.

-This plugin takes advanage of the TICK monitoring stack. For more information please visit the [influx data](https://www.influxdata.com/time-series-platform/) web page.
+This plugin takes advantage of the TICK monitoring stack. For more information please visit the [influx data](https://www.influxdata.com/time-series-platform/) web page.

> **IMPORTANT** All of the data is collected on the cluster's master node and will be lost once the cluster is thrown away. To persist data we recommend pushing to an off-cluster InfluxDB instance. Currently there is no supported way to persist the data from this plugin.
@@ -21,14 +21,14 @@ plugins:

```

-Once the cluster is created simply the cluster ssh command and all of the ports will automatically get forwareded.
+Once the cluster is created simply the cluster ssh command and all of the ports will automatically get forwarded.

```sh
aztk spark cluster ssh --id <my_cluster>
```

### Ports
-url | desciption
+url | description
--- | ---
http://localhost:8890 | Cronograf UI

2 changes: 1 addition & 1 deletion aztk/utils/deprecation.py
@@ -46,7 +46,7 @@ def deprecate(message: str):

def _get_deprecated_version():
"""
-Returns the next version where the deprecated funtionality will be removed
+Returns the next version where the deprecated functionality will be removed
"""
if version.major == 0:
return "0.{minor}.0".format(minor=version.minor + 1)
2 changes: 1 addition & 1 deletion aztk/utils/helpers.py
@@ -404,7 +404,7 @@ def read_cluster_config(cluster_id: str, blob_client: blob.BlockBlobService):

def bool_env(value: bool):
"""
-Takes a boolean value(or None) and return the serialized version to be used as an environemnt variable
+Takes a boolean value(or None) and return the serialized version to be used as an environment variable
Examples:
>>> bool_env(True)
2 changes: 1 addition & 1 deletion aztk_cli/config.py
@@ -23,7 +23,7 @@ def load_aztk_secrets() -> SecretsConfiguration:
if not global_config and not local_config:
raise aztk.error.AztkError("There is no secrets.yaml in either ./.aztk/secrets.yaml or .aztk/secrets.yaml")

-if global_config: # GLobal config is optional
+if global_config: # Global config is optional
_merge_secrets_dict(secrets, global_config)
if local_config:
_merge_secrets_dict(secrets, local_config)
10 changes: 5 additions & 5 deletions aztk_cli/config/cluster.yaml
@@ -1,17 +1,17 @@
## cluster settings

-# id: <id of the cluster to be created, reccommended to specify with --id command line parameter>
+# id: <id of the cluster to be created, recommended to specify with --id command line parameter>

-# Toolkit configuration [Required] You can use `aztk toolkit` command to find which are the available tookits
+# Toolkit configuration [Required] You can use `aztk toolkit` command to find which toolkits are available
toolkit:
software: spark
version: 2.3.0
-# Which environemnt is needed for spark anaconda, r, miniconda
+# Which environment is needed for spark anaconda, r, miniconda
environment: {environment}
# Optional version for the environment
# environment_version:

-# Optional docker repository(To bring your custom docker image. Just specify the Toolkit software, version and environemnt if using default images)
+# Optional docker repository(To bring your custom docker image. Just specify the Toolkit software, version and environment if using default images)
# docker_repo: <name of docker image repo (for more information, see https://github.com/Azure/aztk/blob/master/docs/12-docker-image.md)>


@@ -34,7 +34,7 @@ username: spark
# - script: <./relative/path/to/other/script.sh or ./relative/path/to/other/script/directory/>
# runOn: <master/worker/all-nodes>

-# To add your cluster to a virtual network provide the full arm resoruce id below
+# To add your cluster to a virtual network provide the full arm resource id below
# subnet_id: /subscriptions/********-****-****-****-************/resourceGroups/********/providers/Microsoft.Network/virtualNetworks/*******/subnets/******

# Enable plugins
4 changes: 2 additions & 2 deletions aztk_cli/config/job.yaml
@@ -14,12 +14,12 @@ job:
toolkit:
software: spark
version: 2.2.0
-# Which environemnt is needed for spark anaconda, r, miniconda
+# Which environment is needed for spark anaconda, r, miniconda
environment: {environment}
# Optional version for the environment
# environment_version:

-# Optional docker repository(To bring your custom docker image. Just specify the Toolkit software, version and environemnt if using default images)
+# Optional docker repository(To bring your custom docker image. Just specify the Toolkit software, version and environment if using default images)
# docker_repo: <name of docker image repo (for more information, see https://github.com/Azure/aztk/blob/master/docs/12-docker-image.md)>

# Where do you want to run the driver <dedicated/master/any> (Default: dedicated if at least one dedicated node or any otherwise)
2 changes: 1 addition & 1 deletion aztk_cli/config/secrets.yaml.template
@@ -18,7 +18,7 @@ service_principal:
# storage_account_suffix: core.windows.net


-# Configuration for private docker repositories. If using public containers you do not need to provide authentification
+# Configuration for private docker repositories. If using public containers you do not need to provide authentication
docker:
# username:
# password:
2 changes: 1 addition & 1 deletion aztk_cli/config/ssh.yaml
@@ -1,6 +1,6 @@
# ssh configuration

-# cluster_id: <id of the cluster to connect to, reccommended to specify with --id command line parameter>
+# cluster_id: <id of the cluster to connect to, recommended to specify with --id command line parameter>

# username: <name of the user account to ssh into>
username: spark
2 changes: 1 addition & 1 deletion aztk_cli/spark/endpoints/cluster/cluster_add_user.py
@@ -9,7 +9,7 @@ def setup_parser(parser: argparse.ArgumentParser):
parser.add_argument('--id', dest='cluster_id', required=True,
help='The unique id of your spark cluster')
parser.add_argument('-u', '--username',
-help='The usernameto access your spark cluster\'s head node')
+help='The username to access your spark cluster\'s head node')

auth_group = parser.add_mutually_exclusive_group()
auth_group.add_argument('-p', '--password',
6 changes: 3 additions & 3 deletions aztk_cli/spark/endpoints/init.py
@@ -11,7 +11,7 @@ def setup_parser(parser: argparse.ArgumentParser):
help="Create a .aztk/ folder in your home directory for global configurations.")
software_parser = parser.add_mutually_exclusive_group()
software_parser.add_argument('--miniconda', action="store_true", required=False)
-software_parser.add_argument('--annaconda', action="store_true", required=False)
+software_parser.add_argument('--anaconda', action="store_true", required=False)
software_parser.add_argument('--r', '--R', action="store_true", required=False)
software_parser.add_argument('--java', action="store_true", required=False)
software_parser.add_argument('--scala', action="store_true", required=False)
@@ -21,8 +21,8 @@ def execute(args: typing.NamedTuple):
# software_specific init
if args.miniconda:
environment = "miniconda"
-elif args.annaconda:
-    environment = "annaconda"
+elif args.anaconda:
+    environment = "anaconda"
elif args.r:
environment = "r"
else: