From cc3eb9d20c9a3ab291bb0354391a9effe36de482 Mon Sep 17 00:00:00 2001
From: Kevin W Monroe
Date: Wed, 5 Oct 2016 17:43:32 +0000
Subject: [PATCH 1/4] BIGTOP-2548: Refresh charms for Juju 2.0 and Xenial

---
 .../hadoop/layer-hadoop-namenode/README.md         | 109 ++++++----
 .../hadoop/layer-hadoop-namenode/actions.yaml      |   2 +-
 .../layer-hadoop-namenode/actions/smoke-test       |   2 +-
 .../hadoop/layer-hadoop-namenode/layer.yaml        |   2 +-
 .../layer-hadoop-namenode/metadata.yaml            |   6 +-
 .../reactive/namenode.py                           |  11 +-
 .../tests/01-basic-deployment.py                   |   2 +-
 .../hadoop/layer-hadoop-plugin/README.md           | 101 +++++----
 .../hadoop/layer-hadoop-plugin/actions.yaml        |   2 +
 .../layer-hadoop-plugin/actions/smoke-test         |  62 ++++++
 .../hadoop/layer-hadoop-plugin/layer.yaml          |  10 +-
 .../hadoop/layer-hadoop-plugin/metadata.yaml       |   4 +-
 .../reactive/apache_bigtop_plugin.py               |   1 +
 .../tests/01-basic-deployment.py                   |   2 +-
 .../layer-hadoop-resourcemanager/README.md         | 193 ++++++++++--------
 .../layer-hadoop-resourcemanager/actions.yaml      |   3 +-
 .../actions/smoke-test                             |  82 +++-----
 .../layer-hadoop-resourcemanager/layer.yaml        |   2 +-
 .../metadata.yaml                                  |   6 +-
 .../reactive/resourcemanager.py                    |  32 ++-
 .../tests/01-basic-deployment.py                   |   2 +-
 .../charm/hadoop/layer-hadoop-slave/README.md      | 107 +++++-----
 .../hadoop/layer-hadoop-slave/actions.yaml         |   3 +
 .../layer-hadoop-slave/actions/smoke-test          |  49 +++++
 .../hadoop/layer-hadoop-slave/layer.yaml           |   6 +-
 .../hadoop/layer-hadoop-slave/metadata.yaml        |   4 +-
 .../reactive/hadoop_status.py                      |   3 +-
 .../tests/01-basic-deployment.py                   |   2 +-
 28 files changed, 513 insertions(+), 297 deletions(-)
 create mode 100644 bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/actions.yaml
 create mode 100755 bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/actions/smoke-test
 create mode 100644 bigtop-packages/src/charm/hadoop/layer-hadoop-slave/actions.yaml
 create mode 100755 bigtop-packages/src/charm/hadoop/layer-hadoop-slave/actions/smoke-test

diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md
index bf46bf73c2..710fd629de 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md
@@ -14,7 +14,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->

-## Overview
+# Overview

 The Apache Hadoop software library is a framework that allows for the
 distributed processing of large data sets across clusters of computers
@@ -24,83 +24,108 @@ This charm deploys the NameNode component of the Apache Bigtop
 platform to provide HDFS master resources.

-## Usage
+# Deploying
+
+A working Juju installation is assumed to be present. If Juju is not yet set
+up, please follow the
+[getting-started](https://jujucharms.com/docs/2.0/getting-started)
+instructions prior to deploying this charm.

 This charm is intended to be deployed via one of the
-[apache bigtop bundles](https://jujucharms.com/u/bigdata-dev/#bundles).
+[apache bigtop bundles](https://jujucharms.com/u/bigdata-charmers/#bundles).
 For example:

     juju deploy hadoop-processing

-> Note: With Juju versions < 2.0, you will need to use [juju-deployer][] to
-deploy the bundle.
-
-This will deploy the Apache Bigtop platform with a workload node
-preconfigured to work with the cluster.
-
-You can also manually load and run map-reduce jobs via the plugin charm
-included in the bundles linked above:
-
-    juju scp my-job.jar plugin/0:
-    juju ssh plugin/0
-    hadoop jar my-job.jar
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, use [juju-quickstart](https://launchpad.net/juju-quickstart) with the
+following syntax: `juju quickstart hadoop-processing`.

+This will deploy an Apache Bigtop cluster with this charm acting as the
+NameNode. More information about this deployment can be found in the
+[bundle readme](https://jujucharms.com/hadoop-processing/).

-[juju-deployer]: https://pypi.python.org/pypi/juju-deployer/
+# Verifying

-## Status and Smoke Test
-
+## Status
 Apache Bigtop charms provide extended status reporting to indicate when they
 are ready:

-    juju status --format=tabular
+    juju status

 This is particularly useful when combined with `watch` to track the on-going
 progress of the deployment:

-    watch -n 0.5 juju status --format=tabular
+    watch -n 0.5 juju status

-The message for each unit will provide information about that unit's state.
-Once they all indicate that they are ready, you can perform a "smoke test"
-to verify HDFS or YARN services are working as expected. Trigger the
-`smoke-test` action by:
+The message column will provide information about a given unit's state.
+This charm is ready for use once the status message indicates that it is
+ready with datanodes.

-    juju action do namenode/0 smoke-test
-    juju action do resourcemanager/0 smoke-test
+## Smoke Test
+This charm provides a `smoke-test` action that can be used to verify the
+application is functioning as expected. Run the action as follows:

-After a few seconds or so, you can check the results of the smoke test:
+    juju run-action namenode/0 smoke-test

-    juju action status
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action do namenode/0 smoke-test`.

-You will see `status: completed` if the smoke test was successful, or
-`status: failed` if it was not. You can get more information on why it failed
-via:
+Watch the progress of the smoke test actions with:

-    juju action fetch <action-id>
+    watch -n 0.5 juju show-action-status

+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action status`.

-## Deploying in Network-Restricted Environments
+Eventually, the action should settle to `status: completed`. If it
+reports `status: failed`, the application is not working as expected. Get
+more information about a specific smoke test with:

-Charms can be deployed in environments with limited network access. To deploy
-in this environment, you will need a local mirror to serve required packages.
+    juju show-action-output <action-id>
+
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action fetch <action-id>`.
+
+## Utilities
+This charm includes Hadoop command line and web utilities that can be used
+to verify information about the cluster.

+Show the dfsadmin report on the command line with the following:

-### Mirroring Packages
+    juju run --application namenode "su hdfs -c 'hdfs dfsadmin -report'"

-You can setup a local mirror for apt packages using squid-deb-proxy.
-For instructions on configuring juju to use this, see the
-[Juju Proxy Documentation](https://juju.ubuntu.com/docs/howto-proxies.html).
+To access the HDFS web console, find the `PUBLIC-ADDRESS` of the
+namenode application and expose it:
+
+    juju status namenode
+    juju expose namenode
+
+The web interface will be available at the following URL:
+
+    http://NAMENODE_PUBLIC_IP:50070
+
+
+# Network-Restricted Environments
+
+Charms can be deployed in environments with limited network access. To deploy
+in this environment, configure a Juju model with appropriate
+proxy and/or mirror options. See
+[Configuring Models](https://jujucharms.com/docs/2.0/models-config) for more
+information.
-## Contact Information
+# Contact Information

-## Hadoop
+# Resources

 - [Apache Bigtop](http://bigtop.apache.org/) home page
 - [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
 - [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
-- [Apache Bigtop charms](https://jujucharms.com/q/apache/bigtop)
+- [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)
+- [Juju mailing list](https://lists.ubuntu.com/mailman/listinfo/juju)
+- [Juju community](https://jujucharms.com/community)
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/actions.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/actions.yaml
index ee93b4cb16..c2d65aeca7 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/actions.yaml
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/actions.yaml
@@ -1,2 +1,2 @@
 smoke-test:
-  description: Verify that HDFS is working by creating and removing a test directory.
+  description: Run a simple HDFS smoke test.
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/actions/smoke-test b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/actions/smoke-test
index 58ffce2139..391b6261ff 100755
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/actions/smoke-test
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/actions/smoke-test
@@ -22,7 +22,7 @@ from jujubigdata.utils import run_as
 from charms.reactive import is_state

 if not is_state('apache-bigtop-namenode.ready'):
-    hookenv.action_fail('NameNode service not yet ready')
+    hookenv.action_fail('Charm is not yet ready')


 # verify the hdfs-test directory does not already exist
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/layer.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/layer.yaml
index 332a6e3ded..3fca827021 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/layer.yaml
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/layer.yaml
@@ -1,4 +1,4 @@
-repo: git@github.com:juju-solutions/layer-hadoop-namenode.git
+repo: https://github.com/apache/bigtop/tree/master/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode
 includes:
   - 'layer:apache-bigtop-base'
   - 'interface:dfs'
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/metadata.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/metadata.yaml
index ab51ce4300..a358a6d743 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/metadata.yaml
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/metadata.yaml
@@ -1,12 +1,12 @@
 name: hadoop-namenode
-summary: HDFS master (NameNode) for Apache Bigtop platform
+summary: HDFS master (NameNode) from Apache Bigtop
 maintainer: Juju Big Data
 description: >
   Hadoop is a software platform that lets one easily write and
   run applications that process vast amounts of data.

-  This charm manages the HDFS master node (NameNode).
-tags: ["applications", "bigdata", "bigtop", "hadoop", "apache"]
+  This charm provides the HDFS master node (NameNode).
+tags: []
 provides:
   namenode:
     interface: dfs
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/reactive/namenode.py b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/reactive/namenode.py
index c39a6098c9..c8a71daecf 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/reactive/namenode.py
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/reactive/namenode.py
@@ -15,7 +15,9 @@
 # limitations under the License.

 from charms.reactive import is_state, remove_state, set_state, when, when_not
-from charms.layer.apache_bigtop_base import Bigtop, get_layer_opts, get_fqdn
+from charms.layer.apache_bigtop_base import (
+    Bigtop, get_hadoop_version, get_layer_opts, get_fqdn
+)
 from charmhelpers.core import hookenv, host
 from jujubigdata import utils
 from path import Path
@@ -50,6 +52,8 @@ def send_early_install_info(remote):
 def install_namenode():
     hookenv.status_set('maintenance', 'installing namenode')
     bigtop = Bigtop()
+    hdfs_port = get_layer_opts().port('namenode')
+    webhdfs_port = get_layer_opts().port('nn_webapp_http')
     bigtop.render_site_yaml(
         hosts={
             'namenode': get_fqdn(),
@@ -58,6 +62,10 @@ def install_namenode():
             'namenode',
             'mapred-app',
         ],
+        overrides={
+            'hadoop::common_hdfs::hadoop_namenode_port': hdfs_port,
+            'hadoop::common_hdfs::hadoop_namenode_http_port': webhdfs_port,
+        }
     )
     bigtop.trigger_puppet()
@@ -96,6 +104,7 @@ def start_namenode():
     for port in get_layer_opts().exposed_ports('namenode'):
         hookenv.open_port(port)
     set_state('apache-bigtop-namenode.started')
+    hookenv.application_version_set(get_hadoop_version())
     hookenv.status_set('maintenance', 'namenode started')
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/tests/01-basic-deployment.py b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/tests/01-basic-deployment.py
index 15c00c9d94..38aa45b959 100755
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/tests/01-basic-deployment.py
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/tests/01-basic-deployment.py
@@ -28,7 +28,7 @@ class TestDeploy(unittest.TestCase):
     """

     def test_deploy(self):
-        self.d = amulet.Deployment(series='trusty')
+        self.d = amulet.Deployment(series='xenial')
         self.d.add('namenode', 'hadoop-namenode')
         self.d.setup(timeout=900)
         self.d.sentry.wait(timeout=1800)
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md
index cbea7f0249..eb37c7cd08 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md
@@ -14,79 +14,108 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->

-## Overview
+# Overview

 The Apache Hadoop software library is a framework that allows for the
 distributed processing of large data sets across clusters of computers
 using a simple programming model.

-This charm facilitates communication between core Apache Bigtop cluster
-components and workload charms.
+This charm facilitates communication between Hadoop components of an
+Apache Bigtop cluster and workload applications.

-## Usage
+# Deploying
+
+A working Juju installation is assumed to be present. If Juju is not yet set
+up, please follow the
+[getting-started](https://jujucharms.com/docs/2.0/getting-started)
+instructions prior to deploying this charm.

 This charm is intended to be deployed via one of the
-[apache bigtop bundles](https://jujucharms.com/u/bigdata-dev/#bundles).
+[apache bigtop bundles](https://jujucharms.com/u/bigdata-charmers/#bundles).
 For example:

     juju deploy hadoop-processing

-> Note: With Juju versions < 2.0, you will need to use [juju-deployer][] to
-deploy the bundle.
-
-This will deploy the Apache Bigtop platform with a workload node
-preconfigured to work with the cluster.
-
-You could extend this deployment, for example, to analyze data using Apache Pig.
-Simply deploy Pig and attach it to the same plugin:
-
-    juju deploy apache-pig pig
-    juju add-relation plugin pig
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, use [juju-quickstart](https://launchpad.net/juju-quickstart) with the
+following syntax: `juju quickstart hadoop-processing`.

+This will deploy an Apache Bigtop cluster with a client unit preconfigured to
+work with the cluster. More information about this deployment can be found in the
+[bundle readme](https://jujucharms.com/hadoop-processing/).

-[juju-deployer]: https://pypi.python.org/pypi/juju-deployer/
+# Verifying

-## Status and Smoke Test
-
+## Status
 Apache Bigtop charms provide extended status reporting to indicate when they
 are ready:

-    juju status --format=tabular
+    juju status

 This is particularly useful when combined with `watch` to track the on-going
 progress of the deployment:

-    watch -n 0.5 juju status --format=tabular
+    watch -n 0.5 juju status
+
+The message column will provide information about a given unit's state.
+This charm is ready for use once the status message indicates that it is
+ready with hdfs and/or yarn.
+
+## Smoke Test
+This charm provides a `smoke-test` action that can be used to verify the
+application is functioning as expected. Run the action as follows:
+
+    juju run-action plugin/0 smoke-test
+
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action do plugin/0 smoke-test`.
+
+Watch the progress of the smoke test actions with:
+
+    watch -n 0.5 juju show-action-status
+
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action status`.
+
+Eventually, the action should settle to `status: completed`. If it
+reports `status: failed`, the application is not working as expected. Get
+more information about a specific smoke test with:
+
+    juju show-action-output <action-id>
+
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action fetch <action-id>`.

-The message for each unit will provide information about that unit's state.
-Once they all indicate that they are ready, you can perform a "smoke test"
-to verify HDFS or YARN services are working as expected. Trigger the
-`smoke-test` action by:
+## Utilities
+This charm includes Hadoop command line utilities that can be used
+to verify information about the cluster.

-    juju action do namenode/0 smoke-test
-    juju action do resourcemanager/0 smoke-test
+Show the dfsadmin report on the command line with the following:

-After a few seconds or so, you can check the results of the smoke test:
+    juju run --application plugin "su hdfs -c 'hdfs dfsadmin -report'"

-    juju action status
-
-You will see `status: completed` if the smoke test was successful, or
-`status: failed` if it was not. You can get more information on why it failed
-via:
+# Network-Restricted Environments

-    juju action fetch <action-id>
+Charms can be deployed in environments with limited network access. To deploy
+in this environment, configure a Juju model with appropriate
+proxy and/or mirror options. See
+[Configuring Models](https://jujucharms.com/docs/2.0/models-config) for more
+information.
-## Contact Information
+# Contact Information

-## Resources
+# Resources

 - [Apache Bigtop](http://bigtop.apache.org/) home page
 - [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
 - [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
-- [Apache Bigtop charms](https://jujucharms.com/q/apache/bigtop)
+- [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)
+- [Juju mailing list](https://lists.ubuntu.com/mailman/listinfo/juju)
+- [Juju community](https://jujucharms.com/community)
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/actions.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/actions.yaml
new file mode 100644
index 0000000000..c2d65aeca7
--- /dev/null
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/actions.yaml
@@ -0,0 +1,2 @@
+smoke-test:
+  description: Run a simple HDFS smoke test.
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/actions/smoke-test b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/actions/smoke-test
new file mode 100755
index 0000000000..65ba07c636
--- /dev/null
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/actions/smoke-test
@@ -0,0 +1,62 @@
+#!/usr/bin/env python3
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import sys
+
+from charmhelpers.core import hookenv
+from jujubigdata.utils import run_as
+from charms.reactive import is_state
+
+if not is_state('apache-bigtop-plugin.hdfs.ready'):
+    hookenv.action_fail('Charm is not yet ready')
+
+
+# verify the hdfs-test directory does not already exist
+output = run_as('ubuntu', 'hdfs', 'dfs', '-ls', '/tmp', capture_output=True)
+if '/tmp/hdfs-test' in output:
+    run_as('ubuntu', 'hdfs', 'dfs', '-rm', '-R', '/tmp/hdfs-test')
+    output = run_as('ubuntu', 'hdfs', 'dfs', '-ls', '/tmp', capture_output=True)
+    if 'hdfs-test' in output:
+        hookenv.action_fail('Unable to remove existing hdfs-test directory')
+        sys.exit()
+
+# create the directory
+run_as('ubuntu', 'hdfs', 'dfs', '-mkdir', '-p', '/tmp/hdfs-test')
+run_as('ubuntu', 'hdfs', 'dfs', '-chmod', '-R', '777', '/tmp/hdfs-test')
+
+# verify the newly created hdfs-test subdirectory exists
+output = run_as('ubuntu', 'hdfs', 'dfs', '-ls', '/tmp', capture_output=True)
+for line in output.split('\n'):
+    if '/tmp/hdfs-test' in line:
+        if 'ubuntu' not in line or 'drwxrwxrwx' not in line:
+            hookenv.action_fail('Permissions incorrect for hdfs-test directory')
+            sys.exit()
+        break
+else:
+    hookenv.action_fail('Unable to create hdfs-test directory')
+    sys.exit()
+
+# remove the directory
+run_as('ubuntu', 'hdfs', 'dfs', '-rm', '-R', '/tmp/hdfs-test')
+
+# verify the hdfs-test subdirectory has been removed
+output = run_as('ubuntu', 'hdfs', 'dfs', '-ls', '/tmp', capture_output=True)
+if '/tmp/hdfs-test' in output:
+    hookenv.action_fail('Unable to remove hdfs-test directory')
+    sys.exit()
+
+hookenv.action_set({'outcome': 'success'})
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/layer.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/layer.yaml
index 5ddc2c9b95..ceedad7fcf 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/layer.yaml
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/layer.yaml
@@ -1,8 +1,12 @@
-repo: git@github.com:juju-solutions/layer-hadoop-plugin.git
-includes: ['layer:apache-bigtop-base', 'interface:hadoop-plugin', 'interface:dfs', 'interface:mapred']
+repo: https://github.com/apache/bigtop/tree/master/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin
+includes:
+  - 'layer:apache-bigtop-base'
+  - 'interface:hadoop-plugin'
+  - 'interface:dfs'
+  - 'interface:mapred'
 options:
   basic:
     use_venv: true
 metadata:
   deletes:
-    - requires.java
+    - provides.java
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/metadata.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/metadata.yaml
index a5fd4538c8..4df86f1b90 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/metadata.yaml
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/metadata.yaml
@@ -1,5 +1,5 @@
 name: hadoop-plugin
-summary: Simplified connection point for Apache Bigtop platform
+summary: Facilitates communication with an Apache Bigtop Hadoop cluster
 maintainer: Juju Big Data
 description: >
   Hadoop is a software platform that lets one easily write and
@@ -8,7 +8,7 @@ description: >
   This charm provides a simplified connection point for client / workload
   services which require access to Apache Hadoop. This connection is
   established via the Apache Bigtop gateway.
-tags: ["applications", "bigdata", "hadoop", "apache"]
+tags: []
 subordinate: true
 requires:
   namenode:
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/reactive/apache_bigtop_plugin.py b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/reactive/apache_bigtop_plugin.py
index e5b1275851..e680002ae7 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/reactive/apache_bigtop_plugin.py
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/reactive/apache_bigtop_plugin.py
@@ -42,6 +42,7 @@ def install_hadoop_client_hdfs(principal, namenode):
         bigtop.render_site_yaml(hosts=hosts, roles='hadoop-client')
         bigtop.trigger_puppet()
         set_state('apache-bigtop-plugin.hdfs.installed')
+        hookenv.application_version_set(get_hadoop_version())
         hookenv.status_set('maintenance', 'plugin (hdfs) installed')
     else:
         hookenv.status_set('waiting', 'waiting for namenode fqdn')
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/tests/01-basic-deployment.py b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/tests/01-basic-deployment.py
index 512630df9c..815f9fbfc5 100755
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/tests/01-basic-deployment.py
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/tests/01-basic-deployment.py
@@ -29,7 +29,7 @@ class TestDeploy(unittest.TestCase):
     """

     def test_deploy(self):
-        self.d = amulet.Deployment(series='trusty')
+        self.d = amulet.Deployment(series='xenial')
         self.d.load({
             'services': {
                 'client': {'charm': 'hadoop-client'},
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md
index 025088179c..9845fcb6e8 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md
@@ -14,7 +14,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->

-## Overview
+# Overview

 The Apache Hadoop software library is a framework that allows for the
 distributed processing of large data sets across clusters of computers
@@ -24,132 +24,159 @@ This charm deploys the ResourceManager component of the Apache Bigtop
 platform to provide YARN master resources.

-## Usage
+# Deploying
+
+A working Juju installation is assumed to be present. If Juju is not yet set
+up, please follow the
+[getting-started](https://jujucharms.com/docs/2.0/getting-started)
+instructions prior to deploying this charm.

 This charm is intended to be deployed via one of the
-[apache bigtop bundles](https://jujucharms.com/u/bigdata-dev/#bundles).
+[apache bigtop bundles](https://jujucharms.com/u/bigdata-charmers/#bundles).
 For example:

     juju deploy hadoop-processing

-> Note: With Juju versions < 2.0, you will need to use [juju-deployer][] to
-deploy the bundle.
-
-This will deploy the Apache Bigtop platform with a workload node
-preconfigured to work with the cluster.
-
-You can also manually load and run map-reduce jobs via the plugin charm
-included in the bundles linked above:
-
-    juju scp my-job.jar plugin/0:
-    juju ssh plugin/0
-    hadoop jar my-job.jar
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, use [juju-quickstart](https://launchpad.net/juju-quickstart) with the
+following syntax: `juju quickstart hadoop-processing`.

+This will deploy an Apache Bigtop cluster with this charm acting as the
+ResourceManager. More information about this deployment can be found in the
+[bundle readme](https://jujucharms.com/hadoop-processing/).
-[juju-deployer]: https://pypi.python.org/pypi/juju-deployer/
+# Verifying

-## Status and Smoke Test
-
+## Status
 Apache Bigtop charms provide extended status reporting to indicate when they
 are ready:

-    juju status --format=tabular
+    juju status

 This is particularly useful when combined with `watch` to track the on-going
 progress of the deployment:

-    watch -n 0.5 juju status --format=tabular
+    watch -n 0.5 juju status

-The message for each unit will provide information about that unit's state.
-Once they all indicate that they are ready, you can perform a "smoke test"
-to verify HDFS or YARN services are working as expected. Trigger the
-`smoke-test` action by:
+The message column will provide information about a given unit's state.
+This charm is ready for use once the status message indicates that it is
+ready with nodemanagers.

-    juju action do namenode/0 smoke-test
-    juju action do resourcemanager/0 smoke-test
+## Smoke Test
+This charm provides a `smoke-test` action that can be used to verify the
+application is functioning as expected. This action executes the 'yarn'
+smoke tests provided by Apache Bigtop and may take up to
+10 minutes to complete. Run the action as follows:

-After a few seconds or so, you can check the results of the smoke test:
+    juju run-action resourcemanager/0 smoke-test

-    juju action status
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action do resourcemanager/0 smoke-test`.

-You will see `status: completed` if the smoke test was successful, or
-`status: failed` if it was not. You can get more information on why it failed
-via:
+Watch the progress of the smoke test actions with:

-    juju action fetch <action-id>
+    watch -n 0.5 juju show-action-status

+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action status`.

-## Benchmarking
+Eventually, the action should settle to `status: completed`. If it
+reports `status: failed`, the application is not working as expected. Get
+more information about a specific smoke test with:

-This charm provides several benchmarks to gauge the performance of your
-environment.
+    juju show-action-output <action-id>

-The easiest way to run the benchmarks on this service is to relate it to the
-[Benchmark GUI][]. You will likely also want to relate it to the
-[Benchmark Collector][] to have machine-level information collected during the
-benchmark, for a more complete picture of how the machine performed.
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action fetch <action-id>`.

-[Benchmark GUI]: https://jujucharms.com/benchmark-gui/
-[Benchmark Collector]: https://jujucharms.com/benchmark-collector/
+## Utilities
+This charm includes Hadoop command line and web utilities that can be used
+to verify information about the cluster.

-However, each benchmark is also an action that can be called manually:
+Show the running nodes on the command line with the following:

-    $ juju action do resourcemanager/0 nnbench
-    Action queued with id: 55887b40-116c-4020-8b35-1e28a54cc622
-    $ juju action fetch --wait 0 55887b40-116c-4020-8b35-1e28a54cc622
+    juju run --application resourcemanager "su yarn -c 'yarn node -list'"

-    results:
-      meta:
-        composite:
-          direction: asc
-          units: secs
-          value: "128"
-        start: 2016-02-04T14:55:39Z
-        stop: 2016-02-04T14:57:47Z
-      results:
-        raw: '{"BAD_ID": "0", "FILE: Number of read operations": "0", "Reduce input groups":
-          "8", "Reduce input records": "95", "Map output bytes": "1823", "Map input records":
-          "12", "Combine input records": "0", "HDFS: Number of bytes read": "18635", "FILE:
-          Number of bytes written": "32999982", "HDFS: Number of write operations": "330",
-          "Combine output records": "0", "Total committed heap usage (bytes)": "3144749056",
-          "Bytes Written": "164", "WRONG_LENGTH": "0", "Failed Shuffles": "0", "FILE:
-          Number of bytes read": "27879457", "WRONG_MAP": "0", "Spilled Records": "190",
-          "Merged Map outputs": "72", "HDFS: Number of large read operations": "0", "Reduce
-          shuffle bytes": "2445", "FILE: Number of large read operations": "0", "Map output
-          materialized bytes": "2445", "IO_ERROR": "0", "CONNECTION": "0", "HDFS: Number
-          of read operations": "567", "Map output records": "95", "Reduce output records":
-          "8", "WRONG_REDUCE": "0", "HDFS: Number of bytes written": "27412", "GC time
-          elapsed (ms)": "603", "Input split bytes": "1610", "Shuffled Maps ": "72", "FILE:
-          Number of write operations": "0", "Bytes Read": "1490"}'
-    status: completed
-    timing:
-      completed: 2016-02-04 14:57:48 +0000 UTC
-      enqueued: 2016-02-04 14:55:14 +0000 UTC
-      started: 2016-02-04 14:55:27 +0000 UTC
+To access the Resource Manager web consoles, find the `PUBLIC-ADDRESS` of the
+resourcemanager application and expose it:

+    juju status resourcemanager
+    juju expose resourcemanager

-## Deploying in Network-Restricted Environments
+The YARN and Job History web interfaces will be available at the following URLs:
+
+    http://RESOURCEMANAGER_PUBLIC_IP:8088
+    http://RESOURCEMANAGER_PUBLIC_IP:19888

-Charms can be deployed in environments with limited network access. To deploy
-in this environment, you will need a local mirror to serve required packages.
+# Benchmarking

-### Mirroring Packages
+This charm provides several benchmarks to gauge the performance of the
+cluster. Each benchmark is an action that can be run with `juju run-action`:

-You can setup a local mirror for apt packages using squid-deb-proxy.
-For instructions on configuring juju to use this, see the
-[Juju Proxy Documentation](https://juju.ubuntu.com/docs/howto-proxies.html).
+    $ juju actions resourcemanager
+    ACTION      DESCRIPTION
+    mrbench     Mapreduce benchmark for small jobs
+    nnbench     Load test the NameNode hardware and configuration
+    smoke-test  Run an Apache Bigtop smoke test.
+ teragen Generate data with teragen + terasort Runs teragen to generate sample data, and then runs terasort to sort that data + testdfsio DFS IO Testing + + $ juju run-action resourcemanager/0 nnbench + Action queued with id: 55887b40-116c-4020-8b35-1e28a54cc622 + + $ juju show-action-output 55887b40-116c-4020-8b35-1e28a54cc622 + results: + meta: + composite: + direction: asc + units: secs + value: "128" + start: 2016-02-04T14:55:39Z + stop: 2016-02-04T14:57:47Z + results: + raw: '{"BAD_ID": "0", "FILE: Number of read operations": "0", "Reduce input groups": + "8", "Reduce input records": "95", "Map output bytes": "1823", "Map input records": + "12", "Combine input records": "0", "HDFS: Number of bytes read": "18635", "FILE: + Number of bytes written": "32999982", "HDFS: Number of write operations": "330", + "Combine output records": "0", "Total committed heap usage (bytes)": "3144749056", + "Bytes Written": "164", "WRONG_LENGTH": "0", "Failed Shuffles": "0", "FILE: + Number of bytes read": "27879457", "WRONG_MAP": "0", "Spilled Records": "190", + "Merged Map outputs": "72", "HDFS: Number of large read operations": "0", "Reduce + shuffle bytes": "2445", "FILE: Number of large read operations": "0", "Map output + materialized bytes": "2445", "IO_ERROR": "0", "CONNECTION": "0", "HDFS: Number + of read operations": "567", "Map output records": "95", "Reduce output records": + "8", "WRONG_REDUCE": "0", "HDFS: Number of bytes written": "27412", "GC time + elapsed (ms)": "603", "Input split bytes": "1610", "Shuffled Maps ": "72", "FILE: + Number of write operations": "0", "Bytes Read": "1490"}' + status: completed + timing: + completed: 2016-02-04 14:57:48 +0000 UTC + enqueued: 2016-02-04 14:55:14 +0000 UTC + started: 2016-02-04 14:55:27 +0000 UTC + + +# Network-Restricted Environments + +Charms can be deployed in environments with limited network access. To deploy +in this environment, configure a Juju model with appropriate +proxy and/or mirror options. 
See +[Configuring Models](https://jujucharms.com/docs/2.0/models-config) for more +information. -## Contact Information +# Contact Information - -## Hadoop +# Resources - [Apache Bigtop](http://bigtop.apache.org/) home page - [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html) - [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html) -- [Apache Bigtop charms](https://jujucharms.com/q/apache/bigtop) +- [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop) +- [Juju mailing list](https://lists.ubuntu.com/mailman/listinfo/juju) +- [Juju community](https://jujucharms.com/community) diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions.yaml index da4fc08e1b..77a644bfa5 100644 --- a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions.yaml +++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions.yaml @@ -1,6 +1,5 @@ smoke-test: - description: > - Verify that YARN is working as expected by running a small (1MB) terasort. + description: Run an Apache Bigtop smoke test. mrbench: description: Mapreduce benchmark for small jobs params: diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions/smoke-test b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions/smoke-test index 9ef33a9fb3..3280e791cc 100755 --- a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions/smoke-test +++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions/smoke-test @@ -1,4 +1,4 @@ -#!/bin/bash +#!/usr/bin/env python3 # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with @@ -15,66 +15,34 @@ # See the License for the specific language governing permissions and # limitations under the License. -set -ex +import sys +sys.path.append('lib') -if ! 
charms.reactive is_state 'apache-bigtop-resourcemanager.ready'; then - action-fail 'ResourceManager not yet ready' - exit -fi +from charmhelpers.core import hookenv +from charms.layer.apache_bigtop_base import Bigtop +from charms.reactive import is_state -IN_DIR='/tmp/smoke_test_in' -OUT_DIR='/tmp/smoke_test_out' -SIZE=10000 -OPTIONS='' -MAPS=1 -REDUCES=1 -NUMTASKS=1 -COMPRESSION='LocalDefault' +def fail(msg, output=None): + if output: + hookenv.action_set({'output': output}) + hookenv.action_fail(msg) + sys.exit() -OPTIONS="${OPTIONS} -D mapreduce.job.maps=${MAPS}" -OPTIONS="${OPTIONS} -D mapreduce.job.reduces=${REDUCES}" -OPTIONS="${OPTIONS} -D mapreduce.job.jvm.numtasks=${NUMTASKS}" -if [ $COMPRESSION == 'Disable' ] ; then - OPTIONS="${OPTIONS} -D mapreduce.map.output.compress=false" -elif [ $COMPRESSION == 'LocalDefault' ] ; then - OPTIONS="${OPTIONS}" -else - OPTIONS="${OPTIONS} -D mapreduce.map.output.compress=true -D mapred.map.output.compress.codec=org.apache.hadoop.io.compress.${COMPRESSION}Codec" -fi +if not is_state('apache-bigtop-resourcemanager.ready'): + fail('Charm is not yet ready to run the Bigtop smoke test(s)') -# create dir to store results -RUN=`date +%s` -RESULT_DIR=/opt/terasort-results -RESULT_LOG=${RESULT_DIR}/${RUN}.$$.log -mkdir -p ${RESULT_DIR} -chown -R hdfs ${RESULT_DIR} +# Bigtop smoke test components +smoke_components = ['yarn'] -# clean out any previous data (must be run as the hdfs user) -su hdfs << EOF -if hadoop fs -stat ${IN_DIR} &> /dev/null; then - hadoop fs -rm -r -skipTrash ${IN_DIR} || true -fi -if hadoop fs -stat ${OUT_DIR} &> /dev/null; then - hadoop fs -rm -r -skipTrash ${OUT_DIR} || true -fi -EOF +# Env required by test components +smoke_env = { + 'HADOOP_CONF_DIR': '/etc/hadoop/conf', +} -START=`date +%s` -# NB: Escaped vars in the block below (e.g., \${HADOOP_MAPRED_HOME}) come from -# the environment while non-escaped vars (e.g., ${IN_DIR}) are parameterized -# from this outer scope -su hdfs << EOF -. 
/etc/default/hadoop -echo 'generating data' -hadoop jar \${HADOOP_MAPRED_HOME}/hadoop-mapreduce-examples-*.jar teragen ${SIZE} ${IN_DIR} &>/dev/null -echo 'sorting data' -hadoop jar \${HADOOP_MAPRED_HOME}/hadoop-mapreduce-examples-*.jar terasort ${OPTIONS} ${IN_DIR} ${OUT_DIR} &> ${RESULT_LOG} -EOF -STOP=`date +%s` - -if ! grep -q 'Bytes Written=1000000' ${RESULT_LOG}; then - action-fail 'smoke-test failed' - action-set log="$(cat ${RESULT_LOG})" -fi -DURATION=`expr $STOP - $START` +bigtop = Bigtop() +result = bigtop.run_smoke_tests(smoke_components, smoke_env) +if result == 'success': + hookenv.action_set({'outcome': result}) +else: + fail('{} smoke tests failed'.format(smoke_components), result) diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/layer.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/layer.yaml index ad0b569582..c2e3420565 100644 --- a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/layer.yaml +++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/layer.yaml @@ -1,4 +1,4 @@ -repo: git@github.com:juju-solutions/layer-hadoop-resourcemanager.git +repo: https://github.com/apache/bigtop/tree/master/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager includes: - 'layer:apache-bigtop-base' - 'interface:dfs' diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/metadata.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/metadata.yaml index 82b82cd7c0..695d5bfab9 100644 --- a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/metadata.yaml +++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/metadata.yaml @@ -1,12 +1,12 @@ name: hadoop-resourcemanager -summary: YARN master (ResourceManager) for Apache Bigtop platform +summary: YARN master (ResourceManager) from Apache Bigtop maintainer: Juju Big Data description: > Hadoop is a software platform that lets one easily write and run applications that process vast 
amounts of data. - This charm manages the YARN master node (ResourceManager). -tags: ["applications", "bigdata", "bigtop", "hadoop", "apache"] + This charm provides the YARN master node (ResourceManager). +tags: [] provides: resourcemanager: interface: mapred diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/reactive/resourcemanager.py b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/reactive/resourcemanager.py index afca26bde5..3f3e9ae738 100644 --- a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/reactive/resourcemanager.py +++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/reactive/resourcemanager.py @@ -15,7 +15,9 @@ # limitations under the License. from charms.reactive import is_state, remove_state, set_state, when, when_not -from charms.layer.apache_bigtop_base import Bigtop, get_layer_opts, get_fqdn +from charms.layer.apache_bigtop_base import ( + Bigtop, get_hadoop_version, get_layer_opts, get_fqdn +) from charmhelpers.core import hookenv, host from jujubigdata import utils @@ -61,11 +63,32 @@ def install_resourcemanager(namenode): """ if namenode.namenodes(): hookenv.status_set('maintenance', 'installing resourcemanager') + # Hosts nn_host = namenode.namenodes()[0] rm_host = get_fqdn() + + # Ports + rm_ipc = get_layer_opts().port('resourcemanager') + rm_http = get_layer_opts().port('rm_webapp_http') + jh_ipc = get_layer_opts().port('jobhistory') + jh_http = get_layer_opts().port('jh_webapp_http') + bigtop = Bigtop() - hosts = {'namenode': nn_host, 'resourcemanager': rm_host} - bigtop.render_site_yaml(hosts=hosts, roles='resourcemanager') + bigtop.render_site_yaml( + hosts={ + 'namenode': nn_host, + 'resourcemanager': rm_host, + }, + roles=[ + 'resourcemanager', + ], + overrides={ + 'hadoop::common_yarn::hadoop_rm_port': rm_ipc, + 'hadoop::common_yarn::hadoop_rm_webapp_port': rm_http, + 'hadoop::common_mapred_app::mapreduce_jobhistory_port': jh_ipc, + 
'hadoop::common_mapred_app::mapreduce_jobhistory_webapp_port': jh_http, + } + ) bigtop.trigger_puppet() # /etc/hosts entries from the KV are not currently used for bigtop, @@ -104,7 +127,8 @@ def start_resourcemanager(namenode): for port in get_layer_opts().exposed_ports('resourcemanager'): hookenv.open_port(port) set_state('apache-bigtop-resourcemanager.started') - hookenv.status_set('active', 'ready') + hookenv.application_version_set(get_hadoop_version()) + hookenv.status_set('maintenance', 'resourcemanager started') ############################################################################### diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/tests/01-basic-deployment.py b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/tests/01-basic-deployment.py index 65dbbbb5d2..3b694548d8 100755 --- a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/tests/01-basic-deployment.py +++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/tests/01-basic-deployment.py @@ -28,7 +28,7 @@ class TestDeploy(unittest.TestCase): """ def test_deploy(self): - self.d = amulet.Deployment(series='trusty') + self.d = amulet.Deployment(series='xenial') self.d.add('resourcemanager', 'hadoop-resourcemanager') self.d.setup(timeout=900) self.d.sentry.wait(timeout=1800) diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md index 2580072276..2833184536 100644 --- a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md +++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md @@ -14,7 +14,7 @@ See the License for the specific language governing permissions and limitations under the License. 
--> -## Overview +# Overview The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers @@ -25,92 +25,105 @@ and DataNode components of the Apache Bigtop platform to provide YARN compute and HDFS storage resources. -## Usage +# Deploying + +A working Juju installation is assumed to be present. If Juju is not yet set +up, please follow the +[getting-started](https://jujucharms.com/docs/2.0/getting-started) +instructions prior to deploying this charm. This charm is intended to be deployed via one of the -[apache bigtop bundles](https://jujucharms.com/u/bigdata-dev/#bundles). +[apache bigtop bundles](https://jujucharms.com/u/bigdata-charmers/#bundles). For example: juju deploy hadoop-processing -> Note: With Juju versions < 2.0, you will need to use [juju-deployer][] to -deploy the bundle. - -This will deploy the Apache Bigtop platform with a workload node -preconfigured to work with the cluster. - -You can also manually load and run map-reduce jobs via the plugin charm -included in the bundles linked above: - - juju scp my-job.jar plugin/0: - juju ssh plugin/0 - hadoop jar my-job.jar - +> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version +of Juju, use [juju-quickstart](https://launchpad.net/juju-quickstart) with the +following syntax: `juju quickstart hadoop-processing`. -[juju-deployer]: https://pypi.python.org/pypi/juju-deployer/ +This will deploy an Apache Bigtop cluster with 3 units of this charm acting as +the combined DataNode/NodeManager application. More information about this +deployment can be found in the +[bundle readme](https://jujucharms.com/hadoop-processing/). 
-## Status and Smoke Test +# Verifying +## Status Apache Bigtop charms provide extended status reporting to indicate when they are ready: - juju status --format=tabular + juju status This is particularly useful when combined with `watch` to track the on-going progress of the deployment: - watch -n 0.5 juju status --format=tabular + watch -n 0.5 juju status -The message for each unit will provide information about that unit's state. -Once they all indicate that they are ready, you can perform a "smoke test" -to verify HDFS or YARN services are working as expected. Trigger the -`smoke-test` action by: +The message column will provide information about a given unit's state. +This charm is ready for use once the status message indicates that it is +ready as a datanode/nodemanager. - juju action do namenode/0 smoke-test - juju action do resourcemanager/0 smoke-test +## Smoke Test +This charm provides a `smoke-test` action that can be used to verify the +application is functioning as expected. This action executes the 'hdfs' +and 'mapreduce' smoke tests provided by Apache Bigtop and may take up to +30 minutes to complete. Run the action as follows: -After a few seconds or so, you can check the results of the smoke test: + juju run-action slave/0 smoke-test - juju action status +> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version +of Juju, the syntax is `juju action do slave/0 smoke-test`. -You will see `status: completed` if the smoke test was successful, or -`status: failed` if it was not. You can get more information on why it failed -via: +Watch the progress of the smoke test actions with: - juju action fetch + watch -n 0.5 juju show-action-status +> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version +of Juju, the syntax is `juju action status`. -## Scaling +Eventually, the action should settle to `status: completed`. If it +reports `status: failed`, the application is not working as expected. 
Get +more information about a specific smoke test with: -The slave node is the "workhorse" of the Hadoop environment. To scale your -cluster performance and storage capabilities, you can simply add more slave -units. For example, to add three more units: + juju show-action-output - juju add-unit slave -n 3 +> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version +of Juju, the syntax is `juju action fetch `. -## Deploying in Network-Restricted Environments +# Scaling -Charms can be deployed in environments with limited network access. To deploy -in this environment, you will need a local mirror to serve required packages. +To scale the cluster compute and storage capabilities, simply add more +slave units. To add one unit: + + juju add-unit slave + +Multiple units may be added at once. For example, add four more slave units: + juju add-unit -n4 slave -### Mirroring Packages -You can setup a local mirror for apt packages using squid-deb-proxy. -For instructions on configuring juju to use this, see the -[Juju Proxy Documentation](https://juju.ubuntu.com/docs/howto-proxies.html). +# Network-Restricted Environments + +Charms can be deployed in environments with limited network access. To deploy +in this environment, configure a Juju model with appropriate +proxy and/or mirror options. See +[Configuring Models](https://jujucharms.com/docs/2.0/models-config) for more +information. 
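The `smoke-test` action described above is implemented by the Python action script this patch adds to the slave charm. Its shape is the same across the Bigtop charms: check a reactive state, run the Bigtop test suites, and report back through Juju's action tools. The sketch below mirrors that control flow with the Juju and Bigtop calls stubbed out so it can be read (and run) standalone; the stubs are illustrative assumptions, while the real script uses `charmhelpers.core.hookenv`, `charms.reactive.is_state`, and `Bigtop.run_smoke_tests`.

```python
# Illustrative sketch of the control flow shared by the smoke-test action
# scripts in this patch. Juju/Bigtop calls are stubbed for readability;
# the real scripts use charmhelpers and charms.reactive instead.
action_results = {}
ACTIVE_STATES = {'apache-bigtop-datanode.started'}  # stubbed unit state


def action_set(data):
    # Stub for hookenv.action_set: record results for the action caller.
    action_results.update(data)


def action_fail(msg):
    # Stub for hookenv.action_fail: mark the action as failed.
    action_results['failed'] = msg


def is_state(state):
    # Stub for charms.reactive.is_state: check the unit's reactive state.
    return state in ACTIVE_STATES


def run_smoke_tests(components, env):
    # Stub for Bigtop.run_smoke_tests: pretend the Bigtop suites passed.
    return 'success'


def smoke_test():
    if not is_state('apache-bigtop-datanode.started'):
        action_fail('Charm is not yet ready to run the Bigtop smoke test(s)')
        return
    components = ['hdfs', 'mapreduce']
    env = {'HADOOP_CONF_DIR': '/etc/hadoop/conf',
           'HADOOP_MAPRED_HOME': '/usr/lib/hadoop-mapreduce'}
    result = run_smoke_tests(components, env)
    if result == 'success':
        action_set({'outcome': result})
    else:
        action_set({'output': result})
        action_fail('{} smoke tests failed'.format(components))


smoke_test()
print(action_results)  # with the stubs above: {'outcome': 'success'}
```

The real scripts exit via `sys.exit()` after `action_fail` rather than returning, but the decision points are the same as in this sketch.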
-## Contact Information +# Contact Information - -## Hadoop +# Resources - [Apache Bigtop](http://bigtop.apache.org/) home page - [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html) - [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html) -- [Apache Bigtop charms](https://jujucharms.com/q/apache/bigtop) +- [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop) +- [Juju mailing list](https://lists.ubuntu.com/mailman/listinfo/juju) +- [Juju community](https://jujucharms.com/community) diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/actions.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/actions.yaml new file mode 100644 index 0000000000..7fbb30227a --- /dev/null +++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/actions.yaml @@ -0,0 +1,3 @@ +smoke-test: + description: | + Run an Apache Bigtop smoke test. Requires 3 slave units. diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/actions/smoke-test b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/actions/smoke-test new file mode 100755 index 0000000000..6dec4b5ac1 --- /dev/null +++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/actions/smoke-test @@ -0,0 +1,49 @@ +#!/usr/bin/env python3 + +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +import sys +sys.path.append('lib') + +from charmhelpers.core import hookenv +from charms.layer.apache_bigtop_base import Bigtop +from charms.reactive import is_state + + +def fail(msg, output=None): + if output: + hookenv.action_set({'output': output}) + hookenv.action_fail(msg) + sys.exit() + +if not is_state('apache-bigtop-datanode.started'): + fail('Charm is not yet ready to run the Bigtop smoke test(s)') + +# Bigtop smoke test components +smoke_components = ['hdfs', 'mapreduce'] + +# Env required by test components +smoke_env = { + 'HADOOP_CONF_DIR': '/etc/hadoop/conf', + 'HADOOP_MAPRED_HOME': '/usr/lib/hadoop-mapreduce', +} + +bigtop = Bigtop() +result = bigtop.run_smoke_tests(smoke_components, smoke_env) +if result == 'success': + hookenv.action_set({'outcome': result}) +else: + fail('{} smoke tests failed'.format(smoke_components), result) diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/layer.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/layer.yaml index 73c66e62db..e10b9daaf8 100644 --- a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/layer.yaml +++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/layer.yaml @@ -1,2 +1,4 @@ -repo: git@github.com:juju-solutions/layer-hadoop-slave.git -includes: ['layer:hadoop-datanode', 'layer:hadoop-nodemanager'] +repo: https://github.com/apache/bigtop/tree/master/bigtop-packages/src/charm/hadoop/layer-hadoop-slave +includes: + - 'layer:hadoop-datanode' + - 'layer:hadoop-nodemanager' diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/metadata.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/metadata.yaml index f0b6cce723..e5bbc3cf5f 100644 --- a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/metadata.yaml +++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/metadata.yaml @@ -1,8 +1,8 @@ name: hadoop-slave -summary: Combined slave node (DataNode 
+ NodeManager) for Apache Bigtop. +summary: Combined slave node (DataNode + NodeManager) from Apache Bigtop. description: > Hadoop is a software platform that lets one easily write and run applications that process vast amounts of data. - This charm manages both the storage node (DataNode) for HDFS and the + This charm provides both the storage node (DataNode) for HDFS and the compute node (NodeManager) for Yarn. diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py index 1e6d38f296..8690d625b8 100644 --- a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py +++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py @@ -15,11 +15,10 @@ # See the License for the specific language governing permissions and # limitations under the License. -from charms.reactive import when_any, when_none, is_state +from charms.reactive import when_any, is_state from charmhelpers.core.hookenv import status_set -@when_none('namenode.spec.mismatch', 'resourcemanager.spec.mismatch') @when_any( 'bigtop.available', 'apache-bigtop-datanode.pending', diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/tests/01-basic-deployment.py b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/tests/01-basic-deployment.py index e479078779..5899c0fc4d 100755 --- a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/tests/01-basic-deployment.py +++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/tests/01-basic-deployment.py @@ -28,7 +28,7 @@ class TestDeploy(unittest.TestCase): """ def test_deploy(self): - self.d = amulet.Deployment(series='trusty') + self.d = amulet.Deployment(series='xenial') self.d.add('slave', 'hadoop-slave') self.d.setup(timeout=900) self.d.sentry.wait(timeout=1800) From 1a37df5d0e2835d578bb9246d8fc8245bb7d97be Mon Sep 17 00:00:00 2001 From: Kevin W Monroe Date: Mon, 10 Oct 2016 17:54:39 +0000 
Subject: [PATCH 2/4] update top-level charm source readme with 2.0/stable docs --- bigtop-packages/src/charm/README.md | 51 ++++++++++++++++------------- 1 file changed, 29 insertions(+), 22 deletions(-) diff --git a/bigtop-packages/src/charm/README.md b/bigtop-packages/src/charm/README.md index 1290d4b274..04213ad8ba 100644 --- a/bigtop-packages/src/charm/README.md +++ b/bigtop-packages/src/charm/README.md @@ -18,37 +18,41 @@ ## Overview -These are the charm layers used to build Juju charms for deploying Bigtop -components. The charms are also published to the [Juju charm store][] and -can be deployed directly from there using [bundles][], or they can be -built from these layers and deployed locally. +Juju Charms allow you to deploy, configure, and connect an Apache Bigtop cluster +on any supported cloud, which can be scaled to meet workload demands. You can +also easily connect other, non-Bigtop components from the [Juju charm store][] +that support common interfaces. -Charms allow you to deploy, configure, and connect a Apache Bigtop cluster -on any supported cloud, which can be easily scaled to meet workload demands. -You can also easily connect other, non-Bigtop components from the -[Juju charm store][] that support common interfaces. +This source tree contains the charm layers used to build charms for deploying +Bigtop components. Built charms are published to the [Juju charm store][] +and can be deployed directly from there, either individually or with +[bundles][]. They can also be built from these layers and deployed locally. +For the remainder of this guide, a working Juju installation is assumed to be +present. If Juju is not yet set up, please follow the [getting-started][] +instructions prior to deploying locally built charms and bundles. 
[Juju charm store]: https://jujucharms.com/
-[bundles]: https://jujucharms.com/u/bigdata-dev/hadoop-processing
+[bundles]: https://jujucharms.com/hadoop-processing
+[getting-started]: https://jujucharms.com/docs/stable/getting-started


## Building the Bigtop Charms

-To build these charms, you will need [charm-tools][]. You should also read
-over the developer [Getting Started][] page for an overview of charms and
-building them. Then, in any of the charm layer directories, use `charm build`.
+To build these charms, you will need [charm-tools][]. You should also read
+over the developer [Getting Started][] page for an overview of developing and
+building charms. Then, in any of the charm layer directories, use `charm build`.
For example:

    export JUJU_REPOSITORY=$HOME/charms
-    mkdir $HOME/charms
+    mkdir $JUJU_REPOSITORY
    cd bigtop-packages/src/charm/hadoop/layer-hadoop-namenode
    charm build

This will build the NameNode charm, pulling in the appropriate base and
interface layers from [interfaces.juju.solutions][]. You can get local copies
-of those layers as well using `charm pull-source`:
+of those layers as well by using `charm pull-source`:

    export LAYER_PATH=$HOME/layers
    export INTERFACE_PATH=$HOME/interfaces
@@ -57,19 +61,22 @@ of those layers as well using `charm pull-source`:
    charm pull-source layer:apache-bigtop-base
    charm pull-source interface:dfs

-You can then deploy the locally built charms individually:
+You can deploy the locally built charms individually, for example:

-    juju deploy local:trusty/hadoop-namenode
+    juju deploy $JUJU_REPOSITORY/xenial/hadoop-namenode

-You can also use the local version of a bundle:
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is: `juju deploy local:xenial/hadoop-namenode`.
- juju deploy bigtop-deploy/juju/hadoop-processing/bundle-local.yaml +You can also deploy the local version of a bundle: -> Note: With Juju versions < 2.0, you will need to use [juju-deployer][] to -deploy the local bundle. + juju deploy ./bigtop-deploy/juju/hadoop-processing/bundle-local.yaml +> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version +of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart +./bigtop-deploy/juju/hadoop-processing/bundle-local.yaml`. [charm-tools]: https://jujucharms.com/docs/stable/tools-charm-tools -[Getting Started]: https://jujucharms.com/docs/devel/developer-getting-started +[Getting Started]: https://jujucharms.com/docs/stable/developer-getting-started [interfaces.juju.solutions]: http://interfaces.juju.solutions/ -[juju-deployer]: https://pypi.python.org/pypi/juju-deployer/ +[juju-quickstart]: https://launchpad.net/juju-quickstart From b9edf8d94b57a9254b12d7522e82bc4a72ed0273 Mon Sep 17 00:00:00 2001 From: Kevin W Monroe Date: Mon, 10 Oct 2016 18:04:56 +0000 Subject: [PATCH 3/4] tweak readme for charm store formatting --- .../hadoop/layer-hadoop-namenode/README.md | 32 +++++++++-------- .../hadoop/layer-hadoop-plugin/README.md | 32 +++++++++-------- .../layer-hadoop-resourcemanager/README.md | 34 ++++++++++-------- .../charm/hadoop/layer-hadoop-slave/README.md | 35 ++++++++++--------- 4 files changed, 74 insertions(+), 59 deletions(-) diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md index 710fd629de..87809cac05 100644 --- a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md +++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md @@ -20,31 +20,35 @@ The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using a simple programming model. 
-This charm deploys the NameNode component of the Apache Bigtop platform +This charm deploys the NameNode component of the [Apache Bigtop][] platform to provide HDFS master resources. +[Apache Bigtop]: http://bigtop.apache.org/ + # Deploying A working Juju installation is assumed to be present. If Juju is not yet set -up, please follow the -[getting-started](https://jujucharms.com/docs/2.0/getting-started) -instructions prior to deploying this charm. +up, please follow the [getting-started][] instructions prior to deploying this +charm. -This charm is intended to be deployed via one of the -[apache bigtop bundles](https://jujucharms.com/u/bigdata-charmers/#bundles). +This charm is intended to be deployed via one of the [apache bigtop bundles][]. For example: juju deploy hadoop-processing > **Note**: The above assumes Juju 2.0 or greater. If using an earlier version -of Juju, use [juju-quickstart](https://launchpad.net/juju-quickstart) with the -following syntax: `juju quickstart hadoop-processing`. +of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart +hadoop-processing`. This will deploy an Apache Bigtop cluster with this charm acting as the NameNode. More information about this deployment can be found in the [bundle readme](https://jujucharms.com/hadoop-processing/). +[getting-started]: https://jujucharms.com/docs/stable/getting-started +[apache bigtop bundles]: https://jujucharms.com/u/bigdata-charmers/#bundles +[juju-quickstart]: https://launchpad.net/juju-quickstart + # Verifying @@ -57,7 +61,7 @@ are ready: This is particularly useful when combined with `watch` to track the on-going progress of the deployment: - watch -n 0.5 juju status + watch -n 2 juju status The message column will provide information about a given unit's state. This charm is ready for use once the status message indicates that it is @@ -74,7 +78,7 @@ of Juju, the syntax is `juju action do namenode/0 smoke-test`. 
Watch the progress of the smoke test actions with: - watch -n 0.5 juju show-action-status + watch -n 2 juju show-action-status > **Note**: The above assumes Juju 2.0 or greater. If using an earlier version of Juju, the syntax is `juju action status`. @@ -110,10 +114,10 @@ The web interface will be available at the following URL: # Network-Restricted Environments Charms can be deployed in environments with limited network access. To deploy -in this environment, configure a Juju model with appropriate -proxy and/or mirror options. See -[Configuring Models](https://jujucharms.com/docs/2.0/models-config) for more -information. +in this environment, configure a Juju model with appropriate proxy and/or +mirror options. See [Configuring Models][] for more information. + +[Configuring Models]: https://jujucharms.com/docs/stable/models-config # Contact Information diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md index eb37c7cd08..6ecd894531 100644 --- a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md +++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md @@ -21,30 +21,34 @@ distributed processing of large data sets across clusters of computers using a simple programming model. This charm facilitates communication between Hadoop components of an -Apache Bigtop cluster and workload applications. +[Apache Bigtop][] cluster and workload applications. + +[Apache Bigtop]: http://bigtop.apache.org/ # Deploying A working Juju installation is assumed to be present. If Juju is not yet set -up, please follow the -[getting-started](https://jujucharms.com/docs/2.0/getting-started) -instructions prior to deploying this charm. +up, please follow the [getting-started][] instructions prior to deploying this +charm. -This charm is intended to be deployed via one of the -[apache bigtop bundles](https://jujucharms.com/u/bigdata-charmers/#bundles). 
+This charm is intended to be deployed via one of the [apache bigtop bundles][]. For example: juju deploy hadoop-processing > **Note**: The above assumes Juju 2.0 or greater. If using an earlier version -of Juju, use [juju-quickstart](https://launchpad.net/juju-quickstart) with the -following syntax: `juju quickstart hadoop-processing`. +of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart +hadoop-processing`. This will deploy an Apache Bigtop cluster with a client unit preconfigured to work with the cluster. More information about this deployment can be found in the [bundle readme](https://jujucharms.com/hadoop-processing/). +[getting-started]: https://jujucharms.com/docs/stable/getting-started +[apache bigtop bundles]: https://jujucharms.com/u/bigdata-charmers/#bundles +[juju-quickstart]: https://launchpad.net/juju-quickstart + # Verifying @@ -57,7 +61,7 @@ are ready: This is particularly useful when combined with `watch` to track the on-going progress of the deployment: - watch -n 0.5 juju status + watch -n 2 juju status The message column will provide information about a given unit's state. This charm is ready for use once the status message indicates that it is @@ -74,7 +78,7 @@ of Juju, the syntax is `juju action do plugin/0 smoke-test`. Watch the progress of the smoke test actions with: - watch -n 0.5 juju show-action-status + watch -n 2 juju show-action-status > **Note**: The above assumes Juju 2.0 or greater. If using an earlier version of Juju, the syntax is `juju action status`. @@ -100,10 +104,10 @@ Show the dfsadmin report on the command line with the following: # Network-Restricted Environments Charms can be deployed in environments with limited network access. To deploy -in this environment, configure a Juju model with appropriate -proxy and/or mirror options. See -[Configuring Models](https://jujucharms.com/docs/2.0/models-config) for more -information. 
+in this environment, configure a Juju model with appropriate proxy and/or +mirror options. See [Configuring Models][] for more information. + +[Configuring Models]: https://jujucharms.com/docs/stable/models-config # Contact Information diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md index 9845fcb6e8..b00cbf0ba0 100644 --- a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md +++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md @@ -20,31 +20,35 @@ The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using a simple programming model. -This charm deploys the ResourceManager component of the Apache Bigtop platform -to provide YARN master resources. +This charm deploys the ResourceManager component of the [Apache Bigtop][] +platform to provide YARN master resources. + +[Apache Bigtop]: http://bigtop.apache.org/ # Deploying A working Juju installation is assumed to be present. If Juju is not yet set -up, please follow the -[getting-started](https://jujucharms.com/docs/2.0/getting-started) -instructions prior to deploying this charm. +up, please follow the [getting-started][] instructions prior to deploying this +charm. -This charm is intended to be deployed via one of the -[apache bigtop bundles](https://jujucharms.com/u/bigdata-charmers/#bundles). +This charm is intended to be deployed via one of the [apache bigtop bundles][]. For example: juju deploy hadoop-processing > **Note**: The above assumes Juju 2.0 or greater. If using an earlier version -of Juju, use [juju-quickstart](https://launchpad.net/juju-quickstart) with the -following syntax: `juju quickstart hadoop-processing`. +of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart +hadoop-processing`. 
This will deploy an Apache Bigtop cluster with this charm acting as the ResourceManager. More information about this deployment can be found in the [bundle readme](https://jujucharms.com/hadoop-processing/). +[getting-started]: https://jujucharms.com/docs/stable/getting-started +[apache bigtop bundles]: https://jujucharms.com/u/bigdata-charmers/#bundles +[juju-quickstart]: https://launchpad.net/juju-quickstart + # Verifying @@ -57,7 +61,7 @@ are ready: This is particularly useful when combined with `watch` to track the on-going progress of the deployment: - watch -n 0.5 juju status + watch -n 2 juju status The message column will provide information about a given unit's state. This charm is ready for use once the status message indicates that it is @@ -76,7 +80,7 @@ of Juju, the syntax is `juju action do resourcemanager/0 smoke-test`. Watch the progress of the smoke test actions with: - watch -n 0.5 juju show-action-status + watch -n 2 juju show-action-status > **Note**: The above assumes Juju 2.0 or greater. If using an earlier version of Juju, the syntax is `juju action status`. @@ -161,10 +165,10 @@ cluster. Each benchmark is an action that can be run with `juju run-action`: # Network-Restricted Environments Charms can be deployed in environments with limited network access. To deploy -in this environment, configure a Juju model with appropriate -proxy and/or mirror options. See -[Configuring Models](https://jujucharms.com/docs/2.0/models-config) for more -information. +in this environment, configure a Juju model with appropriate proxy and/or +mirror options. See [Configuring Models][] for more information. 
+ +[Configuring Models]: https://jujucharms.com/docs/stable/models-config # Contact Information diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md index 2833184536..f002d775b1 100644 --- a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md +++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md @@ -21,31 +21,34 @@ distributed processing of large data sets across clusters of computers using a simple programming model. This charm deploys a combined slave node running the NodeManager -and DataNode components of the Apache Bigtop platform +and DataNode components of the [Apache Bigtop][] platform to provide YARN compute and HDFS storage resources. +[Apache Bigtop]: http://bigtop.apache.org/ + # Deploying A working Juju installation is assumed to be present. If Juju is not yet set -up, please follow the -[getting-started](https://jujucharms.com/docs/2.0/getting-started) -instructions prior to deploying this charm. +up, please follow the [getting-started][] instructions prior to deploying this +charm. -This charm is intended to be deployed via one of the -[apache bigtop bundles](https://jujucharms.com/u/bigdata-charmers/#bundles). +This charm is intended to be deployed via one of the [apache bigtop bundles][]. For example: juju deploy hadoop-processing > **Note**: The above assumes Juju 2.0 or greater. If using an earlier version -of Juju, use [juju-quickstart](https://launchpad.net/juju-quickstart) with the -following syntax: `juju quickstart hadoop-processing`. +of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart +hadoop-processing`. This will deploy an Apache Bigtop cluster with 3 units of this charm acting as the combined DataNode/NodeManager application. More information about this -deployment can be found in the -[bundle readme](https://jujucharms.com/hadoop-processing/). 
+deployment can be found in the [bundle readme](https://jujucharms.com/hadoop-processing/). + +[getting-started]: https://jujucharms.com/docs/stable/getting-started +[apache bigtop bundles]: https://jujucharms.com/u/bigdata-charmers/#bundles +[juju-quickstart]: https://launchpad.net/juju-quickstart # Verifying @@ -59,7 +62,7 @@ are ready: This is particularly useful when combined with `watch` to track the on-going progress of the deployment: - watch -n 0.5 juju status + watch -n 2 juju status The message column will provide information about a given unit's state. This charm is ready for use once the status message indicates that it is @@ -78,7 +81,7 @@ of Juju, the syntax is `juju action do slave/0 smoke-test`. Watch the progress of the smoke test actions with: - watch -n 0.5 juju show-action-status + watch -n 2 juju show-action-status > **Note**: The above assumes Juju 2.0 or greater. If using an earlier version of Juju, the syntax is `juju action status`. @@ -108,10 +111,10 @@ Multiple units may be added at once. For example, add four more slave units: # Network-Restricted Environments Charms can be deployed in environments with limited network access. To deploy -in this environment, configure a Juju model with appropriate -proxy and/or mirror options. See -[Configuring Models](https://jujucharms.com/docs/2.0/models-config) for more -information. +in this environment, configure a Juju model with appropriate proxy and/or +mirror options. See [Configuring Models][] for more information. 
+ +[Configuring Models]: https://jujucharms.com/docs/stable/models-config # Contact Information From cc8a7aa0e47669d2c27518a6242fdd510bc80f71 Mon Sep 17 00:00:00 2001 From: Kevin W Monroe Date: Mon, 10 Oct 2016 21:09:42 +0000 Subject: [PATCH 4/4] one more readme tweak to move restricted env to the deployment section --- .../charm/hadoop/layer-hadoop-namenode/README.md | 15 ++++++--------- .../charm/hadoop/layer-hadoop-plugin/README.md | 15 ++++++--------- .../hadoop/layer-hadoop-resourcemanager/README.md | 15 ++++++--------- .../src/charm/hadoop/layer-hadoop-slave/README.md | 15 ++++++--------- 4 files changed, 24 insertions(+), 36 deletions(-) diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md index 87809cac05..621a1e8de3 100644 --- a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md +++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md @@ -45,9 +45,15 @@ This will deploy an Apache Bigtop cluster with this charm acting as the NameNode. More information about this deployment can be found in the [bundle readme](https://jujucharms.com/hadoop-processing/). +## Network-Restricted Environments +Charms can be deployed in environments with limited network access. To deploy +in this environment, configure a Juju model with appropriate proxy and/or +mirror options. See [Configuring Models][] for more information. + [getting-started]: https://jujucharms.com/docs/stable/getting-started [apache bigtop bundles]: https://jujucharms.com/u/bigdata-charmers/#bundles [juju-quickstart]: https://launchpad.net/juju-quickstart +[Configuring Models]: https://jujucharms.com/docs/stable/models-config # Verifying @@ -111,15 +117,6 @@ The web interface will be available at the following URL: http://NAMENODE_PUBLIC_IP:50070 -# Network-Restricted Environments - -Charms can be deployed in environments with limited network access. 
To deploy -in this environment, configure a Juju model with appropriate proxy and/or -mirror options. See [Configuring Models][] for more information. - -[Configuring Models]: https://jujucharms.com/docs/stable/models-config - - # Contact Information - diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md index 6ecd894531..405c08ac5f 100644 --- a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md +++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md @@ -45,9 +45,15 @@ This will deploy an Apache Bigtop cluster with a client unit preconfigured to work with the cluster. More information about this deployment can be found in the [bundle readme](https://jujucharms.com/hadoop-processing/). +## Network-Restricted Environments +Charms can be deployed in environments with limited network access. To deploy +in this environment, configure a Juju model with appropriate proxy and/or +mirror options. See [Configuring Models][] for more information. + [getting-started]: https://jujucharms.com/docs/stable/getting-started [apache bigtop bundles]: https://jujucharms.com/u/bigdata-charmers/#bundles [juju-quickstart]: https://launchpad.net/juju-quickstart +[Configuring Models]: https://jujucharms.com/docs/stable/models-config # Verifying @@ -101,15 +107,6 @@ Show the dfsadmin report on the command line with the following: juju run --application plugin "su hdfs -c 'hdfs dfsadmin -report'" -# Network-Restricted Environments - -Charms can be deployed in environments with limited network access. To deploy -in this environment, configure a Juju model with appropriate proxy and/or -mirror options. See [Configuring Models][] for more information. 
- -[Configuring Models]: https://jujucharms.com/docs/stable/models-config - - # Contact Information - diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md index b00cbf0ba0..430cc97449 100644 --- a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md +++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md @@ -45,9 +45,15 @@ This will deploy an Apache Bigtop cluster with this charm acting as the ResourceManager. More information about this deployment can be found in the [bundle readme](https://jujucharms.com/hadoop-processing/). +## Network-Restricted Environments +Charms can be deployed in environments with limited network access. To deploy +in this environment, configure a Juju model with appropriate proxy and/or +mirror options. See [Configuring Models][] for more information. + [getting-started]: https://jujucharms.com/docs/stable/getting-started [apache bigtop bundles]: https://jujucharms.com/u/bigdata-charmers/#bundles [juju-quickstart]: https://launchpad.net/juju-quickstart +[Configuring Models]: https://jujucharms.com/docs/stable/models-config # Verifying @@ -162,15 +168,6 @@ cluster. Each benchmark is an action that can be run with `juju run-action`: started: 2016-02-04 14:55:27 +0000 UTC -# Network-Restricted Environments - -Charms can be deployed in environments with limited network access. To deploy -in this environment, configure a Juju model with appropriate proxy and/or -mirror options. See [Configuring Models][] for more information. 
- -[Configuring Models]: https://jujucharms.com/docs/stable/models-config - - # Contact Information - diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md index f002d775b1..4bf240dd3e 100644 --- a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md +++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md @@ -46,9 +46,15 @@ This will deploy an Apache Bigtop cluster with 3 units of this charm acting as the combined DataNode/NodeManager application. More information about this deployment can be found in the [bundle readme](https://jujucharms.com/hadoop-processing/). +## Network-Restricted Environments +Charms can be deployed in environments with limited network access. To deploy +in this environment, configure a Juju model with appropriate proxy and/or +mirror options. See [Configuring Models][] for more information. + [getting-started]: https://jujucharms.com/docs/stable/getting-started [apache bigtop bundles]: https://jujucharms.com/u/bigdata-charmers/#bundles [juju-quickstart]: https://launchpad.net/juju-quickstart +[Configuring Models]: https://jujucharms.com/docs/stable/models-config # Verifying @@ -108,15 +114,6 @@ Multiple units may be added at once. For example, add four more slave units: juju add-unit -n4 slave -# Network-Restricted Environments - -Charms can be deployed in environments with limited network access. To deploy -in this environment, configure a Juju model with appropriate proxy and/or -mirror options. See [Configuring Models][] for more information. - -[Configuring Models]: https://jujucharms.com/docs/stable/models-config - - # Contact Information -