BIGTOP-2874: juju bundle refresh (august 2017)
Closes #281
kwmonroe committed Sep 5, 2017
1 parent d21a37a commit 999e734
Showing 22 changed files with 106 additions and 663 deletions.
12 changes: 6 additions & 6 deletions bigtop-deploy/juju/hadoop-hbase/README.md
@@ -158,7 +158,7 @@ Show the list of Zookeeper nodes with the following:

    juju run --unit zookeeper/0 'echo "ls /" | /usr/lib/zookeeper/bin/zkCli.sh'
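
The same check can be fanned out to every unit at once — a sketch, assuming
the `--application` flag available with `juju run` in juju 2.x:

    juju run --application zookeeper 'echo "ls /" | /usr/lib/zookeeper/bin/zkCli.sh'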

-To access the HDFS web console, find the `PUBLIC-ADDRESS` of the namenode
+To access the HDFS web console, find the `Public address` of the namenode
application and expose it:

    juju status namenode
@@ -169,7 +169,7 @@ The web interface will be available at the following URL:
    http://NAMENODE_PUBLIC_IP:50070
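
For scripting, the same address can be read from juju's machine-readable
status — a minimal sketch, assuming juju 2.x JSON output and `jq` on the
client (the unit name `namenode/0` is illustrative):

    juju status namenode --format=json \
      | jq -r '.applications.namenode.units["namenode/0"]["public-address"]'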

Similarly, to access the Resource Manager web consoles, find the
-`PUBLIC-ADDRESS` of the resourcemanager application and expose it:
+`Public address` of the resourcemanager application and expose it:

    juju status resourcemanager
    juju expose resourcemanager
@@ -179,7 +179,7 @@ The YARN and Job History web interfaces will be available at the following URLs:
    http://RESOURCEMANAGER_PUBLIC_IP:8088
    http://RESOURCEMANAGER_PUBLIC_IP:19888

-Finally, to access the HBase web console, find the `PUBLIC-ADDRESS` of any
+Finally, to access the HBase web console, find the `Public address` of any
hbase unit and expose the application:

    juju status hbase
@@ -193,9 +193,9 @@ The web interface will be available at the following URL:
# Monitoring

This bundle includes Ganglia for system-level monitoring of the namenode,
-resourcemanager, slave, hbase, and zookeeper units. Metrics are sent to a
+resourcemanager, slave, and zookeeper units. Metrics are sent to a
centralized ganglia unit for easy viewing in a browser. To view the ganglia web
-interface, find the `PUBLIC-ADDRESS` of the Ganglia application and expose it:
+interface, find the `Public address` of the Ganglia application and expose it:

    juju status ganglia
    juju expose ganglia
@@ -208,7 +208,7 @@ The web interface will be available at:
# Logging

This bundle includes rsyslog to collect syslog data from the namenode,
-resourcemanager, slave, hbase, and zookeeper units. These logs are sent to a
+resourcemanager, slave, and zookeeper units. These logs are sent to a
centralized rsyslog unit for easy syslog analysis. One method of viewing this
log data is to simply cat syslog from the rsyslog unit:
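
A sketch of what that might look like, assuming the aggregator writes to the
standard Ubuntu location `/var/log/syslog` (the unit name `rsyslog/0` is
illustrative):

    juju run --unit rsyslog/0 'sudo cat /var/log/syslog'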

139 changes: 0 additions & 139 deletions bigtop-deploy/juju/hadoop-hbase/bundle-dev.yaml

This file was deleted.

11 changes: 7 additions & 4 deletions bigtop-deploy/juju/hadoop-hbase/bundle-local.yaml
@@ -1,3 +1,9 @@
+series: xenial
+description: >
+  This is a fourteen unit big data cluster that includes Hadoop 2.7 and
+  HBase 1.1 from Apache Bigtop. Use it when you need a distributed big data
+  store with MapReduce processing capabilities. It will run on 8 machines in
+  your cloud.
services:
  namenode:
    charm: "/home/ubuntu/charms/xenial/hadoop-namenode"
@@ -47,7 +53,7 @@ services:
    constraints: "mem=7G root-disk=32G"
    num_units: 3
    annotations:
-      gui-x: "0"
+      gui-x: "1000"
      gui-y: "0"
    to:
      - "1"
@@ -90,7 +96,6 @@ services:
    annotations:
      gui-x: "750"
      gui-y: "400"
-series: xenial
relations:
  - [resourcemanager, namenode]
  - [namenode, slave]
@@ -103,13 +108,11 @@ relations:
- ["ganglia-node:juju-info", "namenode:juju-info"]
- ["ganglia-node:juju-info", "resourcemanager:juju-info"]
- ["ganglia-node:juju-info", "slave:juju-info"]
- ["ganglia-node:juju-info", "hbase:juju-info"]
- ["ganglia-node:juju-info", "zookeeper:juju-info"]
- ["ganglia:node", "ganglia-node:node"]
- ["rsyslog-forwarder-ha:juju-info", "namenode:juju-info"]
- ["rsyslog-forwarder-ha:juju-info", "resourcemanager:juju-info"]
- ["rsyslog-forwarder-ha:juju-info", "slave:juju-info"]
- ["rsyslog-forwarder-ha:juju-info", "hbase:juju-info"]
- ["rsyslog-forwarder-ha:juju-info", "zookeeper:juju-info"]
- ["rsyslog:aggregator", "rsyslog-forwarder-ha:syslog"]
machines:
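A deployment sketch for this local variant, assuming the charm paths above
exist on the machine running the juju client:

    juju deploy ./bundle-local.yaml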
23 changes: 13 additions & 10 deletions bigtop-deploy/juju/hadoop-hbase/bundle.yaml
@@ -1,6 +1,12 @@
+series: xenial
+description: >
+  This is a fourteen unit big data cluster that includes Hadoop 2.7 and
+  HBase 1.1 from Apache Bigtop. Use it when you need a distributed big data
+  store with MapReduce processing capabilities. It will run on 8 machines in
+  your cloud.
services:
  namenode:
-    charm: "cs:xenial/hadoop-namenode-24"
+    charm: "cs:xenial/hadoop-namenode-31"
    constraints: "mem=7G root-disk=32G"
    num_units: 1
    annotations:
@@ -9,7 +15,7 @@ services:
    to:
      - "0"
  resourcemanager:
-    charm: "cs:xenial/hadoop-resourcemanager-26"
+    charm: "cs:xenial/hadoop-resourcemanager-33"
    constraints: "mem=7G root-disk=32G"
    num_units: 1
    annotations:
@@ -18,7 +24,7 @@ services:
    to:
      - "0"
  slave:
-    charm: "cs:xenial/hadoop-slave-25"
+    charm: "cs:xenial/hadoop-slave-32"
    constraints: "mem=7G root-disk=32G"
    num_units: 3
    annotations:
@@ -29,7 +35,7 @@ services:
- "2"
- "3"
plugin:
charm: "cs:xenial/hadoop-plugin-24"
charm: "cs:xenial/hadoop-plugin-31"
annotations:
gui-x: "1000"
gui-y: "400"
@@ -43,18 +49,18 @@ services:
    to:
      - "4"
  hbase:
-    charm: "cs:xenial/hbase-25"
+    charm: "cs:xenial/hbase-32"
    constraints: "mem=7G root-disk=32G"
    num_units: 3
    annotations:
-      gui-x: "0"
+      gui-x: "1000"
      gui-y: "0"
    to:
      - "1"
      - "2"
      - "3"
  zookeeper:
-    charm: "cs:xenial/zookeeper-30"
+    charm: "cs:xenial/zookeeper-37"
    constraints: "mem=3G root-disk=32G"
    num_units: 3
    annotations:
@@ -90,7 +96,6 @@ services:
    annotations:
      gui-x: "750"
      gui-y: "400"
-series: xenial
relations:
  - [resourcemanager, namenode]
  - [namenode, slave]
@@ -103,13 +108,11 @@ relations:
- ["ganglia-node:juju-info", "namenode:juju-info"]
- ["ganglia-node:juju-info", "resourcemanager:juju-info"]
- ["ganglia-node:juju-info", "slave:juju-info"]
- ["ganglia-node:juju-info", "hbase:juju-info"]
- ["ganglia-node:juju-info", "zookeeper:juju-info"]
- ["ganglia:node", "ganglia-node:node"]
- ["rsyslog-forwarder-ha:juju-info", "namenode:juju-info"]
- ["rsyslog-forwarder-ha:juju-info", "resourcemanager:juju-info"]
- ["rsyslog-forwarder-ha:juju-info", "slave:juju-info"]
- ["rsyslog-forwarder-ha:juju-info", "hbase:juju-info"]
- ["rsyslog-forwarder-ha:juju-info", "zookeeper:juju-info"]
- ["rsyslog:aggregator", "rsyslog-forwarder-ha:syslog"]
machines:
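With the refreshed bundle saved locally, a deploy-and-watch sketch (juju 2.x;
the relative path is illustrative):

    juju deploy ./bundle.yaml
    watch -c juju status --color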
6 changes: 3 additions & 3 deletions bigtop-deploy/juju/hadoop-kafka/README.md
@@ -180,7 +180,7 @@ Show the list of Zookeeper nodes with the following:

    juju run --unit zookeeper/0 'echo "ls /" | /usr/lib/zookeeper/bin/zkCli.sh'

-To access the HDFS web console, find the `PUBLIC-ADDRESS` of the namenode
+To access the HDFS web console, find the `Public address` of the namenode
application and expose it:

    juju status namenode
@@ -191,7 +191,7 @@ The web interface will be available at the following URL:
    http://NAMENODE_PUBLIC_IP:50070

Similarly, to access the Resource Manager web consoles, find the
-`PUBLIC-ADDRESS` of the resourcemanager application and expose it:
+`Public address` of the resourcemanager application and expose it:

    juju status resourcemanager
    juju expose resourcemanager
@@ -207,7 +207,7 @@ The YARN and Job History web interfaces will be available at the following URLs:
This bundle includes Ganglia for system-level monitoring of the namenode,
resourcemanager, slave, kafka, and zookeeper units. Metrics are sent to a
centralized ganglia unit for easy viewing in a browser. To view the ganglia web
-interface, find the `PUBLIC-ADDRESS` of the Ganglia application and expose it:
+interface, find the `Public address` of the Ganglia application and expose it:

    juju status ganglia
    juju expose ganglia