From eec4bd1a1731dc84a8de70a2a12251ee134f2296 Mon Sep 17 00:00:00 2001
From: Andrew Ash
Date: Fri, 14 Feb 2014 10:01:01 -0800
Subject: [PATCH] Typo: Standlone -> Standalone

Author: Andrew Ash

Closes #601 from ash211/typo and squashes the following commits:

9cd43ac [Andrew Ash] Change docs references to metrics.properties, not metrics.conf
3813ff1 [Andrew Ash] Typo: mulitcast -> multicast
873bd2f [Andrew Ash] Typo: Standlone -> Standalone
---
 conf/metrics.properties.template | 2 +-
 docs/monitoring.md               | 6 +++---
 docs/spark-standalone.md         | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/conf/metrics.properties.template b/conf/metrics.properties.template
index 1c3d94e1b08..30bcab0c933 100644
--- a/conf/metrics.properties.template
+++ b/conf/metrics.properties.template
@@ -67,7 +67,7 @@
 #   period    10        Poll period
 #   unit      seconds   Units of poll period
 #   ttl       1         TTL of messages sent by Ganglia
-#   mode      multicast Ganglia network mode ('unicast' or 'mulitcast')
+#   mode      multicast Ganglia network mode ('unicast' or 'multicast')
 
 # org.apache.spark.metrics.sink.JmxSink
 
diff --git a/docs/monitoring.md b/docs/monitoring.md
index 0d5eb7065e9..e9b1d2b2f4f 100644
--- a/docs/monitoring.md
+++ b/docs/monitoring.md
@@ -19,7 +19,7 @@ You can access this interface by simply opening `http://<driver-node>:4040` in a
 If multiple SparkContexts are running on the same host, they will bind to succesive ports
 beginning with 4040 (4041, 4042, etc).
 
-Spark's Standlone Mode cluster manager also has its own
+Spark's Standalone Mode cluster manager also has its own
 [web UI](spark-standalone.html#monitoring-and-logging).
 
 Note that in both of these UIs, the tables are sortable by clicking their headers,
@@ -31,7 +31,7 @@ Spark has a configurable metrics system based on the
 [Coda Hale Metrics Library](http://metrics.codahale.com/). This allows users to report Spark
 metrics to a variety of sinks including HTTP, JMX, and CSV files. The metrics system is configured
 via a configuration file that Spark expects to be present
-at `$SPARK_HOME/conf/metrics.conf`. A custom file location can be specified via the
+at `$SPARK_HOME/conf/metrics.properties`. A custom file location can be specified via the
 `spark.metrics.conf` [configuration property](configuration.html#spark-properties).
 Spark's metrics are decoupled into different _instances_ corresponding to Spark components.
 Within each instance, you can configure a
@@ -54,7 +54,7 @@ Each instance can report to zero or more _sinks_. Sinks are contained in the
 * `GraphiteSink`: Sends metrics to a Graphite node.
 
 The syntax of the metrics configuration file is defined in an example configuration file,
-`$SPARK_HOME/conf/metrics.conf.template`.
+`$SPARK_HOME/conf/metrics.properties.template`.
 
 # Advanced Instrumentation
 
diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index 3388c14ec4d..51fb3a4f7f8 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -10,7 +10,7 @@ In addition to running on the Mesos or YARN cluster managers, Spark also provide
 
 # Installing Spark Standalone to a Cluster
 
-To install Spark Standlone mode, you simply place a compiled version of Spark on each node on the cluster. You can obtain pre-built versions of Spark with each release or [build it yourself](index.html#building).
+To install Spark Standalone mode, you simply place a compiled version of Spark on each node on the cluster. You can obtain pre-built versions of Spark with each release or [build it yourself](index.html#building).
 
 # Starting a Cluster Manually
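
For reference, a minimal `metrics.properties` of the kind the corrected docs describe
might look like the sketch below. This is an illustration based on the syntax in
`conf/metrics.properties.template`, not part of this patch; the `ConsoleSink` class is
one of the sinks shipped with Spark, and the 10-second period is an arbitrary example value:

    # Enable a console reporter for all component instances ("*")
    *.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink
    # How often the sink polls and reports metrics
    *.sink.console.period=10
    *.sink.console.unit=seconds

Spark reads this file from `$SPARK_HOME/conf/metrics.properties` by default; a different
path can be supplied through the `spark.metrics.conf` configuration property mentioned
in the docs above.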