
Commit
Various doc fixes (broken link, format etc.)
andrewor14 committed Aug 5, 2014
1 parent e837cde commit cb3be88
Showing 2 changed files with 10 additions and 10 deletions.
12 changes: 6 additions & 6 deletions docs/security.md
@@ -39,22 +39,22 @@ configure those ports.
     <td>Standalone Master</td>
     <td>8080</td>
     <td>Web UI</td>
-    <td><code>spark.master.ui.port<br>SPARK_MASTER_WEBUI_PORT</code></td>
+    <td><code>spark.master.ui.port /<br> SPARK_MASTER_WEBUI_PORT</code></td>
     <td>Jetty-based. Standalone mode only.</td>
   </tr>
   <tr>
     <td>Browser</td>
     <td>Standalone Worker</td>
     <td>8081</td>
     <td>Web UI</td>
-    <td><code>spark.worker.ui.port<br>SPARK_WORKER_WEBUI_PORT</code></td>
+    <td><code>spark.worker.ui.port /<br> SPARK_WORKER_WEBUI_PORT</code></td>
     <td>Jetty-based. Standalone mode only.</td>
   </tr>
   <tr>
-    <td>Driver<br>Standalone Worker</td>
+    <td>Driver /<br> Standalone Worker</td>
     <td>Standalone Master</td>
     <td>7077</td>
-    <td>Submit job to cluster<br>Join cluster</td>
+    <td>Submit job to cluster /<br> Join cluster</td>
     <td><code>SPARK_MASTER_PORT</code></td>
     <td>Akka-based. Set to "0" to choose a port randomly. Standalone mode only.</td>
   </tr>
@@ -92,10 +92,10 @@ configure those ports.
     <td>Jetty-based</td>
   </tr>
   <tr>
-    <td>Executor<br>Standalone Master</td>
+    <td>Executor /<br> Standalone Master</td>
     <td>Driver</td>
     <td>(random)</td>
-    <td>Connect to application<br>Notify executor state changes</td>
+    <td>Connect to application /<br> Notify executor state changes</td>
     <td><code>spark.driver.port</code></td>
    <td>Akka-based. Set to "0" to choose a port randomly.</td>
  </tr>
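The properties and environment variables touched in the security.md table above can be set in `conf/spark-env.sh` on the master and workers. A minimal sketch, using the default port values shown in the table (the values themselves are illustrative, not required):

```shell
# Illustrative conf/spark-env.sh overrides for the standalone ports in the
# table above; port numbers are the documented defaults, shown for clarity.
export SPARK_MASTER_PORT=7077        # driver/worker -> master (Akka-based; "0" = random port)
export SPARK_MASTER_WEBUI_PORT=8080  # master web UI (Jetty-based)
export SPARK_WORKER_WEBUI_PORT=8081  # worker web UI (Jetty-based)
echo "master=${SPARK_MASTER_PORT} ui=${SPARK_MASTER_WEBUI_PORT}/${SPARK_WORKER_WEBUI_PORT}"
```

Each variable also has a matching `spark.*.ui.port` property, per the table, so either mechanism works in a tight-firewall setup.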
8 changes: 4 additions & 4 deletions docs/spark-standalone.md
@@ -300,14 +300,14 @@ You can run Spark alongside your existing Hadoop cluster by just launching it as
 # Configuring Ports for Network Security
 
 Spark makes heavy use of the network, and some environments have strict requirements for using
-tight firewall settings. For a complete list of ports to configure, see the [security page]
-(security.html#configuring-ports-for-network-security).
+tight firewall settings. For a complete list of ports to configure, see the
+[security page](security.html#configuring-ports-for-network-security).
 
 # High Availability
 
 By default, standalone scheduling clusters are resilient to Worker failures (insofar as Spark itself is resilient to losing work by moving it to other workers). However, the scheduler uses a Master to make scheduling decisions, and this (by default) creates a single point of failure: if the Master crashes, no new applications can be created. In order to circumvent this, we have two high availability schemes, detailed below.
 
-## Standby Masters with ZooKeeper
+# Standby Masters with ZooKeeper
 
 **Overview**
 
@@ -347,7 +347,7 @@ There's an important distinction to be made between "registering with a Master"
 
 Due to this property, new Masters can be created at any time, and the only thing you need to worry about is that _new_ applications and Workers can find it to register with in case it becomes the leader. Once registered, you're taken care of.
 
-## Single-Node Recovery with Local File System
+# Single-Node Recovery with Local File System
 
 **Overview**
 
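The two high-availability schemes named in the spark-standalone.md diff above are selected through `spark.deploy.recoveryMode`, passed to the master via `SPARK_DAEMON_JAVA_OPTS`. A hedged sketch of both modes; the ZooKeeper hosts and recovery directory below are placeholder values, not defaults:

```shell
# Sketch: enabling each standalone recovery scheme via SPARK_DAEMON_JAVA_OPTS.
# The zk1/zk2 hosts and /var/spark/recovery path are made-up placeholders.

# Standby Masters with ZooKeeper:
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=zk1:2181,zk2:2181"

# Or, Single-Node Recovery with Local File System:
# export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=FILESYSTEM \
#   -Dspark.deploy.recoveryDirectory=/var/spark/recovery"

echo "$SPARK_DAEMON_JAVA_OPTS"
```

With the ZooKeeper mode, multiple masters started against the same ensemble elect one leader; with the filesystem mode, a restarted single master recovers registered workers and applications from the recovery directory.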
