Review feedback
sryza committed Mar 7, 2014
1 parent 6ad06d4 commit 563ef3a
Showing 2 changed files with 8 additions and 4 deletions.
6 changes: 5 additions & 1 deletion core/src/main/scala/org/apache/spark/SparkContext.scala
@@ -1027,7 +1027,7 @@ class SparkContext(
  * The SparkContext object contains a number of implicit conversions and parameters for use with
  * various Spark features.
  */
-object SparkContext {
+object SparkContext extends Logging {
 
   private[spark] val SPARK_JOB_DESCRIPTION = "spark.job.description"
 
@@ -1246,6 +1246,10 @@ object SparkContext {
         scheduler
 
       case "yarn-standalone" | "yarn-cluster" =>
+        if (master == "yarn-standalone") {
+          logWarning(
+            "\"yarn-standalone\" is deprecated as of Spark 1.0. Use \"yarn-cluster\" instead.")
+        }
         val scheduler = try {
           val clazz = Class.forName("org.apache.spark.scheduler.cluster.YarnClusterScheduler")
           val cons = clazz.getConstructor(classOf[SparkContext])
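
The hunk above warns when the deprecated master string is used but still accepts it. As a standalone illustration of that pattern, here is a minimal Scala sketch with hypothetical names (resolveYarnMaster, plus println standing in for the logWarning provided by the Logging trait); it is not the actual Spark code:

    // Sketch of the deprecated-master-alias pattern added in this commit (illustrative only).
    object MasterAliasSketch {
      def resolveYarnMaster(master: String): String = master match {
        case "yarn-standalone" | "yarn-cluster" =>
          if (master == "yarn-standalone") {
            // SparkContext calls logWarning from its Logging trait; println stands in here.
            println("\"yarn-standalone\" is deprecated as of Spark 1.0. Use \"yarn-cluster\" instead.")
          }
          "yarn-cluster"
        case other => other
      }

      def main(args: Array[String]): Unit = {
        println(resolveYarnMaster("yarn-standalone")) // warns, then resolves to yarn-cluster
        println(resolveYarnMaster("yarn-cluster"))    // no warning
      }
    }
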
6 changes: 3 additions & 3 deletions docs/running-on-yarn.md
@@ -90,9 +90,9 @@ For example:
       --worker-memory 2g \
       --worker-cores 1
 
-The above starts a YARN client program which starts the default Application Master. Then SparkPi will be run as a child thread of Application Master. The client will periodically poll the Application Master for status updates and display them in the console. The client will exit once your application has finished running.
+The above starts a YARN client program which starts the default Application Master. Then SparkPi will be run as a child thread of Application Master. The client will periodically poll the Application Master for status updates and display them in the console. The client will exit once your application has finished running. Refer to the "Viewing Logs" section below for how to see driver and executor logs.
 
-Because the application is run on a remote machine where the Application Master is running, applications that involve local interaction, such as spark-shell, will not work well.
+Because the application is run on a remote machine where the Application Master is running, applications that involve local interaction, such as spark-shell, will not work.
 
 ## Launching a Spark application with yarn-client mode.
 
@@ -136,7 +136,7 @@ When log aggregation isn't turned on, logs are retained locally on each machine
 
 See [Building Spark with Maven](building-with-maven.html) for instructions on how to build Spark using Maven.
 
-# Important Notes
+# Important notes
 
 - Before Hadoop 2.2, YARN does not support cores in container resource requests. Thus, when running against an earlier version, the numbers of cores given via command line arguments cannot be passed to YARN. Whether core requests are honored in scheduling decisions depends on which scheduler is in use and how it is configured.
 - The local directories used by Spark executors will be the local directories configured for YARN (Hadoop YARN config yarn.nodemanager.local-dirs). If the user specifies spark.local.dir, it will be ignored.
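
The paragraph changed above describes SparkPi running as a child thread of the Application Master in yarn-cluster mode. As a hedged sketch of what such an application looks like (not the actual SparkPi source; in practice the master string is supplied by the launcher rather than hard-coded), a minimal pi estimator:

    // Illustrative only: a SparkPi-style application whose main method runs inside the
    // Application Master in yarn-cluster mode, which is why interactive programs such as
    // spark-shell do not work in this mode.
    import org.apache.spark.SparkContext

    object SparkPiSketch {
      def main(args: Array[String]) {
        // "yarn-cluster" replaces the deprecated "yarn-standalone" master string.
        val sc = new SparkContext("yarn-cluster", "SparkPiSketch")
        val n = 100000
        val count = sc.parallelize(1 to n).map { _ =>
          val x = math.random * 2 - 1
          val y = math.random * 2 - 1
          if (x * x + y * y < 1) 1 else 0
        }.reduce(_ + _)
        println("Pi is roughly " + 4.0 * count / n)
        sc.stop()
      }
    }
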
