diff --git a/docs/configuration.md b/docs/configuration.md
index c018b9f1fb7c0..7884a2af60b23 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -91,7 +91,7 @@ Then, you can supply configuration values at runtime:
 ```sh
 ./bin/spark-submit \
   --name "My app" \
-  --master local[4] \
+  --master "local[4]" \
   --conf spark.eventLog.enabled=false \
   --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
   myApp.jar
@@ -3750,7 +3750,7 @@ Also, you can modify or add configurations at runtime:
 {% highlight bash %}
 ./bin/spark-submit \
   --name "My app" \
-  --master local[4] \
+  --master "local[4]" \
   --conf spark.eventLog.enabled=false \
   --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
   --conf spark.hadoop.abc.def=xyz \
diff --git a/docs/quick-start.md b/docs/quick-start.md
index 366970cf66c71..5a03af98cd832 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -286,7 +286,7 @@ We can run this application using the `bin/spark-submit` script:
 {% highlight bash %}
 # Use spark-submit to run your application
 $ YOUR_SPARK_HOME/bin/spark-submit \
-  --master local[4] \
+  --master "local[4]" \
   SimpleApp.py
 ...
 Lines with a: 46, Lines with b: 23
@@ -371,7 +371,7 @@ $ sbt package
 # Use spark-submit to run your application
 $ YOUR_SPARK_HOME/bin/spark-submit \
   --class "SimpleApp" \
-  --master local[4] \
+  --master "local[4]" \
   target/scala-{{site.SCALA_BINARY_VERSION}}/simple-project_{{site.SCALA_BINARY_VERSION}}-1.0.jar
 ...
 Lines with a: 46, Lines with b: 23
@@ -452,7 +452,7 @@ $ mvn package
 # Use spark-submit to run your application
 $ YOUR_SPARK_HOME/bin/spark-submit \
   --class "SimpleApp" \
-  --master local[4] \
+  --master "local[4]" \
   target/simple-project-1.0.jar
 ...
 Lines with a: 46, Lines with b: 23
diff --git a/docs/rdd-programming-guide.md b/docs/rdd-programming-guide.md
index f75bda0ffafb0..cbbce4c082060 100644
--- a/docs/rdd-programming-guide.md
+++ b/docs/rdd-programming-guide.md
@@ -214,13 +214,13 @@ can be passed to the `--repositories` argument. For example, to run
 `bin/pyspark` on exactly four cores, use:
 
 {% highlight bash %}
-$ ./bin/pyspark --master local[4]
+$ ./bin/pyspark --master "local[4]"
 {% endhighlight %}
 
 Or, to also add `code.py` to the search path (in order to later be able to `import code`), use:
 
 {% highlight bash %}
-$ ./bin/pyspark --master local[4] --py-files code.py
+$ ./bin/pyspark --master "local[4]" --py-files code.py
 {% endhighlight %}
 
 For a complete list of options, run `pyspark --help`. Behind the scenes,
@@ -260,19 +260,19 @@ can be passed to the `--repositories` argument. For example, to run `bin/spark-s
 four cores, use:
 
 {% highlight bash %}
-$ ./bin/spark-shell --master local[4]
+$ ./bin/spark-shell --master "local[4]"
 {% endhighlight %}
 
 Or, to also add `code.jar` to its classpath, use:
 
 {% highlight bash %}
-$ ./bin/spark-shell --master local[4] --jars code.jar
+$ ./bin/spark-shell --master "local[4]" --jars code.jar
 {% endhighlight %}
 
 To include a dependency using Maven coordinates:
 
 {% highlight bash %}
-$ ./bin/spark-shell --master local[4] --packages "org.example:example:0.1"
+$ ./bin/spark-shell --master "local[4]" --packages "org.example:example:0.1"
 {% endhighlight %}
 
 For a complete list of options, run `spark-shell --help`. Behind the scenes,
@@ -781,7 +781,7 @@ One of the harder things about Spark is understanding the scope and life cycle o
 
 #### Example
 
-Consider the naive RDD element sum below, which may behave differently depending on whether execution is happening within the same JVM. A common example of this is when running Spark in `local` mode (`--master = local[n]`) versus deploying a Spark application to a cluster (e.g. via spark-submit to YARN):
+Consider the naive RDD element sum below, which may behave differently depending on whether execution is happening within the same JVM. A common example of this is when running Spark in `local` mode (`--master = "local[n]"`) versus deploying a Spark application to a cluster (e.g. via spark-submit to YARN):
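The quoting matters because square brackets are glob metacharacters in some shells. Below is a minimal illustration of the failure mode the quotes presumably guard against, assuming zsh with its default handling of unmatched globs; the rationale is an inference, not something stated in the patch itself:

```sh
# In zsh, an unquoted master URL can be treated as a glob pattern; with no
# matching file the command is aborted before spark-shell ever runs:
$ ./bin/spark-shell --master local[4]
zsh: no matches found: local[4]

# Quoting the value passes it through literally in any POSIX-like shell:
$ ./bin/spark-shell --master "local[4]"
```

Bash passes an unmatched pattern through unchanged by default, so the unquoted form usually works there; the quoted form behaves the same everywhere.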