[SPARK-48240][DOCS] Replace Local[..] with "Local[...]" in the docs #46535
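This change quotes the `local[N]` master URL in every documented shell command. The PR description is not visible on this page, but the usual reason for quoting is that unquoted square brackets are treated as glob patterns by some shells; zsh in particular aborts the command when such a glob matches no file. A minimal sketch of the difference, assuming zsh with its default no-match behavior and the `SimpleApp.py` application from the quick start:

```sh
# In zsh (default settings), the unquoted brackets are parsed as a glob;
# with no matching file the whole command is aborted before spark-submit runs:
$ ./bin/spark-submit --master local[4] SimpleApp.py
zsh: no matches found: local[4]

# Quoting keeps the brackets out of glob expansion; the quoted form also
# works unchanged in bash and sh:
$ ./bin/spark-submit --master "local[4]" SimpleApp.py
```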

Closed
4 changes: 2 additions & 2 deletions docs/configuration.md
@@ -91,7 +91,7 @@ Then, you can supply configuration values at runtime:
```sh
./bin/spark-submit \
--name "My app" \
- --master local[4] \
+ --master "local[4]" \
--conf spark.eventLog.enabled=false \
--conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
myApp.jar
@@ -3750,7 +3750,7 @@ Also, you can modify or add configurations at runtime:
{% highlight bash %}
./bin/spark-submit \
--name "My app" \
- --master local[4] \
+ --master "local[4]" \
--conf spark.eventLog.enabled=false \
--conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
--conf spark.hadoop.abc.def=xyz \
6 changes: 3 additions & 3 deletions docs/quick-start.md
@@ -286,7 +286,7 @@ We can run this application using the `bin/spark-submit` script:
{% highlight bash %}
# Use spark-submit to run your application
$ YOUR_SPARK_HOME/bin/spark-submit \
- --master local[4] \
+ --master "local[4]" \
SimpleApp.py
...
Lines with a: 46, Lines with b: 23
@@ -371,7 +371,7 @@ $ sbt package
# Use spark-submit to run your application
$ YOUR_SPARK_HOME/bin/spark-submit \
--class "SimpleApp" \
- --master local[4] \
+ --master "local[4]" \
target/scala-{{site.SCALA_BINARY_VERSION}}/simple-project_{{site.SCALA_BINARY_VERSION}}-1.0.jar
...
Lines with a: 46, Lines with b: 23
@@ -452,7 +452,7 @@ $ mvn package
# Use spark-submit to run your application
$ YOUR_SPARK_HOME/bin/spark-submit \
--class "SimpleApp" \
- --master local[4] \
+ --master "local[4]" \
target/simple-project-1.0.jar
...
Lines with a: 46, Lines with b: 23
12 changes: 6 additions & 6 deletions docs/rdd-programming-guide.md
@@ -214,13 +214,13 @@ can be passed to the `--repositories` argument. For example, to run
`bin/pyspark` on exactly four cores, use:

{% highlight bash %}
- $ ./bin/pyspark --master local[4]
+ $ ./bin/pyspark --master "local[4]"
{% endhighlight %}

Or, to also add `code.py` to the search path (in order to later be able to `import code`), use:

{% highlight bash %}
- $ ./bin/pyspark --master local[4] --py-files code.py
+ $ ./bin/pyspark --master "local[4]" --py-files code.py
{% endhighlight %}

For a complete list of options, run `pyspark --help`. Behind the scenes,
@@ -260,19 +260,19 @@ can be passed to the `--repositories` argument. For example, to run `bin/spark-s
four cores, use:

{% highlight bash %}
- $ ./bin/spark-shell --master local[4]
+ $ ./bin/spark-shell --master "local[4]"
{% endhighlight %}

Or, to also add `code.jar` to its classpath, use:

{% highlight bash %}
- $ ./bin/spark-shell --master local[4] --jars code.jar
+ $ ./bin/spark-shell --master "local[4]" --jars code.jar
{% endhighlight %}

To include a dependency using Maven coordinates:

{% highlight bash %}
- $ ./bin/spark-shell --master local[4] --packages "org.example:example:0.1"
+ $ ./bin/spark-shell --master "local[4]" --packages "org.example:example:0.1"
{% endhighlight %}

For a complete list of options, run `spark-shell --help`. Behind the scenes,
@@ -781,7 +781,7 @@ One of the harder things about Spark is understanding the scope and life cycle o

#### Example

- Consider the naive RDD element sum below, which may behave differently depending on whether execution is happening within the same JVM. A common example of this is when running Spark in `local` mode (`--master = local[n]`) versus deploying a Spark application to a cluster (e.g. via spark-submit to YARN):
+ Consider the naive RDD element sum below, which may behave differently depending on whether execution is happening within the same JVM. A common example of this is when running Spark in `local` mode (`--master = "local[n]"`) versus deploying a Spark application to a cluster (e.g. via spark-submit to YARN):

<div class="codetabs">

2 changes: 1 addition & 1 deletion docs/submitting-applications.md
@@ -91,7 +91,7 @@ run it with `--help`. Here are a few examples of common options:
# Run application locally on 8 cores
./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
- --master local[8] \
+ --master "local[8]" \
/path/to/examples.jar \
100
