From aa7b1c25e4be307ea54e5d8e7bd82b6bb6e7a56e Mon Sep 17 00:00:00 2001
From: Dongjoon Hyun
Date: Sat, 19 Mar 2016 00:34:06 -0700
Subject: [PATCH] [MINOR][DOCS] Use `spark-submit` instead of `sparkR` to
 submit R script.

Submitting an R script through `sparkR` fails as of Spark 2.0:

```
./bin/sparkR examples/src/main/r/dataframe.R
Running R applications through 'sparkR' is not supported as of Spark 2.0.
Use ./bin/spark-submit <R file>
```
---
 R/README.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/R/README.md b/R/README.md
index bb3464ba9955d..810bfc14e977e 100644
--- a/R/README.md
+++ b/R/README.md
@@ -40,7 +40,7 @@ To set other options like driver memory, executor memory etc. you can pass in th
 If you wish to use SparkR from RStudio or other R frontends you will need to set some environment variables which point SparkR to your Spark installation. For example
 ```
 # Set this to where Spark is installed
-Sys.setenv(SPARK_HOME="/Users/shivaram/spark")
+Sys.setenv(SPARK_HOME="/Users/username/spark")
 # This line loads SparkR from the installed directory
 .libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
 library(SparkR)
@@ -51,7 +51,7 @@ sc <- sparkR.init(master="local")
 
 The [instructions](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark) for making contributions to Spark also apply to SparkR.
 If you only make R file changes (i.e. no Scala changes) then you can just re-install the R package using `R/install-dev.sh` and test your changes.
-Once you have made your changes, please include unit tests for them and run existing unit tests using the `run-tests.sh` script as described below.
+Once you have made your changes, please include unit tests for them and run existing unit tests using the `R/run-tests.sh` script as described below.
 
 #### Generating documentation
 
@@ -60,9 +60,9 @@ The SparkR documentation (Rd files and HTML files) are not a part of the source
 ### Examples, Unit tests
 
 SparkR comes with several sample programs in the `examples/src/main/r` directory.
-To run one of them, use `./bin/sparkR <filename> <args>`. For example:
+To run one of them, use `./bin/spark-submit <filename> <args>`. For example:
 
-    ./bin/sparkR examples/src/main/r/dataframe.R
+    ./bin/spark-submit examples/src/main/r/dataframe.R
 
 You can also run the unit-tests for SparkR by running (you need to install the [testthat](http://cran.r-project.org/web/packages/testthat/index.html) package first):
 
@@ -70,7 +70,7 @@ You can also run the unit-tests for SparkR by running (you need to install the [
     ./R/run-tests.sh
 
 ### Running on YARN
-The `./bin/spark-submit` and `./bin/sparkR` can also be used to submit jobs to YARN clusters. You will need to set YARN conf dir before doing so. For example on CDH you can run
+The `./bin/spark-submit` can also be used to submit jobs to YARN clusters. You will need to set YARN conf dir before doing so. For example on CDH you can run
 ```
 export YARN_CONF_DIR=/etc/hadoop/conf
 ./bin/spark-submit --master yarn examples/src/main/r/dataframe.R
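
For reference, the RStudio setup touched by the first hunk can be run end to end along these lines. This is a minimal sketch against the SparkR 1.x API of this era; the `sparkRSQL.init`, `createDataFrame`, and `sparkR.stop` calls go beyond what the patch itself shows and are assumptions about the surrounding API, not part of the change:

```r
# Minimal sketch of the RStudio workflow described in R/README.md.
# Assumes Spark is unpacked under SPARK_HOME; the DataFrame calls
# below follow the SparkR 1.x API and are not part of this patch.

# Point SparkR at the local Spark installation
Sys.setenv(SPARK_HOME = "/Users/username/spark")

# Load SparkR from the installed directory
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
library(SparkR)

# Start a local Spark context and a SQL context on top of it
sc <- sparkR.init(master = "local")
sqlContext <- sparkRSQL.init(sc)

# Create a Spark DataFrame from a built-in R data set and inspect it
df <- createDataFrame(sqlContext, faithful)
head(df)

# Shut down the SparkR backend when done
sparkR.stop()
```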