[SPARK-12318][SPARKR] Save mode in SparkR should be error by default
shivaram  Please help review.

Author: Jeff Zhang <zjffdu@apache.org>

Closes #10290 from zjffdu/SPARK-12318.
zjffdu authored and shivaram committed Dec 16, 2015
1 parent 54c512b commit 2eb5af5
Showing 2 changed files with 13 additions and 6 deletions.
10 changes: 5 additions & 5 deletions R/pkg/R/DataFrame.R
@@ -1886,7 +1886,7 @@ setMethod("except",
 #' @param df A SparkSQL DataFrame
 #' @param path A name for the table
 #' @param source A name for external data source
-#' @param mode One of 'append', 'overwrite', 'error', 'ignore' save mode
+#' @param mode One of 'append', 'overwrite', 'error', 'ignore' save mode (it is 'error' by default)
 #'
 #' @family DataFrame functions
 #' @rdname write.df
@@ -1903,7 +1903,7 @@ setMethod("except",
 #' }
 setMethod("write.df",
           signature(df = "DataFrame", path = "character"),
-          function(df, path, source = NULL, mode = "append", ...){
+          function(df, path, source = NULL, mode = "error", ...){
             if (is.null(source)) {
               sqlContext <- get(".sparkRSQLsc", envir = .sparkREnv)
               source <- callJMethod(sqlContext, "getConf", "spark.sql.sources.default",
@@ -1928,7 +1928,7 @@ setMethod("write.df",
 #' @export
 setMethod("saveDF",
           signature(df = "DataFrame", path = "character"),
-          function(df, path, source = NULL, mode = "append", ...){
+          function(df, path, source = NULL, mode = "error", ...){
             write.df(df, path, source, mode, ...)
           })
 
@@ -1951,7 +1951,7 @@ setMethod("saveDF",
 #' @param df A SparkSQL DataFrame
 #' @param tableName A name for the table
 #' @param source A name for external data source
-#' @param mode One of 'append', 'overwrite', 'error', 'ignore' save mode
+#' @param mode One of 'append', 'overwrite', 'error', 'ignore' save mode (it is 'error' by default)
 #'
 #' @family DataFrame functions
 #' @rdname saveAsTable
@@ -1968,7 +1968,7 @@ setMethod("saveDF",
 setMethod("saveAsTable",
           signature(df = "DataFrame", tableName = "character", source = "character",
                     mode = "character"),
-          function(df, tableName, source = NULL, mode="append", ...){
+          function(df, tableName, source = NULL, mode="error", ...){
             if (is.null(source)) {
               sqlContext <- get(".sparkRSQLsc", envir = .sparkREnv)
               source <- callJMethod(sqlContext, "getConf", "spark.sql.sources.default",
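
For context, a minimal sketch of what the new default means for callers (the paths here are hypothetical, and this assumes a 1.6-era local SparkR session):

library(SparkR)

sc <- sparkR.init()
sqlContext <- sparkRSQL.init(sc)

df <- createDataFrame(sqlContext, faithful)

# First write succeeds; the hypothetical output path does not exist yet.
write.df(df, path = "/tmp/faithful.parquet", source = "parquet")

# With mode now defaulting to "error", repeating the write fails instead of
# silently appending as it did before this commit:
# write.df(df, path = "/tmp/faithful.parquet", source = "parquet")  # error: path exists

# Callers who want the old behavior must opt in explicitly:
write.df(df, path = "/tmp/faithful.parquet", source = "parquet", mode = "append")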
9 changes: 8 additions & 1 deletion docs/sparkr.md
@@ -148,7 +148,7 @@ printSchema(people)
 </div>
 
 The data sources API can also be used to save out DataFrames into multiple file formats. For example we can save the DataFrame from the previous example
-to a Parquet file using `write.df`
+to a Parquet file using `write.df` (Until Spark 1.6, the default mode for writes was `append`. It was changed in Spark 1.7 to `error` to match the Scala API)
 
 <div data-lang="r" markdown="1">
 {% highlight r %}
@@ -387,3 +387,10 @@ The following functions are masked by the SparkR package:
 Since part of SparkR is modeled on the `dplyr` package, certain functions in SparkR share the same names with those in `dplyr`. Depending on the load order of the two packages, some functions from the package loaded first are masked by those in the package loaded after. In such case, prefix such calls with the package name, for instance, `SparkR::cume_dist(x)` or `dplyr::cume_dist(x)`.
 
 You can inspect the search path in R with [`search()`](https://stat.ethz.ch/R-manual/R-devel/library/base/html/search.html)
+
+
+# Migration Guide
+
+## Upgrading From SparkR 1.6 to 1.7
+
+- Until Spark 1.6, the default mode for writes was `append`. It was changed in Spark 1.7 to `error` to match the Scala API.
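
A hedged illustration of this migration note (table and path names are hypothetical; `sparkRHive.init` is used because `saveAsTable` writes to the Hive catalog in 1.6-era Spark):

library(SparkR)

sc <- sparkR.init()
hiveContext <- sparkRHive.init(sc)

df <- createDataFrame(hiveContext, mtcars)

# Scripts that relied on the old implicit append should now request it
# explicitly, otherwise a second run fails once the target exists:
saveDF(df, path = "/tmp/cars.parquet", source = "parquet", mode = "append")
saveAsTable(df, tableName = "cars", source = "parquet", mode = "append")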
