
Commit

Update doc
zjffdu committed Dec 16, 2015
1 parent d72a3af commit 8eb181c
Showing 1 changed file with 2 additions and 2 deletions: docs/sparkr.md
@@ -148,7 +148,7 @@ printSchema(people)
</div>

  The data sources API can also be used to save out DataFrames into multiple file formats. For example we can save the DataFrame from the previous example
- to a Parquet file using `write.df` (Before spark 1.7, mode's default value is 'append', we change it to 'error' to be consistent with scala api)
+ to a Parquet file using `write.df` (Until Spark 1.6, the default mode for writes was `append`. It was changed in Spark 1.7 to `error` to match the Scala API)

<div data-lang="r" markdown="1">
{% highlight r %}
@@ -393,4 +393,4 @@ You can inspect the search path in R with [`search()`](https://stat.ethz.ch/R-ma

## Upgrading From SparkR 1.6 to 1.7

- - Before Spark 1.7, the default save mode is `append` in api saveDF/write.df/saveAsTable, it is changed to `error` to be consistent with scala api.
+ - Until Spark 1.6, the default mode for writes was `append`. It was changed in Spark 1.7 to `error` to match the Scala API.
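The behavioral change described above can be sketched as follows. This is a minimal illustration, assuming an active SparkR session and a DataFrame `people` as in the surrounding documentation; the output path is hypothetical.

```r
# With the new default mode = "error", writing to an existing path fails
# instead of silently appending:
write.df(people, path = "people.parquet", source = "parquet")

# To get the old appending behavior, pass the mode explicitly:
write.df(people, path = "people.parquet", source = "parquet", mode = "append")

# Other explicit modes remain available, e.g. replacing existing data:
write.df(people, path = "people.parquet", source = "parquet", mode = "overwrite")
```

Passing `mode` explicitly keeps code portable across the version boundary, since it no longer depends on the default.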
