[SPARK-23250][DOCS] Typo in JavaDoc/ScalaDoc for DataFrameWriter
## What changes were proposed in this pull request?

Fix a typo in the ScalaDoc for DataFrameWriter: the text originally read "This is applicable for all file-based data sources (e.g. Parquet, JSON) staring Spark 2.1.0"; it should read "starting with Spark 2.1.0".

## How was this patch tested?

Manual check of the corrected spelling in the ScalaDoc.

Please review http://spark.apache.org/contributing.html before opening a pull request.

Author: CCInCharge <charles.l.chen.clc@gmail.com>

Closes #20417 from CCInCharge/master.

(cherry picked from commit 686a622)
Signed-off-by: Sean Owen <sowen@cloudera.com>
clchen28 authored and srowen committed Jan 28, 2018
1 parent 8ff0cc4 commit 7ca2cd4
Showing 1 changed file with 6 additions and 3 deletions.
@@ -174,7 +174,8 @@ final class DataFrameWriter[T] private[sql](ds: Dataset[T]) {
  * predicates on the partitioned columns. In order for partitioning to work well, the number
  * of distinct values in each column should typically be less than tens of thousands.
  *
- * This is applicable for all file-based data sources (e.g. Parquet, JSON) staring Spark 2.1.0.
+ * This is applicable for all file-based data sources (e.g. Parquet, JSON) starting with Spark
+ * 2.1.0.
  *
  * @since 1.4.0
  */
@@ -188,7 +189,8 @@ final class DataFrameWriter[T] private[sql](ds: Dataset[T]) {
  * Buckets the output by the given columns. If specified, the output is laid out on the file
  * system similar to Hive's bucketing scheme.
  *
- * This is applicable for all file-based data sources (e.g. Parquet, JSON) staring Spark 2.1.0.
+ * This is applicable for all file-based data sources (e.g. Parquet, JSON) starting with Spark
+ * 2.1.0.
  *
  * @since 2.0
  */
@@ -202,7 +204,8 @@ final class DataFrameWriter[T] private[sql](ds: Dataset[T]) {
 /**
  * Sorts the output in each bucket by the given columns.
  *
- * This is applicable for all file-based data sources (e.g. Parquet, JSON) staring Spark 2.1.0.
+ * This is applicable for all file-based data sources (e.g. Parquet, JSON) starting with Spark
+ * 2.1.0.
  *
  * @since 2.0
  */
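For context on the APIs whose ScalaDoc is touched above, here is a minimal Scala sketch of `partitionBy`, `bucketBy`, and `sortBy` on `DataFrameWriter`. The input path, column names (`country`, `userId`, `eventTime`), and table name are hypothetical and only illustrate the API shape; they are not part of this change.

```scala
// Minimal sketch of the DataFrameWriter options documented above.
// The input path, column names, and table name are hypothetical.
import org.apache.spark.sql.SparkSession

object DataFrameWriterSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("DataFrameWriterSketch")
      .master("local[*]") // local master just so the sketch is self-contained
      .getOrCreate()

    // Assume the JSON records carry country, userId, and eventTime fields.
    val df = spark.read.json("/tmp/events.json")

    df.write
      // Partition output directories by a low-cardinality column; applicable
      // to all file-based sources (e.g. Parquet, JSON) starting with Spark 2.1.0.
      .partitionBy("country")
      // Bucket the output Hive-style and sort rows within each bucket.
      .bucketBy(8, "userId")
      .sortBy("eventTime")
      .format("parquet")
      // bucketBy/sortBy require saving as a table rather than to a plain path.
      .saveAsTable("events_bucketed")

    spark.stop()
  }
}
```

Note that bucketing and in-bucket sorting are only supported when writing to a table via `saveAsTable`; saving to a raw path does not accept `bucketBy`.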
