# [SPARK-47252][DOCS] Clarify that pivot may trigger an eager computation
### What changes were proposed in this pull request?

Clarify that, if explicit pivot values are not provided, Spark will eagerly compute them.

### Why are the changes needed?

The current wording on `master` is misleading. Saying that one version of `pivot` is more or less "efficient" than the other glosses over the fact that one is lazy and the other is not. Spark users are taught early on that transformations are generally lazy; exceptions to this rule should be highlighted more clearly.

I experienced this personally when I called `pivot` on a `DataFrame` without providing explicit values, and Spark took around 20 minutes to compute the distinct pivot values. Looking at the docs, I felt that "less efficient" didn't accurately represent this behavior.
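
To make the difference concrete, here is a minimal PySpark sketch (the toy DataFrame mirrors the example in the docs; the data and variable names are illustrative only):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Toy data mirroring the example in the docs.
df = spark.createDataFrame(
    [(2012, "dotNET", 10000), (2012, "Java", 20000), (2013, "dotNET", 48000)],
    ["year", "course", "earnings"],
)

# Lazy: the pivot values are given explicitly, so no job runs on this line.
lazy = df.groupBy("year").pivot("course", ["dotNET", "Java"]).sum("earnings")

# Eager: Spark must first compute the distinct values of `course` to determine
# the output schema, so a job runs on this line, before any action is called.
eager = df.groupBy("year").pivot("course").sum("earnings")
```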

### Does this PR introduce _any_ user-facing change?

Yes, updated user docs.

### How was this patch tested?

I built and reviewed the docs locally.

<img width="300" src="https://github.com/apache/spark/assets/1039369/532d935b-b8f4-49be-b999-366acfbca7d8" />
<img width="400" src="https://github.com/apache/spark/assets/1039369/77dde43e-a217-4a30-8ce3-727f2060e54a" />

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes apache#45363 from nchammas/pivot-eager.

Authored-by: Nicholas Chammas <nicholas.chammas@gmail.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
nchammas authored and jpcorreia99 committed Mar 12, 2024
1 parent 252cbc5 commit 74090dd
Showing 3 changed files with 22 additions and 22 deletions.
```diff
@@ -259,15 +259,12 @@ class RelationalGroupedDataset private[sql] (
   /**
    * Pivots a column of the current `DataFrame` and performs the specified aggregation.
    *
-   * There are two versions of `pivot` function: one that requires the caller to specify the list
-   * of distinct values to pivot on, and one that does not. The latter is more concise but less
-   * efficient, because Spark needs to first compute the list of distinct values internally.
+   * Spark will eagerly compute the distinct values in `pivotColumn` so it can determine the
+   * resulting schema of the transformation. To avoid any eager computations, provide an explicit
+   * list of values via `pivot(pivotColumn: String, values: Seq[Any])`.
    *
    * {{{
    *   // Compute the sum of earnings for each year by course with each course as a separate column
-   *   df.groupBy("year").pivot("course", Seq("dotNET", "Java")).sum("earnings")
-   *
-   *   // Or without specifying column values (less efficient)
    *   df.groupBy("year").pivot("course").sum("earnings")
    * }}}
    *
@@ -392,11 +389,14 @@ class RelationalGroupedDataset private[sql] (
   }

   /**
-   * Pivots a column of the current `DataFrame` and performs the specified aggregation. This is an
-   * overloaded version of the `pivot` method with `pivotColumn` of the `String` type.
+   * Pivots a column of the current `DataFrame` and performs the specified aggregation.
+   *
+   * Spark will eagerly compute the distinct values in `pivotColumn` so it can determine the
+   * resulting schema of the transformation. To avoid any eager computations, provide an explicit
+   * list of values via `pivot(pivotColumn: Column, values: Seq[Any])`.
    *
    * {{{
-   *   // Or without specifying column values (less efficient)
+   *   // Compute the sum of earnings for each year by course with each course as a separate column
    *   df.groupBy($"year").pivot($"course").sum($"earnings");
    * }}}
    *
```
python/pyspark/sql/group.py (5 additions, 5 deletions):

```diff
@@ -432,11 +432,7 @@ def sum(self, *cols: str) -> DataFrame:  # type: ignore[empty-body]

     def pivot(self, pivot_col: str, values: Optional[List["LiteralType"]] = None) -> "GroupedData":
         """
-        Pivots a column of the current :class:`DataFrame` and perform the specified aggregation.
-        There are two versions of the pivot function: one that requires the caller
-        to specify the list of distinct values to pivot on, and one that does not.
-        The latter is more concise but less efficient,
-        because Spark needs to first compute the list of distinct values internally.
+        Pivots a column of the current :class:`DataFrame` and performs the specified aggregation.

         .. versionadded:: 1.6.0

@@ -450,6 +446,10 @@ def pivot(self, pivot_col: str, values: Optional[List["LiteralType"]] = None) -> "GroupedData":
         values : list, optional
             List of values that will be translated to columns in the output DataFrame.

+            If ``values`` is not provided, Spark will eagerly compute the distinct values in
+            ``pivot_col`` so it can determine the resulting schema of the transformation. To avoid
+            any eager computations, provide an explicit list of values.
+
         Examples
         --------
         >>> from pyspark.sql import Row
```
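
For users who want to keep that distinct scan explicit, one option is to compute the pivot values up front and pass them in — a minimal sketch, reusing the hypothetical `df` from the example above:

```python
# Run the distinct scan once, where the cost is visible...
courses = [row["course"] for row in df.select("course").distinct().collect()]

# ...then pivot lazily using the precomputed values.
pivoted = df.groupBy("year").pivot("course", courses).sum("earnings")
```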
```diff
@@ -324,15 +324,12 @@ class RelationalGroupedDataset protected[sql](
   /**
    * Pivots a column of the current `DataFrame` and performs the specified aggregation.
    *
-   * There are two versions of `pivot` function: one that requires the caller to specify the list
-   * of distinct values to pivot on, and one that does not. The latter is more concise but less
-   * efficient, because Spark needs to first compute the list of distinct values internally.
+   * Spark will eagerly compute the distinct values in `pivotColumn` so it can determine
+   * the resulting schema of the transformation. To avoid any eager computations, provide an
+   * explicit list of values via `pivot(pivotColumn: String, values: Seq[Any])`.
    *
    * {{{
    *   // Compute the sum of earnings for each year by course with each course as a separate column
-   *   df.groupBy("year").pivot("course", Seq("dotNET", "Java")).sum("earnings")
-   *
-   *   // Or without specifying column values (less efficient)
    *   df.groupBy("year").pivot("course").sum("earnings")
    * }}}
    *
@@ -407,10 +404,13 @@ class RelationalGroupedDataset protected[sql](

   /**
    * Pivots a column of the current `DataFrame` and performs the specified aggregation.
-   * This is an overloaded version of the `pivot` method with `pivotColumn` of the `String` type.
+   *
+   * Spark will eagerly compute the distinct values in `pivotColumn` so it can determine
+   * the resulting schema of the transformation. To avoid any eager computations, provide an
+   * explicit list of values via `pivot(pivotColumn: Column, values: Seq[Any])`.
    *
    * {{{
-   *   // Or without specifying column values (less efficient)
+   *   // Compute the sum of earnings for each year by course with each course as a separate column
    *   df.groupBy($"year").pivot($"course").sum($"earnings");
    * }}}
    *
```
