[SPARK-47252][DOCS] Clarify that pivot may trigger an eager computation #45363
@@ -432,11 +432,7 @@ def sum(self, *cols: str) -> DataFrame:  # type: ignore[empty-body]
    def pivot(self, pivot_col: str, values: Optional[List["LiteralType"]] = None) -> "GroupedData":
        """
-       Pivots a column of the current :class:`DataFrame` and perform the specified aggregation.
-       There are two versions of the pivot function: one that requires the caller
-       to specify the list of distinct values to pivot on, and one that does not.
-       The latter is more concise but less efficient,
-       because Spark needs to first compute the list of distinct values internally.
+       Pivots a column of the current :class:`DataFrame` and performs the specified aggregation.

        .. versionadded:: 1.6.0
@@ -450,6 +446,14 @@ def pivot(self, pivot_col: str, values: Optional[List["LiteralType"]] = None) ->
        values : list, optional
            List of values that will be translated to columns in the output DataFrame.

+           .. note:: If ``values`` is not provided, Spark will **eagerly** compute the distinct
+               values in ``pivot_col`` so it can determine the resulting schema of the
+               transformation. Depending on the size and complexity of your data, this may take
+               some time.
+               In other words, though the pivot transformation is lazy like most DataFrame
+               transformations, computing the distinct pivot values is not. To avoid any eager
+               computations, provide an explicit list of values.

        Examples
        --------
        >>> from pyspark.sql import Row

Review comment on the added note: this too. I would just put it up in the doctest (like …
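For orientation, here is a minimal PySpark sketch of the two call styles the updated docstring contrasts. The course/year/earnings data mirrors the usual pivot doctest and is illustrative only; it is not part of this diff.

from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([
    Row(course="dotNET", year=2012, earnings=10000),
    Row(course="Java", year=2012, earnings=20000),
    Row(course="dotNET", year=2013, earnings=48000),
    Row(course="Java", year=2013, earnings=30000),
])

# Explicit values: Spark does not need to discover the distinct courses first.
explicit = df.groupBy("year").pivot("course", ["dotNET", "Java"]).sum("earnings")

# No values given: Spark eagerly computes the distinct values of `course` here,
# to determine the output schema, before the (otherwise lazy) aggregation runs.
inferred = df.groupBy("year").pivot("course").sum("earnings")

explicit.show()
inferred.show()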
@@ -324,18 +324,18 @@ class RelationalGroupedDataset protected[sql](
  /**
   * Pivots a column of the current `DataFrame` and performs the specified aggregation.
   *
-  * There are two versions of `pivot` function: one that requires the caller to specify the list
-  * of distinct values to pivot on, and one that does not. The latter is more concise but less
-  * efficient, because Spark needs to first compute the list of distinct values internally.
-  *
   * {{{
   *   // Compute the sum of earnings for each year by course with each course as a separate column
   *   df.groupBy("year").pivot("course", Seq("dotNET", "Java")).sum("earnings")
   *
   *   // Or without specifying column values (less efficient)
   *   df.groupBy("year").pivot("course").sum("earnings")
   * }}}
   *
+  * @note Spark will '''eagerly''' compute the distinct values in `pivotColumn` so it can determine
+  * the resulting schema of the transformation. Depending on the size and complexity of your
+  * data, this may take some time. In other words, though the pivot transformation is lazy like
+  * most DataFrame transformations, computing the distinct pivot values is not. To avoid any
+  * eager computations, provide an explicit list of values via
+  * `pivot(pivotColumn: String, values: Seq[Any])`.
   *
   * @see `org.apache.spark.sql.Dataset.unpivot` for the reverse operation,
   *   except for the aggregation.
   *

Review comment on the added note: I probably spent about an hour trying to get this to work as a proper link via …
@@ -407,13 +407,19 @@ class RelationalGroupedDataset protected[sql](
  /**
   * Pivots a column of the current `DataFrame` and performs the specified aggregation.
   * This is an overloaded version of the `pivot` method with `pivotColumn` of the `String` type.
   *
   * {{{
-  *   // Or without specifying column values (less efficient)
+  *   // Compute the sum of earnings for each year by course with each course as a separate column
   *   df.groupBy($"year").pivot($"course").sum($"earnings");
   * }}}
   *
+  * @note Spark will '''eagerly''' compute the distinct values in `pivotColumn` so it can determine
+  * the resulting schema of the transformation. Depending on the size and complexity of your
+  * data, this may take some time. In other words, though the pivot transformation is lazy like
+  * most DataFrame transformations, computing the distinct pivot values is not. To avoid any
+  * eager computations, provide an explicit list of values via
+  * `pivot(pivotColumn: Column, values: Seq[Any])`.
   *
   * @see `org.apache.spark.sql.Dataset.unpivot` for the reverse operation,
   *   except for the aggregation.
   *
Review comment: I wonder if we can just make it a bit shorter, and put it into the main doc instead of the separate note. I don't want to scare users about this .. e.g., DataFrameReader.csv about schema inference.

Reply: I trimmed the note a bit. Is that better?

I also took a look at the CSV reader method (spark/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala, lines 530 to 532 in a1b0da2). It's pretty similar to what I'm proposing here.

I believe it's more important to highlight the eager computation here since pivot is a transformation and, unlike with reader methods, users are probably not expecting expensive computations to be triggered. But I agree, we don't want to make it sound like there's something wrong with not specifying pivot values.
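For the schema-inference comparison above, a rough PySpark sketch (the file path and schema are made up for illustration; only the analogy itself comes from the discussion): letting the CSV reader infer the schema costs an extra pass over the data, much as omitting pivot values costs a distinct computation, while supplying them up front avoids that eager work.

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession.builder.getOrCreate()

# Inferred schema: Spark reads the (hypothetical) file up front to work out column types,
# roughly analogous to pivot() computing distinct pivot values when none are given.
inferred = spark.read.csv("/tmp/earnings.csv", header=True, inferSchema=True)

# Explicit schema: no inference pass is needed, analogous to pivot(pivot_col, values).
schema = StructType([
    StructField("course", StringType()),
    StructField("year", IntegerType()),
    StructField("earnings", IntegerType()),
])
explicit = spark.read.csv("/tmp/earnings.csv", header=True, schema=schema)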