
[SPARK-47252][DOCS] Clarify that pivot may trigger an eager computation #45363

Closed
nchammas wants to merge 4 commits into apache:master from nchammas:pivot-eager

Conversation

@nchammas (Contributor) commented on Mar 2, 2024

What changes were proposed in this pull request?

Clarify that, if explicit pivot values are not provided, Spark will eagerly compute them.

Why are the changes needed?

The current wording on `master` is misleading. To say that one version of `pivot` is more or less "efficient" than the other glosses over the fact that one is lazy and the other is not. Spark users are trained from early on that transformations are generally lazy; exceptions to this rule should be more clearly highlighted.

I experienced this personally when I called `pivot` on a DataFrame without providing explicit values, and Spark took around 20 minutes to compute the distinct pivot values. Looking at the docs, I felt that "less efficient" didn't accurately represent this behavior.
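For illustration, here is a minimal sketch in Scala of the two call forms (the column names follow the `groupBy("year").pivot("course")` example from the Scaladoc; the sample rows are made up):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("pivot-demo").getOrCreate()
import spark.implicits._

// Made-up sample data using the column names from the doc example.
val df = Seq(
  (2012, "dotNET", 10000),
  (2012, "Java", 20000),
  (2013, "dotNET", 5000)
).toDF("year", "course", "earnings")

// Without explicit values: Spark eagerly runs a distinct query on
// "course" when pivot() is called, before any action is invoked.
val eagerlyComputed = df.groupBy("year").pivot("course").sum("earnings")

// With explicit values: fully lazy, like most other transformations.
val fullyLazy = df.groupBy("year")
  .pivot("course", Seq("dotNET", "Java"))
  .sum("earnings")
```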

Does this PR introduce any user-facing change?

Yes, updated user docs.

How was this patch tested?

I built and reviewed the docs locally.

Was this patch authored or co-authored using generative AI tooling?

No.

From the updated Scaladoc for `pivot`:

* data, this may take some time. In other words, though the pivot transformation is lazy like
* most DataFrame transformations, computing the distinct pivot values is not. To avoid any
* eager computations, provide an explicit list of values via
* `pivot(pivotColumn: String, values: Seq[Any])`.
@nchammas (Contributor, Author) commented:
I spent about an hour trying to make this render as a proper link via `[[pivot(...]]`, per the Scaladoc documentation on ambiguous links, but I could not get it to work.

Another review thread, anchored on the `@note` in the Scaladoc:

* df.groupBy("year").pivot("course").sum("earnings")
* }}}
*
* @note
@HyukjinKwon (Member) commented:
I wonder if we can just make it a bit shorter and put it into the main doc instead of a separate note. I don't want to scare users about this; see, e.g., how `DataFrameReader.csv` handles schema inference.

@nchammas (Contributor, Author) replied:

I trimmed the note a bit. Is that better?

I also took a look at the CSV reader method:

* This function will go through the input once to determine the input schema if `inferSchema`
* is enabled. To avoid going through the entire data once, disable `inferSchema` option or
* specify the schema explicitly using `schema`.

It's pretty similar to what I'm proposing here.

I believe it's more important to highlight the eager computation here since pivot is a transformation and, unlike with reader methods, users are probably not expecting expensive computations to be triggered. But I agree, we don't want to make it sound like there's something wrong with not specifying pivot values.
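For comparison, here is a minimal sketch of the analogous reader-side fix (the file path is hypothetical): supplying an explicit schema lets the CSV reader skip the extra pass over the input that `inferSchema` triggers.

```scala
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

// With an explicit schema, no inference pass over the input is needed.
val schema = StructType(Seq(
  StructField("year", IntegerType),
  StructField("course", StringType),
  StructField("earnings", IntegerType)
))

val earnings = spark.read
  .schema(schema)               // analogous to passing explicit pivot values
  .option("header", "true")
  .csv("/path/to/earnings.csv") // hypothetical path
```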

On the corresponding PySpark docstring:

@@ -450,6 +446,10 @@ def pivot(self, pivot_col: str, values: Optional[List["LiteralType"]] = None) ->
values : list, optional
List of values that will be translated to columns in the output DataFrame.

.. note:: If ``values`` is not provided, Spark will **eagerly** compute the distinct
@HyukjinKwon (Member) commented:

This too. I would just put it up in the doctest (like `DataFrameReader.csv`).

@HyukjinKwon (Member) left a review:

LGTM otherwise

@nchammas (Contributor, Author) commented on Mar 4, 2024

I've updated the screenshots in the PR description to reflect the latest changes.

@HyukjinKwon (Member) commented:

Merged to master.

@nchammas deleted the pivot-eager branch on March 5, 2024, at 15:11.
jpcorreia99 pushed a commit to jpcorreia99/spark that referenced this pull request Mar 12, 2024
Closes apache#45363 from nchammas/pivot-eager.

Authored-by: Nicholas Chammas <nicholas.chammas@gmail.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>