
[SPARK-47816][CONNECT][DOCS] Document the lazy evaluation of views in spark.{sql, table} #46007

Closed
14 changes: 14 additions & 0 deletions python/pyspark/sql/session.py
@@ -1630,6 +1630,13 @@ def sql(
-------
:class:`DataFrame`

Notes
-----
In Spark Classic, a temporary view referenced in `spark.sql` is resolved immediately,
Contributor:

How about temp functions?

while in Spark Connect it is lazily evaluated.
Contributor:

I think this note might be very confusing to users, as DataFrames in Spark are all lazily evaluated, right? Maybe we can say "it is lazily analyzed".

We should probably document this as a behavior change for Spark Connect. I am pretty sure there are other behavior changes. Also does this lazy analysis apply to persistent tables and views as well?

Contributor Author:

Sounds good, let me update it to "it is lazily analyzed".

Besides temp views, this lazy analysis applies to temp functions / configurations / persistent tables.

If the functions/configurations/tables are changed after spark.table/sql, the results may differ from Spark Classic.
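For instance, a minimal sketch of the temp-function case (assuming a Spark Connect session bound to `spark`; the function name `plus_one` and the values are illustrative only):

    # Register a temp function and reference it in a query.
    spark.udf.register("plus_one", lambda x: x + 1, "int")
    df = spark.sql("SELECT plus_one(1) AS v")

    # Replace the function before executing df.
    spark.udf.register("plus_one", lambda x: x + 10, "int")

    # Spark Classic resolves plus_one when spark.sql runs, so v is 2;
    # Spark Connect analyzes lazily at execution time, so v is 11.
    df.show()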

Contributor Author (@zhengruifeng, Apr 18, 2024):

Other DataFrame APIs may also have the same behavior change; we probably need to document it somewhere like docs/spark-connect-overview.md.

So in Spark Connect if a view is dropped, modified or replaced after `spark.sql`, the
execution may fail or generate different results.
Member:

Out of curiosity, in which cases may the execution fail?

Contributor Author:

Dropping the view, for example:

    df = ...  # any DataFrame
    df.createTempView("some_view")
    df2 = spark.sql("SELECT * FROM some_view")
    spark.catalog.dropTempView("some_view")  # drop the view
    df2.show()  # should fail in Spark Connect, but succeed in Spark Classic


Examples
--------
Executing a SQL query.
@@ -1756,6 +1763,13 @@ def table(self, tableName: str) -> DataFrame:
-------
:class:`DataFrame`

Notes
-----
In Spark Classic, a temporary view referenced in `spark.table` is resolved immediately,
while in Spark Connect it is lazily evaluated.
So in Spark Connect if a view is dropped, modified or replaced after `spark.table`, the
execution may fail or generate different results.

Examples
--------
>>> spark.range(5).createOrReplaceTempView("table1")
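A similarly hedged sketch of the `spark.table` case in the note above (the view name "t" is illustrative):

    # Create a temp view and reference it via spark.table.
    spark.range(5).createOrReplaceTempView("t")
    df = spark.table("t")

    # Drop the view before executing df.
    spark.catalog.dropTempView("t")

    # Spark Classic already resolved "t" when spark.table ran, so this prints 5;
    # Spark Connect resolves "t" lazily at execution time, so this fails.
    print(df.count())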