[SPARK-47816][CONNECT][DOCS] Document the lazy evaluation of views in spark.{sql, table}
#46007
@@ -1630,6 +1630,13 @@ def sql(
        -------
        :class:`DataFrame`

        Notes
        -----
        In Spark Classic, a temporary view referenced in `spark.sql` is resolved immediately,
        while in Spark Connect it is lazily evaluated.
Review comment: I think this note might be very confusing to users, as DataFrames in Spark are all lazily evaluated, right? Maybe we can say "it is lazily analyzed". We should probably document this as a behavior change for Spark Connect; I am pretty sure there are other behavior changes. Also, does this lazy analysis apply to persistent tables and views as well?

Reply: Sounds good, let me update with: Besides temp views, this lazy analysis applies to temp functions / configurations / persistent tables. If the functions/configurations/tables are changed after

Reply: Other DataFrame APIs may also have the same behavior change; we probably need to document it somewhere like
        So in Spark Connect if a view is dropped, modified or replaced after `spark.sql`, the
        execution may fail or generate different results.
Review comment: Out of curiosity, in which cases may the execution fail?

Reply: Dropping the view, for example.
        Examples
        --------
        Executing a SQL query.
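The behavior change documented above can be sketched with a small illustrative model. This is not the actual Spark API; `catalog`, `sql_eager`, and `sql_lazy` are hypothetical stand-ins that mimic eager analysis (Spark Classic) versus lazy analysis (Spark Connect) of a temp view referenced in `spark.sql`:

```python
# Hypothetical model: a dict stands in for the session's temp-view registry.
catalog = {}

def sql_eager(view_name):
    # Classic-style: resolve the view immediately, pinning its contents.
    data = list(catalog[view_name])  # raises KeyError now if the view is missing
    return lambda: data

def sql_lazy(view_name):
    # Connect-style: only record the name; resolution is deferred to execution.
    return lambda: list(catalog[view_name])  # KeyError surfaces at call time

catalog["v"] = range(5)
eager_df = sql_eager("v")
lazy_df = sql_lazy("v")

del catalog["v"]  # the view is dropped after both "DataFrames" were created

print(eager_df())  # [0, 1, 2, 3, 4] -- pinned at creation time
try:
    lazy_df()      # fails: the view no longer exists at execution time
except KeyError as e:
    print("lazy execution failed:", e)
```

This mirrors the note: with lazy analysis, dropping the view after `spark.sql` makes the later execution fail rather than the call itself.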
@@ -1756,6 +1763,13 @@ def table(self, tableName: str) -> DataFrame:
        -------
        :class:`DataFrame`

        Notes
        -----
        In Spark Classic, a temporary view referenced in `spark.table` is resolved immediately,
        while in Spark Connect it is lazily evaluated.
        So in Spark Connect if a view is dropped, modified or replaced after `spark.table`, the
        execution may fail or generate different results.

        Examples
        --------
        >>> spark.range(5).createOrReplaceTempView("table1")
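The "different results" case for `spark.table` can be sketched the same way. Again this is an illustrative model, not the Spark API; `catalog`, `table_eager`, and `table_lazy` are hypothetical names showing what happens when a view is replaced after the DataFrame is created:

```python
# Hypothetical model: the dict stands in for the session's temp-view registry.
catalog = {"t": [1, 2, 3]}

def table_eager(name):
    # Classic-style: resolve immediately, pinning the view's contents.
    rows = list(catalog[name])
    return lambda: rows

def table_lazy(name):
    # Connect-style: defer resolution until execution.
    return lambda: list(catalog[name])

eager = table_eager("t")
lazy = table_lazy("t")

catalog["t"] = [10, 20]  # the view is replaced after both were created

print(eager())  # [1, 2, 3] -- pinned at creation
print(lazy())   # [10, 20]  -- picks up the replacement at execution
```

Under lazy analysis, the replacement is visible at execution time, so the same `spark.table` handle can produce different results than it would have at creation.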
Review comment: How about temp functions?