[HOTFIX] SPARK-1399: remove outdated comments #474
Conversation
Merged build triggered.

Merged build started.

Merged build finished.

Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14315/

Jenkins, retest this please.

Merged build triggered.

Merged build started.

Merged build finished. All automated tests passed.

All automated tests passed.
as the original PR was merged before this mistake was found... fix here. Sorry about that @pwendell, @andrewor14, I will be more careful next time.

Author: CodingCat <zhunansjtu@gmail.com>

Closes #474 from CodingCat/hotfix_1399 and squashes the following commits:

f3a8ba9 [CodingCat] move outdated comments

(cherry picked from commit 87de290)
Signed-off-by: Patrick Wendell <pwendell@gmail.com>