ForeachBatch java example fix #27739
Closed
roland1982 wants to merge 3878 commits into apache:branch-2.4 from roland1982:foreachwriter_java_example_fix
Conversation
…trol sampling method ### What changes were proposed in this pull request? add a param `bootstrap` to control whether bootstrap samples are used. ### Why are the changes needed? Currently, RF with numTrees=1 will directly build a tree using the original dataset, while with numTrees>1 it will use bootstrap samples to build trees. This design exists so that a DecisionTreeModel can be trained via the RandomForest impl; however, it is somewhat strange. In Scikit-Learn, there is a param [bootstrap](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier) to control whether bootstrap samples are used. ### Does this PR introduce any user-facing change? Yes, a new param is added ### How was this patch tested? existing testsuites Closes #27254 from zhengruifeng/add_bootstrap. Authored-by: zhengruifeng <ruifengz@foxmail.com> Signed-off-by: zhengruifeng <ruifengz@foxmail.com>
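For illustration, a minimal sketch of using the new param via the usual ML setter convention (the `setBootstrap` setter name is assumed from the description, not verified against the final API):
```scala
import org.apache.spark.ml.classification.RandomForestClassifier

// Sketch only: a single-tree forest trained on the original dataset, without bootstrap sampling.
val rf = new RandomForestClassifier()
  .setNumTrees(1)
  .setBootstrap(false)
```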
### What changes were proposed in this pull request? Fix a few super nit problems ### Why are the changes needed? To make the doc look better ### Does this PR introduce any user-facing change? Yes ### How was this patch tested? Tested using jekyll build --serve Closes #27332 from huaxingao/spark-30575-followup. Authored-by: Huaxin Gao <huaxing@us.ibm.com> Signed-off-by: Takeshi Yamamuro <yamamuro@apache.org>
…ssignLambdaVariableID.enabled ### What changes were proposed in this pull request? This PR is to remove the conf ### Why are the changes needed? This rule can be excluded using spark.sql.optimizer.excludedRules without an extra conf ### Does this PR introduce any user-facing change? Yes ### How was this patch tested? N/A Closes #27334 from gatorsmile/spark27871. Authored-by: Xiao Li <gatorsmile@gmail.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
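For reference, the existing mechanism mentioned above would look roughly like this in a spark-shell session (the fully qualified rule name is an assumption):
```scala
// Exclude the rule without a dedicated conf; the class path below is assumed, not verified.
spark.conf.set(
  "spark.sql.optimizer.excludedRules",
  "org.apache.spark.sql.catalyst.optimizer.ReassignLambdaVariableID")
```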
…talogPlugin ### What changes were proposed in this pull request? Move the `defaultNamespace` method from the interface `SupportsNamespace` to `CatalogPlugin`. ### Why are the changes needed? While I'm implementing JDBC V2, I realize that the default namespace is a very important piece of information. Even if you don't want to implement namespace manipulation functionalities like CREATE/DROP/ALTER namespace, you still need to report the default namespace. The default namespace is not about functionality but a matter of correctness. If you don't know the default namespace of a catalog, it's wrong to assume it's `[]`. You may get a table-not-found exception if you do so. I think it's more reasonable to put the `defaultNamespace` method in the base class `CatalogPlugin`. It returns `[]` by default, so it won't bother implementations that really don't have a namespace concept. ### Does this PR introduce any user-facing change? yes, but for an unreleased API. ### How was this patch tested? existing tests Closes #27319 from cloud-fan/ns. Authored-by: Wenchen Fan <wenchen@databricks.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
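A hedged sketch of what a minimal catalog reporting its default namespace might look like after this change (the `MyJdbcCatalog` class and the `public` namespace are illustrative only):
```scala
import org.apache.spark.sql.connector.catalog.CatalogPlugin
import org.apache.spark.sql.util.CaseInsensitiveStringMap

// Sketch: a catalog that only needs to report where unqualified identifiers should resolve.
class MyJdbcCatalog extends CatalogPlugin {
  private var catalogName: String = _

  override def initialize(name: String, options: CaseInsensitiveStringMap): Unit = {
    catalogName = name
  }

  override def name(): String = catalogName

  // Report the real default namespace instead of letting callers assume Array.empty.
  override def defaultNamespace(): Array[String] = Array("public")
}
```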
…mparing two strings ### What changes were proposed in this pull request? In the PR, I propose to remove sorting in the asserts that check the output of: - expression examples, - SQL tests in `SQLQueryTestSuite`. ### Why are the changes needed? * Sorted `actual` and `expected` make the assert output unusable. Instead of `"[true]" did not equal "[false]"`, it looks like `"[ertu]" did not equal "[aefls]"`. * The output of expression examples should always be the same, except for the nondeterministic expressions listed in the `ignoreSet` of the `check outputs of expression examples` test. ### Does this PR introduce any user-facing change? No ### How was this patch tested? By running `SQLQuerySuite` via `./build/sbt "sql/test:testOnly org.apache.spark.sql.SQLQuerySuite"`. Closes #27324 from MaxGekk/remove-sorting-in-examples-tests. Authored-by: Maxim Gekk <max.gekk@gmail.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
…en.additionalRemoteRepositories ### What changes were proposed in this pull request? Rename the config added in #25849 to `spark.sql.maven.additionalRemoteRepositories`. ### Why are the changes needed? Follow the advice in [SPARK-29175](https://issues.apache.org/jira/browse/SPARK-29175?focusedCommentId=17021586&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17021586), the new name is more clear. ### Does this PR introduce any user-facing change? Yes, the config name changed. ### How was this patch tested? Existing test. Closes #27339 from xuanyuanking/SPARK-29175. Authored-by: Yuanjian Li <xyliyuanjian@gmail.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
…sion ### What changes were proposed in this pull request? Expressions are very likely to be serialized and sent to executors, so we should avoid unnecessary serialization overhead as much as we can. This PR fixes `AggregateExpression`. ### Why are the changes needed? small improvement ### Does this PR introduce any user-facing change? no ### How was this patch tested? existing tests Closes #27342 from cloud-fan/fix. Authored-by: Wenchen Fan <wenchen@databricks.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
### What changes were proposed in this pull request? Document CREATE TABLE statement in SQL Reference Guide. ### Why are the changes needed? Adding documentation for SQL reference. ### Does this PR introduce any user-facing change? yes Before: There was no documentation for this. ### How was this patch tested? Used jekyll build and serve to verify. Closes #26759 from PavithraRamachandran/create_doc. Authored-by: Pavithra Ramachandran <pavi.rams@gmail.com> Signed-off-by: Sean Owen <srowen@gmail.com>
…Files feature ### What changes were proposed in this pull request? Update the scalafmt plugin to 1.0.3 and use its new onlyChangedFiles feature rather than --diff ### Why are the changes needed? Older versions of the plugin either didn't work with scala 2.13, or got rid of the --diff argument and didn't allow for formatting only changed files ### Does this PR introduce any user-facing change? The /dev/scalafmt script no longer passes through arbitrary args, instead using the arg to select scala version. The issue here is the plugin name literally contains the scala version, and doesn't appear to have a shorter way to refer to it. If srowen or someone else with better maven-fu has an idea I'm all ears. ### How was this patch tested? Manually, e.g. edited a file and ran dev/scalafmt or dev/scalafmt 2.13 Closes #27279 from koeninger/SPARK-30570. Authored-by: cody koeninger <cody@koeninger.org> Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
### What changes were proposed in this pull request? Fix a bug in #26589 , to make this feature work. ### Why are the changes needed? This feature doesn't work actually. ### Does this PR introduce any user-facing change? no ### How was this patch tested? new test Closes #27341 from cloud-fan/cache. Authored-by: Wenchen Fan <wenchen@databricks.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
…nd TableCatalog to CatalogV2Util ### What changes were proposed in this pull request? In this PR, I propose to move the `RESERVED_PROPERTIES`s from `SupportsNamespaces` and `TableCatalog` to `CatalogV2Util`, which can keep `RESERVED_PROPERTIES` safe for internal usages only. ### Why are the changes needed? the `RESERVED_PROPERTIES` should not be changed by subclasses ### Does this PR introduce any user-facing change? no ### How was this patch tested? existing uts Closes #27318 from yaooqinn/SPARK-30603. Authored-by: Kent Yao <yaooqinn@hotmail.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
… and aggregates
### What changes were proposed in this pull request?
Currently, in the following scenario, bucket join is not utilized:
```scala
val df = (0 until 20).map(i => (i, i)).toDF("i", "j").as("df")
df.write.format("parquet").bucketBy(8, "i").saveAsTable("t")
sql("CREATE VIEW v AS SELECT * FROM t")
sql("SELECT * FROM t a JOIN v b ON a.i = b.i").explain
```
```
== Physical Plan ==
*(4) SortMergeJoin [i#13], [i#15], Inner
:- *(1) Sort [i#13 ASC NULLS FIRST], false, 0
: +- *(1) Project [i#13, j#14]
: +- *(1) Filter isnotnull(i#13)
: +- *(1) ColumnarToRow
: +- FileScan parquet default.t[i#13,j#14] Batched: true, DataFilters: [isnotnull(i#13)], Format: Parquet, Location: InMemoryFileIndex[file:..., PartitionFilters: [], PushedFilters: [IsNotNull(i)], ReadSchema: struct<i:int,j:int>, SelectedBucketsCount: 8 out of 8
+- *(3) Sort [i#15 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(i#15, 8), true, [id=#64] <----- Exchange node introduced
+- *(2) Project [i#13 AS i#15, j#14 AS j#16]
+- *(2) Filter isnotnull(i#13)
+- *(2) ColumnarToRow
+- FileScan parquet default.t[i#13,j#14] Batched: true, DataFilters: [isnotnull(i#13)], Format: Parquet, Location: InMemoryFileIndex[file:..., PartitionFilters: [], PushedFilters: [IsNotNull(i)], ReadSchema: struct<i:int,j:int>, SelectedBucketsCount: 8 out of 8
```
Notice that `Exchange` is present. This is because `Project` introduces aliases, and `outputPartitioning` and `requiredChildDistribution` do not consider aliases while considering bucket join in `EnsureRequirements`. This PR addresses that so bucket join can be utilized in this scenario.
### Why are the changes needed?
This allows bucket join to be utilized in the above example.
### Does this PR introduce any user-facing change?
Yes, now with the fix, the `explain` output is as follows:
```
== Physical Plan ==
*(3) SortMergeJoin [i#13], [i#15], Inner
:- *(1) Sort [i#13 ASC NULLS FIRST], false, 0
: +- *(1) Project [i#13, j#14]
: +- *(1) Filter isnotnull(i#13)
: +- *(1) ColumnarToRow
: +- FileScan parquet default.t[i#13,j#14] Batched: true, DataFilters: [isnotnull(i#13)], Format: Parquet, Location: InMemoryFileIndex[file:.., PartitionFilters: [], PushedFilters: [IsNotNull(i)], ReadSchema: struct<i:int,j:int>, SelectedBucketsCount: 8 out of 8
+- *(2) Sort [i#15 ASC NULLS FIRST], false, 0
+- *(2) Project [i#13 AS i#15, j#14 AS j#16]
+- *(2) Filter isnotnull(i#13)
+- *(2) ColumnarToRow
+- FileScan parquet default.t[i#13,j#14] Batched: true, DataFilters: [isnotnull(i#13)], Format: Parquet, Location: InMemoryFileIndex[file:.., PartitionFilters: [], PushedFilters: [IsNotNull(i)], ReadSchema: struct<i:int,j:int>, SelectedBucketsCount: 8 out of 8
```
Note that the `Exchange` is no longer present.
### How was this patch tested?
Closes #26943 from imback82/bucket_alias.
Authored-by: Terry Kim <yuminkim@gmail.com>
Signed-off-by: Takeshi Yamamuro <yamamuro@apache.org>
…k.sql.execution.subquery.reuse.enabled ### What changes were proposed in this pull request? This PR is to rename spark.sql.subquery.reuse to spark.sql.execution.subquery.reuse.enabled ### Why are the changes needed? Make it consistent and clear. ### Does this PR introduce any user-facing change? N/A. This is a [new conf added in Spark 3.0](#23998) ### How was this patch tested? N/A Closes #27346 from gatorsmile/spark27083. Authored-by: Xiao Li <gatorsmile@gmail.com> Signed-off-by: Xiao Li <gatorsmile@gmail.com>
…cala function API filter ### What changes were proposed in this pull request? This PR is a follow-up of PR #25666, adding the description and example for the Scala function API `filter`. ### Why are the changes needed? It is hard to tell which parameter is the index column. ### Does this PR introduce any user-facing change? No ### How was this patch tested? N/A Closes #27336 from gatorsmile/spark28962. Authored-by: Xiao Li <gatorsmile@gmail.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
### What changes were proposed in this pull request? Minor documentation fix ### Why are the changes needed? ### Does this PR introduce any user-facing change? ### How was this patch tested? Manually; consider adding tests? Closes #27295 from deepyaman/patch-2. Authored-by: Deepyaman Datta <deepyaman.datta@utexas.edu> Signed-off-by: HyukjinKwon <gurwls223@apache.org>
### What changes were proposed in this pull request? Disable all the V2 file sources in Spark 3.0 by default. ### Why are the changes needed? There are still some missing parts in the file source V2 framework: 1. It doesn't support reporting file scan metrics such as "numOutputRows"/"numFiles"/"fileSize" like `FileSourceScanExec`. This requires another patch in the data source V2 framework. Tracked by [SPARK-30362](https://issues.apache.org/jira/browse/SPARK-30362) 2. It doesn't support partition pruning with subqueries(including dynamic partition pruning) for now. Tracked by [SPARK-30628](https://issues.apache.org/jira/browse/SPARK-30628) As we are going to code freeze on Jan 31st, this PR proposes to disable all the V2 file sources in Spark 3.0 by default. ### Does this PR introduce any user-facing change? No ### How was this patch tested? Existing tests. Closes #27348 from gengliangwang/disableFileSourceV2. Authored-by: Gengliang Wang <gengliang.wang@databricks.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
### What changes were proposed in this pull request? This adds a note for an additional setting for the Apache Arrow library for Java 11. ### Why are the changes needed? Since Apache Arrow 0.14.0, an additional setting is required for Java 9+. - https://issues.apache.org/jira/browse/ARROW-3191 It's explicitly documented at Apache Arrow 0.15.0. - https://issues.apache.org/jira/browse/ARROW-6206 However, there is no plan to handle that inside the Apache Arrow side. - https://issues.apache.org/jira/browse/ARROW-7223 In short, we need to document this for users who are using Arrow-related features on JDK11. For the dev environment, we handle this via [SPARK-29923](#26552). ### Does this PR introduce any user-facing change? Yes. ### How was this patch tested? Generated the documentation and checked the pages. Closes #27356 from dongjoon-hyun/SPARK-JDK11-ARROW-DOC. Authored-by: Dongjoon Hyun <dhyun@apple.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
### What changes were proposed in this pull request? Add the SPARK_APPLICATION_ID environment variable when Spark configures the driver pod. ### Why are the changes needed? Currently, the driver doesn't have this in its environment, and it's not convenient to retrieve the Spark application ID. The use case is we want to look up the Spark application ID, create an application folder, and redirect driver logs to that folder. ### Does this PR introduce any user-facing change? no ### How was this patch tested? unit tested. I also built a new distribution and container image to kick off a job in Kubernetes and I do see SPARK_APPLICATION_ID added there. Closes #27347 from Jeffwan/SPARK-30626. Authored-by: Jiaxin Shan <seedjeffwan@gmail.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
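For orientation, driver-side code could pick the variable up roughly like this (the log-folder path is purely illustrative):
```scala
// Illustration only: read the environment variable exposed on the driver pod.
val appId = sys.env.getOrElse("SPARK_APPLICATION_ID", "unknown-app")
val logDir = s"/var/log/spark/$appId"  // hypothetical application folder for redirected driver logs
```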
### What changes were proposed in this pull request?
Remove ```numTrees``` in GBT in 3.0.0.
### Why are the changes needed?
Currently, GBT has
```
/**
* Number of trees in ensemble
*/
Since("2.0.0")
val getNumTrees: Int = trees.length
```
and
```
/** Number of trees in ensemble */
val numTrees: Int = trees.length
```
I think we should remove one of them. We deprecated it in 2.4.5 via #27352.
### Does this PR introduce any user-facing change?
Yes, remove ```numTrees``` in GBT in 3.0.0
### How was this patch tested?
existing tests
Closes #27330 from huaxingao/spark-numTrees.
Authored-by: Huaxin Gao <huaxing@us.ibm.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
…out Project ### What changes were proposed in this pull request? This patch proposes to prune unnecessary nested fields from Generate which has no Project on top of it. ### Why are the changes needed? In the Optimizer, we can prune nested columns from Project(projectList, Generate). However, unnecessary columns could still possibly be read in Generate if there is no Project on top of it. We should prune them too. ### Does this PR introduce any user-facing change? No ### How was this patch tested? Unit test. Closes #26978 from viirya/SPARK-29721. Lead-authored-by: Liang-Chi Hsieh <liangchi@uber.com> Co-authored-by: Liang-Chi Hsieh <viirya@gmail.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
### What changes were proposed in this pull request? For better JDK11 support, this PR aims to upgrade **Jersey** and **javassist** to `2.30` and `3.25.0-GA` respectively. ### Why are the changes needed? **Jersey**: This will bring the following `Jersey` updates. - https://eclipse-ee4j.github.io/jersey.github.io/release-notes/2.30.html - eclipse-ee4j/jersey#4245 (Java 11 java.desktop module dependency) **javassist**: This is a transitive dependency from 3.20.0-CR2 to 3.25.0-GA. - `javassist` officially supports JDK11 from [3.24.0-GA release note](https://github.com/jboss-javassist/javassist/blob/master/Readme.html#L308). ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Pass the Jenkins with both JDK8 and JDK11. Closes #27357 from dongjoon-hyun/SPARK-30639. Authored-by: Dongjoon Hyun <dhyun@apple.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
…L Reference ### What changes were proposed in this pull request? Document ORDER BY clause of SELECT statement in SQL Reference Guide. ### Why are the changes needed? Currently Spark lacks documentation on the supported SQL constructs, causing confusion among users who sometimes have to look at the code to understand the usage. This is aimed at addressing this issue. ### Does this PR introduce any user-facing change? Yes. **Before:** There was no documentation for this. **After:** <img width="972" alt="Screen Shot 2020-01-19 at 11 50 57 PM" src="https://user-images.githubusercontent.com/14225158/72708034-ac0bdf80-3b16-11ea-81f3-48d8087e4e98.png"> <img width="972" alt="Screen Shot 2020-01-19 at 11 51 14 PM" src="https://user-images.githubusercontent.com/14225158/72708042-b0d09380-3b16-11ea-939e-905b8c031608.png"> <img width="972" alt="Screen Shot 2020-01-19 at 11 51 33 PM" src="https://user-images.githubusercontent.com/14225158/72708050-b4fcb100-3b16-11ea-95d2-e4e302cace1b.png"> ### How was this patch tested? Tested using jekyll build --serve Closes #27288 from dilipbiswal/sql-ref-select-orderby. Authored-by: Dilip Biswal <dkbiswal@gmail.com> Signed-off-by: Takeshi Yamamuro <yamamuro@apache.org>
…nal file
### What changes were proposed in this pull request?
Reference data for "collect() support Unicode characters" has been moved to an external file, to make the test OS- and locale-independent.
### Why are the changes needed?
As-is, embedded data is not properly encoded on Windows:
```
library(SparkR)
SparkR::sparkR.session()
Sys.info()
# sysname release version
# "Windows" "Server x64" "build 17763"
# nodename machine login
# "WIN-5BLT6Q610KH" "x86-64" "Administrator"
# user effective_user
# "Administrator" "Administrator"
Sys.getlocale()
# [1] "LC_COLLATE=English_United States.1252;LC_CTYPE=English_United States.1252;LC_MONETARY=English_United States.1252;LC_NUMERIC=C;LC_TIME=English_United States.1252"
lines <- c("{\"name\":\"안녕하세요\"}",
"{\"name\":\"您好\", \"age\":30}",
"{\"name\":\"こんにちは\", \"age\":19}",
"{\"name\":\"Xin chào\"}")
system(paste0("cat ", jsonPath))
# {"name":"<U+C548><U+B155><U+D558><U+C138><U+C694>"}
# {"name":"<U+60A8><U+597D>", "age":30}
# {"name":"<U+3053><U+3093><U+306B><U+3061><U+306F>", "age":19}
# {"name":"Xin chào"}
# [1] 0
jsonPath <- tempfile(pattern = "sparkr-test", fileext = ".tmp")
writeLines(lines, jsonPath)
df <- read.df(jsonPath, "json")
printSchema(df)
# root
# |-- _corrupt_record: string (nullable = true)
# |-- age: long (nullable = true)
# |-- name: string (nullable = true)
head(df)
# _corrupt_record age name
# 1 <NA> NA <U+C548><U+B155><U+D558><U+C138><U+C694>
# 2 <NA> 30 <U+60A8><U+597D>
# 3 <NA> 19 <U+3053><U+3093><U+306B><U+3061><U+306F>
# 4 {"name":"Xin ch<U+FFFD>o"} NA <NA>
```
This can be reproduced outside tests (Windows Server 2019, English locale), and causes failures when `testthat` is updated to 2.x (#27359). Somehow the problem is not picked up when the test is executed with `testthat` 1.0.2.
### Does this PR introduce any user-facing change?
No.
### How was this patch tested?
Running modified test, manual testing.
### Note
An alternative seems to be to use bytes, but it hasn't been properly tested.
```
test_that("collect() support Unicode characters", {
lines <- markUtf8(c(
'{"name": "안녕하세요"}',
'{"name": "您好", "age": 30}',
'{"name": "こんにちは", "age": 19}',
'{"name": "Xin ch\xc3\xa0o"}'
))
jsonPath <- tempfile(pattern = "sparkr-test", fileext = ".tmp")
writeLines(lines, jsonPath, useBytes = TRUE)
expected <- regmatches(lines, regexec('(?<="name": ").*?(?=")', lines, perl = TRUE))
df <- read.df(jsonPath, "json")
rdf <- collect(df)
expect_true(is.data.frame(rdf))
rdf$name <- markUtf8(rdf$name)
expect_equal(rdf$name[1], expected[[1]])
expect_equal(rdf$name[2], expected[[2]])
expect_equal(rdf$name[3], expected[[3]])
expect_equal(rdf$name[4], expected[[4]])
df1 <- createDataFrame(rdf)
expect_equal(
collect(
where(df1, df1$name == expected[[2]])
)$name,
expected[[2]]
)
})
```
Closes #27362 from zero323/SPARK-30645.
Authored-by: zero323 <mszymkiewicz@gmail.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
…rsive calls ### What changes were proposed in this pull request? Disabling the test for cleaning the closure of a recursive function. ### Why are the changes needed? As of 9514b82 this test is no longer valid, and recursive calls, even simple ones: ``` f <- function(x) { if(x > 0) { f(x - 1) } else { x } } ``` lead to ``` Error: node stack overflow ``` This issue is silenced when tested with `testthat` 1.x (reason unknown), but causes failures when using `testthat` 2.x (the issue can be reproduced outside the test context). The problem is known and tracked by [SPARK-30629](https://issues.apache.org/jira/browse/SPARK-30629) Therefore, keeping this test active doesn't make sense, as it will lead to continuous test failures when `testthat` is updated (#27359 / SPARK-23435). ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Existing tests. CC falaki Closes #27363 from zero323/SPARK-29777-FOLLOWUP. Authored-by: zero323 <mszymkiewicz@gmail.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
…SQLQueryTestSuite ### What changes were proposed in this pull request? This PR is to remove the query index from the golden files of SQLQueryTestSuite ### Why are the changes needed? Because the SQLQueryTestSuite's golden files have the query index for each query, removal of any query statement [except the last one] will generate many unneeded differences. This will make code review harder. The number of changed lines is misleading. ### Does this PR introduce any user-facing change? No ### How was this patch tested? N/A Closes #27361 from gatorsmile/removeIndexNum. Authored-by: Xiao Li <gatorsmile@gmail.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
…elation ### What changes were proposed in this pull request? Add identifier and catalog information in DataSourceV2Relation so it would be possible to do richer checks in the checkAnalysis step. ### Why are the changes needed? In data source v2, table implementations are all customized, so we may not be able to get the resolved identifier from the tables themselves. Therefore we encode the table and catalog information in DSV2Relation so no external changes are needed to make sure this information is available. ### Does this PR introduce any user-facing change? No ### How was this patch tested? Unit tests in the following suites: CatalogManagerSuite.scala CatalogV2UtilSuite.scala SupportsCatalogOptionsSuite.scala PlanResolutionSuite.scala Closes #26957 from yuchenhuo/SPARK-30314. Authored-by: Yuchen Huo <yuchen.huo@databricks.com> Signed-off-by: Burak Yavuz <brkyvz@gmail.com>
…Arrow to Pandas conversion ### What changes were proposed in this pull request? Prevent unnecessary copies of data during conversion from Arrow to Pandas. ### Why are the changes needed? During conversion of pyarrow data to Pandas, columns are checked for timestamp types and then modified to correct for the local timezone. If the data contains no timestamp types, then unnecessary copies of the data can be made. This is most prevalent when checking columns of a pandas DataFrame where each series is assigned back to the DataFrame, regardless of whether it had timestamps. See https://www.mail-archive.com/devarrow.apache.org/msg17008.html and ARROW-7596 for discussion. ### Does this PR introduce any user-facing change? No ### How was this patch tested? Existing tests Closes #27358 from BryanCutler/pyspark-pandas-timestamp-copy-fix-SPARK-30640. Authored-by: Bryan Cutler <cutlerb@gmail.com> Signed-off-by: Bryan Cutler <cutlerb@gmail.com>
…Reference ### What changes were proposed in this pull request? Document SORT BY clause of SELECT statement in SQL Reference Guide. ### Why are the changes needed? Currently Spark lacks documentation on the supported SQL constructs, causing confusion among users who sometimes have to look at the code to understand the usage. This is aimed at addressing this issue. ### Does this PR introduce any user-facing change? Yes. **Before:** There was no documentation for this. **After:** <img width="972" alt="Screen Shot 2020-01-20 at 1 25 57 AM" src="https://user-images.githubusercontent.com/14225158/72714701-00698c00-3b24-11ea-810e-28400e196ae9.png"> <img width="972" alt="Screen Shot 2020-01-20 at 1 26 11 AM" src="https://user-images.githubusercontent.com/14225158/72714706-02cbe600-3b24-11ea-9072-6d5e6f256400.png"> <img width="972" alt="Screen Shot 2020-01-20 at 1 26 28 AM" src="https://user-images.githubusercontent.com/14225158/72714712-07909a00-3b24-11ea-9aed-51b6bb0849f2.png"> <img width="972" alt="Screen Shot 2020-01-20 at 1 26 46 AM" src="https://user-images.githubusercontent.com/14225158/72714722-0a8b8a80-3b24-11ea-9fea-4d2a166e9d92.png"> <img width="972" alt="Screen Shot 2020-01-20 at 1 27 02 AM" src="https://user-images.githubusercontent.com/14225158/72714731-0f503e80-3b24-11ea-9f6d-8223e5d88c65.png"> ### How was this patch tested? Tested using jekyll build --serve Closes #27289 from dilipbiswal/sql-ref-select-sortby. Authored-by: Dilip Biswal <dkbiswal@gmail.com> Signed-off-by: Sean Owen <srowen@gmail.com>
…in SQL Reference ### What changes were proposed in this pull request? Document DISTRIBUTE BY clause of SELECT statement in SQL Reference Guide. ### Why are the changes needed? Currently Spark lacks documentation on the supported SQL constructs, causing confusion among users who sometimes have to look at the code to understand the usage. This is aimed at addressing this issue. ### Does this PR introduce any user-facing change? Yes. **Before:** There was no documentation for this. **After:** <img width="972" alt="Screen Shot 2020-01-20 at 3 08 24 PM" src="https://user-images.githubusercontent.com/14225158/72763045-c08fbc80-3b96-11ea-8fb6-023cba5eb96a.png"> <img width="972" alt="Screen Shot 2020-01-20 at 3 08 34 PM" src="https://user-images.githubusercontent.com/14225158/72763047-c38aad00-3b96-11ea-80d8-cd3d2d4257c8.png"> ### How was this patch tested? Tested using jekyll build --serve Closes #27298 from dilipbiswal/sql-ref-select-distributeby. Authored-by: Dilip Biswal <dkbiswal@gmail.com> Signed-off-by: Sean Owen <srowen@gmail.com>
### What changes were proposed in this pull request? mention that `INT96` timestamp is still useful for interoperability. ### Why are the changes needed? Give users more context of the behavior changes. ### Does this PR introduce any user-facing change? no ### How was this patch tested? N/A Closes #27622 from cloud-fan/parquet. Authored-by: Wenchen Fan <wenchen@databricks.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
### What changes were proposed in this pull request? fix kubernetes-client version doc ### Why are the changes needed? correct doc ### Does this PR introduce any user-facing change? nah ### How was this patch tested? nah Closes #27605 from yaooqinn/k8s-version-update. Authored-by: Kent Yao <yaooqinn@hotmail.com> Signed-off-by: Sean Owen <srowen@gmail.com>
… to non-existent table with same name ### Why are the changes needed? This ports the tests introduced in 7285eea to master to avoid future regressions. ### Background A query with Common Table Expressions can cause a stack overflow when it contains a CTE that refers to a non-existing table with the same name. The name of the table needs to have a database qualifier. This is caused by a couple of things: - CTESubstitution runs analysis on the CTE, but this does not throw an exception because the table has a database qualifier. The reason we don't fail is that we re-attempt to resolve the relation in a later rule; - The CTESubstitution replace logic does not check if the table it is replacing has a database qualifier; it shouldn't replace the relation if it does. So now we will happily replace nonexist.t with t; Note that this is not an issue for master or the spark-3.0 branch. ### Does this PR introduce any user-facing change? No ### How was this patch tested? Added regression tests to `AnalysisErrorSuite` and `DataFrameSuite`. Closes #27635 from hvanhovell/SPARK-30811-master. Authored-by: herman <herman@databricks.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
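For orientation, the problematic query shape described above looks roughly like this (table and database names are illustrative, not taken from the actual regression test):
```scala
// A CTE named `t` that refers to a database-qualified, non-existent table also called `t`.
// On the affected branch this shape could overflow the stack during analysis.
spark.sql("WITH t AS (SELECT 1 FROM nonexist.t) SELECT * FROM t").show()
```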
### What changes were proposed in this pull request?
The `allGather` method is added to the `BarrierTaskContext`. This method contains the same functionality as the `BarrierTaskContext.barrier` method; it blocks the task until all tasks make the call, at which time they may continue execution. In addition, the `allGather` method takes an input message. Upon returning from the `allGather` the task receives a list of all the messages sent by all the tasks that made the `allGather` call.
### Why are the changes needed?
There are many situations where having the tasks communicate in a synchronized way is useful. One simple example is if each task needs to start a server to serve requests from one another; first the tasks must find a free port (the result of which is undetermined beforehand) and then start making requests, but to do so they each must know the port chosen by the other task. An `allGather` method would allow them to inform each other of the port they will run on.
### Does this PR introduce any user-facing change?
Yes, an `BarrierTaskContext.allGather` method will be available through the Scala, Java, and Python APIs.
### How was this patch tested?
Most of the code path is already covered by tests of the `barrier` method, since this PR includes a refactor so that much code is shared by the `barrier` and `allGather` methods. However, a test is added to assert that an allGather on each task's partition ID will return a list of every partition ID.
An example through the Python API:
```python
>>> from pyspark import BarrierTaskContext
>>>
>>> def f(iterator):
... context = BarrierTaskContext.get()
... return [context.allGather('{}'.format(context.partitionId()))]
...
>>> sc.parallelize(range(4), 4).barrier().mapPartitions(f).collect()[0]
[u'3', u'1', u'0', u'2']
```
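A rough Scala equivalent of the Python snippet above, assuming the Scala API mirrors it (sketch only):
```scala
import org.apache.spark.BarrierTaskContext

// Each task contributes its partition ID and receives the messages from all tasks back.
val first = sc.parallelize(0 until 4, 4)
  .barrier()
  .mapPartitions { _ =>
    val ctx = BarrierTaskContext.get()
    Iterator(ctx.allGather(ctx.partitionId().toString).toSeq)
  }
  .collect()(0)
// first should contain every partition ID, e.g. Seq("0", "1", "2", "3") in some order
```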
Closes #27395 from sarthfrey/master.
Lead-authored-by: sarthfrey-db <sarth.frey@databricks.com>
Co-authored-by: sarthfrey <sarth.frey@gmail.com>
Signed-off-by: Xiangrui Meng <meng@databricks.com>
(cherry picked from commit 57254c9)
Signed-off-by: Xiangrui Meng <meng@databricks.com>
…zer in Aggregator test suite ### What changes were proposed in this pull request? There are three changes in this PR: 1. use Summarizer instead of MultivariateOnlineSummarizer in Aggregator test suites (similar to #26396) 2. Put common code in ```Summarizer.getRegressionSummarizers``` and ```Summarizer.getClassificationSummarizers```. 3. Move ```MultiClassSummarizer``` from ```LogisticRegression``` to ```ml.stat``` (this seems to be a better place since ```MultiClassSummarizer``` is not only used by ```LogisticRegression``` but also several other classes). ### Why are the changes needed? Minimize code duplication and improve performance ### Does this PR introduce any user-facing change? No ### How was this patch tested? existing test suites. Closes #27555 from huaxingao/spark-30802. Authored-by: Huaxin Gao <huaxing@us.ibm.com> Signed-off-by: Sean Owen <srowen@gmail.com>
This reverts commit af63971.
…ntext is restarted ### What changes were proposed in this pull request? As discussed on the Jira ticket, this change clears the SQLContext._instantiatedContext class attribute when the SparkSession is stopped. That way, the attribute will be reset with a new, usable SQLContext when a new SparkSession is started. ### Why are the changes needed? When the underlying SQLContext is instantiated for a SparkSession, the instance is saved as a class attribute and returned from subsequent calls to SQLContext.getOrCreate(). If the SparkContext is stopped and a new one started, the SQLContext class attribute is never cleared so any code which calls SQLContext.getOrCreate() will get a SQLContext with a reference to the old, unusable SparkContext. A similar issue was identified and fixed for SparkSession in [SPARK-19055](https://issues.apache.org/jira/browse/SPARK-19055), but the fix did not change SQLContext as well. I ran into this because mllib still [uses](https://github.com/apache/spark/blob/master/python/pyspark/mllib/common.py#L105) SQLContext.getOrCreate() under the hood. ### Does this PR introduce any user-facing change? No ### How was this patch tested? A new test was added. I verified that the test fails without the included change. Closes #27610 from afavaro/restart-sqlcontext. Authored-by: Alex Favaro <alex.favaro@affirm.com> Signed-off-by: HyukjinKwon <gurwls223@apache.org>
### What changes were proposed in this pull request? Improve the CREATE TABLE document: 1. mention that some clauses can come in any order. 2. refine the description for some parameters. 3. mention how data source tables interact with the data source 4. make the examples consistent between data source and hive serde tables. ### Why are the changes needed? improve doc ### Does this PR introduce any user-facing change? no ### How was this patch tested? N/A Closes #27638 from cloud-fan/doc. Authored-by: Wenchen Fan <wenchen@databricks.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
…it's disabled ### What changes were proposed in this pull request? Currently, after #27313, it shows the warning about dynamic allocation, which is disabled by default. ```bash $ ./bin/spark-shell ``` ``` ... 20/02/18 11:04:56 WARN ResourceProfile: Please ensure that the number of slots available on your executors is limited by the number of cores to task cpus and not another custom resource. If cores is not the limiting resource then dynamic allocation will not work properly! ``` This PR brings back the configuration checking for this warning. Seems mistakenly removed at https://github.com/apache/spark/pull/27313/files#diff-364713d7776956cb8b0a771e9b62f82dL2841 ### Why are the changes needed? To remove a false warning. ### Does this PR introduce any user-facing change? Yes, it will no longer show the warning. It's a master-only change, so there is no user-facing impact for end users. ### How was this patch tested? Manually tested. Closes #27615 from HyukjinKwon/SPARK-29148. Authored-by: HyukjinKwon <gurwls223@apache.org> Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
…PartitionDiscovery.threshold ### What changes were proposed in this pull request? Revise the doc of SQL configuration `spark.sql.sources.parallelPartitionDiscovery.threshold`. ### Why are the changes needed? The doc of configuration "spark.sql.sources.parallelPartitionDiscovery.threshold" is not accurate in the part "This applies to Parquet, ORC, CSV, JSON and LibSVM data sources". We should revise it to state that it is effective for all file-based data sources. ### Does this PR introduce any user-facing change? No ### How was this patch tested? None. It's just doc. Closes #27639 from gengliangwang/reviseParallelPartitionDiscovery. Authored-by: Gengliang Wang <gengliang.wang@databricks.com> Signed-off-by: Gengliang Wang <gengliang.wang@databricks.com>
…L config changes ### What changes were proposed in this pull request? In the PR, I propose to add the `returnLong` parameter to `IntegralDivide`, and pass the value of `spark.sql.legacy.integralDivide.returnBigint` if `returnLong` is not provided on creation of `IntegralDivide`. ### Why are the changes needed? This allows avoiding the issue where the configuration changes between different phases of planning, which can silently break a query plan and lead to crashes or data corruption. ### Does this PR introduce any user-facing change? No ### How was this patch tested? By `ArithmeticExpressionSuite`. Closes #27628 from MaxGekk/integral-divide-conf. Authored-by: Maxim Gekk <max.gekk@gmail.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
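For context, the user-facing knob involved is the legacy flag below; a spark-shell style sketch (the behavior noted in the comment follows the flag's description, not re-verified here):
```scala
// With the legacy flag on, integral division via `div` keeps returning bigint.
spark.conf.set("spark.sql.legacy.integralDivide.returnBigint", "true")
spark.sql("SELECT 5 div 2").printSchema()  // expect the result column to be bigint while the flag is on
```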
…pe map key ### What changes were proposed in this pull request? mention the workaround if users do want to use a map type as a key, and add a test to demonstrate it. ### Why are the changes needed? it's better to provide an alternative when we ban something. ### Does this PR introduce any user-facing change? no ### How was this patch tested? N/A Closes #27621 from cloud-fan/map. Authored-by: Wenchen Fan <wenchen@databricks.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
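The workaround itself is not spelled out in this summary. One plausible shape, converting the map into an array of entries before using it as a grouping key, is sketched below (this is an assumption, not necessarily the documented workaround):
```scala
import org.apache.spark.sql.functions._
import spark.implicits._

// Assumption: group on map_entries(m), an array of key/value structs, instead of the map itself.
val df = Seq((Map("a" -> 1), 10), (Map("a" -> 1), 20)).toDF("m", "v")
df.groupBy(map_entries(col("m"))).sum("v").show()
```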
…fig checks from RuntimeConfig to SQLConf ### What changes were proposed in this pull request? - Output warnings for deprecated SQL configs in `SQLConf.setConfWithCheck()` and in `SQLConf.unsetConf()` - Throw an exception for removed SQL configs in `SQLConf.setConfWithCheck()` when they are set to non-default values - Remove checking of deprecated and removed SQL configs from RuntimeConfig ### Why are the changes needed? Currently, warnings/exceptions are printed only when a SQL config is set dynamically, for instance via `spark.conf.set()`. After the changes, removed/deprecated SQL configs will be checked when they are set statically. For example: ``` $ bin/spark-shell --conf spark.sql.fromJsonForceNullableSchema=false scala> spark.emptyDataFrame java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder': ... Caused by: org.apache.spark.sql.AnalysisException: The SQL config 'spark.sql.fromJsonForceNullableSchema' was removed in the version 3.0.0. It was removed to prevent errors like SPARK-23173 for non-default value. ``` ``` $ bin/spark-shell --conf spark.sql.hive.verifyPartitionPath=false scala> spark.emptyDataFrame 20/02/20 02:10:26 WARN SQLConf: The SQL config 'spark.sql.hive.verifyPartitionPath' has been deprecated in Spark v3.0 and may be removed in the future. This config is replaced by 'spark.files.ignoreMissingFiles'. ``` ### Does this PR introduce any user-facing change? Yes ### How was this patch tested? By `SQLConfSuite` Closes #27645 from MaxGekk/remove-sql-configs-followup-2. Authored-by: Maxim Gekk <max.gekk@gmail.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
… `removedSQLConfigs` ### What changes were proposed in this pull request? Exclude the SQL config `spark.sql.variable.substitute.depth` from `SQLConf.removedSQLConfigs` ### Why are the changes needed? By #27169, the config was placed in `SQLConf.removedSQLConfigs`. As a consequence, when a user sets it to a non-default value (1, for example), they will get an exception. That is acceptable for SQL configs that could impact the behavior, but not for this particular config. Raising such an exception will just make migration to Spark 3.0 more difficult. ### Does this PR introduce any user-facing change? Yes; before the changes users got an exception when they set `spark.sql.variable.substitute.depth` to a value different from `40`. ### How was this patch tested? Run `spark.conf.set("spark.sql.variable.substitute.depth", 1)` in `spark-shell`. Closes #27646 from MaxGekk/remove-substitute-depth-conf. Authored-by: Maxim Gekk <max.gekk@gmail.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
This PR aims to upgrade Py4J to `0.10.9` for better Python 3.7 support in Apache Spark 3.0.0 (master/branch-3.0). This is not for `branch-2.4`.
- Apache Spark 3.0.0 is using `Py4J 0.10.8.1` (released on 2018-10-21) because `0.10.8.1` was the first official release to support Python 3.7.
- https://www.py4j.org/changelog.html#py4j-0-10-8-and-py4j-0-10-8-1
- `Py4J 0.10.9` was released on January 25th 2020 with better Python 3.7 support and `magic_member` bug fix.
- https://github.com/bartdag/py4j/releases/tag/0.10.9
- https://www.py4j.org/changelog.html#py4j-0-10-9
No.
Pass the Jenkins with the existing tests.
Closes #27641 from dongjoon-hyun/SPARK-30884.
Authored-by: Dongjoon Hyun <dhyun@apple.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
### What changes were proposed in this pull request? Revise the documentation of `spark.ui.retainedTasks` to make it clear that the configuration is for one stage. ### Why are the changes needed? There are configurations for the limitation of UI data. `spark.ui.retainedJobs`, `spark.ui.retainedStages` and `spark.worker.ui.retainedExecutors` are the total max number for one application, while the configuration `spark.ui.retainedTasks` is the max number for one stage. ### Does this PR introduce any user-facing change? No ### How was this patch tested? None, just doc. Closes #27660 from gengliangwang/reviseRetainTask. Authored-by: Gengliang Wang <gengliang.wang@databricks.com> Signed-off-by: HyukjinKwon <gurwls223@apache.org>
…der issue" ### What changes were proposed in this pull request? This reverts commit bef5d9d. ### Why are the changes needed? Revert it according to #24902 (comment). ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Pass Jenkins. Closes #27540 from Ngone51/revert_spark_28093. Lead-authored-by: wuyi <yi.wu@databricks.com> Co-authored-by: yi.wu <yi.wu@databricks.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
Split off from #27534. ### What changes were proposed in this pull request? This PR deletes some unused cruft from the Sphinx Makefiles. ### Why are the changes needed? To remove dead code and the associated maintenance burden. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Tested locally by building the Python docs and reviewing them in my browser. Closes #27625 from nchammas/SPARK-30731-makefile-cleanup. Lead-authored-by: Nicholas Chammas <nicholas.chammas@liveramp.com> Co-authored-by: Nicholas Chammas <nicholas.chammas@gmail.com> Signed-off-by: HyukjinKwon <gurwls223@apache.org>
### What changes were proposed in this pull request? Point to the correct position when a miswritten `NOSCAN` is detected. ### Why are the changes needed? Before: ``` [info] org.apache.spark.sql.catalyst.parser.ParseException: Expected `NOSCAN` instead of `SCAN`(line 1, pos 0) [info] [info] == SQL == [info] ANALYZE TABLE analyze_partition_with_null PARTITION (name) COMPUTE STATISTICS SCAN [info] ^^^ ``` After: ``` [info] org.apache.spark.sql.catalyst.parser.ParseException: Expected `NOSCAN` instead of `SCAN`(line 1, pos 78) [info] [info] == SQL == [info] ANALYZE TABLE analyze_partition_with_null PARTITION (name) COMPUTE STATISTICS SCAN [info] ------------------------------------------------------------------------------^^^ ``` ### Does this PR introduce any user-facing change? Yes, users will see a better error message. ### How was this patch tested? Manually tested. Closes #27662 from Ngone51/fix_noscan_reference. Authored-by: yi.wu <yi.wu@databricks.com> Signed-off-by: Takeshi Yamamuro <yamamuro@apache.org>
…F by default ### What changes were proposed in this pull request? This PR proposes to throw an exception by default when a user uses the untyped UDF (a.k.a `org.apache.spark.sql.functions.udf(AnyRef, DataType)`). Users can still use it by setting `spark.sql.legacy.useUnTypedUdf.enabled` to `true`. ### Why are the changes needed? According to #23498, since Spark 3.0, the untyped UDF will return the default value of the Java type if the input value is null. For example, with `val f = udf((x: Int) => x, IntegerType)`, `f($"x")` will return 0 in Spark 3.0 but null in Spark 2.4. The behavior change is introduced because Spark 3.0 is built with Scala 2.12 by default. As a result, this might change data silently and may cause correctness issues if users still expect `null` in some cases. Thus, we'd better encourage users to use typed UDFs to avoid this problem. ### Does this PR introduce any user-facing change? Yes. Users will now hit an exception when using an untyped UDF. ### How was this patch tested? Added a test and updated some tests. Closes #27488 from Ngone51/spark_26580_followup. Lead-authored-by: yi.wu <yi.wu@databricks.com> Co-authored-by: wuyi <yi.wu@databricks.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
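For contrast, a sketch of the two UDF forms discussed above (spark-shell style; the behavior noted in the comments follows the description above):
```scala
import org.apache.spark.sql.functions.udf
import org.apache.spark.sql.types.IntegerType

// Untyped form: under Scala 2.12 it returns 0 instead of null for a null input,
// and is now rejected by default unless the legacy flag is enabled.
val untyped = udf((x: Int) => x + 1, IntegerType)

// Typed form: null handling is preserved, so this is the recommended replacement.
val typed = udf((x: Int) => x + 1)
```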
…hanges ### What changes were proposed in this pull request? In the PR, I propose to add the `legacySizeOfNull` parameter to the `Size` expression, and pass the value of `spark.sql.legacy.sizeOfNull` if `legacySizeOfNull` is not provided on creation of `Size`. ### Why are the changes needed? This allows avoiding the issue where the configuration changes between different phases of planning, which can silently break a query plan and lead to crashes or data corruption. ### Does this PR introduce any user-facing change? No ### How was this patch tested? By `CollectionExpressionsSuite`. Closes #27658 from MaxGekk/Size-SQLConf-get-deps. Authored-by: Maxim Gekk <max.gekk@gmail.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
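For context, the user-visible behavior the flag controls can be seen like this (sketch; the -1 vs null contrast follows the flag's documented legacy semantics):
```scala
// Legacy semantics: size(NULL) returns -1; with the flag off it returns NULL instead.
spark.conf.set("spark.sql.legacy.sizeOfNull", "false")
spark.sql("SELECT size(NULL)").show()  // expect null rather than -1
```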
### What changes were proposed in this pull request? - Add missing `since` annotation. - Don't show classes under `org.apache.spark.sql.dynamicpruning` package in API docs. - Fix the scope of `xxxExactNumeric` to remove it from the API docs. ### Why are the changes needed? Avoid leaking APIs unintentionally in Spark 3.0.0. ### Does this PR introduce any user-facing change? No. All these changes are to avoid leaking APIs unintentionally in Spark 3.0.0. ### How was this patch tested? Manually generated the API docs and verified the above issues have been fixed. Closes #27560 from xuanyuanking/SPARK-30809. Authored-by: Yuanjian Li <xyliyuanjian@gmail.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
### What changes were proposed in this pull request? This PR aims to fix the thread-safety issue in turning off AQE for CacheManager by cloning the current session and changing the AQE conf on the cloned session. This PR also adds a utility function for cloning the session with AQE disabled conf value, which can be shared by another caller. ### Why are the changes needed? To fix the potential thread-unsafe problem. ### Does this PR introduce any user-facing change? No. ### How was this patch tested? Manually tested CachedTableSuite with AQE settings enabled. Closes #27659 from maryannxue/spark-30906. Authored-by: maryannxue <maryannxue@apache.org> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
Can one of the admins verify this patch?
Author
Messed up, only wanted a pull request for df6539b
What changes were proposed in this pull request?
Fix for ForEachBatch streaming Java example
Why are the changes needed?
Example in docs was incorrect
Does this PR introduce any user-facing change?
Yes, but only docs.
How was this patch tested?
In IDE
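The corrected Java snippet itself is not reproduced in this description. For orientation only, the same streaming sink pattern looks roughly like this in the Scala API (the source format, sink path, and app name are placeholders, not taken from the docs fix):
```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

val spark = SparkSession.builder().appName("foreachBatch-sketch").getOrCreate()

// Placeholder source; any streaming Dataset works here.
val stream = spark.readStream.format("rate").load()

// Each micro-batch arrives as a regular DataFrame together with its batch id.
val query = stream.writeStream
  .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
    batchDF.write.mode("append").parquet(s"/tmp/out/batch=$batchId")  // placeholder sink
  }
  .start()
```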