[SPARK-32976][SQL][FOLLOWUP] SET and RESTORE hive.exec.dynamic.partition.mode for HiveSQLInsertTestSuite to avoid flakiness #30843
Conversation
…ion.mode for Hive related TestSuites to avoid flakiness
cc @cloud-fan @gatorsmile @maropu thanks
Kubernetes integration test starting
Kubernetes integration test status success
Test build #133024 has finished for PR 30843 at commit
@@ -21,5 +21,23 @@ import org.apache.spark.sql.SQLInsertTestSuite
import org.apache.spark.sql.hive.test.TestHiveSingleton

class HiveSQLInsertTestSuite extends SQLInsertTestSuite with TestHiveSingleton {

  private var originalPartitionMode = ""
I think you can just use null instead of an empty string:

private var originalPartitionMode: String = _

here, and

spark.conf.get("hive.exec.dynamic.partition.mode", null)

later.
You can also do something like:

Option(spark.conf.get("hive.exec.dynamic.partition.mode", null))

too, and later:

originalPartitionMode
  .map(v => spark.conf.set("hive.exec.dynamic.partition.mode", v))
  .getOrElse(spark.conf.unset("hive.exec.dynamic.partition.mode"))
Up to you.
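Putting the two suggestions together, a minimal sketch of the save-and-restore pattern in beforeAll/afterAll could look like the following (a sketch only, not necessarily the exact code that was merged; the placement of the set/restore calls is an assumption):

```scala
import org.apache.spark.sql.SQLInsertTestSuite
import org.apache.spark.sql.hive.test.TestHiveSingleton

class HiveSQLInsertTestSuite extends SQLInsertTestSuite with TestHiveSingleton {

  // Whatever value (if any) was set for the conf before this suite ran.
  private var originalPartitionMode: Option[String] = None

  override def beforeAll(): Unit = {
    super.beforeAll()
    originalPartitionMode = Option(spark.conf.get("hive.exec.dynamic.partition.mode", null))
    // The partitioned-table insert tests need dynamic partitioning in non-strict mode.
    spark.conf.set("hive.exec.dynamic.partition.mode", "nonstrict")
  }

  override def afterAll(): Unit = {
    try {
      // Restore the previous value, or unset the conf if it was not set before.
      originalPartitionMode
        .map(v => spark.conf.set("hive.exec.dynamic.partition.mode", v))
        .getOrElse(spark.conf.unset("hive.exec.dynamic.partition.mode"))
    } finally {
      super.afterAll()
    }
  }
}
```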
I forked this from the HiveCharVarchar test suite. Shall we change it there too?
Not in this context, because this is a follow-up of SPARK-32976. To touch other test suites, we need to file another JIRA.
+1, LGTM. Merged to master/3.1.
…ion.mode for HiveSQLInsertTestSuite to avoid flakiness

### What changes were proposed in this pull request?

As #29893 (comment) mentioned:

> We need to set spark.conf.set("hive.exec.dynamic.partition.mode", "nonstrict") before executing this suite; otherwise, test("insert with column list - follow table output order + partitioned table") will fail. The reason it does not fail is that some test cases [running before this suite] do not change the default value of hive.exec.dynamic.partition.mode back to strict. However, the order of test suite execution is not deterministic.

### Why are the changes needed?

Avoid flakiness in tests.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Existing tests.

Closes #30843 from yaooqinn/SPARK-32976-F.

Authored-by: Kent Yao <yaooqinn@hotmail.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
(cherry picked from commit dd44ba5)
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
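For context on why the conf value matters, here is an illustrative sketch against a Hive-enabled session (not the exact failing test; the table, columns, and values are hypothetical): under hive.exec.dynamic.partition.mode=strict, an insert whose partition columns are all dynamic is rejected, so the test only passed when an earlier suite happened to leave the mode at nonstrict.

```scala
// Hypothetical table mirroring the kind of insert the failing test performs.
spark.sql("CREATE TABLE t (c1 INT, c2 STRING) PARTITIONED BY (p STRING) STORED AS PARQUET")

// With hive.exec.dynamic.partition.mode=strict this insert is rejected,
// because no partition column is given a static value; under nonstrict it runs.
spark.conf.set("hive.exec.dynamic.partition.mode", "nonstrict")
spark.sql("INSERT INTO t PARTITION (p) SELECT 1, 'a', 'x'")
```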
Er... Shall we wait for Jenkins or GA? 😁
Oh... My bad.
Thanks. I double-checked it by quickly verifying it manually.
Kubernetes integration test starting
Kubernetes integration test status success
Test build #133074 has finished for PR 30843 at commit
@@ -21,5 +21,20 @@ import org.apache.spark.sql.SQLInsertTestSuite
import org.apache.spark.sql.hive.test.TestHiveSingleton

class HiveSQLInsertTestSuite extends SQLInsertTestSuite with TestHiveSingleton {

  private val originalPartitionMode = spark.conf.getOption("hive.exec.dynamic.partition.mode")
In sql/core, the SparkSession is created for each test suite, and we can't use spark in the class constructor. In sql/hive it's OK as all test suites share one global SparkSession, but let's avoid using this pattern that only works in sql/hive.
Oh right, this should be avoided.
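To illustrate the point (a hypothetical sketch, not the exact change that was merged): initializing the field in the class constructor forces spark to exist at instantiation time, which only holds in sql/hive where one global TestHive session is shared, whereas deferring the read to beforeAll() works in both modules.

```scala
// Only works in sql/hive, where the shared TestHive session already exists
// when the suite object is constructed:
private val originalPartitionMode =
  spark.conf.getOption("hive.exec.dynamic.partition.mode")

// Portable in both sql/core and sql/hive: defer the read until the session
// is guaranteed to be initialized (the field name here is illustrative).
private var savedPartitionMode: Option[String] = None

override def beforeAll(): Unit = {
  super.beforeAll()
  savedPartitionMode = spark.conf.getOption("hive.exec.dynamic.partition.mode")
}
```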