[SPARK-31847][CORE][TESTS] DAGSchedulerSuite: Rewrite the test framework to support applying specified Spark configurations. #28917
Conversation
Test build #124472 has finished for PR 28917 at commit
  testWithSparkConf(testName, testTags: _*)()(testFun)(pos)
}

protected def testWithSparkConf(testName: String, testTags: Tag*)
It seems we only need testWithSparkConf. Can we just make it a private function?
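For context, a minimal sketch of the shape being suggested; the class name, the use of AnyFunSuite, and the exact signatures are assumptions for illustration, not the PR's final code:

```scala
import org.scalactic.source.Position
import org.scalatest.Tag
import org.scalatest.funsuite.AnyFunSuite

// Hypothetical sketch: only the private helper accepts conf pairs, and the
// plain test() routes through it with an empty pair list, mirroring the call
// quoted in the diff above.
abstract class ConfAwareSuite extends AnyFunSuite {
  // Suite-specific: build a SparkConf from the pairs and run the test body.
  protected def withSparkConf(pairs: (String, String)*)(testFun: => Any): Unit

  private def testWithSparkConf(testName: String, testTags: Tag*)
      (pairs: (String, String)*)(testFun: => Any)(pos: Position): Unit = {
    super.test(testName, testTags: _*)(withSparkConf(pairs: _*)(testFun))(pos)
  }

  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)
      (implicit pos: Position): Unit = {
    testWithSparkConf(testName, testTags: _*)()(testFun)(pos)
  }
}
```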
Test build #124474 has finished for PR 28917 at commit
Test build #124587 has finished for PR 28917 at commit
  testWithSparkConf(testName, testTags: _*)()(testFun)(pos)
}

private def testWithSparkConf(testName: String, testTags: Tag*)
Shall we simulate the usage of withSQLConf instead of integrating the confs with test()?
Most SQL-related configuration parameters can be changed dynamically, but most of Core's parameters are static.
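A small illustration of that difference (the config key and app name are chosen for illustration, not taken from the PR): a scheduler conf such as spark.scheduler.mode is read once when the SparkContext starts, so it has to be baked into the SparkConf up front rather than toggled on a live context the way withSQLConf toggles SQL confs.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object StaticConfDemo {
  def main(args: Array[String]): Unit = {
    // spark.scheduler.mode is consumed when the task scheduler is created,
    // so it must already be in the SparkConf when the SparkContext starts;
    // it cannot be flipped per test on an already-running context.
    val conf = new SparkConf()
      .setMaster("local")
      .setAppName("static-conf-demo")
      .set("spark.scheduler.mode", "FAIR")
    val sc = new SparkContext(conf)
    try {
      assert(sc.getConf.get("spark.scheduler.mode") == "FAIR")
    } finally {
      sc.stop()
    }
  }
}
```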
}

/** Sets all configurations specified in `pairs`, calls `init`, and then calls `testFun` */
private def withSparkConf(pairs: (String, String)*)(testFun: => Any): Unit = {
For test() and withSparkConf(), shall we extract them into a base class? I guess they could be used by other test suites as well.
OK. I will simulate SQLHelper.
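A rough sketch of what such an extracted helper could look like, loosely modelled on SQLHelper as discussed above; the trait name SparkConfHelper and the SparkConf => Any signature are assumptions, not the PR's final API:

```scala
import org.apache.spark.SparkConf

// Hypothetical shared trait: it only builds a SparkConf from the given pairs
// and hands it to the test body. Anything suite-specific (such as
// DAGSchedulerSuite's init) stays in the concrete suite.
trait SparkConfHelper {
  protected def withSparkConf(pairs: (String, String)*)(testFun: SparkConf => Any): Unit = {
    val conf = new SparkConf(false)
    pairs.foreach { case (k, v) => conf.set(k, v) }
    testFun(conf)
  }
}
```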
private def withSparkConf(pairs: (String, String)*)(testFun: => Any): Unit = {
  val conf = new SparkConf()
  pairs.foreach(kv => conf.set(kv._1, kv._2))
  init(conf)
init() is specific to DAGSchedulerSuite; we should separate it from the test framework.
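A short usage sketch of that separation, assuming the hypothetical SparkConfHelper variant above (the conf key is illustrative):

```scala
// The shared helper only builds the conf; the suite passes it to its own init,
// so the helper can live in a common base trait.
withSparkConf("spark.task.maxFailures" -> "1") { conf =>
  init(conf) // DAGSchedulerSuite-specific setup stays in the suite
  // ... assertions against the scheduler fixtures ...
}
```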
Test build #124661 has finished for PR 28917 at commit
retest this please.
Test build #124723 has finished for PR 28917 at commit
retest this please.
Test build #124730 has finished for PR 28917 at commit
retest this please.
Test build #124738 has finished for PR 28917 at commit
retest this please.
Test build #124756 has finished for PR 28917 at commit
retest this please.
Test build #124841 has finished for PR 28917 at commit
retest this please.
Test build #124851 has finished for PR 28917 at commit
retest this please.
Test build #124852 has finished for PR 28917 at commit
retest this please.
Test build #124854 has finished for PR 28917 at commit
cc @jiangxb1987
Test build #125808 has finished for PR 28917 at commit
retest this please
Test build #125814 has finished for PR 28917 at commit
Test build #125822 has finished for PR 28917 at commit
retest this please
Test build #125842 has finished for PR 28917 at commit
b060daa to ec0d8d0
Test build #125897 has finished for PR 28917 at commit
retest this please
Test build #126022 has finished for PR 28917 at commit
retest this please
Test build #126041 has finished for PR 28917 at commit
Because of conflicts, I will close this PR.
What changes were proposed in this pull request?
DAGSchedulerSuite has some issues:
afterEach and init are called when the SparkConf of the default SparkContext does not have a configuration that the test case must set. This causes the SparkContext initialized in beforeEach to be discarded without being used, resulting in waste. On the other hand, the flexibility to add configurations to SparkConf should be provided by the test framework.
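As a hedged illustration of the intended usage (the method name comes from the diff quoted earlier; the conf keys and exact parameter lists are assumptions): a test case declares the configurations it needs, and the framework builds the SparkContext with them instead of first creating a default one and throwing it away.

```scala
// Usage sketch only; the exact signature in the merged code may differ.
testWithSparkConf("stage retries honour the configured limits")(
  "spark.stage.maxConsecutiveAttempts" -> "2",
  "spark.task.maxFailures" -> "1") {
  // the body runs against a SparkContext built with the pairs above,
  // not the default one created in beforeEach
}
```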
Why are the changes needed?
Support applying specified Spark configurations when initializing the SparkContext.
Does this PR introduce any user-facing change?
No
How was this patch tested?
Jenkins test.