[SPARK-20667][SQL][TESTS] Cleanup the cataloged metadata after completing the package of sql/core and sql/hive

## What changes were proposed in this pull request?

So far, we do not drop all the cataloged objects after each test package finishes. As a result, we sometimes hit strange test failures because a previous test suite did not drop its cataloged/temporary objects (tables, functions, databases). As a first step, we can clean up this state when the test packages of `sql/core` and `sql/hive` complete.
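For context, here is a minimal, self-contained sketch of the pattern this change applies (it is not part of the patch): a ScalaTest suite that calls `SessionCatalog.reset()` from `afterAll()` so that cataloged/temporary objects created by one suite cannot leak into the next. The suite name, the `tmp_numbers` view, and the local-mode session setup are illustrative only; `SparkSession.sessionState` is `private[sql]`, so such a suite must live under the `org.apache.spark.sql` package (as `SharedSQLContext` does), and plain ScalaTest `FunSuite` stands in here for Spark's internal test base classes.

```scala
package org.apache.spark.sql.test // sessionState is private[sql], so this must live under org.apache.spark.sql

import org.apache.spark.sql.SparkSession
import org.scalatest.{BeforeAndAfterAll, FunSuite}

// Hypothetical suite used only to illustrate per-suite catalog cleanup.
class CatalogCleanupExampleSuite extends FunSuite with BeforeAndAfterAll {

  private var spark: SparkSession = _

  override protected def beforeAll(): Unit = {
    super.beforeAll()
    spark = SparkSession.builder()
      .master("local[2]")
      .appName("catalog-cleanup-example")
      .getOrCreate()
  }

  test("temp view is visible while the suite runs") {
    spark.range(10).createOrReplaceTempView("tmp_numbers")
    assert(spark.table("tmp_numbers").count() == 10)
  }

  override protected def afterAll(): Unit = {
    try {
      if (spark != null) {
        // Drop all databases (except default), tables, functions, and temporary
        // objects created by this suite, so nothing leaks into the next suite.
        spark.sessionState.catalog.reset()
        spark.stop()
        spark = null
      }
    } finally {
      super.afterAll()
    }
  }
}
```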

## How was this patch tested?
N/A

Author: Xiao Li <gatorsmile@gmail.com>

Closes #17908 from gatorsmile/reset.

(cherry picked from commit 0d00c76)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
gatorsmile authored and cloud-fan committed May 9, 2017
1 parent 4b7aa0b commit b330967
Showing 3 changed files with 4 additions and 7 deletions.
@@ -1251,9 +1251,10 @@ class SessionCatalog(
         dropTempFunction(func.funcName, ignoreIfNotExists = false)
       }
     }
-    tempTables.clear()
+    clearTempTables()
     globalTempViewManager.clear()
     functionRegistry.clear()
+    tableRelationCache.invalidateAll()
     // restore built-in functions
     FunctionRegistry.builtin.listFunction().foreach { f =>
       val expressionInfo = FunctionRegistry.builtin.lookupFunction(f)

@@ -74,6 +74,7 @@ trait SharedSQLContext extends SQLTestUtils with BeforeAndAfterEach with Eventua
   protected override def afterAll(): Unit = {
     super.afterAll()
     if (_spark != null) {
+      _spark.sessionState.catalog.reset()
       _spark.stop()
       _spark = null
     }

@@ -488,14 +488,9 @@ private[hive] class TestHiveSparkSession(

       sharedState.cacheManager.clearCache()
       loadedTables.clear()
-      sessionState.catalog.clearTempTables()
-      sessionState.catalog.tableRelationCache.invalidateAll()
-
+      sessionState.catalog.reset()
       metadataHive.reset()

-      FunctionRegistry.getFunctionNames.asScala.filterNot(originalUDFs.contains(_)).
-        foreach { udfName => FunctionRegistry.unregisterTemporaryUDF(udfName) }
-
       // HDFS root scratch dir requires the write all (733) permission. For each connecting user,
       // an HDFS scratch dir: ${hive.exec.scratchdir}/<username> is created, with
       // ${hive.scratch.dir.permission}. To resolve the permission issue, the simplest way is to
