[SPARK-23259][SQL] Clean up legacy code around hive external catalog and HiveClientImpl

## What changes were proposed in this pull request?

Two pieces of legacy code are removed by this patch:

- in HiveExternalCatalog: the `withClient` wrapper is not necessary for the private method `getRawTable`.

- in HiveClientImpl: redundant code in the `tableExists` and `getTableOption` methods is extracted into a shared `getRawTableOption` helper.
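
The second change is a standard extract-helper refactor. A minimal, self-contained sketch of the same pattern, using a hypothetical stand-in for the Hive metastore client (`FakeClient` and its table set are illustrative, not Spark's actual API):

```scala
// Hypothetical stand-in for the Hive metastore client.
class FakeClient {
  private val tables = Set("db.present")
  // Mirrors the Java client's convention: returns null when the table is absent.
  def getTable(db: String, table: String, throwException: Boolean): String =
    if (tables.contains(s"$db.$table")) s"$db.$table" else null
}

val client = new FakeClient

// The shared helper: wrap the possibly-null result exactly once.
def getRawTableOption(db: String, table: String): Option[String] =
  Option(client.getTable(db, table, false /* do not throw exception */))

// Both callers reuse the helper instead of repeating Option(client.getTable(...)).
def tableExists(db: String, table: String): Boolean =
  getRawTableOption(db, table).nonEmpty

println(tableExists("db", "present"))  // true
println(tableExists("db", "missing"))  // false
```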

This PR takes over #20425

## How was this patch tested?

Existing tests

Closes #20425

Author: hyukjinkwon <gurwls223@apache.org>

Closes #21780 from HyukjinKwon/SPARK-23259.
Feng Liu authored and HyukjinKwon committed Jul 17, 2018
1 parent 0f0d186 commit d57a267
Showing 2 changed files with 7 additions and 3 deletions.
```diff
@@ -114,7 +114,7 @@ private[spark] class HiveExternalCatalog(conf: SparkConf, hadoopConf: Configurat
    * should interpret these special data source properties and restore the original table metadata
    * before returning it.
    */
-  private[hive] def getRawTable(db: String, table: String): CatalogTable = withClient {
+  private[hive] def getRawTable(db: String, table: String): CatalogTable = {
     client.getTable(db, table)
   }
```

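Dropping `withClient` on a private helper is safe when every public entry point already wraps the call; the inner wrapper only duplicated setup work. A minimal sketch of that reasoning (the counter and the wrapper body are illustrative assumptions, not Spark's actual error-handling logic):

```scala
// Illustrative stand-in for withClient: do per-call setup, then run the body.
// (The real wrapper in HiveExternalCatalog handles client errors; the counter
// here is just a hypothetical way to observe how many times setup runs.)
var setupCount = 0
def withClient[T](body: => T): T = {
  setupCount += 1
  body
}

// The private helper no longer wraps itself...
def getRawTable(db: String, table: String): String = s"$db.$table"

// ...because every public entry point already runs it inside withClient.
def getTable(db: String, table: String): String = withClient {
  getRawTable(db, table)
}

println(getTable("default", "t"))  // default.t
println(setupCount)                // 1: setup ran once, not twice
```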
```diff
@@ -353,15 +353,19 @@ private[hive] class HiveClientImpl(
     client.getDatabasesByPattern(pattern).asScala
   }
 
+  private def getRawTableOption(dbName: String, tableName: String): Option[HiveTable] = {
+    Option(client.getTable(dbName, tableName, false /* do not throw exception */))
+  }
+
   override def tableExists(dbName: String, tableName: String): Boolean = withHiveState {
-    Option(client.getTable(dbName, tableName, false /* do not throw exception */)).nonEmpty
+    getRawTableOption(dbName, tableName).nonEmpty
   }
 
   override def getTableOption(
       dbName: String,
       tableName: String): Option[CatalogTable] = withHiveState {
     logDebug(s"Looking up $dbName.$tableName")
-    Option(client.getTable(dbName, tableName, false)).map { h =>
+    getRawTableOption(dbName, tableName).map { h =>
       // Note: Hive separates partition columns and the schema, but for us the
       // partition columns are part of the schema
       val cols = h.getCols.asScala.map(fromHiveColumn)
```
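One helper can serve both callers because `Option.apply` maps `null` to `None`, which makes the `.nonEmpty` existence check null-safe. A standalone sketch (the `lookup` function is hypothetical, mimicking a Java-style API where `null` means "not found"):

```scala
// Hypothetical Java-style API: null signals "absent".
def lookup(key: String): String =
  if (key == "hit") "value" else null

// Option.apply is null-safe: Option(null) is None, never Some(null).
val found = Option(lookup("hit"))
val missing = Option(lookup("miss"))

println(found)            // Some(value)
println(missing)          // None
println(missing.isEmpty)  // true: the basis of the simplified tableExists
```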
