[SPARK-14922][SPARK-17732][SPARK-23866][SQL] Support partition filters in ALTER TABLE DROP PARTITION #20999
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala

```diff
@@ -927,7 +927,7 @@ class SparkSqlAstBuilder(conf: SQLConf) extends AstBuilder(conf) {
     }
     AlterTableDropPartitionCommand(
       visitTableIdentifier(ctx.tableIdentifier),
-      ctx.partitionSpec.asScala.map(visitNonOptionalPartitionSpec),
+      ctx.dropPartitionSpec().asScala.map(visitDropPartitionSpec),
       ifExists = ctx.EXISTS != null,
       purge = ctx.PURGE != null,
       retainData = false)
```

Review thread on the changed line:

Reviewer: Can you update the comment? (see SparkSqlParser.scala line 916 at 01c3dfa)

Author: Mmmh, I am not sure how to update it. The only difference is that …
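The switch from the generic partitionSpec rule to a dedicated dropPartitionSpec rule is what lets DROP PARTITION accept comparators other than `=`. A minimal illustration of the syntax this enables (the table and partition column are invented for the example):

```scala
// Hypothetical table `logs`, partitioned by the string column `dt`.
spark.sql("ALTER TABLE logs DROP PARTITION (dt = '2018-04-01')")  // equality, as before
spark.sql("ALTER TABLE logs DROP PARTITION (dt < '2018-01-01')")  // newly supported
spark.sql("ALTER TABLE logs DROP PARTITION (dt >= '2018-01-01', dt < '2018-02-01')")
```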
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala
```diff
@@ -29,10 +29,10 @@ import org.apache.hadoop.mapred.{FileInputFormat, JobConf}

 import org.apache.spark.sql.{AnalysisException, Row, SparkSession}
 import org.apache.spark.sql.catalyst.TableIdentifier
-import org.apache.spark.sql.catalyst.analysis.{NoSuchTableException, Resolver}
+import org.apache.spark.sql.catalyst.analysis.Resolver
 import org.apache.spark.sql.catalyst.catalog._
-import org.apache.spark.sql.catalyst.catalog.CatalogTypes.TablePartitionSpec
-import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference}
+import org.apache.spark.sql.catalyst.catalog.CatalogTypes.{PartitionFiltersSpec, TablePartitionSpec}
+import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference, Cast, EqualNullSafe, EqualTo, GreaterThan, GreaterThanOrEqual, LessThan, LessThanOrEqual, Literal, Not}
 import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
 import org.apache.spark.sql.execution.datasources.{HadoopFsRelation, LogicalRelation, PartitioningUtils}
 import org.apache.spark.sql.execution.datasources.orc.OrcFileFormat
```
```diff
@@ -521,35 +521,114 @@ case class AlterTableRenamePartitionCommand(
  */
 case class AlterTableDropPartitionCommand(
     tableName: TableIdentifier,
-    specs: Seq[TablePartitionSpec],
+    partitionsFilters: Seq[PartitionFiltersSpec],
     ifExists: Boolean,
     purge: Boolean,
     retainData: Boolean)
   extends RunnableCommand {
```

Review thread on the new partitionsFilters parameter:

Reviewer: Shall we make the table relation a child? Then we can resolve the …

Author: I thought about that. The point is that we anyway have to check that the specified attributes are the partitioning ones, so I am not sure it is worth running the whole set of analyzer rules for something we have to handle ourselves anyway.

Reviewer: But it's also weird to use …

Author: Sure, I will. Thanks.
```diff

   override def run(sparkSession: SparkSession): Seq[Row] = {
     val catalog = sparkSession.sessionState.catalog
+    val timeZone = Option(sparkSession.sessionState.conf.sessionLocalTimeZone)
     val table = catalog.getTableMetadata(tableName)
+    val partitionColumns = table.partitionColumnNames
+    val partitionAttributes = table.partitionSchema.toAttributes.map(a => a.name -> a).toMap
     DDLUtils.verifyAlterTableType(catalog, table, isView = false)
     DDLUtils.verifyPartitionProviderIsHive(sparkSession, table, "ALTER TABLE DROP PARTITION")

-    val normalizedSpecs = specs.map { spec =>
-      PartitioningUtils.normalizePartitionSpec(
-        spec,
-        table.partitionColumnNames,
-        table.identifier.quotedString,
-        sparkSession.sessionState.conf.resolver)
-    }
+    val resolvedSpecs = partitionsFilters.flatMap { filtersSpec =>
+      if (hasComplexFilters(filtersSpec)) {
+        generatePartitionSpec(filtersSpec,
+          partitionColumns,
+          partitionAttributes,
+          table.identifier,
+          catalog,
+          sparkSession.sessionState.conf.resolver,
+          timeZone)
+      } else {
+        val partitionSpec = filtersSpec.map {
+          case (key, _, value) => key -> value
+        }.toMap
+        PartitioningUtils.normalizePartitionSpec(
+          partitionSpec,
+          partitionColumns,
+          table.identifier.quotedString,
+          sparkSession.sessionState.conf.resolver) :: Nil
+      }
+    }
```

Reviewer: We should check resolvedSpecs here and throw an error message if the total set of resolved specs is empty.
Review thread on the dropPartitions call:

Reviewer: Does Hive have an API to drop partitions with a predicate? I think the current approach is very inefficient with non-equality partition predicates.

Author: Unfortunately, no. I checked https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java but I could find none.

Reviewer: So the implementation here is similar to how Hive implements it?

Author: Yes, this is my understanding. You can check …

```diff
     catalog.dropPartitions(
-      table.identifier, normalizedSpecs, ignoreIfNotExists = ifExists, purge = purge,
+      table.identifier, resolvedSpecs, ignoreIfNotExists = ifExists, purge = purge,
       retainData = retainData)

     CommandUtils.updateTableStats(sparkSession, table)

     Seq.empty[Row]
   }
```
```diff
+  def hasComplexFilters(partitionFilterSpec: PartitionFiltersSpec): Boolean = {
+    !partitionFilterSpec.forall(_._2 == "EQ")
+  }
```
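Judging from its use here, a PartitionFiltersSpec is a sequence of (column, operator, value) triples, so hasComplexFilters simply asks whether any operator is something other than plain equality. A sketch for orientation (the type alias and values below are inferred for illustration, not quoted from the PR):

```scala
// Inferred shape: each entry is (partitionColumn, operatorToken, literalValue).
type PartitionFiltersSpec = Seq[(String, String, String)]

val equalityOnly: PartitionFiltersSpec = Seq(("dt", "EQ", "2018-04-01"))
val withRange: PartitionFiltersSpec = Seq(("dt", "LT", "2018-01-01"))

// hasComplexFilters(equalityOnly) == false: take the fast path and build a
// TablePartitionSpec map directly.
// hasComplexFilters(withRange) == true: resolve the matching partitions
// through generatePartitionSpec and the metastore.
```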
```diff

+  def generatePartitionSpec(
+      partitionFilterSpec: PartitionFiltersSpec,
+      partitionColumns: Seq[String],
+      partitionAttributes: Map[String, Attribute],
+      tableIdentifier: TableIdentifier,
+      catalog: SessionCatalog,
+      resolver: Resolver,
+      timeZone: Option[String]): Seq[TablePartitionSpec] = {
+    val filters = partitionFilterSpec.map { case (partitionColumn, operator, value) =>
+      val normalizedPartition = PartitioningUtils.normalizePartitionColumn(
+        partitionColumn,
+        partitionColumns,
+        tableIdentifier.quotedString,
+        resolver)
+      val partitionAttr = partitionAttributes(normalizedPartition)
+      val castedLiteralValue = Cast(Literal(value), partitionAttr.dataType, timeZone)
+      operator match {
+        case "EQ" =>
+          EqualTo(partitionAttr, castedLiteralValue)
+        case "NSEQ" =>
+          EqualNullSafe(partitionAttr, castedLiteralValue)
+        case "NEQ" | "NEQJ" =>
+          Not(EqualTo(partitionAttr, castedLiteralValue))
+        case "LT" =>
+          LessThan(partitionAttr, castedLiteralValue)
+        case "LTE" =>
+          LessThanOrEqual(partitionAttr, castedLiteralValue)
+        case "GT" =>
+          GreaterThan(partitionAttr, castedLiteralValue)
+        case "GTE" =>
+          GreaterThanOrEqual(partitionAttr, castedLiteralValue)
+      }
+    }
+    val partitions = catalog.listPartitionsByFilter(tableIdentifier, filters)
+    partitions.map(_.spec)
+  }
 }
```
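Restated: each triple becomes a Catalyst predicate over the partition attribute, the metastore returns every partition satisfying the conjunction of those predicates, and the concrete specs of those partitions are what actually get dropped. A small self-contained sketch of the predicate-building step (column name and time zone are assumed; simplified from the code above):

```scala
import org.apache.spark.sql.catalyst.expressions.{AttributeReference, Cast, LessThan, Literal}
import org.apache.spark.sql.types.StringType

// The triple ("dt", "LT", "2018-01-01") becomes a LessThan over the attribute.
val dt = AttributeReference("dt", StringType)()
val predicate = LessThan(dt, Cast(Literal("2018-01-01"), StringType, Option("UTC")))

// catalog.listPartitionsByFilter(tableIdent, Seq(predicate)) then yields the
// matching CatalogTablePartitions, and .spec on each gives the exact
// (column -> value) map handed to dropPartitions. The drop itself is still
// performed per concrete partition, which is the inefficiency raised above.
```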
```diff

+object AlterTableDropPartitionCommand {
+
+  def fromSpecs(
+      tableName: TableIdentifier,
+      specs: Seq[TablePartitionSpec],
+      ifExists: Boolean,
+      purge: Boolean,
+      retainData: Boolean): AlterTableDropPartitionCommand = {
+    AlterTableDropPartitionCommand(tableName,
+      specs.map(tablePartitionToPartitionFiltersSpec),
+      ifExists,
+      purge,
+      retainData)
+  }
+
+  def tablePartitionToPartitionFiltersSpec(spec: TablePartitionSpec): PartitionFiltersSpec = {
+    spec.map {
+      case (key, value) => (key, "EQ", value)
+    }.toSeq
+  }
+}
```
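fromSpecs keeps the existing map-based call sites working by rewriting every key -> value entry as an equality triple. A quick illustration (the values are taken from the test below):

```scala
val spec: Map[String, String] = Map("dt" -> "2008-08-08", "country" -> "us")

AlterTableDropPartitionCommand.tablePartitionToPartitionFiltersSpec(spec)
// => Seq(("dt", "EQ", "2008-08-08"), ("country", "EQ", "us"))
```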
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLParserSuite.scala
```diff
@@ -861,7 +861,8 @@ class DDLParserSuite extends PlanTest with SharedSQLContext {
     assertUnsupported(sql2_view)

     val tableIdent = TableIdentifier("table_name", None)
-    val expected1_table = AlterTableDropPartitionCommand(
+    val expected1_table = AlterTableDropPartitionCommand.fromSpecs(
       tableIdent,
       Seq(
         Map("dt" -> "2008-08-08", "country" -> "us"),
```

Review thread:

Reviewer: Can you add test cases to check that the parser accepts the comparators added by this PR?

Author: Sure, will do, thanks.
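As a sketch of what such a parser test could look like (names follow the suite's existing style, but this is illustrative, not the test that eventually landed in the PR):

```scala
test("alter table: drop partition with comparators") {
  val parsed = parser.parsePlan(
    "ALTER TABLE table_name DROP PARTITION (dt < '2008-08-08')")
  // One partition spec containing a single ("dt", "LT", "2008-08-08") filter triple.
  val expected = AlterTableDropPartitionCommand(
    TableIdentifier("table_name", None),
    Seq(Seq(("dt", "LT", "2008-08-08"))),
    ifExists = false,
    purge = false,
    retainData = false)
  comparePlans(parsed, expected)
}
```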
Review thread on the filter syntax:

Reviewer: It has to be in this format? `partCol1 > 2`. How about `2 > partCol1`?

Author: Yes, in Hive it has to be like this. `2 > partCol1` is not supported by Hive.

Reviewer: Does Hive also throw ANTLR errors for the case `2 > partCol1`?

Author: Hive does throw an error in that case. Do you mean to ask whether that error is a parsing exception or another kind of exception?

Reviewer: Yeah, yes. I like user-understandable error messages.

Author: Hive throws this parser exception: […] So yes, it is analogous to this.

Reviewer: Thanks for the check. I still like meaningful messages though; we should wait for other reviewers' comments.
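To summarize the restriction under discussion: the grammar only accepts comparisons with the partition column on the left-hand side, mirroring Hive. An invented illustration (the exact error text depends on the parser):

```scala
// Accepted: partition column on the left of the comparator.
spark.sql("ALTER TABLE logs DROP PARTITION (part_col > 2)")

// Rejected at parse time, as in Hive, whose grammar likewise only allows
// column-on-the-left comparisons:
// spark.sql("ALTER TABLE logs DROP PARTITION (2 > part_col)")  // parse exception
```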