
[SPARK-14922][SPARK-17732][SPARK-23866][SQL] Support partition filters in ALTER TABLE DROP PARTITION #20999

Closed
wants to merge 16 commits into from
@@ -114,10 +114,10 @@ statement
partitionSpec+ #addTablePartition
| ALTER TABLE tableIdentifier
from=partitionSpec RENAME TO to=partitionSpec #renameTablePartition
- | ALTER TABLE tableIdentifier
-     DROP (IF EXISTS)? partitionSpec (',' partitionSpec)* PURGE? #dropTablePartitions
- | ALTER VIEW tableIdentifier
-     DROP (IF EXISTS)? partitionSpec (',' partitionSpec)* #dropTablePartitions
+ | ALTER TABLE tableIdentifier DROP (IF EXISTS)?
+     dropPartitionSpec (',' dropPartitionSpec)* PURGE? #dropTablePartitions
+ | ALTER VIEW tableIdentifier DROP (IF EXISTS)?
+     dropPartitionSpec (',' dropPartitionSpec)* #dropTablePartitions
| ALTER TABLE tableIdentifier partitionSpec? SET locationSpec #setTableLocation
| ALTER TABLE tableIdentifier RECOVER PARTITIONS #recoverPartitions
| DROP TABLE (IF EXISTS)? tableIdentifier PURGE? #dropTable
@@ -261,6 +261,14 @@ partitionVal
: identifier (EQ constant)?
;

dropPartitionSpec
: PARTITION '(' dropPartitionVal (',' dropPartitionVal)* ')'
;

dropPartitionVal
: identifier (comparisonOperator constant)?
Member: Does it have to be in this format, i.e. partCol1 > 2? How about 2 > partCol1?

Contributor Author: Yes, in Hive it has to be like this; 2 > partCol1 is not supported by Hive.

Member: Does Hive also throw antlr errors for the 2 > partCol1 case?

Contributor Author: Hive does throw an error in that case. Are you asking whether that error is a parsing exception or another kind of exception?

Member: Yes. I like user-understandable error messages.

Contributor Author: Hive throws this parser exception:

hive> alter table test1 drop partition(1 > c);
NoViableAltException(368@[])
	at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.identifier(HiveParser_IdentifiersParser.java:12014)
	at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.dropPartitionVal(HiveParser_IdentifiersParser.java:11684)
	at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.dropPartitionSpec(HiveParser_IdentifiersParser.java:11563)
	at org.apache.hadoop.hive.ql.parse.HiveParser.dropPartitionSpec(HiveParser.java:44851)
	at org.apache.hadoop.hive.ql.parse.HiveParser.alterStatementSuffixDropPartitions(HiveParser.java:11564)
	at org.apache.hadoop.hive.ql.parse.HiveParser.alterTableStatementSuffix(HiveParser.java:8000)
	at org.apache.hadoop.hive.ql.parse.HiveParser.alterStatement(HiveParser.java:7450)
	at org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:4340)
	at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:2497)
	at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1423)
	at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:209)
	at org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:74)
	at org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:67)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:615)
	at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1829)
	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1776)
	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1771)
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214)
	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
	at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:832)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:770)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:694)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
FAILED: ParseException line 1:33 cannot recognize input near '1' '>' 'c' in drop partition statement

So yes, it is analogous to this.

Member: Thanks for the check. I still prefer meaningful messages though; we should wait for other reviewers' comments.

;
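For reference, a quick sketch of what the new grammar accepts and rejects, mirroring the Hive behavior discussed in the thread above (the table and column names here are hypothetical):

// Accepted: the partition column must appear on the left-hand side.
spark.sql("ALTER TABLE logs DROP IF EXISTS PARTITION (part > 2)")

// Rejected with a parse error, as in Hive: literal on the left-hand side.
spark.sql("ALTER TABLE logs DROP IF EXISTS PARTITION (2 > part)")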

describeFuncName
: qualifiedName
| STRING
@@ -552,6 +552,11 @@ object CatalogTypes {
*/
type TablePartitionSpec = Map[String, String]

/**
* Specifications of table partition filters. Seq of column name, comparison operator and value.
*/
type PartitionFiltersSpec = Seq[(String, String, String)]
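To make the encoding concrete, a small illustration (values hypothetical); the operator strings are SqlBaseParser symbolic token names such as "EQ" for = and "GT" for >:

// PARTITION (dt > '2018-01-01', country = 'us') would be carried as:
val filters: CatalogTypes.PartitionFiltersSpec =
  Seq(("dt", "GT", "2018-01-01"), ("country", "EQ", "us"))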

/**
* Initialize an empty spec.
*/
@@ -293,6 +293,22 @@ class AstBuilder(conf: SQLConf) extends SqlBaseBaseVisitor[AnyRef] with Logging
}
}

/**
 * Create a partition filter specification: one (column, operator, value) triple per filter.
 */
override def visitDropPartitionSpec(
ctx: DropPartitionSpecContext): Seq[(String, String, String)] = {
withOrigin(ctx) {
ctx.dropPartitionVal().asScala.map { pFilter =>
val partition = pFilter.identifier().getText
val value = visitStringConstant(pFilter.constant())
val operator = pFilter.comparisonOperator().getChild(0).asInstanceOf[TerminalNode]
val stringOperator = SqlBaseParser.VOCABULARY.getSymbolicName(operator.getSymbol.getType)
(partition, stringOperator, value)
}
}
}
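For a concrete picture of the output, a hedged example of what this visitor yields:

// PARTITION (p < 5, q = 'a') produces:
//   Seq(("p", "LT", "5"), ("q", "EQ", "a"))
// one (column name, symbolic operator token, string value) triple per dropPartitionVal.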

/**
* Convert a constant of any type into a string. This is typically used in DDL commands, and its
* main purpose is to prevent slight differences due to back to back conversions i.e.:
@@ -927,7 +927,7 @@ class SparkSqlAstBuilder(conf: SQLConf) extends AstBuilder(conf) {
}
AlterTableDropPartitionCommand(
visitTableIdentifier(ctx.tableIdentifier),
- ctx.partitionSpec.asScala.map(visitNonOptionalPartitionSpec),
+ ctx.dropPartitionSpec().asScala.map(visitDropPartitionSpec),
Member: Can you update the comment?

* ALTER TABLE table DROP [IF EXISTS] PARTITION spec1[, PARTITION spec2, ...] [PURGE];

Contributor Author: Hmm, I am not sure how to update it. The only difference is that specN can now be any kind of filter instead of just partitionColumn = value, so it is actually the definition of specN which changed.
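One possible rewording, sketched rather than prescribed; it keeps the existing example and only widens the definition of specN:

/**
 * ALTER TABLE table DROP [IF EXISTS] PARTITION spec1[, PARTITION spec2, ...] [PURGE];
 *
 * where each spec may now contain comparison operators,
 * e.g. PARTITION (dt > '2018-01-01', country = 'us').
 */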

ifExists = ctx.EXISTS != null,
purge = ctx.PURGE != null,
retainData = false)
@@ -29,10 +29,10 @@ import org.apache.hadoop.mapred.{FileInputFormat, JobConf}

import org.apache.spark.sql.{AnalysisException, Row, SparkSession}
import org.apache.spark.sql.catalyst.TableIdentifier
- import org.apache.spark.sql.catalyst.analysis.{NoSuchTableException, Resolver}
+ import org.apache.spark.sql.catalyst.analysis.Resolver
import org.apache.spark.sql.catalyst.catalog._
- import org.apache.spark.sql.catalyst.catalog.CatalogTypes.TablePartitionSpec
- import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference}
+ import org.apache.spark.sql.catalyst.catalog.CatalogTypes.{PartitionFiltersSpec, TablePartitionSpec}
+ import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference, Cast, EqualNullSafe, EqualTo, GreaterThan, GreaterThanOrEqual, LessThan, LessThanOrEqual, Literal, Not}
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.execution.datasources.{HadoopFsRelation, LogicalRelation, PartitioningUtils}
import org.apache.spark.sql.execution.datasources.orc.OrcFileFormat
@@ -521,35 +521,114 @@ case class AlterTableRenamePartitionCommand(
*/
case class AlterTableDropPartitionCommand(
Contributor: Shall we make the table relation a child? Then we could resolve the partitionsFilters automatically.

Contributor Author: I thought about that. The point is that we have to check anyway that the specified attributes are the partitioning ones, so I am not sure it is worth running the whole set of analyzer rules for something we have to handle ourselves regardless.

Contributor: But it's also weird to use AttributeReference this way. Can we create a new Attribute implementation for this purpose? Basically we only need a resolved expression to hold the partition column name; the type doesn't matter.

Contributor Author: Sure, I will. Thanks.
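For what it's worth, a minimal sketch of what such a name-only placeholder might look like; the class name is invented here, and the data type is an arbitrary stand-in since, as noted above, it does not matter:

import org.apache.spark.sql.catalyst.expressions.{LeafExpression, Unevaluable}
import org.apache.spark.sql.types.{DataType, StringType}

// A resolved leaf expression that only carries a partition column name.
case class PartitionColumnReference(name: String) extends LeafExpression with Unevaluable {
  override def dataType: DataType = StringType  // placeholder; never evaluated
  override def nullable: Boolean = true
}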

tableName: TableIdentifier,
- specs: Seq[TablePartitionSpec],
+ partitionsFilters: Seq[PartitionFiltersSpec],
ifExists: Boolean,
purge: Boolean,
retainData: Boolean)
extends RunnableCommand {

override def run(sparkSession: SparkSession): Seq[Row] = {
val catalog = sparkSession.sessionState.catalog
val timeZone = Option(sparkSession.sessionState.conf.sessionLocalTimeZone)
val table = catalog.getTableMetadata(tableName)
val partitionColumns = table.partitionColumnNames
val partitionAttributes = table.partitionSchema.toAttributes.map(a => a.name -> a).toMap
DDLUtils.verifyAlterTableType(catalog, table, isView = false)
DDLUtils.verifyPartitionProviderIsHive(sparkSession, table, "ALTER TABLE DROP PARTITION")

- val normalizedSpecs = specs.map { spec =>
-   PartitioningUtils.normalizePartitionSpec(
-     spec,
-     table.partitionColumnNames,
-     table.identifier.quotedString,
-     sparkSession.sessionState.conf.resolver)
- }
+ val resolvedSpecs = partitionsFilters.flatMap { filtersSpec =>
+   if (hasComplexFilters(filtersSpec)) {
+     generatePartitionSpec(filtersSpec,
+       partitionColumns,
+       partitionAttributes,
+       table.identifier,
+       catalog,
+       sparkSession.sessionState.conf.resolver,
+       timeZone)
+   } else {
+     val partitionSpec = filtersSpec.map {
+       case (key, _, value) => key -> value
+     }.toMap
+     PartitioningUtils.normalizePartitionSpec(
+       partitionSpec,
+       partitionColumns,
+       table.identifier.quotedString,
+       sparkSession.sessionState.conf.resolver) :: Nil
+   }
+ }

Contributor: We should check resolvedSpecs here and throw an error message if the total set of resolved specs is empty.
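One possible shape for that check, assuming it sits just before the dropPartitions call below and reuses the command's ifExists flag:

if (resolvedSpecs.isEmpty && !ifExists) {
  throw new AnalysisException(
    s"No partition matches the given filters in table ${table.identifier.quotedString}")
}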

catalog.dropPartitions(
Contributor: Does Hive have an API to drop partitions with a predicate? I think the current approach is very inefficient for non-equality partition predicates.

Contributor: So the implementation here is similar to how Hive implements it?

Contributor Author: Yes, this is my understanding. You can check DDLTask.dropPartitions.

- table.identifier, normalizedSpecs, ignoreIfNotExists = ifExists, purge = purge,
+ table.identifier, resolvedSpecs, ignoreIfNotExists = ifExists, purge = purge,
retainData = retainData)

CommandUtils.updateTableStats(sparkSession, table)

Seq.empty[Row]
}

def hasComplexFilters(partitionFilterSpec: PartitionFiltersSpec): Boolean = {
!partitionFilterSpec.forall(_._2 == "EQ")
}
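For example (values hypothetical):

hasComplexFilters(Seq(("dt", "EQ", "2018-01-01")))                    // false: plain equality spec
hasComplexFilters(Seq(("dt", "EQ", "2018-01-01"), ("c", "GT", "1")))  // true: needs catalog-side filtering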

def generatePartitionSpec(
partitionFilterSpec: PartitionFiltersSpec,
partitionColumns: Seq[String],
partitionAttributes: Map[String, Attribute],
tableIdentifier: TableIdentifier,
catalog: SessionCatalog,
resolver: Resolver,
timeZone: Option[String]): Seq[TablePartitionSpec] = {
val filters = partitionFilterSpec.map { case (partitionColumn, operator, value) =>
val normalizedPartition = PartitioningUtils.normalizePartitionColumn(
partitionColumn,
partitionColumns,
tableIdentifier.quotedString,
resolver)
val partitionAttr = partitionAttributes(normalizedPartition)
val castedLiteralValue = Cast(Literal(value), partitionAttr.dataType, timeZone)
operator match {
case "EQ" =>
EqualTo(partitionAttr, castedLiteralValue)
case "NSEQ" =>
EqualNullSafe(partitionAttr, castedLiteralValue)
case "NEQ" | "NEQJ" =>
Not(EqualTo(partitionAttr, castedLiteralValue))
case "LT" =>
LessThan(partitionAttr, castedLiteralValue)
case "LTE" =>
LessThanOrEqual(partitionAttr, castedLiteralValue)
case "GT" =>
GreaterThan(partitionAttr, castedLiteralValue)
case "GTE" =>
GreaterThanOrEqual(partitionAttr, castedLiteralValue)
}
}
val partitions = catalog.listPartitionsByFilter(tableIdentifier, filters)
partitions.map(_.spec)
}
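To trace one filter through this method, a hedged illustration, assuming dt is a date-typed partition column of the table:

// ("dt", "GT", "2018-01-01") is normalized, then turned into
//   GreaterThan(dt, Cast(Literal("2018-01-01"), DateType, timeZone))
// and catalog.listPartitionsByFilter pushes the predicate to the
// metastore where possible, returning only the matching partition specs.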
}


object AlterTableDropPartitionCommand {

def fromSpecs(
tableName: TableIdentifier,
specs: Seq[TablePartitionSpec],
ifExists: Boolean,
purge: Boolean,
retainData: Boolean): AlterTableDropPartitionCommand = {
AlterTableDropPartitionCommand(tableName,
specs.map(tablePartitionToPartitionFiltersSpec),
ifExists,
purge,
retainData)
}

def tablePartitionToPartitionFiltersSpec(spec: TablePartitionSpec): PartitionFiltersSpec = {
spec.map {
case (key, value) => (key, "EQ", value)
}.toSeq
}
}
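A small usage sketch of the equality-only path (identifiers hypothetical), which is how the InsertIntoHadoopFsRelationCommand call site below uses it:

val cmd = AlterTableDropPartitionCommand.fromSpecs(
  TableIdentifier("logs"),
  Seq(Map("dt" -> "2008-08-08", "country" -> "us")),
  ifExists = true,
  purge = false,
  retainData = true)
// cmd.partitionsFilters == Seq(Seq(("dt", "EQ", "2008-08-08"), ("country", "EQ", "us")))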


@@ -141,7 +141,7 @@ case class InsertIntoHadoopFsRelationCommand(
if (mode == SaveMode.Overwrite && !dynamicPartitionOverwrite) {
val deletedPartitions = initialMatchingPartitions.toSet -- updatedPartitions
if (deletedPartitions.nonEmpty) {
- AlterTableDropPartitionCommand(
+ AlterTableDropPartitionCommand.fromSpecs(
catalogTable.get.identifier, deletedPartitions.toSeq,
ifExists = true, purge = false,
retainData = true /* already deleted */).run(sparkSession)
@@ -296,9 +296,7 @@
tblName: String,
resolver: Resolver): Map[String, T] = {
val normalizedPartSpec = partitionSpec.toSeq.map { case (key, value) =>
- val normalizedKey = partColNames.find(resolver(_, key)).getOrElse {
-   throw new AnalysisException(s"$key is not a valid partition column in table $tblName.")
- }
+ val normalizedKey = normalizePartitionColumn(key, partColNames, tblName, resolver)
normalizedKey -> value
}

@@ -308,6 +306,16 @@
normalizedPartSpec.toMap
}

def normalizePartitionColumn(
partition: String,
partColNames: Seq[String],
tblName: String,
resolver: Resolver): String = {
partColNames.find(resolver(_, partition)).getOrElse {
throw new AnalysisException(s"$partition is not a valid partition column in table $tblName.")
}
}
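A hedged usage example of the extracted helper (table and column names hypothetical):

import org.apache.spark.sql.catalyst.analysis.caseInsensitiveResolution

PartitioningUtils.normalizePartitionColumn(
  "DT", Seq("dt", "country"), "`db`.`logs`", caseInsensitiveResolution)
// returns "dt"; an unknown column would throw the AnalysisException above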

/**
* Resolves possible type conflicts between partitions by up-casting "lower" types using
* [[findWiderTypeForPartitionColumn]].
@@ -861,7 +861,8 @@ class DDLParserSuite extends PlanTest with SharedSQLContext {
assertUnsupported(sql2_view)

val tableIdent = TableIdentifier("table_name", None)
- val expected1_table = AlterTableDropPartitionCommand(
+ val expected1_table = AlterTableDropPartitionCommand.fromSpecs(
Member: Can you add test cases to check that the parser accepts the comparators added by this PR?

Contributor Author: Sure, will do, thanks.

tableIdent,
Seq(
Map("dt" -> "2008-08-08", "country" -> "us"),
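A sketch of the kind of test being requested, reusing this suite's parser and the triple encoding introduced above (the exact expected plan is an assumption):

val parsed = parser.parsePlan(
  "ALTER TABLE table_name DROP IF EXISTS PARTITION (dt > '2008-08-08')")
comparePlans(parsed, AlterTableDropPartitionCommand(
  TableIdentifier("table_name", None),
  Seq(Seq(("dt", "GT", "2008-08-08"))),
  ifExists = true,
  purge = false,
  retainData = false))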