
[SPARK-11150][SQL][FOLLOW-UP] Dynamic partition pruning

### What changes were proposed in this pull request?
This is a follow-up cleanup PR for #25600, which removes an unnecessary condition and corrects a code comment.

### Why are the changes needed?
For code cleanup only.

### Does this PR introduce any user-facing change?
No.

### How was this patch tested?
Passed existing tests.

Closes #26328 from maryannxue/dpp-followup.

Authored-by: maryannxue <maryannxue@apache.org>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
maryannxue authored and cloud-fan committed Oct 31, 2019
1 parent 5e9a155 commit 4d302cb7ed8a9bf3253c45db12642a709a5ece6b
```diff
@@ -48,7 +48,7 @@ private[sql] object PruneFileSourcePartitions extends Rule[LogicalPlan] {
         partitionSchema, sparkSession.sessionState.analyzer.resolver)
       val partitionSet = AttributeSet(partitionColumns)
       val partitionKeyFilters = ExpressionSet(normalizedFilters.filter { f =>
-        f.references.subsetOf(partitionSet) && f.find(_.isInstanceOf[SubqueryExpression]).isEmpty
+        f.references.subsetOf(partitionSet)
       })

       if (partitionKeyFilters.nonEmpty) {
```
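As a toy illustration of the condition that remains after this cleanup (using simplified stand-in types, not Spark's actual `Expression`/`AttributeSet` classes), a predicate can be used to prune partitions only when every column it references is a partition column:

```scala
// Pred and partitionKeyFilters are illustrative names for this sketch,
// not Spark APIs. Each predicate carries the set of columns it references.
case class Pred(sql: String, references: Set[String])

object PartitionPruningSketch {
  // Keep only predicates whose referenced columns are all partition columns;
  // these can eliminate partition directories without reading data files.
  def partitionKeyFilters(preds: Seq[Pred], partitionSet: Set[String]): Seq[Pred] =
    preds.filter(_.references.subsetOf(partitionSet))

  def main(args: Array[String]): Unit = {
    val partitionSet = Set("year", "month")
    val preds = Seq(
      Pred("year = 2019", Set("year")),
      Pred("amount > 10", Set("amount")),
      Pred("year = 2019 AND amount > 10", Set("year", "amount")))
    // Only the first predicate survives; the others reference data columns.
    println(partitionKeyFilters(preds, partitionSet).map(_.sql)) // prints List(year = 2019)
  }
}
```

The dropped `f.find(_.isInstanceOf[SubqueryExpression]).isEmpty` clause is redundant per the PR description; the subset check alone decides whether a filter is a partition-key filter.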
```diff
@@ -172,7 +172,7 @@ case class InSubqueryExec(
 }

 /**
- * Plans scalar subqueries from that are present in the given [[SparkPlan]].
+ * Plans subqueries that are present in the given [[SparkPlan]].
  */
 case class PlanSubqueries(sparkSession: SparkSession) extends Rule[SparkPlan] {
   def apply(plan: SparkPlan): SparkPlan = {
```
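For context, the `Rule` pattern that `PlanSubqueries` follows can be sketched with toy plan nodes (all types below are illustrative stand-ins, not Spark's): a rule is a plan-to-plan transformation that walks the tree and rewrites the nodes it is responsible for.

```scala
// Toy plan tree; RawSubquery stands in for an unplanned subquery node.
sealed trait Plan
case class Scan(table: String) extends Plan
case class Project(child: Plan) extends Plan
case class RawSubquery(child: Plan) extends Plan
case class PlannedSubquery(child: Plan) extends Plan

trait Rule { def apply(plan: Plan): Plan }

// Hypothetical stand-in for PlanSubqueries: it recursively rewrites every
// RawSubquery node found in the plan into its "planned" form, mirroring
// how the real rule plans subqueries present in the given SparkPlan.
object PlanSubqueriesSketch extends Rule {
  def apply(plan: Plan): Plan = plan match {
    case RawSubquery(c)     => PlannedSubquery(apply(c))
    case PlannedSubquery(c) => PlannedSubquery(apply(c))
    case Project(c)         => Project(apply(c))
    case s: Scan            => s
  }
}
```

For example, `PlanSubqueriesSketch(Project(RawSubquery(Scan("t"))))` yields `Project(PlannedSubquery(Scan("t")))`, leaving plans without subqueries untouched.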
