[SPARK-45592][SPARK-45282][SQL][3.4] Correctness issue in AQE with InMemoryTableScanExec #43729
@@ -306,6 +306,35 @@ case class HashPartitioning(expressions: Seq[Expression], numPartitions: Int)

```scala
  override protected def withNewChildrenInternal(
      newChildren: IndexedSeq[Expression]): HashPartitioning = copy(expressions = newChildren)
}

case class CoalescedBoundary(startReducerIndex: Int, endReducerIndex: Int)

/**
 * Represents a partitioning where partitions have been coalesced from a HashPartitioning into a
 * fewer number of partitions.
 */
case class CoalescedHashPartitioning(from: HashPartitioning, partitions: Seq[CoalescedBoundary])
  extends Expression with Partitioning with Unevaluable {

  override def children: Seq[Expression] = from.expressions
  override def nullable: Boolean = from.nullable
  override def dataType: DataType = from.dataType

  override def satisfies0(required: Distribution): Boolean = from.satisfies0(required)

  override def createShuffleSpec(distribution: ClusteredDistribution): ShuffleSpec =
    CoalescedHashShuffleSpec(from.createShuffleSpec(distribution), partitions)

  override protected def withNewChildrenInternal(
      newChildren: IndexedSeq[Expression]): CoalescedHashPartitioning =
    copy(from = from.copy(expressions = newChildren))

  override val numPartitions: Int = partitions.length

  override def toString: String = from.toString
  override def sql: String = from.sql
}
```
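To make the `CoalescedBoundary` semantics concrete, here is a standalone sketch, not Spark's actual implementation (the `CoalesceSketch` object, its `coalesce` method, and the `targetSize` parameter are illustrative names), of how adjacent reducer partitions might be merged into the half-open index ranges that a `CoalescedBoundary` records:

```scala
// Standalone sketch: a CoalescedBoundary describes a half-open range
// [startReducerIndex, endReducerIndex) of original reducer partitions
// that were merged into a single coalesced output partition.
case class CoalescedBoundary(startReducerIndex: Int, endReducerIndex: Int)

object CoalesceSketch {
  // Greedily merge adjacent reducer partitions until each coalesced
  // partition accumulates at least `targetSize` bytes (illustrative policy).
  def coalesce(partitionSizes: Seq[Long], targetSize: Long): Seq[CoalescedBoundary] = {
    val boundaries = scala.collection.mutable.ArrayBuffer.empty[CoalescedBoundary]
    var start = 0
    var acc = 0L
    for (i <- partitionSizes.indices) {
      acc += partitionSizes(i)
      if (acc >= targetSize) {
        boundaries += CoalescedBoundary(start, i + 1)
        start = i + 1
        acc = 0L
      }
    }
    // Any trailing partitions form one final coalesced partition.
    if (start < partitionSizes.length) {
      boundaries += CoalescedBoundary(start, partitionSizes.length)
    }
    boundaries.toSeq
  }

  def main(args: Array[String]): Unit = {
    // Five reducer partitions with skewed sizes, coalesced toward ~100 bytes.
    val sizes = Seq(60L, 50L, 10L, 20L, 90L)
    println(coalesce(sizes, 100L)) // two boundaries: [0, 2) and [2, 5)
  }
}
```

The boundaries cover the reducer index space contiguously, which is what lets a `Seq[CoalescedBoundary]` fully describe how a `HashPartitioning` was coalesced.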
@@ -661,6 +690,26 @@ case class HashShuffleSpec(

```scala
  override def numPartitions: Int = partitioning.numPartitions
}

case class CoalescedHashShuffleSpec(
    from: ShuffleSpec,
    partitions: Seq[CoalescedBoundary]) extends ShuffleSpec {

  override def isCompatibleWith(other: ShuffleSpec): Boolean = other match {
    case SinglePartitionShuffleSpec =>
      numPartitions == 1
    case CoalescedHashShuffleSpec(otherParent, otherPartitions) =>
      partitions == otherPartitions && from.isCompatibleWith(otherParent)
    case ShuffleSpecCollection(specs) =>
      specs.exists(isCompatibleWith)
    case _ =>
      false
  }

  override def canCreatePartitioning: Boolean = false

  override def numPartitions: Int = partitions.length
}

case class KeyGroupedShuffleSpec(
    partitioning: KeyGroupedPartitioning,
    distribution: ClusteredDistribution) extends ShuffleSpec {
```

Review discussion on the `partitions == otherPartitions && from.isCompatibleWith(otherParent)` check:

- Suppose both … In this case we'll consider the two incompatible, but shouldn't they actually be compatible?
- Even if they are coalesced to the same number of partitions, the coalesced boundaries could be different. I think this is the root of the issue, and why it needs to make sure the boundaries are the same when checking compatibility.
- Hmm, this is not related to my comment above. The check …
- I think it must be …
- I mean their partition numbers are different and thus …
- In other words, …
- Hmm, if the first hash partitioning has 5 partitions and the second has 4, how can we get the same coalesced partitions? For example: `[[0, 3], [3, 5]] != [[0, 3], [3, 4]]`. The end reducer index of the last coalesced partition should always be different, no?
- I'm not sure whether it is possible for this case to happen. But irrespective of that, I feel this check is unnecessary here.
- Of course, this is relatively minor stuff and not related to this backport.
- But what about two (nonsensical) `CoalescedHashShuffleSpec`s where each partition is just coalesced from a single partition of the same parent `HashShuffleSpec`? Is it not then correct to do the …
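The root cause the reviewers describe can be illustrated with a standalone sketch (the `BoundaryCompatibility` object and its `compatible` helper are illustrative names, not Spark code): two sides may coalesce the same five reducer partitions down to the same number of output partitions, yet with different boundaries, in which case matching them partition-by-partition would produce wrong results:

```scala
case class CoalescedBoundary(startReducerIndex: Int, endReducerIndex: Int)

object BoundaryCompatibility {
  // Simplified stand-in for the patched check: the boundary sequences
  // must match exactly (case-class equality), not just have equal length.
  def compatible(a: Seq[CoalescedBoundary], b: Seq[CoalescedBoundary]): Boolean = a == b

  def main(args: Array[String]): Unit = {
    // Both sides coalesce 5 reducer partitions into 2 output partitions...
    val left  = Seq(CoalescedBoundary(0, 2), CoalescedBoundary(2, 5))
    val right = Seq(CoalescedBoundary(0, 3), CoalescedBoundary(3, 5))
    println(left.length == right.length) // true: same number of partitions
    // ...but rows from reducer index 2 land in different output partitions
    // on each side, so zipping the partitions for a join would be wrong.
    println(compatible(left, right))     // false: correctly incompatible
  }
}
```

Comparing only `numPartitions` would have declared these two compatible, which is exactly the class of correctness bug this patch guards against.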
Review discussion on `extends Expression with Partitioning with Unevaluable`:

- Hmm, why does this need to extend `Expression` and `Unevaluable`? I thought just `Partitioning` would be enough.
- It was just based on how it was done for `HashPartitioning`; it could be that it's not needed.
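On the earlier point that parents with different partition counts (say 5 vs. 4) cannot produce identical boundary sequences: since the boundaries cover the reducer indices contiguously, the last boundary's `endReducerIndex` equals the parent's partition count, so equal boundary sequences imply equal parent partition counts. A standalone sketch (the `ParentPartitionCount` object is an illustrative name, and it assumes well-formed, gap-free boundaries):

```scala
case class CoalescedBoundary(startReducerIndex: Int, endReducerIndex: Int)

object ParentPartitionCount {
  // For contiguous boundaries covering [0, n), the parent's original
  // partition count is simply the last boundary's end index.
  def parentNumPartitions(boundaries: Seq[CoalescedBoundary]): Int =
    boundaries.last.endReducerIndex

  def main(args: Array[String]): Unit = {
    val fromFive = Seq(CoalescedBoundary(0, 3), CoalescedBoundary(3, 5))
    val fromFour = Seq(CoalescedBoundary(0, 3), CoalescedBoundary(3, 4))
    println(parentNumPartitions(fromFive)) // 5
    println(parentNumPartitions(fromFour)) // 4
    // The last end index differs, so the sequences can never be equal:
    println(fromFive == fromFour)          // false
  }
}
```

This matches the reviewer's `[[0, 3], [3, 5]] != [[0, 3], [3, 4]]` example: the final `endReducerIndex` always betrays the parent's partition count.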