[SPARK-45387][SQL] Optimize Hive partition filter when the comparison dataType does not match #46073
Closed
lastbus wants to merge 2 commits into apache:branch-3.5
We're closing this PR because it hasn't been updated in a while. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable.
What changes were proposed in this pull request?
During the PruneFileSourcePartitions process, we can optimize partition pruning by casting the constant to the dataType of the corresponding partition key, so that the filter can be pushed down to the metastore.
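The rewrite can be sketched with a minimal, self-contained model (plain Python rather than Spark's Scala code base; `PartitionAttr`, `Lit`, `EqualFilter` and `cast_literal` are hypothetical stand-ins for Catalyst's `AttributeReference`, `Literal`, `EqualTo` and `Cast` - this is an illustration of the idea, not the actual patch):

```python
# Simplified model of the proposed rewrite: when the literal's type does
# not match the partition column's type, cast the literal (not the
# column), so the predicate stays pushable to the Hive Metastore.
from dataclasses import dataclass, replace
from typing import Any

@dataclass(frozen=True)
class PartitionAttr:
    name: str
    data_type: str  # e.g. "string", "int"

@dataclass(frozen=True)
class Lit:
    value: Any
    data_type: str

@dataclass(frozen=True)
class EqualFilter:
    attr: PartitionAttr
    lit: Lit

def cast_literal(f: EqualFilter) -> EqualFilter:
    """Cast the constant to the partition column's type, so that
    e.g. dt = 123 becomes dt = '123' for a StringType column."""
    if f.lit.data_type == f.attr.data_type:
        return f
    if f.attr.data_type == "string" and f.lit.data_type == "int":
        return replace(f, lit=Lit(str(f.lit.value), "string"))
    return f  # leave other mismatches untouched

dt = PartitionAttr("dt", "string")
print(cast_literal(EqualFilter(dt, Lit(123, "int"))).lit)
# prints Lit(value='123', data_type='string')
```

The key design point is that the literal is cast toward the partition column's type, never the other way around: casting the column would wrap it in an expression the metastore cannot evaluate, which is exactly what forces Spark to fetch all partitions today.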
Why are the changes needed?
Suppose we have a partitioned table `table_pt` with a partition column `dt` of StringType, whose metadata is managed by the Hive Metastore. If we filter partitions by `dt = '123'`, the filter can be pushed down to the data source directly, but if the filter constant is a number, e.g. `dt = 123`, Spark does not know which partitions can be pruned. During physical plan optimization, Spark therefore pulls all of the table's partition metadata to the client side to decide which partitions match the filter. This performs poorly when the table has thousands of partitions and increases the risk of a Hive Metastore OOM. We encountered this problem in our production environment.
Does this PR introduce any user-facing change?
No
How was this patch tested?
Verified manually in our production environment.
Was this patch authored or co-authored using generative AI tooling?
No