[HUDI-6729] Fix get partition values from path for non-string type partition column #9484
Merged
danny0405 merged 4 commits into apache:master on Aug 23, 2023
Conversation
boneanxs (Contributor) reviewed on Aug 21, 2023
}
}
val timeZoneId = conf.get("timeZone", sparkSession.sessionState.conf.sessionLocalTimeZone)
val rowValues = HoodieSparkUtils.parsePartitionColumnValues(
HoodieSparkUtils.parsePartitionColumnValues could return an empty result if it can't parse the partition values; we had better add an assertion here to ensure that the number of values equals the number of partition columns.
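A minimal sketch of the suggested assertion, reusing the names from the diff above (the elided argument list and the surrounding `partitionColumns` are assumed from context, since the full signature is not shown in this diff):

```scala
// Hypothetical shape of the suggested check: fail fast when the parser
// could not recover one value per partition column (argument list elided,
// since the full signature is not shown in this diff).
val rowValues = HoodieSparkUtils.parsePartitionColumnValues(/* ... */)
assert(rowValues.length == partitionColumns.length,
  s"Expected ${partitionColumns.length} partition values, but parsed ${rowValues.length}")
```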
throw exception instead of return empty InternalRow when encounter exception in HoodieBaseRelation#getPartitionColumnsAsInternalRowInternal
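A minimal sketch of what that commit's behavior change could look like (the method shape and the injected `parseValues` function are assumptions for illustration, not the actual patch):

```scala
import org.apache.hudi.exception.HoodieException
import org.apache.spark.sql.catalyst.InternalRow

// Sketch: surface parse failures instead of silently returning an empty
// InternalRow; `parseValues` stands in for the real extraction logic.
def getPartitionColumnsAsInternalRow(
    partitionPath: String,
    parseValues: String => InternalRow): InternalRow =
  try {
    parseValues(partitionPath)
  } catch {
    case e: Exception =>
      throw new HoodieException(
        s"Failed to get partition column values from path: $partitionPath", e)
  }
```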
danny0405 (Collaborator) approved these changes on Aug 23, 2023
prashantwason pushed a commit that referenced this pull request on Sep 1, 2023
[HUDI-6729] Fix get partition values from path for non-string type partition column (#9484)
* reuse HoodieSparkUtils#parsePartitionColumnValues to support multi spark versions
* assert parsed partition values from path
* throw exception instead of return empty InternalRow when encounter exception in HoodieBaseRelation#getPartitionColumnsAsInternalRowInternal
leosanqing pushed a commit to leosanqing/hudi that referenced this pull request on Sep 13, 2023
[HUDI-6729] Fix get partition values from path for non-string type partition column (apache#9484)
* reuse HoodieSparkUtils#parsePartitionColumnValues to support multi spark versions
* assert parsed partition values from path
* throw exception instead of return empty InternalRow when encounter exception in HoodieBaseRelation#getPartitionColumnsAsInternalRowInternal
TheR1sing3un pushed a commit to TheR1sing3un/hudi that referenced this pull request on Feb 12, 2025
[HUDI-6729] Fix get partition values from path for non-string type partition column (apache#9484)
* reuse HoodieSparkUtils#parsePartitionColumnValues to support multi spark versions
* assert parsed partition values from path
* throw exception instead of return empty InternalRow when encounter exception in HoodieBaseRelation#getPartitionColumnsAsInternalRowInternal
Change Logs
When we enable hoodie.datasource.read.extract.partition.values.from.path to get partition values from the path instead of the data file, an exception is thrown if a partition column is not of string type. This patch fixes the issue by casting the partition value string to the target data type, following Spark's approach; a minimal sketch of the idea is shown below.
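For illustration, a sketch of the casting idea following Spark's own partition-value handling (the helper name `castPartitionValue` is hypothetical; the actual change lives in HoodieSparkUtils#parsePartitionColumnValues):

```scala
import org.apache.spark.sql.catalyst.expressions.{Cast, Literal}
import org.apache.spark.sql.types.{DataType, IntegerType}

// Cast the raw string parsed from a partition path segment to the declared
// column type, instead of leaving it as a UTF8String in the InternalRow.
def castPartitionValue(raw: String, dataType: DataType, timeZoneId: String): Any =
  Cast(Literal(raw), dataType, Option(timeZoneId)).eval()

// A path segment like "year=2023" on an IntegerType column now yields an
// Integer, so ColumnVectorUtils.populate no longer hits a ClassCastException.
val value = castPartitionValue("2023", IntegerType, "UTC")
```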
The failing stack trace before the fix:
Caused by: java.lang.ClassCastException: org.apache.spark.unsafe.types.UTF8String cannot be cast to java.lang.Integer
  at scala.runtime.BoxesRunTime.unboxToInt(BoxesRunTime.java:103)
  at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow.getInt(rows.scala:41)
  at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow.getInt$(rows.scala:41)
  at org.apache.spark.sql.catalyst.expressions.GenericInternalRow.getInt(rows.scala:195)
  at org.apache.spark.sql.execution.vectorized.ColumnVectorUtils.populate(ColumnVectorUtils.java:97)
  at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.initBatch(VectorizedParquetRecordReader.java:245)
  at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.initBatch(VectorizedParquetRecordReader.java:264)
  at org.apache.spark.sql.execution.datasources.parquet.Spark32LegacyHoodieParquetFileFormat.$anonfun$buildReaderWithPartitionValues$2(Spark32LegacyHoodieParquetFileFormat.scala:314)
  at org.apache.hudi.HoodieDataSourceHelper$.$anonfun$buildHoodieParquetReader$1(HoodieDataSourceHelper.scala:67)
  at org.apache.hudi.HoodieBaseRelation.$anonfun$createBaseFileReader$2(HoodieBaseRelation.scala:602)
  at org.apache.hudi.HoodieBaseRelation$BaseFileReader.apply(HoodieBaseRelation.scala:680)
  at org.apache.hudi.HoodieBaseRelation$.$anonfun$projectReader$1(HoodieBaseRelation.scala:706)
  at org.apache.hudi.HoodieBaseRelation$.$anonfun$projectReader$2(HoodieBaseRelation.scala:711)
  at org.apache.hudi.HoodieBaseRelation$BaseFileReader.apply(HoodieBaseRelation.scala:680)
  at org.apache.hudi.HoodieMergeOnReadRDD.compute(HoodieMergeOnReadRDD.scala:96)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
  at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
  at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
  at org.apache.spark.scheduler.Task.run(Task.scala:131)
  at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
  at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1491)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
Impact
No
Risk level (write none, low, medium, or high below)
None
Documentation Update
Describe any necessary documentation update if there is any new feature, config, or user-facing change. Any new feature or user-facing change requires updating the Hudi website: create a Jira ticket, attach the ticket number here, and follow the instruction to make changes to the website.
Contributor's checklist