[SUPPORT] HoodieRealtimeRecordReader can only work on RealtimeSplit and not with hdfs://111.parquet:0+4 #2813
Comments
cc @n3nash Can you try setting hive.input.format=org.apache.hudi.hadoop.hive.HoodieCombineHiveInputFormat and trying again?
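For reference, the suggested setting can be applied in the Hive session before running the query. A minimal sketch (the table name below is a placeholder, not from the issue):

```sql
-- Apply the suggested combine input format for the current Hive session
SET hive.input.format=org.apache.hudi.hadoop.hive.HoodieCombineHiveInputFormat;

-- Then re-run the aggregate query (table name is hypothetical)
SELECT count(*) FROM dwd_sale_sale_detail_rt;
```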
I tried to set hive.input.format = org.apache.hudi.hadoop.hive.HoodieCombineHiveInputFormat; Diagnostic Messages for this Task:
@qianjiangbing @bvaradar There is a ticket created for this -> https://issues.apache.org/jira/browse/HUDI-1036. I will look into this later this week. This looks like a legitimate issue.
I've updated the question description. I'm using org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat; the error occurs when running an aggregate query on the MOR table.
I tested with Hive 2.3.8 and it works fine!
@qianjiangbing Thanks for confirming. @MyLanPangzi I just noticed that you were using Hive version 1.1 (CDH 5.6.12). This is a very old version of Hive. The latest Hudi builds only work with Hive 2.x+ versions. Are you able to migrate to a higher version of Hive?
Sorry, I can't upgrade the cluster version, so the only option is to use org.apache.hudi.hadoop.HoodieParquetInputFormat for the MOR table in my cluster.
CC @n3nash
@n3nash @nsivabalan May I check whether this feature (aggregate query on the table) works well in Hive 3.1.2? I hit the same problem, a class cast exception, as shown below:
Not sure if we have verified that this feature works well in Hive 3. I'm using Hive 3.1.2 and Hudi 0.12.2.
You may need to turn off vectorized execution for Hive.
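The suggestion above maps to a standard Hive session property. A minimal sketch:

```sql
-- Disable vectorized query execution for this Hive session,
-- as suggested above for the class cast exception
SET hive.vectorized.execution.enabled=false;
```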
Thanks @danny0405, it works.
@danny0405 One more question: after disabling vectorization, count on the RT table works, but I get empty results when querying an RT table that hasn't had any compaction (it only has log files, no base file). Is there any parameter I need to set?
It seems that
An RT table with pure log files is not supported well for Hive queries; you may need to switch to the RO table instead.
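In practice that means pointing the query at the read-optimized view. A sketch, assuming the table names follow Hudi's usual _rt/_ro suffix convention from Hive sync (the names themselves are placeholders):

```sql
-- Query the read-optimized (_ro) view, which reads only base parquet files,
-- instead of the real-time (_rt) view that merges in log files
SELECT count(*) FROM dwd_sale_sale_detail_ro;
```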
Describe the problem you faced
Flink writes to a MOR table, but Hive aggregate queries cannot read the newest data.
To Reproduce
Steps to reproduce the behavior:
1. Flink writes to a MOR table.
2. Create a Hive external table using org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat.
3. Run an aggregate query from the Hive shell and get the error below.
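The external table in step 2 would look roughly like the following. This is a hypothetical sketch (column names are invented; the location is taken from the stacktrace path), following the SerDe and input/output formats Hudi's Hive sync normally registers:

```sql
-- Sketch of the external _rt table over the Hudi MOR dataset (step 2).
-- Columns here are placeholders; only the formats matter for the repro.
CREATE EXTERNAL TABLE dwd_sale_sale_detail_rt (
  id STRING,
  amount DOUBLE,
  ts BIGINT
)
PARTITIONED BY (dt STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS
  INPUTFORMAT 'org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION 'hdfs://nameservice1/user/hudi/dwd/dwd_sale_sale_detail_rt';
```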
Expected behavior
Hive queries the MOR table correctly and returns the aggregate result.
Environment Description
Hudi version : 0.9.0
Spark version :
Hive version : 1.1 cdh 5.6.12
Hadoop version : 2.6 cdh 5.6.12
Storage (HDFS/S3/GCS..) : hdfs
Running on Docker? (yes/no) : no
Additional context
Stacktrace
2021-04-13 17:05:45,815 INFO [IPC Server handler 6 on 46363] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1594105654926_12624744_m_000012_0 is : 0.0
2021-04-13 17:05:45,818 FATAL [IPC Server handler 8 on 46363] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1594105654926_12624744_m_000012_0 - exited : java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:267)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.<init>(HadoopShimsSecure.java:213)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:334)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:734)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:169)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:438)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:253)
... 11 more
Caused by: java.lang.IllegalArgumentException: HoodieRealtimeRecordReader can only work on RealtimeSplit and not with hdfs://nameservice1/user/hudi/dwd/dwd_sale_sale_detail_rt/20210413/ab5a8ff3-4647-46ae-ba13-7b6eb7914516_8-10-0_20210413170058.parquet:0+57883047
at org.apache.hudi.common.util.ValidationUtils.checkArgument(ValidationUtils.java:40)
at org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat.getRecordReader(HoodieParquetRealtimeInputFormat.java:117)
at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:68)