[fix](hudi) use hudi api to split the COW table #21385
Conversation
run buildall

TeamCity pipeline, clickbench performance test result:
morningman
left a comment
LGTM
PR approved by at least one committer and no changes requested.

PR approved by anyone and no changes requested.
yiguolei
left a comment
LGTM
Fix two bugs:

1. COW and Read Optimized tables use the Hive splitter to split files, but it cannot recognize some Hudi-specific files (e.g. bookkeeping files under `.hoodie/`):

   ERROR 1105 (HY000): errCode = 2, detailMessage = (172.21.0.101)[CORRUPTION]Invalid magic number in parquet file, bytes read: 3035, file size: 3035, path: /usr/hive/warehouse/hudi.db/test/.hoodie/metadata/.hoodie/00000000000000.deltacommit.inflight, read magic:

2. A Read Optimized table created by Spark adds an empty partition column even when the table has no partitions, so these empty partition keys have to be filtered out in the Hive client:

   | test_ro | CREATE TABLE `test_ro`( `_hoodie_commit_time` string COMMENT '', ... `ts` bigint COMMENT '') PARTITIONED BY ( `` string) ROW FORMAT SERDE
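A minimal Java sketch of the two filters the description implies. The class and method names here are hypothetical illustrations, not the actual Doris or Hudi code: one helper skips Hudi bookkeeping files (which are not Parquet and fail the magic-number check), and the other drops the empty partition key that Spark-created Read Optimized tables can report.

```java
import java.util.ArrayList;
import java.util.List;

public class HudiSplitHelper {

    // Bug 1: a plain Hive-style listing picks up Hudi bookkeeping files such as
    // .hoodie/metadata/.hoodie/00000000000000.deltacommit.inflight, which are
    // not Parquet and trigger "Invalid magic number in parquet file".
    // Skip anything under a ".hoodie" directory before generating splits.
    public static List<String> filterSplittableFiles(List<String> paths) {
        List<String> result = new ArrayList<>();
        for (String path : paths) {
            if (!path.contains("/.hoodie/")) {
                result.add(path);
            }
        }
        return result;
    }

    // Bug 2: a Read Optimized table created by Spark may declare a partition
    // column with an empty name even when the table is unpartitioned;
    // drop such keys before building partition paths or predicates.
    public static List<String> filterEmptyPartitionKeys(List<String> keys) {
        List<String> result = new ArrayList<>();
        for (String key : keys) {
            if (key != null && !key.isEmpty()) {
                result.add(key);
            }
        }
        return result;
    }
}
```

The real fix in this PR goes further for bug 1 by asking the Hudi API itself for the table's base files instead of listing the directory, which avoids bookkeeping files by construction.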
Proposed changes
Fix two bugs: