
[HUDI-2856] Bit cask disk map delete modified #4116

Merged
merged 3 commits into apache:master on Nov 26, 2021

Conversation

xuzifu666
Contributor


What is the purpose of the pull request

(For example: This pull request adds quick-start document.)

Brief change log

(for example:)

  • Modify AnnotationLocation checkstyle rule in checkstyle.xml

Verify this pull request

(Please pick either of the following options)

This pull request is a trivial rework / code cleanup without any test coverage.

(or)

This pull request is already covered by existing tests, such as (please describe tests).

(or)

This change added tests and can be verified as follows:

(example:)

  • Added integration tests for end-to-end.
  • Added HoodieClientWriteTest to verify the change.
  • Manually verified the change by running a job locally.

Committer checklist

  • Has a corresponding JIRA in PR title & commit

  • Commit message is descriptive of the change

  • CI is green

  • Necessary doc changes done or have another open PR

  • For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

@xuzifu666
Contributor Author

xuzifu666 commented Nov 25, 2021

@danny0405 @leesf Hi, please review the PR. This change may be more suitable because, if an exception occurs, the iterators also need to be closed.

@xuzifu666
Contributor Author

@hudi-bot run azure

@xuzifu666 xuzifu666 changed the title Bit cask disk map delete modified [HUDI-2856] Bit cask disk map delete modified Nov 25, 2021
} finally {
this.iterators.forEach(ClosableIterator::close);
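
For context, here is a minimal sketch of the change under review. Names are illustrative rather than the exact BitCaskDiskMap source: the point is that iterator cleanup moves into a finally block, so open iterators are released even when flushing or closing the underlying file handle throws.

```java
import java.io.Closeable;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.List;

public class DiskMapCloseSketch implements Closeable {
  private RandomAccessFile writeOnlyFileHandle; // hypothetical stand-in for the real handle
  private final List<Closeable> iterators = new ArrayList<>();

  @Override
  public void close() {
    try {
      if (writeOnlyFileHandle != null) {
        writeOnlyFileHandle.getChannel().force(false); // throws if the channel is already closed
        writeOnlyFileHandle.close();
      }
    } catch (Exception e) {
      // the real code logs "BitCaskDisMap close error" here
    } finally {
      // the change under review: always close iterators, even if the handle close fails
      iterators.forEach(it -> {
        try {
          it.close();
        } catch (Exception ignored) {
          // best-effort cleanup
        }
      });
    }
  }
}
```
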
Contributor


Fix the indentation.

Contributor Author


Okay, fixed it.

Contributor Author

@xuzifu666 xuzifu666 left a comment


indent fixed

@hudi-bot

CI report:

Bot commands: @hudi-bot supports the following commands:
  • @hudi-bot run azure: re-run the last Azure build

@rmahindra123
Contributor

lgtm.

Member

@vinothchandar vinothchandar left a comment


@danny0405 this LGTM

@vinothchandar vinothchandar merged commit 257a6a7 into apache:master Nov 26, 2021
@yihua
Contributor

yihua commented Nov 29, 2021

When running the Hudi Kafka Connect Sink, although the writes are successful, the sink keeps throwing the following repetitive errors, which can be noisy to users. After reverting this PR locally, the errors are gone.

14:23:40.363 [task-thread-hudi-sink-0] ERROR org.apache.hudi.common.util.collection.BitCaskDiskMap - BitCaskDisMap close error 
java.nio.channels.ClosedChannelException: null
	at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:110) ~[?:1.8.0_265]
	at sun.nio.ch.FileChannelImpl.force(FileChannelImpl.java:379) ~[?:1.8.0_265]
	at org.apache.hudi.common.util.collection.BitCaskDiskMap.close(BitCaskDiskMap.java:270) [hudi-kafka-connect-bundle-0.10.0-rc2.jar:0.10.0-rc2]
	at org.apache.hudi.common.util.collection.ExternalSpillableMap.close(ExternalSpillableMap.java:261) [hudi-kafka-connect-bundle-0.10.0-rc2.jar:0.10.0-rc2]
	at org.apache.hudi.connect.writers.BufferedConnectWriter.flushRecords(BufferedConnectWriter.java:121) [hudi-kafka-connect-bundle-0.10.0-rc2.jar:0.10.0-rc2]
	at org.apache.hudi.connect.writers.AbstractConnectWriter.close(AbstractConnectWriter.java:95) [hudi-kafka-connect-bundle-0.10.0-rc2.jar:0.10.0-rc2]
	at org.apache.hudi.connect.transaction.ConnectTransactionParticipant.cleanupOngoingTransaction(ConnectTransactionParticipant.java:249) [hudi-kafka-connect-bundle-0.10.0-rc2.jar:0.10.0-rc2]
	at org.apache.hudi.connect.transaction.ConnectTransactionParticipant.handleAckCommit(ConnectTransactionParticipant.java:209) [hudi-kafka-connect-bundle-0.10.0-rc2.jar:0.10.0-rc2]
	at org.apache.hudi.connect.transaction.ConnectTransactionParticipant.processRecords(ConnectTransactionParticipant.java:127) [hudi-kafka-connect-bundle-0.10.0-rc2.jar:0.10.0-rc2]
	at org.apache.hudi.connect.HoodieSinkTask.put(HoodieSinkTask.java:114) [hudi-kafka-connect-bundle-0.10.0-rc2.jar:0.10.0-rc2]
	at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:581) [connect-runtime-3.0.0.jar:?]
	at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:329) [connect-runtime-3.0.0.jar:?]
	at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232) [connect-runtime-3.0.0.jar:?]
	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201) [connect-runtime-3.0.0.jar:?]
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:186) [connect-runtime-3.0.0.jar:?]
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:241) [connect-runtime-3.0.0.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_265]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_265]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_265]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_265]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_265]
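
The ClosedChannelException above comes from calling FileChannel.force() on a channel that an earlier close() already closed. A standalone reproduction, independent of Hudi, showing why an isOpen() guard makes a repeated close a no-op:

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;

public class ClosedChannelRepro {
  public static void main(String[] args) throws Exception {
    File f = File.createTempFile("bitcask", ".data");
    f.deleteOnExit();
    FileChannel ch = new RandomAccessFile(f, "rw").getChannel();
    ch.close();
    // ch.force(false); // uncommenting this line throws java.nio.channels.ClosedChannelException
    if (ch.isOpen()) {   // guard: a second close() becomes a harmless no-op
      ch.force(false);
      ch.close();
    }
  }
}
```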

@yihua
Contributor

yihua commented Dec 1, 2021

Reverting this change for 0.10.0-rc3

yihua added a commit that referenced this pull request Dec 1, 2021
danny0405 pushed a commit that referenced this pull request Dec 1, 2021
danny0405 pushed a commit that referenced this pull request Dec 4, 2021
aditiwari01 added a commit to aditiwari01/hudi that referenced this pull request Dec 29, 2021
* [HUDI-2702] Set up keygen class explicit for write config for flink table upgrade (apache#3931)

* [HUDI-313] bugfix: NPE when select count star from a realtime table with Tez (apache#3630)

Co-authored-by: dylonyu <dylonyu@tencent.com>

* HUDI-1827 : Add ORC support in Bootstrap Op (apache#3457)

 Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-2679] Fix the TestMergeIntoLogOnlyTable typo. (apache#3918)

* [HUDI-2709] Add more options when initializing table (apache#3939)

* [HUDI-2698] Remove the table source options validation (apache#3940)

Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>

* [HUDI-2595] Fixing metadata table updates such that only regular writes from data table can trigger table services in metadata table (apache#3900)

* [HUDI-2715] The BitCaskDiskMap iterator may cause memory leak (apache#3951)

* [HUDI-2591] Bootstrap metadata table only if upgrade / downgrade is not required. (apache#3836)

* [HUDI-2579] Make deltastreamer checkpoint state merging more explicit (apache#3820)

 Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-1877] Support records staying in same fileId after clustering (apache#3833)

* [HUDI-1877] Support records staying in same fileId after clustering

Add plan strategy

* Ensure same filegroup id and refactor based on comments

* [HUDI-2297] Estimate available memory size for spillable map accurately. (apache#3455)

* [HUDI-2086] Redo the logic of mor_incremental_view for hive (apache#3203)

* [HUDI-2442] Change default values for certain clustering configs (apache#3875)

* [HUDI-2730] Move EventTimeAvroPayload into hudi-common module (apache#3959)

Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>

* [HUDI-2685] Support scheduling online compaction plan when there are no commit data (apache#3928)

Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>

* [HUDI-2634] Improved the metadata table bootstrap for very large tables. (apache#3873)

* [HUDI-2634] Improved the metadata table bootstrap for very large tables.

Following improvements are implemented:
1. Memory overhead reduction:
  - Existing code caches FileStatus for each file in memory.
  - Created a new class DirectoryInfo which is used to cache a directory's file list with parts of the FileStatus (only filename and file length). This reduces the memory requirements.

2. Improved parallelism:
  - Existing code collects all the listing to the Driver and then creates HoodieRecord on the Driver.
  - This takes a long time for large tables (11million HoodieRecords to be created)
  - Created a new function in SparkRDDWriteClient specifically for bootstrap commit. In it, the HoodieRecord creation is parallelized across executors so it completes fast.

3. Fixed setting to limit the number of parallel listings:
  - Existing code had a bug wherein 1500 executors were hardcoded to perform listing. This led to exceptions due to the limit on Spark's result memory.
  - Corrected the use of the config.

Result:
Dataset has 1,299 partitions and 12 million files.
File listing time = 1.5 mins
HoodieRecord creation time = 13 seconds
Deltacommit duration = 2.6 mins

Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>
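
(A hedged illustration of the DirectoryInfo idea described above; the class and method names are approximate, not the actual Hudi code:)

```java
import java.util.HashMap;
import java.util.Map;

// Cache only the file name and length per file instead of full FileStatus objects.
public class DirectoryInfoSketch {
  private final String relativePath;
  private final Map<String, Long> filenameToLength = new HashMap<>();

  public DirectoryInfoSketch(String relativePath) {
    this.relativePath = relativePath;
  }

  public void addFile(String filename, long length) {
    filenameToLength.put(filename, length);
  }

  public String getRelativePath() {
    return relativePath;
  }

  public long getTotalSize() {
    return filenameToLength.values().stream().mapToLong(Long::longValue).sum();
  }
}
```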

* [HUDI-2495] Resolve inconsistent key generation for timestamp types  by GenericRecord and Row (apache#3944)

* [HUDI-2738] Remove the bucketAssignFunction useless context (apache#3972)

Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>

* [HUDI-2746] Do not bootstrap for flink insert overwrite (apache#3980)

* [HUDI-2151] Part1 Setting default parallelism to 200 for some of write configs (apache#3948)

* [HUDI-2718] ExternalSpillableMap payload size re-estimation throws ArithmeticException (apache#3955)

- ExternalSpillableMap does the payload/value size estimation on the first put to
  determine when to spill over to the disk map. The payload size re-estimation also
  happens after a minimum threshold of puts. This size re-estimation uses the
  current in-memory map size to calculate the average payload size and attempts a
  divide-by-zero operation when the map is empty. Avoiding the ArithmeticException
  during the payload size re-estimation by checking the map size upfront.
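
(A simplified sketch of the guard described above; names are illustrative, not the actual ExternalSpillableMap code:)

```java
public class PayloadSizeEstimatorSketch {
  // Re-estimate the average payload size, falling back to the previous estimate
  // when the in-memory map is empty so the division can never throw.
  static long estimateAvgSize(long inMemoryMapSizeBytes, int entryCount, long previousEstimate) {
    return entryCount == 0 ? previousEstimate : inMemoryMapSizeBytes / entryCount;
  }
}
```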

* [HUDI-2741] Fixing instantiating metadata table config in HoodieFileIndex (apache#3974)

* [HUDI-2697] Minor changes about hbase index config. (apache#3927)

* [HUDI-2472] Enabling metadata table in TestHoodieIndex and TestMergeOnReadRollbackActionExecutor (apache#3978)

- With rollback after first commit support added to metadata table, these test cases are safe to have metadata table turned on.

* [HUDI-2756] Fix flink parquet writer decimal type conversion (apache#3988)

* [HUDI-2706] refactor spark-sql to make consistent with DataFrame api (apache#3936)

* [HUDI-2589] Claiming RFC-37 for Metadata based bloom index feature. (apache#3995)

* [HUDI-2758] remove redundant code in the hoodieRealtimeInputFormatUitls.getRealtimeSplits (apache#3994)

* [MINOR] Fix typo in IntervalTreeBasedGlobalIndexFileFilter (apache#3993)

Co-authored-by: 闫杜峰 <yandufeng@sinochem.com>

* [HUDI-2744] Fix parsing of metadadata table compaction timestamp when metrics are enabled (apache#3976)

* [HUDI-2683] Parallelize deleting archived hoodie commits (apache#3920)

Co-authored-by: yuezhang <yuezhang@freewheel.tv>

* [HUDI-2712] Fixing a bug with rollback of partially failed commit which has new partitions (apache#3947)

* [HUDI-2769] Fix StreamerUtil#medianInstantTime for very near instant time (apache#4005)

* [MINOR] Fixed checkstyle config to be based off Maven root-dir (requires Maven >=3.3.1 to work properly); (apache#4009)

Updated README

* [HUDI-2753] Ensure list based rollback strategy is used for restore (apache#3983)

* [HUDI-2151] Part3 Enabling marker based rollback as default rollback strategy (apache#3950)

* Enabling timeline server based markers

* Enabling timeline server based markers and marker based rollback

* Removing constraint that timeline server can be enabled only for hdfs

* Fixing tests

* Check --source-avro-schema-path parameter (apache#3987)

Co-authored-by: 0x3E6 <dragon1996>

* [MINOR] Fix typo,'Hooide' corrected to 'Hoodie' (apache#4007)

* [MINOR] Add the Schema for GooseFS to StorageSchemes (apache#3982)

Co-authored-by: lubo <bollu@tencent.com>

* [HUDI-2314] Add support for DynamoDb based lock provider (apache#3486)

- Co-authored-by: Wenning Ding <wenningd@amazon.com>
- Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-2716] InLineFS support for S3FS logs (apache#3977)

* [HUDI-2734] Setting default metadata enable as false for Java (apache#4003)

* [HUDI-2789] Flink batch upsert for non partitioned table does not work (apache#4028)

* [HUDI-2790] Fix the changelog mode of HoodieTableSource (apache#4029)

* [HUDI-2362] Add external config file support (apache#3416)


Co-authored-by: Wenning Ding <wenningd@amazon.com>

* [HUDI-2641] Avoid deleting all inflight commits heartbeats while rolling back failed writes (apache#3956)

* [HUDI-2791] Allows duplicate files for metadata commit (apache#4033)

* [HUDI-2798] Fix flink query operation fields (apache#4041)

* [HUDI-2731] Make clustering work regardless of whether there are base… (apache#3970)

* [HUDI-2593] Virtual keys support for metadata table (apache#3968)

- Metadata table today has virtual keys disabled, thereby populating the metafields
  for each record written out and increasing the overall storage space used. Hereby
  adding virtual keys support for metadata table so that metafields are disabled
  for metadata table records.

- Adding a custom KeyGenerator for Metadata table so as to not rely on the
  default Base/SimpleKeyGenerators which currently look for record key
  and partition field set in the table config.

- AbstractHoodieLogRecordReader's version of processing the next data block and
  createHoodieRecord() will be a generic version, with the derived class
  HoodieMetadataMergedLogRecordReader taking care of the special creation of
  records from explicitly passed-in partition names.

* [HUDI-2472] Enabling metadata table for TestHoodieMergeOnReadTable and TestHoodieCompactor (apache#4023)

* [HUDI-2796] Metadata table support for Restore action to first commit (apache#4039)

 - Adding support for the metadata table to restore to the first commit and
   take proper action for the bootstrap on subsequent commits.

* [HUDI-2242] Add configuration inference logic for few options (apache#3359)


Co-authored-by: Wenning Ding <wenningd@amazon.com>

* Remove the aws packages from hudi flink bundle jar (apache#4050)

* [HUDI-2742] Added S3 object filter to support multiple S3EventsHoodieIncrSources single S3 meta table (apache#4025)

* [HUDI-2795] Add mechanism to safely update,delete and recover table properties (apache#4038)

* [HUDI-2795] Add mechanism to safely update,delete and recover table properties

  - Fail safe mechanism, that lets queries succeed off a backup file
  - Readers who are not upgraded to this version of code will just fail until recovery is done.
  - Added unit tests that exercises all these scenarios.
  - Adding CLI support for recovery and updates to the table command.
  - [Pending] Add some hash-based verification to guard against rare partial writes on HDFS

* Fixing upgrade/downgrade infrastructure to use the new update method

* [MINOR] Claim RFC number for RFC for debezium source for deltastreamer (apache#4047)

* [MINOR] optimize in constructor of inputbatch class (apache#4040)

Co-authored-by: 闫杜峰 <yandufeng@sinochem.com>

* [HUDI-2813] Claim RFC number for RFC for spark datasource V2 Integration (apache#4059)

* [HUDI-2804] Add option to skip compaction instants for streaming read (apache#4051)

* [HUDI-2392] Make flink parquet reader compatible with decimal BINARY encoding (apache#4057)

* [HUDI-1932] Update Hive sync timestamp when change detected (apache#3053)

* Update Hive sync timestamp when change detected

Only update the last commit timestamp on the Hive table when the table schema
has changed or a partition is created/updated.

When using AWS Glue Data Catalog as the metastore for Hive this will ensure
that table versions are substantive (including schema and/or partition
changes). Prior to this change when a Hive sync is performed without schema
or partition changes the table in the Glue Data Catalog would have a new
version published with the only change being the timestamp property.

https://issues.apache.org/jira/browse/HUDI-1932

* add conditional sync flag

* fix testSyncWithoutDiffs

* fix HiveSyncConfig

Co-authored-by: Raymond Xu <2701446+xushiyan@users.noreply.github.com>
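
(A hedged sketch of the conditional-sync decision described above; the method is illustrative, not the HiveSyncTool API:)

```java
public class ConditionalSyncSketch {
  // Bump the Hive table's last-commit-time property only when something
  // substantive changed; with the flag off, keep the old always-update behavior.
  static boolean shouldUpdateLastCommitTime(boolean conditionalSyncEnabled,
                                            boolean schemaChanged,
                                            boolean partitionsChanged) {
    return !conditionalSyncEnabled || schemaChanged || partitionsChanged;
  }
}
```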

* [MINOR] Fix typos (apache#4053)

* [HUDI-2799] Fix the classloader of flink write task (apache#4042)

* [HUDI-1870] Add more Spark CI build tasks  (apache#4022)

* [HUDI-1870] Add more Spark CI build tasks

- build for spark3.0.x
- build for spark-shade-unbundle-avro
- fix build failures
  - delete unnecessary assertion for spark 3.0.x
  - use AvroConversionUtils#convertAvroSchemaToStructType instead of calling SchemaConverters#toSqlType directly to solve the compilation failures with spark-shade-unbundle-avro (apache#5)

Co-authored-by: Yann <biyan900116@gmail.com>

* [HUDI-2533] New option for hoodieClusteringJob to check, rollback and re-execute the last failed clustering job (apache#3765)

* coding finished and need to do uts

* add uts

* code review

* code review

Co-authored-by: yuezhang <yuezhang@freewheel.tv>

* [HUDI-2472] Enabling metadata table for TestHoodieIndex test case (apache#4045)

- Enabling the metadata table for testSimpleGlobalIndexTagLocationWhenShouldUpdatePartitionPath.
   This is more of a test issue.

* [MINOR] Fix instant parsing in HoodieClusteringJob (apache#4071)

* [HUDI-2559] Converting commit timestamp format to millisecs (apache#4024)

- Adds support for generating commit timestamps with millisecs granularity. 
- Older commit timestamps (in secs granularity) will be suffixed with 999 and parsed with millisecs format.

* [HUDI-2599] Make addFilesToview and fetchLatestBaseFiles public (apache#4066)

* [HUDI-2550] Expand File-Group candidates list for appending for MOR tables (apache#3986)

* [HUDI-2737] Use earliest instant by default for async compaction and clustering jobs (apache#3991)

Address review comments

Fix test failures

Co-authored-by: Sagar Sumit <sagarsumit09@gmail.com>

* [MINOR] Fix typo,'multipe' corrected to 'multiple' (apache#4068)

* [HUDI-1937] Rollback unfinished replace commit to allow updates (apache#3869)

* [HUDI-1937] Rollback unfinished replace commit to allow updates while clustering

* Revert and delete requested replacecommit too

* Rollback pending clustering instants transactionally

* No double locking and add a config to enable rollback

* Update config to be clear about rollback only on conflict

* [MINOR] Add more configuration to Kafka setup script (apache#3992)

* [MINOR] Add more configuration to Kafka setup script

* Add option to reuse Kafka topic

* Minor fixes to README

* [HUDI-2743] Assume path exists and defer fs.exists() in AbstractTableFileSystemView (apache#4002)

* [HUDI-2778] Optimize statistics collection related codes and add some docs for z-order add fix some bugs (apache#4013)

* [HUDI-2778] Optimize statistics collection related codes and add more docs for z-order.

* add test code for multi-thread parquet footer read

* [HUDI-2409] Using HBase shaded jars in Hudi presto bundle (apache#3623)

* using hbase-shaded-jars-in-hudi-presto-bundle

* test

* add hudi-common-bundle

* code review

* code review

* code review

* code review

* test

* test

Co-authored-by: yuezhang <yuezhang@freewheel.tv>

* [HUDI-2332] Add clustering and compaction in Kafka Connect Sink (apache#3857)

* [HUDI-2332] Add clustering and compaction in Kafka Connect Sink

* Disable validation check on instant time for compaction and adjust configs

* Add javadocs

* Add clustering and compaction config

* Fix transaction causing missing records in the target table

* Add debugging logs

* Fix kafka offset sync in participant

* Adjust how clustering and compaction are configured in kafka-connect

* Fix clustering strategy

* Remove irrelevant changes from other published PRs

* Update clustering logic and others

* Update README

* Fix test failures

* Fix indentation

* Fix clustering config

* Add JavaCustomColumnsSortPartitioner and make async compaction enabled by default

* Add test for JavaCustomColumnsSortPartitioner

* Add more changes after IDE sync

* Update README with clarification

* Fix clustering logic after rebasing

* Remove unrelated changes

* [MINOR] Fix typo,rename 'HooodieAvroDeserializer' to 'HoodieAvroDeserializer' (apache#4064)

* [HUDI-2325] Add hive sync support to kafka connect (apache#3660)

Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-2831] Securing usages of `SimpleDateFormat` to be thread-safe (apache#4073)

* [HUDI-2818] Fix 2to3 upgrade when set `hoodie.table.keygenerator.class` (apache#4077)

* [HUDI-2838] refresh table after drop partition (apache#4084)

* Revert "[HUDI-2799] Fix the classloader of flink write task (apache#4042)" (apache#4069)

This reverts commit 8281cbf.

* [HUDI-2847] Flink metadata table supports virtual keys (apache#4096)

* [HUDI-2759] extract HoodieCatalogTable to coordinate spark catalog table and hoodie table (apache#3998)

* [HUDI-2688] Claim the next rfc 40 for Hudi connector for Trino (apache#4105)

* [HUDI-2671] Fix kafka offset handling in Kafka Connect protocol (apache#4021)

Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-2443] Hudi KVComparator for all HFile writer usages (apache#3889)

* [HUDI-2443] Hudi KVComparator for all HFile writer usages

- Hudi relies on custom class shading for HBase's KeyValue.KVComparator to
  avoid versioning and class loading issues. There are a few places that still
  use HBase's comparator class directly, and version upgrades would make them
  obsolete. Refactoring the HoodieKVComparator and making all HFile writer
  creation use the same shaded class.

* [HUDI-2443] Hudi KVComparator for all HFile writer usages

- Moving HoodieKVComparator from common.bootstrap.index to common.util

* [HUDI-2443] Hudi KVComparator for all HFile writer usages

- Retaining the old HoodieKVComparator for the bootstrap case. Adding the
  new comparator as HoodieKVComparatorV2 to differentiate from the old
  one.

* [HUDI-2443] Hudi KVComparator for all HFile writer usages

 - Renamed HoodieKVComparatorV2 to HoodieMetadataKVComparator and moved it
   under the package org.apache.hudi.metadata.

* Make comparator classname configurable

* Revert new config and address other review comments

Co-authored-by: Sagar Sumit <sagarsumit09@gmail.com>

* [HUDI-2788] Fixing issues w/ Z-order Layout Optimization (apache#4026)

* Simplifying, tidying up

* Fixed packaging for `TestOptimizeTable`

* Cleaned up `HoodieFileIndex` file filtering seq;
Removed optimization manually reading Parquet table circumventing Spark

* Refactored `DataSkippingUtils`:
  - Fixed checks to validate all statistics cols are present
  - Fixed some predicates being constructed incorrectly
  - Rewrote comments for easier comprehension, added more notes
  - Tidying up

* Tidying up tests

* `lint`

* Fixing compilation

* `TestOptimizeTable` > `TestTableLayoutOptimization`;
Added assertions to test data skipping paths

* Fixed tests to properly hit data-skipping path

* Fixed pruned files candidates lookup seq to conservatively included all non-indexed files

* Added java-doc

* Fixed compilation

* [HUDI-2766] Cluster update strategy should not be fenced by write config (apache#4093)

Fix pending clustering rollback test

* [HUDI-2793] Fixing deltastreamer checkpoint fetch/copy over (apache#4034)

- Removed the copy over logic in transaction utils. Deltastreamer will go back to previous commits and get the checkpoint value.

* [HUDI-2853] Add JMX deps in hudi utilities and kafka connect bundles (apache#4108)


Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-2844][CLI] Fixing archived Timeline crashing if timeline contains REPLACE_COMMIT (apache#4091)

* [MINOR] Fix build failure due to checkstyle issues (apache#4111)

* [HUDI-1290] [RFC-39] Deltastreamer avro source for Debezium CDC (apache#4048)

* Add RFC entry for deltastreamer source for debezium

* Add RFC for debezium source

* Add RFC for debezium source

* Add RFC for debezium source

* fix hyperlink issue and rebase

* Update progress

Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-1290] Add Debezium Source for deltastreamer (apache#4063)

* add source for postgres debezium

* Add tests for debezium payload

* Fix test

* Fix test

* Add tests for debezium source

* Add tests for debezium source

* Fix schema for debezium

* Fix checkstyle issues

* Fix config issue for schema registry

* Add mysql source for debezium

* Fix checkstyle issues an tests

* Improve code for merging toasted values

* Improve code for merging toasted values

Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-2792] Configure metadata payload consistency check (apache#4035)

- Relax metadata payload consistency check to consider spark task failures with spurious deletes

* [HUDI-2855] Change the default value of 'PAYLOAD_CLASS_NAME' to 'DefaultHoodieRecordPayload' (apache#4115)

* [HUDI-2480] FileSlice after pending compaction-requested instant-time… (apache#3703)

* [HUDI-2480] FileSlice after pending compaction-requested instant-time is ignored by MOR snapshot reader

* include file slice after a pending compaction for spark reader

Co-authored-by: garyli1019 <yanjia.gary.li@gmail.com>

* [HUDI-1290] fixing mysql debezium source (apache#4119)

* [HUDI-2800] Remove rdd.isEmpty() validation to prevent CreateHandle being called twice (apache#4121)

* [HUDI-2794] Guarding table service commits within a single lock to commit to both data table and metadata table (apache#4037)

* Fixing a single lock to commit table services across metadata table and data table

* Addressing comments

* rebasing with master

* [HUDI-2671] Making error -> warn logs from timeline server with concurrent writers for inconsistent state (apache#4088)

* Making error -> warn logs from timeline server with concurrent writers for inconsistent state

* Fixing bad request response exception for timeline out of sync

* Addressing feedback. removed write concurrency mode depedency

* [HUDI-2858] Fixing handling of cluster update reject exception in deltastreamer (apache#4120)

* [HUDI-2841] Fixing lazy rollback for MOR with list based strategy (apache#4110)

* [HUDI-2801] Add Amazon CloudWatch metrics reporter (apache#4081)

* [HUDI-2840] Fixed DeltaStreaemer to properly respect configuration passed t/h properties file (apache#4090)

* Rebased `DFSPropertiesConfiguration` to access Hadoop config in lieu of FS to avoid confusion

* Fixed `readConfig` to take Hadoop's `Configuration` instead of FS;
Fixing usages

* Added test for local FS access

* Rebase to use `FSUtils.getFs`

* Combine properties provided as a file along w/ overrides provided from the CLI

* Added helper utilities to `HoodieClusteringConfig`;
Make sure corresponding config methods fallback to defaults;

* Fixed DeltaStreamer usage to respect properly combined configuration;
Abstracted `HoodieClusteringConfig.from` convenience utility to init Clustering config from `Properties`

* Tidying up

* `lint`

* Reverting changes to `HoodieWriteConfig`

* Tidying up

* Fixed incorrect merge of the props

* Converted `HoodieConfig` to wrap around `Properties` into `TypedProperties`

* Fixed compilation

* Fixed compilation

* [HUDI-2005] Removing direct fs call in HoodieLogFileReader (apache#3865)

* [HUDI-2851] Shade org.apache.hadoop.hive.ql.optimizer package for flink bundle jar (apache#4104)

* [MINOR] Include hudi-aws in flink bundle jar (apache#4127)

HUDI-2801 makes this jar required.

* [HUDI-2852] Table metadata returns empty for non-exist partition (apache#4117)

* [HUDI-2852] Table metadata returns empty for non-exist partition

* add unit test

* fix code checkstyle

Co-authored-by: wangminchao <wangminchao@asinking.com>

* [HUDI-2863] Rename option 'hoodie.parquet.page.size' to 'write.parquet.page.size' (apache#4128)

* [HUDI-2850] Fixing Clustering CLI - schedule and run command fixes to avoid NumberFormatException (apache#4101)

* [HUDI-2814] Addressing issues w/ Z-order Layout Optimization (apache#4060)

* `ZCurveOptimizeHelper` > `ZOrderingIndexHelper`;
Moved Z-index helper under `hudi.index.zorder` package

* Tidying up `ZOrderingIndexHelper`

* Fixing compilation

* Fixed index new/original table merging sequence to always prefer values from new index;
Cleaned up `HoodieSparkUtils`

* Added test for `mergeIndexSql`

* Abstracted Z-index name composition w/in `ZOrderingIndexHelper`;

* Fixed `DataSkippingUtils` to interrupt pruning in case the data filter contains a non-indexed column reference

* Properly handle exceptions origination during pruning in `HoodieFileIndex`

* Make sure no errors are logged upon encountering `AnalysisException`

* Cleaned up Z-index updating sequence;
Tidying up comments, java-docs;

* Fixed Z-index to properly handle changes of the list of clustered columns

* Tidying up

* `lint`

* Suppressing `JavaDocStyle` first sentence check

* Fixed compilation

* Fixing incorrect `DecimalType` conversion

* Refactored test `TestTableLayoutOptimization`
  - Added Z-index table composition test (against fixtures)
  - Separated out GC test;
Tidying up

* Fixed tests re-shuffling column order for Z-Index table `DataFrame` to align w/ the one by one loaded from JSON

* Scaffolded `DataTypeUtils` to do basic checks of Spark types;
Added proper compatibility checking b/w old/new index-tables

* Added test for Z-index tables merging

* Fixed import being shaded by creating internal `hudi.util` package

* Fixed packaging for `TestOptimizeTable`

* Revised `updateMetadataIndex` seq to provide Z-index updating process w/ source table schema

* Make sure existing Z-index table schema is sync'd to source table's one

* Fixed shaded refs

* Fixed tests

* Fixed type conversion of Parquet provided metadata values into Spark expected schemas

* Fixed `composeIndexSchema` utility to propose proper schema

* Added more tests for Z-index:
  - Checking that Z-index table is built correctly
  - Checking that Z-index tables are merged correctly (during update)

* Fixing source table

* Fixing tests to read from Parquet w/ proper schema

* Refactored `ParquetUtils` utility reading stats from Parquet footers

* Fixed incorrect handling of Decimals extracted from Parquet footers

* Worked around issues in javac failing to compile stream's collection

* Fixed handling of `Date` type

* Fixed handling of `DateType` to be parsed as `LocalDate`

* Updated fixture;
Make sure test loads Z-index fixture using proper schema

* Removed superfluous scheme adjusting when reading from Parquet, since Spark is actually able to perfectly restore schema (given Parquet was previously written by Spark as well)

* Fixing race-condition in Parquet's `DateStringifier` trying to share a `SimpleDateFormat` object, which is inherently not thread-safe

* Tidying up

* Make sure schema is used upon reading to validate input files are in the appropriate format;
Tidying up;

* Worked around javac (1.8) inability to infer expression type properly

* Updated fixtures;
Tidying up

* Fixing compilation after rebase

* Assert clustering in Z-order layout optimization testing

* Tidying up exception messages

* XXX

* Added test validating Z-index lookup filter correctness

* Added more test-cases;
Tidying up

* Added tests for string expressions

* Fixed incorrect Z-index filter lookup translations

* Added more test-cases

* Added proper handling on complex negations of AND/OR expressions by pushing NOT operator down into inner expressions for appropriate handling

* Added `-target:jvm-1.8` for `hudi-spark` module

* Adding more tests

* Added tests for non-indexed columns

* Properly handle non-indexed columns by falling back to a re-write of the containing expression as `TrueLiteral` instead

* Fixed tests

* Removing the parquet test files and disabling corresponding tests

Co-authored-by: Vinoth Chandar <vinoth@apache.org>

* [MINOR] Fixing test failure to fix CI build failure (apache#4132)

* [HUDI-2861] Re-use same rollback instant time for failed rollbacks (apache#4123)

* [HUDI-2767] Enabling timeline-server-based marker as default (apache#4112)

- Changes the default config of marker type (HoodieWriteConfig.MARKERS_TYPE or hoodie.write.markers.type) from DIRECT to TIMELINE_SERVER_BASED for Spark Engine.
- Adds engine-specific marker type configs: Spark -> TIMELINE_SERVER_BASED, Flink -> DIRECT, Java -> DIRECT.
- Uses DIRECT markers as well for Spark structured streaming due to timeline server only available for the first mini-batch.
- Fixes the marker creation method for non-partitioned table in TimelineServerBasedWriteMarkers.
- Adds the fallback to direct markers even when TIMELINE_SERVER_BASED is configured, in WriteMarkersFactory: when HDFS is used, or embedded timeline server is disabled, the fallback to direct markers happens.
- Fixes the closing of timeline service.
- Fixes tests that depend on markers, mainly by starting the timeline service for each test.
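
(A sketch of the fallback rule described above for WriteMarkersFactory; simplified and illustrative:)

```java
public class MarkerTypeFallbackSketch {
  // TIMELINE_SERVER_BASED markers fall back to DIRECT when storage is HDFS
  // or the embedded timeline server is disabled.
  static String resolveMarkerType(String configured, boolean isHdfs, boolean timelineServerEnabled) {
    if ("TIMELINE_SERVER_BASED".equals(configured) && (isHdfs || !timelineServerEnabled)) {
      return "DIRECT";
    }
    return configured;
  }
}
```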

* [HUDI-2845] Metadata CLI - files/partition file listing fix and new validate option (apache#4092)

- Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-2848] Exclude guava from hudi-cli pom (apache#4100)

* [HUDI-2864] Fix README and scripts with current limitations of hive sync (apache#4129)

* Fix README with current limitations of hive sync

* Fix README with current limitations of hive sync

* Fix dep issue

* Fix Copy on Write flow

Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-2856] Bit cask disk map delete modified (apache#4116)

* modified BitCaskDiskMap_close_function

* change iterators location to finally

* Update BitCaskDiskMap.java

* [MINOR] Follow ups from HUDI-2861 (re-use same rollback instant for failed rollback) (apache#4133)

* [HUDI-2868] Fix skipped HoodieSparkSqlWriterSuite (apache#4125)

- Co-authored-by: Yann Byron <biyan900116@gmail.com>

* [HUDI-2475] [HUDI-2862] Metadata table creation and avoid bootstrapping race for write client & add locking for upgrade (apache#4114)

Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-2102] Support hilbert curve for hudi (apache#3952)

Co-authored-by: Y Ethan Guo <ethan.guoyihua@gmail.com>

* Moving to 0.11.0-SNAPSHOT on master branch.

* [MINOR] fix typo (apache#4140)

* [MINOR] Fixing integ test suite for hudi-aws and archival validation (apache#4142)

* Removing rfc from release package and fixing release validation script (apache#4147)

* [MINOR] Fix syntax error in create_source_release.sh (apache#4150)

* [MINOR] Fix typo,rename 'getUrlEncodePartitoning' to 'getUrlEncodePartitioning' (apache#4130)

* [HUDI-2642] Add support ignoring case in update sql operation (apache#3882)

* [HUDI-2891] Fix write configs for Java engine in Kafka Connect Sink (apache#4161)

* Revert "[HUDI-2855] Change the default value of 'PAYLOAD_CLASS_NAME' to 'DefaultHoodieRecordPayload' (apache#4115)" (apache#4169)

This reverts commit 88067f5.

* Revert "[HUDI-2856] Bit cask disk map delete modified (apache#4116)" (apache#4171)

This reverts commit 257a6a7.

* [HUDI-2880] Fixing loading of props from default dir (apache#4167)

* Fixing loading of props from default dir

* addressing comments

* [HUDI-2881] Compact the file group with larger log files to reduce write amplification (apache#4152)

* Fixed partitions produced by layout optimization in case order-by key is composed of a single column (apache#4183)

* [MINOR] Fix the wrong usage of timestamp length variable bug (apache#4179)

Signed-off-by: zzzhy <candle_1667@163.com>

* [HUDI-2904] Fix metadata table archival overstepping between regular writers and table services (apache#4186)

- Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>
- Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-2914] Fix remote timeline server config for flink (apache#4191)

* [minor] Refactor write profile to always generate fs view (apache#4198)

* [HUDI-2924] Refresh the fs view on successful checkpoints for write profile (apache#4199)

* [MINOR] use catalog schema if can not find table schema (apache#4182)

* [HUDI-2902] Fixing populate meta fields with Hfile writers and Disabling virtual keys by default for metadata table (apache#4194)

* [HUDI-2911] Removing default value for `PARTITIONPATH_FIELD_NAME` resulting in incorrect `KeyGenerator` configuration (apache#4195)

* Revert "[HUDI-2495] Resolve inconsistent key generation for timestamp types  by GenericRecord and Row (apache#3944)" (apache#4201)

* [HUDI-2894][HUDI-2905] Metadata table - avoiding key lookup failures on base files over S3 (apache#4185)

- Fetching partition files or all partitions from the metadata table is failing
   when run over S3. Metadata table uses HFile format for the base files and the
   record lookup uses HFile.Reader and HFileScanner interfaces to get records by
   partition keys. When the backing storage is S3, this record lookup from HFiles
   is failing with IOException, in turn failing the caller commit/update operations.

 - Metadata table looks up HFile records with positional read enabled so as to
   perform better for random lookups. But this positional read key lookup is
   returning with partial read sizes over S3 leading to HFile scanner throwing
   IOException. This doesn't happen over HDFS. Though the metadata table uses HFile
   for random key lookups, the positional read is not mandatory, as we sort the keys
   when doing a lookup for multiple keys.

 - The fix is to disable HFile positional read for all HFile scanner based
   key lookups.
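
(A sketch of the fix described above, assuming the two-argument HFile.Reader#getScanner(cacheBlocks, pread) overload; the exact signature varies across HBase versions:)

```java
import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.io.hfile.HFileScanner;

public class HFileScannerSketch {
  // Open the scanner with positional read (pread) disabled so key lookups over
  // S3 take the sequential-read path instead of failing on partial pread results.
  static HFileScanner openForKeyLookup(HFile.Reader reader) {
    return reader.getScanner(/* cacheBlocks */ true, /* pread */ false);
  }
}
```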

* Revert "[HUDI-2489]Tuning HoodieROTablePathFilter by caching hoodieTableFileSystemView, aiming to reduce unnecessary list/get requests"

Co-authored-by: yuezhang <yuezhang@freewheel.tv>

* [MINOR] Mitigate CI jobs timeout issues (apache#4173)

* skip shutdown zookeeper in `@AfterAll` in TestHBaseIndex

* rebalance CI tests

* [HUDI-2933] DISABLE Metadata table by default (apache#4213)

* [HUDI-2890] Kafka Connect: Fix failed writes and avoid table service concurrent operations (apache#4211)

* Fix kafka connect readme

* Fix handling of errors in write records for kafka connect

* By default, ensure we skip error records and keep the pipeline alive

* Fix indentation

Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-2923] Fixing metadata table reader when metadata compaction is inflight (apache#4206)

* [HUDI-2923] Fixing metadata table reader when metadata compaction is inflight

* Fixing retry of pending compaction in metadata table and enhancing tests

* [HUDI-2934] Optimize RequestHandler code style

close apache#4215

* [HUDI-2935] Remove special casing of clustering in deltastreamer checkpoint retrival (apache#4216)

- We now seek backwards to find the checkpoint
 - No need to return empty anymore

* [HUDI-2877] Support flink catalog to help user use flink table conveniently (apache#4153)

* [HUDI-2877] Support flink catalog to help user use flink table conveniently

* Fix comment

* fix comment2

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit … (apache#4217)

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit callback

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit callback

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit callback

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit callback

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit callback

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit callback

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit callback

* [HUDI-2418] Support HiveSchemaProvider (apache#3671)


Co-authored-by: jian.feng <fengjian428@gmial.com>

* [HUDI-2916] Add IssueNavigationLink for IDEA (apache#4192)

* [HUDI-2900] Fix corrupt block end position (apache#4181)

* [HUDI-2900] Fix corrupt block end position

* add a test

* [HUDI-2876] For Hive/Presto, Hudi should remove the temp file created by HoodieMergedLogRecordScanner when the query finishes. (apache#4139)

* [MINOR] Fix partition path formatting in error log (apache#4168)

* [MINOR] Use maven-shade-plugin version for hudi-timeline-server-bundle from main pom.xml (apache#4209)

Co-authored-by: Wenning Ding <wenningd@amazon.com>

* [MINOR] Remove redundant and conflicting spark-hive dependency (apache#4228)

Disable TestHiveSchemaProvider

* [HUDI-2951] Disable remote view storage config for flink (apache#4237)

* [HUDI-2942] add error message log in HoodieCombineHiveInputFormat (apache#4224)

* [MINOR] Update DOAP with 0.10.0 Release (apache#4246)

* [HUDI-2832][RFC-41] Proposal to integrate Hudi on Snowflake platform (apache#4074)

* [HUDI-2832][RFC-40] Proposal to integrate Hudi on Snowflake platform

* rebased and addressed review comments

* [HUDI-2964] Fixing aws lock configs to inherit from HoodieConfig (apache#4258)

* [HUDI-2957] Shade kryo jar for flink bundle jar (apache#4251)

* [HUDI-2665] Fix overflow of huge log file in HoodieLogFormatWriter (apache#3912)

Co-authored-by: guanziyue.gzy <guanziyue.gzy@bytedance.com>

* [MINOR] Fix Compile broken (apache#4263)

* [HUDI-2779] Cache BaseDir if HudiTableNotFound Exception thrown (apache#4014)

* [HUDI-2966] Add TaskCompletionListener for HoodieMergeOnReadRDD to close the logScanner when the query finishes. (apache#4265)

* [HUDI-2966] Add TaskCompletionListener for HoodieMergeOnReadRDD to close the logScanner when the query finishes.

* [MINOR] FAQ link in SUPPORT_REQUEST template (apache#4266)

* Claiming RFC for data skipping index for updated version (apache#4271)

* Revert "Claiming RFC for data skipping index for updated version (apache#4271)" (apache#4272)

This reverts commit 8321d20.

* [HUDI-2901] Fixed the bug clustering jobs cannot running in parallel (apache#4178)

* [HUDI-2936] Add data count checks in async clustering tests (apache#4236)

* [HUDI-2849] Improve SparkUI job description for write path (apache#4222)

* [HUDI-2952] Fixing metadata table for non-partitioned dataset (apache#4243)

* [HUDI-2912] Fix CompactionPlanOperator typo (apache#4187)

Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>

* Adding verbose output for metadata validate files command (apache#4166)

* [HUDI-2892][BUG] Pending Clustering may stain the ActiveTimeLine and lead to incomplete query results (apache#4172)

Co-authored-by: yuezhang <yuezhang@freewheel.tv>

* [HUDI-2784] Add a hudi-trino-bundle for Trino (apache#4279)

* [HUDI-2814] Make Z-index more generic Column-Stats Index (apache#4106)

* [HUDI-2527] Multi writer test with conflicting async table services (apache#4046)

* [HUDI-2974] Make the prefix for metrics name configurable (apache#4274)

Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-2959] Fix the thread leak of cleaning service (apache#4252)

* [HUDI-2985] Shade jackson for hudi flink bundle jar (apache#4284)

* [HUDI-2906] Add a repair util to clean up dangling data and log files (apache#4278)

* [HUDI-2984] Implement #close for AbstractTableFileSystemView (apache#4285)

* [HUDI-2946] Upgrade maven plugins to be compatible with higher Java versions (apache#4232)

Co-authored-by: Wenning Ding <wenningd@amazon.com>

* [HUDI-2938] Metadata table util to get latest file slices for reader/writers (apache#4218)

* [HUDI-2990] Sync to HMS when deleting partitions (apache#4291)

* [HUDI-2994] Add judgement for existing partitionPath in the catch code block for HU… (apache#4294)

* [HUDI-2994] Add judgement for existing partition path in the catch code block for HUDI-2743

Co-authored-by: wangminchao <wangminchao@asinking.com>

* [HUDI-2996] Flink streaming reader 'skip_compaction' option does not work (apache#4304)

close apache#4304

* [HUDI-2997] Skip the corrupt meta file for pending rollback action (apache#4296)

* [HUDI-2995] Enabling metadata table by default (apache#4295)

- Enabling metadata table by default

* [HUDI-3022] Fix NPE for isDropPartition method (apache#4319)

* [HUDI-3022] Fix NPE for isDropPartition method

* [HUDI-3024] Add explicit write handler for flink (apache#4329)

Co-authored-by: wangminchao <wangminchao@asinking.com>

* [HUDI-3025] Add additional wait time for namenode availability during IT test initialization (apache#4328)

- Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-3028] Use blob storage to speed up CI downloads (apache#4331)

Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-2998] claiming rfc number for consistent hashing index (apache#4303)

Co-authored-by: xiaoyuwei <xiaoyuwei.yw@alibaba-inc.com>

* [HUDI-3015] Implement #reset and #sync for metadata filesystem view (apache#4307)

* [Minor] Catch and ignore all the exceptions in quietDeleteMarkerDir (apache#4301)

Co-authored-by: yuezhang <yuezhang@freewheel.tv>

* [HUDI-3001] Clean up the marker directory when finish bootstrap operation. (apache#4298)

* [HUDI-3043] Revert async cleaner leak commit to unblock CI failure (apache#4343)

* Revert "[HUDI-2959] Fix the thread leak of cleaning service (apache#4252)"
Reverting to unblock CI failure for now. will revisit this with the right fix

* [HUDI-3037] Add back remote view storage config for flink (apache#4338)

* [HUDI-3046] Claim RFC number for RFC for Compaction / Clustering Service (apache#4347)

Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>

* [HUDI-2958] Automatically set spark.sql.parquet.writeLegacyFormat when using bulkinsert to insert data which contains decimalType (apache#4253)

* [HUDI-3043] Adding some test fixes to continuous mode multi writer tests (apache#4356)

* [HUDI-2962] InProcess lock provider to guard single writer process with async table operations (apache#4259)

 - Adding Local JVM process based lock provider implementation

 - This local lock provider can be used by a single writer process with async
   table operations to guard the metadata table against concurrent updates.
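
(A minimal sketch of a JVM-local lock provider along these lines, assuming a single ReentrantReadWriteLock; the actual Hudi InProcessLockProvider carries more bookkeeping:)

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class InProcessLockSketch {
  // One lock per JVM, so the writer and its async table services contend on it.
  private static final ReentrantReadWriteLock LOCK = new ReentrantReadWriteLock();

  public boolean tryLock(long time, TimeUnit unit) throws InterruptedException {
    return LOCK.writeLock().tryLock(time, unit);
  }

  public void unlock() {
    if (LOCK.writeLock().isHeldByCurrentThread()) {
      LOCK.writeLock().unlock();
    }
  }
}
```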

* [HUDI-3043] De-coupling multi writer tests (apache#4362)

* [HUDI-3029]  Transaction manager: avoid deadlock when doing begin and end transactions (apache#4363)

* [HUDI-3029] Transaction manager: avoid deadlock when doing begin and end transactions

 - Transaction manager has begin and end transactions as synchronized methods.
   Based on the lock provider implementation, this can lead to a deadlock
   situation when the underlying lock() calls are blocking or have a long timeout.

 - Fixing transaction manager begin and end transactions to not get to deadlock
   and to not assume anything on the lock provider implementation.

* [HUDI-3029]  Transaction manager: avoid deadlock when doing begin and end transactions (apache#4373)

* [HUDI-3064] Fixing a bug in TransactionManager and FileSystemTestLock (apache#4372)

* [HUDI-3054] Fixing default lock configs for FileSystemBasedLock and fixing a flaky test (apache#4374)

* [MINOR] Azure CI IT tasks clean up (apache#4337)

* [HUDI-3052] Fix flaky testJsonKafkaSourceResetStrategy (apache#4381)

* [minor] fix NetworkUtils#getHostname (apache#4355)

* [HUDI-2970] Adding tests for archival of replace commit actions (apache#4268)

* [HUDI-3064][HUDI-3054] FileSystemBasedLockProviderTestClass tryLock fix and TestHoodieClientMultiWriter test fixes (apache#4384)

 - Made FileSystemBasedLockProviderTestClass thread safe and fixed the
   tryLock retry logic.

 - Made TestHoodieClientMultiWriter.testHoodieClientBasicMultiWriter
   deterministic in verifying the HoodieWriteConflictException.

* remove unused import (apache#4349)

* [MINOR] Remove unused method in HoodieActiveTimeline (apache#4401)

* [MINOR] Increasing CI timeout to 90 mins (apache#4407)

* [HUDI-3070] Add rerunFailingTestsCount for flaky tests (apache#4398)



Co-authored-by: yuezhang <yuezhang@freewheel.tv>

* [HUDI-2970] Add test for archiving replace commit (apache#4345)

* [HUDI-3008] Fixing HoodieFileIndex partition column parsing for nested fields

* [HUDI-3027] Update hudi-examples README.md (apache#4330)

* [HUDI-3032] Do not clean the log files right after compaction for metadata table (apache#4336)

* [HUDI-2547] Schedule Flink compaction in service (apache#4254)

Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>

* [HUDI-3011] Adding ability to read entire data with HoodieIncrSource with empty checkpoint (apache#4334)

* Adding ability to read entire data with HoodieIncrSource with empty checkpoint

* Addressing comments

* [HUDI-3060] drop table for spark sql (apache#4364)

* [MINOR] Fix DedupeSparkJob typo (apache#4418)

* [HUDI-3014] Add table option to set utc timezone (apache#4306)

* [MINOR] Remove unused method in HoodieActiveTimeline (apache#4435)

* [HUDI-3101] Excluding compaction instants from pending rollback info (apache#4443)

* [HUDI-3102] Do not store rollback plan in inflight instant (apache#4445)

* [HUDI-3099] Purge drop partition for spark sql (apache#4436)

* [HUDI-2374] Fixing AvroDFSSource does not use the overridden schema to deserialize Avro binaries (apache#4353)

* [HUDI-3093] fix spark-sql query table that write with TimestampBasedKeyGenerator (apache#4416)

* [HUDI-3106] Fix HiveSyncTool not sync schema (apache#4452)

* [HUDI-2811] Support Spark 3.2 (apache#4270)

* Fixing dynamoDbLockConfig required prop check (apache#4422)

* [HUDI-2983] Remove Log4j2 transitive dependencies (apache#4281)

Co-authored-by: Danny Chan <yuzhao.cyz@gmail.com>
Co-authored-by: Genmao Yu <hustyugm@gmail.com>
Co-authored-by: dylonyu <dylonyu@tencent.com>
Co-authored-by: manasaks <manasas2004@gmail.com>
Co-authored-by: Shawy Geng <gengxiaoyu1996@gmail.com>
Co-authored-by: yuzhaojing <32435329+yuzhaojing@users.noreply.github.com>
Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>
Co-authored-by: Sivabalan Narayanan <sivabala@uber.com>
Co-authored-by: Prashant Wason <pwason@uber.com>
Co-authored-by: davehagman <73851873+davehagman@users.noreply.github.com>
Co-authored-by: Sagar Sumit <sagarsumit09@gmail.com>
Co-authored-by: xiarixiaoyao <mengtao0326@qq.com>
Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>
Co-authored-by: Yann Byron <biyan900116@gmail.com>
Co-authored-by: Manoj Govindassamy <manoj.govindassamy@gmail.com>
Co-authored-by: dufeng1010 <dufeng1010@126.com>
Co-authored-by: 闫杜峰 <yandufeng@sinochem.com>
Co-authored-by: zhangyue19921010 <69956021+zhangyue19921010@users.noreply.github.com>
Co-authored-by: yuezhang <yuezhang@freewheel.tv>
Co-authored-by: Alexey Kudinkin <alexey@infinilake.com>
Co-authored-by: 0x574C <761604382@qq.com>
Co-authored-by: 董可伦 <dongkelun01@inspur.com>
Co-authored-by: 卢波 <26039470+lubo212@users.noreply.github.com>
Co-authored-by: lubo <bollu@tencent.com>
Co-authored-by: wenningd <wenningding95@gmail.com>
Co-authored-by: Wenning Ding <wenningd@amazon.com>
Co-authored-by: Udit Mehrotra <udit.mehrotra90@gmail.com>
Co-authored-by: Ron <ldliulsy@163.com>
Co-authored-by: Harsha Teja Kanna <h7kanna@users.noreply.github.com>
Co-authored-by: vinoth chandar <vinothchandar@users.noreply.github.com>
Co-authored-by: rmahindra123 <76502047+rmahindra123@users.noreply.github.com>
Co-authored-by: leesf <490081539@qq.com>
Co-authored-by: Nate Radtke <5672085+nateradtke@users.noreply.github.com>
Co-authored-by: Raymond Xu <2701446+xushiyan@users.noreply.github.com>
Co-authored-by: Y Ethan Guo <ethan.guoyihua@gmail.com>
Co-authored-by: Jimmy.Zhou <zhouyongjin@inspur.com>
Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>
Co-authored-by: garyli1019 <yanjia.gary.li@gmail.com>
Co-authored-by: satishm <84978833+data-storyteller@users.noreply.github.com>
Co-authored-by: mincwang <33626973+mincwang@users.noreply.github.com>
Co-authored-by: wangminchao <wangminchao@asinking.com>
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
Co-authored-by: huleilei <584620569@qq.com>
Co-authored-by: xuzifu666 <1206332514@qq.com>
Co-authored-by: vortual <1039505040@qq.com>
Co-authored-by: zzzhy <candle_1667@163.com>
Co-authored-by: ForwardXu <forwardxu315@gmail.com>
Co-authored-by: 冯健 <fengjian428@gmail.com>
Co-authored-by: jian.feng <fengjian428@gmial.com>
Co-authored-by: Vinoth Govindarajan <vinothg@uber.com>
Co-authored-by: guanziyue <30882822+guanziyue@users.noreply.github.com>
Co-authored-by: guanziyue.gzy <guanziyue.gzy@bytedance.com>
Co-authored-by: RexAn <anh131@126.com>
Co-authored-by: arunkc <arunkc91@gmail.com>
Co-authored-by: Yuwei XIAO <ywxiaozero@gmail.com>
Co-authored-by: Fugle666 <30539368+Fugle666@users.noreply.github.com>
Co-authored-by: xiaoyuwei <xiaoyuwei.yw@alibaba-inc.com>
Co-authored-by: xuzifu666 <xuyu@zepp.com>
Co-authored-by: harshal patil <harshal.j.patil@gmail.com>
Co-authored-by: Aimiyoo <aimiyooo@gmail.com>
@vinishjail97 vinishjail97 mentioned this pull request Jan 24, 2022