[HUDI-2706] refactor spark-sql to make consistent with DataFrame api #3936

Merged
merged 6 commits into from Nov 14, 2021

Conversation

YannByron
Contributor

What is the purpose of the pull request

  1. Refactor CreateHoodieTableCommand.
  2. Use the TBLPROPERTIES syntax to pass config instead of OPTIONS, and sync Hudi's config to the table properties in the Hive metastore rather than the table's storage properties, while keeping the OPTIONS syntax supported for compatibility (see the sketch after this list).
  3. Require a primary key, so that UPDATE/DELETE become available.
  4. Decouple Hudi spark-sql from the metastore as far as possible, reading config from the local hoodie.properties first.
  5. Adjust the write operation and related configs on INSERT/MERGE to be consistent with the DataFrame API.
  6. Add parameter validation for recordKey, preCombineKey and keyGenerator, even when those parameters are defined under different keys.
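
To illustrate item 2, a minimal sketch of the intended spark-sql usage (spark-shell Scala; the table name, columns and property values are illustrative, not taken from this PR):

// Sketch: pass Hudi configs through TBLPROPERTIES (the preferred form after this PR).
spark.sql(
  """
    |create table hudi_orders (
    |  id int,
    |  name string,
    |  price double,
    |  ts long,
    |  dt string
    |) using hudi
    |partitioned by (dt)
    |tblproperties (
    |  primaryKey = 'id',
    |  preCombineField = 'ts'
    |)
  """.stripMargin)

// The older OPTIONS form remains accepted for compatibility:
spark.sql(
  """
    |create table hudi_orders_legacy (
    |  id int,
    |  ts long
    |) using hudi
    |options (
    |  primaryKey = 'id',
    |  preCombineField = 'ts'
    |)
  """.stripMargin)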

Brief change log

(for example:)

  • Modify AnnotationLocation checkstyle rule in checkstyle.xml

Verify this pull request

(Please pick either of the following options)

This pull request is a trivial rework / code cleanup without any test coverage.

(or)

This pull request is already covered by existing tests, such as (please describe tests).

(or)

This change added tests and can be verified as follows:

(example:)

  • Added integration tests for end-to-end.
  • Added HoodieClientWriteTest to verify the change.
  • Manually verified the change by running a job locally.

Committer checklist

  • Has a corresponding JIRA in PR title & commit

  • Commit message is descriptive of the change

  • CI is green

  • Necessary doc changes done or have another open PR

  • For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

@xushiyan xushiyan self-assigned this Nov 7, 2021
@YannByron YannByron force-pushed the master_2706 branch 2 times, most recently from 58abfc5 to d003fdb on November 7, 2021 03:55
@YannByron
Contributor Author

@hudi-bot run azure

@melin

melin commented Nov 7, 2021

The primary key syntax could be kept consistent with other relational databases, for example:

create table test_hudi_demo ( 
    id int, 
    name string, 
    price double,
    ds date)
stored as hudi    
primary key (id)
lifeCycle 300

@@ -73,9 +75,10 @@ case class CreateHoodieTableAsSelectCommand(

// Execute the insert query
try {
val tblProperties = table.storage.properties ++ table.properties

Member

I have seen this needed repeatedly. Can we consider making a HoodieCatalogTable to encapsulate this and other Hudi-specific logic inside, e.g. validation, options transform, etc.?

Contributor Author

good idea.
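
For context, a rough sketch of the kind of encapsulation suggested above (the class name, fields and validation shown here are illustrative assumptions, not the eventual implementation):

import org.apache.spark.sql.catalyst.catalog.CatalogTable

// Illustrative sketch only: gather the repeated property merging and
// validation behind one table abstraction.
case class HoodieCatalogTableSketch(table: CatalogTable) {

  // single place where storage properties and table properties are merged
  val tableProperties: Map[String, String] =
    table.storage.properties ++ table.properties

  // primary key columns declared via the 'primaryKey' property, if any
  def primaryKeys: Seq[String] =
    tableProperties.get("primaryKey").toSeq.flatMap(_.split(",")).map(_.trim)

  // fail fast if the mandatory Hudi configs are missing
  def validate(): Unit =
    require(primaryKeys.nonEmpty, s"Table ${table.identifier} must define a primaryKey")
}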

@xushiyan
Member

xushiyan commented Nov 9, 2021

@YannByron just did a rough pass over the changes. will do another round on details soon.

Member

@xushiyan xushiyan left a comment

With some more UT coverage it should be good to go.


def getRealKeyGenerator(hoodieConfig: HoodieConfig): String = {
val kg = hoodieConfig.getString(KEYGENERATOR_CLASS_NAME.key())
if (classOf[SqlKeyGenerator].getCanonicalName == kg) {

Contributor

Are there UTs covering this logic?

Contributor Author

ok

@@ -38,7 +38,7 @@ class TestAlterTable extends TestHoodieSqlBase {
| ts long
|) using hudi
| location '$tablePath'
| options (
| tblproperties (

Contributor

Does this mean users should no longer use options?

Contributor Author

options will be saved to storage.properties in the Hive metastore, while tblproperties go to the table properties.
Using tblproperties is the more proper way; for compatibility, we still support options (see the sketch below).
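
For reference, a small sketch of where the two syntaxes land in Spark's catalog model (assuming a Hudi table named hudi_orders already exists; these are standard Spark catalog APIs, not Hudi-specific ones):

import org.apache.spark.sql.catalyst.TableIdentifier

// OPTIONS (...)       end up in CatalogTable.storage.properties
// TBLPROPERTIES (...) end up in CatalogTable.properties
val catalogTable = spark.sessionState.catalog.getTableMetadata(TableIdentifier("hudi_orders"))
val fromOptions  = catalogTable.storage.properties  // values passed via options (...)
val fromTblProps = catalogTable.properties          // values passed via tblproperties (...)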

| dt string
| ) using hudi
| partitioned by (dt)
| options (

Contributor

Should we change this to tblproperties and rename the test description?

Contributor Author

This is a UT that tests compatibility with the options syntax. We should show a standard example for users in the official documentation.

@xushiyan xushiyan added this to Under Discussion PRs in PR Tracker Board via automation Nov 12, 2021
@xushiyan xushiyan moved this from Under Discussion PRs to Nearing Landing in PR Tracker Board Nov 12, 2021
@YannByron
Contributor Author

@hudi-bot run azure

Member

@xushiyan xushiyan left a comment

LGTM. @leesf any other input?

@hudi-bot

CI report:

Bot commands: @hudi-bot supports the following commands:
  • @hudi-bot run azure: re-runs the last Azure build

@leesf leesf changed the title [WIP][HUDI-2706] refactor spark-sql to make consistent with DataFrame api [HUDI-2706] refactor spark-sql to make consistent with DataFrame api Nov 13, 2021
Contributor

@leesf leesf left a comment

@xushiyan over to you

@xushiyan xushiyan merged commit 0bb6d8f into apache:master Nov 14, 2021
PR Tracker Board automation moved this from Nearing Landing to Done Nov 14, 2021
aditiwari01 added a commit to aditiwari01/hudi that referenced this pull request Dec 29, 2021
* [HUDI-2702] Set up keygen class explicit for write config for flink table upgrade (apache#3931)

* [HUDI-313] bugfix: NPE when select count star from a realtime table with Tez (apache#3630)

Co-authored-by: dylonyu <dylonyu@tencent.com>

* HUDI-1827 : Add ORC support in Bootstrap Op (apache#3457)

 Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-2679] Fix the TestMergeIntoLogOnlyTable typo. (apache#3918)

* [HUDI-2709] Add more options when initializing table (apache#3939)

* [HUDI-2698] Remove the table source options validation (apache#3940)

Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>

* [HUDI-2595] Fixing metadata table updates such that only regular writes from data table can trigger table services in metadata table (apache#3900)

* [HUDI-2715] The BitCaskDiskMap iterator may cause memory leak (apache#3951)

* [HUDI-2591] Bootstrap metadata table only if upgrade / downgrade is not required. (apache#3836)

* [HUDI-2579] Make deltastreamer checkpoint state merging more explicit (apache#3820)

 Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-1877] Support records staying in same fileId after clustering (apache#3833)

* [HUDI-1877] Support records staying in same fileId after clustering

Add plan strategy

* Ensure same filegroup id and refactor based on comments

* [HUDI-2297] Estimate available memory size for spillable map accurately. (apache#3455)

* [HUDI-2086]redo the logical of mor_incremental_view for hive (apache#3203)

* [HUDI-2442] Change default values for certain clustering configs (apache#3875)

* [HUDI-2730] Move EventTimeAvroPayload into hudi-common module (apache#3959)

Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>

* [HUDI-2685] Support scheduling online compaction plan when there are no commit data (apache#3928)

Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>

* [HUDI-2634] Improved the metadata table bootstrap for very large tables. (apache#3873)

* [HUDI-2634] Improved the metadata table bootstrap for very large tables.

Following improvements are implemented:
1. Memory overhead reduction:
  - Existing code caches FileStatus for each file in memory.
  - Created a new class DirectoryInfo which is used to cache a directory's file list with parts of the FileStatus (only filename and file length). This reduces the memory requirements.

2. Improved parallelism:
  - Existing code collects all the listing to the Driver and then creates HoodieRecord on the Driver.
  - This takes a long time for large tables (11million HoodieRecords to be created)
  - Created a new function in SparkRDDWriteClient specifically for bootstrap commit. In it, the HoodieRecord creation is parallelized across executors so it completes fast.

3. Fixed setting to limit the number of parallel listings:
  - Existing code had a bug wherein 1500 executors were hardcoded to perform listing. This leads to an exception due to the limit on Spark's result memory.
  - Corrected the use of the config.

Result:
Dataset has 1299 partitions and 12Million files.
file listing time=1.5mins
HoodieRecord creation time=13seconds
deltacommit duration=2.6mins

Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-2495] Resolve inconsistent key generation for timestamp types  by GenericRecord and Row (apache#3944)

* [HUDI-2738] Remove the bucketAssignFunction useless context (apache#3972)

Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>

* [HUDI-2746] Do not bootstrap for flink insert overwrite (apache#3980)

* [HUDI-2151] Part1 Setting default parallelism to 200 for some of write configs (apache#3948)

* [HUDI-2718] ExternalSpillableMap payload size re-estimation throws ArithmeticException (apache#3955)

- ExternalSpillableMap does the payload/value size estimation on the first put to
  determine when to spill over to the disk map. The payload size re-estimation also
  happens after a minimum threshold of puts. This re-estimation goes by the
  current in-memory map size to calculate the average payload size and attempts
  a divide-by-zero operation when the map is empty. Avoid the
  ArithmeticException during the payload size re-estimate by checking the map size
  upfront.

* [HUDI-2741] Fixing instantiating metadata table config in HoodieFileIndex (apache#3974)

* [HUDI-2697] Minor changes about hbase index config. (apache#3927)

* [HUDI-2472] Enabling metadata table in TestHoodieIndex and TestMergeOnReadRollbackActionExecutor (apache#3978)

- With rollback after first commit support added to metadata table, these test cases are safe to have metadata table turned on.

* [HUDI-2756] Fix flink parquet writer decimal type conversion (apache#3988)

* [HUDI-2706] refactor spark-sql to make consistent with DataFrame api (apache#3936)

* [HUDI-2589] Claiming RFC-37 for Metadata based bloom index feature. (apache#3995)

* [HUDI-2758] remove redundant code in the hoodieRealtimeInputFormatUitls.getRealtimeSplits (apache#3994)

* [MINOR] Fix typo in IntervalTreeBasedGlobalIndexFileFilter (apache#3993)

Co-authored-by: 闫杜峰 <yandufeng@sinochem.com>

* [HUDI-2744] Fix parsing of metadadata table compaction timestamp when metrics are enabled (apache#3976)

* [HUDI-2683] Parallelize deleting archived hoodie commits (apache#3920)

Co-authored-by: yuezhang <yuezhang@freewheel.tv>

* [HUDI-2712] Fixing a bug with rollback of partially failed commit which has new partitions (apache#3947)

* [HUDI-2769] Fix StreamerUtil#medianInstantTime for very near instant time (apache#4005)

* [MINOR] Fixed checkstyle config to be based off Maven root-dir (requires Maven >=3.3.1 to work properly); (apache#4009)

Updated README

* [HUDI-2753] Ensure list based rollback strategy is used for restore (apache#3983)

* [HUDI-2151] Part3 Enabling marker based rollback as default rollback strategy (apache#3950)

* Enabling timeline server based markers

* Enabling timeline server based markers and marker based rollback

* Removing constraint that timeline server can be enabled only for hdfs

* Fixing tests

* Check --source-avro-schema-path  parameter (apache#3987)

Co-authored-by: 0x3E6 <dragon1996>

* [MINOR] Fix typo,'Hooide' corrected to 'Hoodie' (apache#4007)

* [MINOR] Add the Schema for GooseFS to StorageSchemes (apache#3982)

Co-authored-by: lubo <bollu@tencent.com>

* [HUDI-2314] Add support for DynamoDb based lock provider (apache#3486)

- Co-authored-by: Wenning Ding <wenningd@amazon.com>
- Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-2716] InLineFS support for S3FS logs (apache#3977)

* [HUDI-2734] Setting default metadata enable as false for Java (apache#4003)

* [HUDI-2789] Flink batch upsert for non partitioned table does not work (apache#4028)

* [HUDI-2790] Fix the changelog mode of HoodieTableSource (apache#4029)

* [HUDI-2362] Add external config file support (apache#3416)


Co-authored-by: Wenning Ding <wenningd@amazon.com>

* [HUDI-2641] Avoid deleting all inflight commits heartbeats while rolling back failed writes (apache#3956)

* [HUDI-2791] Allows duplicate files for metadata commit (apache#4033)

* [HUDI-2798] Fix flink query operation fields (apache#4041)

* [HUDI-2731] Make clustering work regardless of whether there are base… (apache#3970)

* [HUDI-2593] Virtual keys support for metadata table (apache#3968)

- Metadata table today has virtual keys disabled, thereby populating the metafields
  for each record written out and increasing the overall storage space used. Hereby
  adding virtual keys support for metadata table so that metafields are disabled
  for metadata table records.

- Adding a custom KeyGenerator for Metadata table so as to not rely on the
  default Base/SimpleKeyGenerators which currently look for record key
  and partition field set in the table config.

- AbstractHoodieLogRecordReader's version of processing next data block and
  createHoodieRecord() will be a generic version and making the derived class
  HoodieMetadataMergedLogRecordReader take care of the special creation of
  records from explicitly passed-in partition names.

* [HUDI-2472] Enabling metadata table for TestHoodieMergeOnReadTable and TestHoodieCompactor (apache#4023)

* [HUDI-2796] Metadata table support for Restore action to first commit (apache#4039)

 - Adding support for the metadata table to restore to first commit and
   take proper action for the bootstrap on subsequent commits.

* [HUDI-2242] Add configuration inference logic for few options (apache#3359)


Co-authored-by: Wenning Ding <wenningd@amazon.com>

* Remove the aws packages from hudi flink bundle jar (apache#4050)

* [HUDI-2742] Added S3 object filter to support multiple S3EventsHoodieIncrSources single S3 meta table (apache#4025)

* [HUDI-2795] Add mechanism to safely update,delete and recover table properties (apache#4038)

* [HUDI-2795] Add mechanism to safely update,delete and recover table properties

  - Fail safe mechanism, that lets queries succeed off a backup file
  - Readers who are not upgraded to this version of code will just fail until recovery is done.
  - Added unit tests that exercises all these scenarios.
  - Adding CLI for recovery and update to the table command.
  - [Pending] Add some hash-based verification to ensure any rare partial writes for HDFS

* Fixing upgrade/downgrade infrastructure to use the new update method

* [MINOR] Claim RFC number for RFC for debezium source for deltastreamer (apache#4047)

* [MINOR] optimize in constructor of inputbatch class (apache#4040)

Co-authored-by: 闫杜峰 <yandufeng@sinochem.com>

* [HUDI-2813] Claim RFC number for RFC for spark datasource V2 Integration (apache#4059)

* [HUDI-2804] Add option to skip compaction instants for streaming read (apache#4051)

* [HUDI-2392] Make flink parquet reader compatible with decimal BINARY encoding (apache#4057)

* [HUDI-1932] Update Hive sync timestamp when change detected (apache#3053)

* Update Hive sync timestamp when change detected

Only update the last commit timestamp on the Hive table when the table schema
has changed or a partition is created/updated.

When using AWS Glue Data Catalog as the metastore for Hive this will ensure
that table versions are substantive (including schema and/or partition
changes). Prior to this change when a Hive sync is performed without schema
or partition changes the table in the Glue Data Catalog would have a new
version published with the only change being the timestamp property.

https://issues.apache.org/jira/browse/HUDI-1932

* add conditional sync flag

* fix testSyncWithoutDiffs

* fix HiveSyncConfig

Co-authored-by: Raymond Xu <2701446+xushiyan@users.noreply.github.com>

* [MINOR] Fix typos (apache#4053)

* [HUDI-2799] Fix the classloader of flink write task (apache#4042)

* [HUDI-1870] Add more Spark CI build tasks  (apache#4022)

* [HUDI-1870] Add more Spark CI build tasks

- build for spark3.0.x
- build for spark-shade-unbundle-avro
- fix build failures
  - delete unnecessary assertion for spark 3.0.x
  - use AvroConversionUtils#convertAvroSchemaToStructType instead of calling SchemaConverters#toSqlType directly to solve the compilation failures with spark-shade-unbundle-avro (apache#5)

Co-authored-by: Yann <biyan900116@gmail.com>

* [HUDI-2533] New option for hoodieClusteringJob to check, rollback and re-execute the last failed clustering job (apache#3765)

* coding finished and need to do uts

* add uts

* code review

* code review

Co-authored-by: yuezhang <yuezhang@freewheel.tv>

* [HUDI-2472] Enabling metadata table for TestHoodieIndex test case (apache#4045)

  - Enabling the metadata table for testSimpleGlobalIndexTagLocationWhenShouldUpdatePartitionPath.
   This is more of a test issue.

* [MINOR] Fix instant parsing in HoodieClusteringJob (apache#4071)

* [HUDI-2559] Converting commit timestamp format to millisecs (apache#4024)

- Adds support for generating commit timestamps with millisecs granularity. 
- Older commit timestamps (in secs granularity) will be suffixed with 999 and parsed with millisecs format.

* [HUDI-2599] Make addFilesToview and fetchLatestBaseFiles public (apache#4066)

* [HUDI-2550] Expand File-Group candidates list for appending for MOR tables (apache#3986)

* [HUDI-2737] Use earliest instant by default for async compaction and clustering jobs (apache#3991)

Address review comments

Fix test failures

Co-authored-by: Sagar Sumit <sagarsumit09@gmail.com>

* [MINOR] Fix typo,'multipe' corrected to 'multiple' (apache#4068)

* [HUDI-1937] Rollback unfinished replace commit to allow updates (apache#3869)

* [HUDI-1937] Rollback unfinished replace commit to allow updates while clustering

* Revert and delete requested replacecommit too

* Rollback pending clustering instants transactionally

* No double locking and add a config to enable rollback

* Update config to be clear about rollback only on conflict

* [MINOR] Add more configuration to Kafka setup script (apache#3992)

* [MINOR] Add more configuration to Kafka setup script

* Add option to reuse Kafka topic

* Minor fixes to README

* [HUDI-2743] Assume path exists and defer fs.exists() in AbstractTableFileSystemView (apache#4002)

* [HUDI-2778] Optimize statistics collection related codes and add some docs for z-order add fix some bugs (apache#4013)

* [HUDI-2778] Optimize statistics collection related codes and add more docs for z-order.

* add test code for multi-thread parquet footer read

* [HUDI-2409] Using HBase shaded jars in Hudi presto bundle (apache#3623)

* using hbase-shaded-jars-in-hudi-presto-hundle

* test

* add hudi-common-bundle

* code review

* code review

* code review

* code review

* test

* test

Co-authored-by: yuezhang <yuezhang@freewheel.tv>

* [HUDI-2332] Add clustering and compaction in Kafka Connect Sink (apache#3857)

* [HUDI-2332] Add clustering and compaction in Kafka Connect Sink

* Disable validation check on instant time for compaction and adjust configs

* Add javadocs

* Add clustering and compaction config

* Fix transaction causing missing records in the target table

* Add debugging logs

* Fix kafka offset sync in participant

* Adjust how clustering and compaction are configured in kafka-connect

* Fix clustering strategy

* Remove irrelevant changes from other published PRs

* Update clustering logic and others

* Update README

* Fix test failures

* Fix indentation

* Fix clustering config

* Add JavaCustomColumnsSortPartitioner and make async compaction enabled by default

* Add test for JavaCustomColumnsSortPartitioner

* Add more changes after IDE sync

* Update README with clarification

* Fix clustering logic after rebasing

* Remove unrelated changes

* [MINOR] Fix typo,rename 'HooodieAvroDeserializer' to 'HoodieAvroDeserializer' (apache#4064)

* [HUDI-2325] Add hive sync support to kafka connect (apache#3660)

Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-2831] Securing usages of `SimpleDateFormat` to be thread-safe (apache#4073)

* [HUDI-2818] Fix 2to3 upgrade when set `hoodie.table.keygenerator.class` (apache#4077)

* [HUDI-2838] refresh table after drop partition (apache#4084)

* Revert "[HUDI-2799] Fix the classloader of flink write task (apache#4042)" (apache#4069)

This reverts commit 8281cbf.

* [HUDI-2847] Flink metadata table supports virtual keys (apache#4096)

* [HUDI-2759] extract HoodieCatalogTable to coordinate spark catalog table and hoodie table (apache#3998)

* [HUDI-2688] Claim the next rfc 40 for Hudi connector for Trino (apache#4105)

* [HUDI-2671] Fix kafka offset handling in Kafka Connect protocol (apache#4021)

Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-2443] Hudi KVComparator for all HFile writer usages (apache#3889)

* [HUDI-2443] Hudi KVComparator for all HFile writer usages

- Hudi relies on custom class shading for Hbase's KeyValue.KVComparator to
  avoid versioning and class loading issues. There are few places which are
  still using the Hbase's comparator class directly and version upgrades
  would make them obsolete. Refactoring the HoodieKVComparator and making
  all HFile writer creation using the same shaded class.

* [HUDI-2443] Hudi KVComparator for all HFile writer usages

- Moving HoodieKVComparator from common.bootstrap.index to common.util

* [HUDI-2443] Hudi KVComparator for all HFile writer usages

  - Retaining the old HoodieKVComparatorV2 for the bootstrap case. Adding the
  new comparator as HoodieKVComparatorV2 to differentiate from the old
  one.

* [HUDI-2443] Hudi KVComparator for all HFile writer usages

 - Renamed HoodieKVComparatorV2 to HoodieMetadataKVComparator and moved it
   under the package org.apache.hudi.metadata.

* Make comparator classname configurable

* Revert new config and address other review comments

Co-authored-by: Sagar Sumit <sagarsumit09@gmail.com>

* [HUDI-2788] Fixing issues w/ Z-order Layout Optimization (apache#4026)

* Simplifying, tidying up

* Fixed packaging for `TestOptimizeTable`

* Cleaned up `HoodiFileIndex` file filtering seq;
Removed optimization manually reading Parquet table circumventing Spark

* Refactored `DataSkippingUtils`:
  - Fixed checks to validate all statistics cols are present
  - Fixed some predicates being constructed incorrectly
  - Rewrote comments for easier comprehension, added more notes
  - Tidying up

* Tidying up tests

* `lint`

* Fixing compilation

* `TestOptimizeTable` > `TestTableLayoutOptimization`;
Added assertions to test data skipping paths

* Fixed tests to properly hit data-skipping path

* Fixed pruned files candidates lookup seq to conservatively included all non-indexed files

* Added java-doc

* Fixed compilation

* [HUDI-2766] Cluster update strategy should not be fenced by write config (apache#4093)

Fix pending clustering rollback test

* [HUDI-2793] Fixing deltastreamer checkpoint fetch/copy over (apache#4034)

- Removed the copy over logic in transaction utils. Deltastreamer will go back to previous commits and get the checkpoint value.

* [HUDI-2853] Add JMX deps in hudi utilities and kafka connect bundles (apache#4108)


Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-2844][CLI] Fixing archived Timeline crashing if timeline contains REPLACE_COMMIT (apache#4091)

* [MINOR] Fix build failure due to checkstyle issues (apache#4111)

* [HUDI-1290] [RFC-39] Deltastreamer avro source for Debezium CDC (apache#4048)

* Add RFC entry for deltastreamer source for debezium

* Add RFC for debezium source

* Add RFC for debezium source

* Add RFC for debezium source

* fix hyperlink issue and rebase

* Update progress

Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-1290] Add Debezium Source for deltastreamer (apache#4063)

* add source for postgres debezium

* Add tests for debezium payload

* Fix test

* Fix test

* Add tests for debezium source

* Add tests for debezium source

* Fix schema for debezium

* Fix checkstyle issues

* Fix config issue for schema registry

* Add mysql source for debezium

* Fix checkstyle issues an tests

* Improve code for merging toasted values

* Improve code for merging toasted values

Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-2792] Configure metadata payload consistency check (apache#4035)

- Relax metadata payload consistency check to consider spark task failures with spurious deletes

* [HUDI-2855] Change the default value of 'PAYLOAD_CLASS_NAME' to 'DefaultHoodieRecordPayload' (apache#4115)

* [HUDI-2480] FileSlice after pending compaction-requested instant-time… (apache#3703)

* [HUDI-2480] FileSlice after pending compaction-requested instant-time is ignored by MOR snapshot reader

* include file slice after a pending compaction for spark reader

Co-authored-by: garyli1019 <yanjia.gary.li@gmail.com>

* [HUDI-1290] fixing mysql debezium source (apache#4119)

* [HUDI-2800] Remove rdd.isEmpty() validation to prevent CreateHandle being called twice (apache#4121)

* [HUDI-2794] Guarding table service commits within a single lock to commit to both data table and metadata table (apache#4037)

* Fixing a single lock to commit table services across metadata table and data table

* Addressing comments

* rebasing with master

* [HUDI-2671] Making error -> warn logs from timeline server with concurrent writers for inconsistent state (apache#4088)

* Making error -> warn logs from timeline server with concurrent writers for inconsistent state

* Fixing bad request response exception for timeline out of sync

* Addressing feedback. Removed write concurrency mode dependency

* [HUDI-2858] Fixing handling of cluster update reject exception in deltastreamer (apache#4120)

* [HUDI-2841] Fixing lazy rollback for MOR with list based strategy (apache#4110)

* [HUDI-2801] Add Amazon CloudWatch metrics reporter (apache#4081)

* [HUDI-2840] Fixed DeltaStreaemer to properly respect configuration passed t/h properties file (apache#4090)

* Rebased `DFSPropertiesConfiguration` to access Hadoop config in lieu of FS to avoid confusion

* Fixed `readConfig` to take Hadoop's `Configuration` instead of FS;
Fixing usages

* Added test for local FS access

* Rebase to use `FSUtils.getFs`

* Combine properties provided as a file along w/ overrides provided from the CLI

* Added helper utilities to `HoodieClusteringConfig`;
Make sure corresponding config methods fallback to defaults;

* Fixed DeltaStreamer usage to respect properly combined configuration;
Abstracted `HoodieClusteringConfig.from` convenience utility to init Clustering config from `Properties`

* Tidying up

* `lint`

* Reverting changes to `HoodieWriteConfig`

* Tidying up

* Fixed incorrect merge of the props

* Converted `HoodieConfig` to wrap around `Properties` into `TypedProperties`

* Fixed compilation

* Fixed compilation

* [HUDI-2005] Removing direct fs call in HoodieLogFileReader (apache#3865)

* [HUDI-2851] Shade org.apache.hadoop.hive.ql.optimizer package for flink bundle jar (apache#4104)

* [MINOR] Include hudi-aws in flink bundle jar (apache#4127)

HUDI-2801 makes this jar as required.

* [HUDI-2852] Table metadata returns empty for non-exist partition (apache#4117)

* [HUDI-2852] Table metadata returns empty for non-exist partition

* add unit test

* fix code checkstyle

Co-authored-by: wangminchao <wangminchao@asinking.com>

* [HUDI-2863] Rename option 'hoodie.parquet.page.size' to 'write.parquet.page.size' (apache#4128)

* [HUDI-2850] Fixing Clustering CLI - schedule and run command fixes to avoid NumberFormatException (apache#4101)

* [HUDI-2814] Addressing issues w/ Z-order Layout Optimization (apache#4060)

* `ZCurveOptimizeHelper` > `ZOrderingIndexHelper`;
Moved Z-index helper under `hudi.index.zorder` package

* Tidying up `ZOrderingIndexHelper`

* Fixing compilation

* Fixed index new/original table merging sequence to always prefer values from new index;
Cleaned up `HoodieSparkUtils`

* Added test for `mergeIndexSql`

* Abstracted Z-index name composition w/in `ZOrderingIndexHelper`;

* Fixed `DataSkippingUtils` to interrupt pruning in case data filter contains non-indexed column reference

* Properly handle exceptions originating during pruning in `HoodieFileIndex`

* Make sure no errors are logged upon encountering `AnalysisException`

* Cleaned up Z-index updating sequence;
Tidying up comments, java-docs;

* Fixed Z-index to properly handle changes of the list of clustered columns

* Tidying up

* `lint`

* Suppressing `JavaDocStyle` first sentence check

* Fixed compilation

* Fixing incorrect `DecimalType` conversion

* Refactored test `TestTableLayoutOptimization`
  - Added Z-index table composition test (against fixtures)
  - Separated out GC test;
Tidying up

* Fixed tests re-shuffling column order for Z-Index table `DataFrame` to align w/ the one by one loaded from JSON

* Scaffolded `DataTypeUtils` to do basic checks of Spark types;
Added proper compatibility checking b/w old/new index-tables

* Added test for Z-index tables merging

* Fixed import being shaded by creating internal `hudi.util` package

* Fixed packaging for `TestOptimizeTable`

* Revised `updateMetadataIndex` seq to provide Z-index updating process w/ source table schema

* Make sure existing Z-index table schema is sync'd to source table's one

* Fixed shaded refs

* Fixed tests

* Fixed type conversion of Parquet provided metadata values into Spark expected schemas

* Fixed `composeIndexSchema` utility to propose proper schema

* Added more tests for Z-index:
  - Checking that Z-index table is built correctly
  - Checking that Z-index tables are merged correctly (during update)

* Fixing source table

* Fixing tests to read from Parquet w/ proper schema

* Refactored `ParquetUtils` utility reading stats from Parquet footers

* Fixed incorrect handling of Decimals extracted from Parquet footers

* Worked around issues in javac failing to compile stream's collection

* Fixed handling of `Date` type

* Fixed handling of `DateType` to be parsed as `LocalDate`

* Updated fixture;
Make sure test loads Z-index fixture using proper schema

* Removed superfluous scheme adjusting when reading from Parquet, since Spark is actually able to perfectly restore schema (given Parquet was previously written by Spark as well)

* Fixing race-condition in Parquet's `DateStringifier` trying to share `SimpleDataFormat` object which is inherently not thread-safe

* Tidying up

* Make sure schema is used upon reading to validate input files are in the appropriate format;
Tidying up;

* Worked around javac (1.8) inability to infer expression type properly

* Updated fixtures;
Tidying up

* Fixing compilation after rebase

* Assert clustering have in Z-order layout optimization testing

* Tidying up exception messages

* XXX

* Added test validating Z-index lookup filter correctness

* Added more test-cases;
Tidying up

* Added tests for string expressions

* Fixed incorrect Z-index filter lookup translations

* Added more test-cases

* Added proper handling on complex negations of AND/OR expressions by pushing NOT operator down into inner expressions for appropriate handling

* Added `-target:jvm-1.8` for `hudi-spark` module

* Adding more tests

* Added tests for non-indexed columns

* Properly handle non-indexed columns by falling back to a re-write of containing expression as  `TrueLiteral` instead

* Fixed tests

* Removing the parquet test files and disabling corresponding tests

Co-authored-by: Vinoth Chandar <vinoth@apache.org>

* [MINOR] Fixing test failure to fix CI build failure (apache#4132)

* [HUDI-2861] Re-use same rollback instant time for failed rollbacks (apache#4123)

* [HUDI-2767] Enabling timeline-server-based marker as default (apache#4112)

- Changes the default config of marker type (HoodieWriteConfig.MARKERS_TYPE or hoodie.write.markers.type) from DIRECT to TIMELINE_SERVER_BASED for Spark Engine.
- Adds engine-specific marker type configs: Spark -> TIMELINE_SERVER_BASED, Flink -> DIRECT, Java -> DIRECT.
- Uses DIRECT markers as well for Spark structured streaming due to timeline server only available for the first mini-batch.
- Fixes the marker creation method for non-partitioned table in TimelineServerBasedWriteMarkers.
- Adds the fallback to direct markers even when TIMELINE_SERVER_BASED is configured, in WriteMarkersFactory: when HDFS is used, or embedded timeline server is disabled, the fallback to direct markers happens.
- Fixes the closing of timeline service.
- Fixes tests that depend on markers, mainly by starting the timeline service for each test.

* [HUDI-2845] Metadata CLI - files/partition file listing fix and new validate option (apache#4092)

- Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-2848] Exclude guava from hudi-cli pom (apache#4100)

* [HUDI-2864] Fix README and scripts with current limitations of hive sync (apache#4129)

* Fix README with current limitations of hive sync

* Fix README with current limitations of hive sync

* Fix dep issue

* Fix Copy on Write flow

Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-2856] Bit cask disk map delete modified (apache#4116)

* modified BitCaskDiskMap_close_function

* change iterators location to finally

* Update BitCaskDiskMap.java

* [MINOR] Follow ups from HUDI-2861 (re-use same rollback instant for failed rollback) (apache#4133)

* [HUDI-2868] Fix skipped HoodieSparkSqlWriterSuite (apache#4125)

- Co-authored-by: Yann Byron <biyan900116@gmail.com>

* [HUDI-2475] [HUDI-2862] Metadata table creation and avoid bootstrapping race for write client & add locking for upgrade (apache#4114)

Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-2102] Support hilbert curve for hudi (apache#3952)

Co-authored-by: Y Ethan Guo <ethan.guoyihua@gmail.com>

* Moving to 0.11.0-SNAPSHOT on master branch.

* [MINOR] fix typo (apache#4140)

* [MINOR] Fixing integ test suite for hudi-aws and archival validation (apache#4142)

* Removing rfc from release package and fixing release validation script (apache#4147)

* [MINOR] Fix syntax error in create_source_release.sh (apache#4150)

* [MINOR] Fix typo,rename 'getUrlEncodePartitoning' to 'getUrlEncodePartitioning' (apache#4130)

* [HUDI-2642] Add support ignoring case in update sql operation (apache#3882)

* [HUDI-2891] Fix write configs for Java engine in Kafka Connect Sink (apache#4161)

* Revert "[HUDI-2855] Change the default value of 'PAYLOAD_CLASS_NAME' to 'DefaultHoodieRecordPayload' (apache#4115)" (apache#4169)

This reverts commit 88067f5.

* Revert "[HUDI-2856] Bit cask disk map delete modified (apache#4116)" (apache#4171)

This reverts commit 257a6a7.

* [HUDI-2880] Fixing loading of props from default dir (apache#4167)

* Fixing loading of props from default dir

* addressing comments

* [HUDI-2881] Compact the file group with larger log files to reduce write amplification (apache#4152)

* Fixed partitions produced by layout optimization in case order-by key is composed of a single column (apache#4183)

* [MINOR] Fix the wrong usage of timestamp length variable bug (apache#4179)

Signed-off-by: zzzhy <candle_1667@163.com>

* [HUDI-2904] Fix metadata table archival overstepping between regular writers and table services (apache#4186)

- Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>
- Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-2914] Fix remote timeline server config for flink (apache#4191)

* [minor] Refactor write profile to always generate fs view (apache#4198)

* [HUDI-2924] Refresh the fs view on successful checkpoints for write profile (apache#4199)

* [MINOR] use catalog schema if can not find table schema (apache#4182)

* [HUDI-2902] Fixing populate meta fields with Hfile writers and Disabling virtual keys by default for metadata table (apache#4194)

* [HUDI-2911] Removing default value for `PARTITIONPATH_FIELD_NAME` resulting in incorrect `KeyGenerator` configuration (apache#4195)

* Revert "[HUDI-2495] Resolve inconsistent key generation for timestamp types  by GenericRecord and Row (apache#3944)" (apache#4201)

* [HUDI-2894][HUDI-2905] Metadata table - avoiding key lookup failures on base files over S3 (apache#4185)

- Fetching partition files or all partitions from the metadata table is failing
   when run over S3. Metadata table uses HFile format for the base files and the
   record lookup uses HFile.Reader and HFileScanner interfaces to get records by
   partition keys. When the backing storage is S3, this record lookup from HFiles
   is failing with IOException, in turn failing the caller commit/update operations.

 - Metadata table looks up HFile records with positional read enabled so as to
   perform better for random lookups. But this positional read key lookup is
   returning with partial read sizes over S3 leading to HFile scanner throwing
    IOException. This doesn't happen over HDFS. Though the metadata table uses the HFile
   for random key lookups, the positional read is not mandatory as we sort the keys
   when doing a lookup for multiple keys.

 - The fix is to disable HFile positional read for all HFile scanner based
   key lookups.

* Revert "[HUDI-2489]Tuning HoodieROTablePathFilter by caching hoodieTableFileSystemView, aiming to reduce unnecessary list/get requests"

Co-authored-by: yuezhang <yuezhang@freewheel.tv>

* [MINOR] Mitigate CI jobs timeout issues (apache#4173)

* skip shutdown zookeeper in `@AfterAll` in TestHBaseIndex

* rebalance CI tests

* [HUDI-2933] DISABLE Metadata table by default (apache#4213)

* [HUDI-2890] Kafka Connect: Fix failed writes and avoid table service concurrent operations (apache#4211)

* Fix kafka connect readme

* Fix handling of errors in write records for kafka connect

* By default, ensure we skip error records and keep the pipeline alive

* Fix indentation

Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-2923] Fixing metadata table reader when metadata compaction is inflight (apache#4206)

* [HUDI-2923] Fixing metadata table reader when metadata compaction is inflight

* Fixing retry of pending compaction in metadata table and enhancing tests

* [HUDI-2934] Optimize RequestHandler code style

close apache#4215

* [HUDI-2935] Remove special casing of clustering in deltastreamer checkpoint retrieval (apache#4216)

- We now seek backwards to find the checkpoint
 - No need to return empty anymore

* [HUDI-2877] Support flink catalog to help user use flink table conveniently (apache#4153)

* [HUDI-2877] Support flink catalog to help user use flink table conveniently

* Fix comment

* fix comment2

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit … (apache#4217)

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit callback

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit callback

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit callback

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit callback

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit callback

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit callback

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit callback

* [HUDI-2418] Support HiveSchemaProvider (apache#3671)


Co-authored-by: jian.feng <fengjian428@gmial.com>

* [HUDI-2916] Add IssueNavigationLink for IDEA (apache#4192)

* [HUDI-2900] Fix corrupt block end position (apache#4181)

* [HUDI-2900] Fix corrupt block end position

* add a test

* [HUDI-2876] For hive/presto, hudi should remove the temp file created by HoodieMergedLogRecordScanner when the query finishes. (apache#4139)

* [MINOR] Fix partition path formatting in error log (apache#4168)

* [MINOR] Use maven-shade-plugin version for hudi-timeline-server-bundle from main pom.xml (apache#4209)

Co-authored-by: Wenning Ding <wenningd@amazon.com>

* [MINOR] Remove redundant and conflicting spark-hive dependency (apache#4228)

Disable TestHiveSchemaProvider

* [HUDI-2951] Disable remote view storage config for flink (apache#4237)

* [HUDI-2942] add error message log in HoodieCombineHiveInputFormat (apache#4224)

* [MINOR] Update DOAP with 0.10.0 Release (apache#4246)

* [HUDI-2832][RFC-41] Proposal to integrate Hudi on Snowflake platform (apache#4074)

* [HUDI-2832][RFC-40] Proposal to integrate Hudi on Snowflake platform

* rebased and addressed review comments

* [HUDI-2964] Fixing aws lock configs to inherit from HoodieConfig (apache#4258)

* [HUDI-2957] Shade kryo jar for flink bundle jar (apache#4251)

* [HUDI-2665] Fix overflow of huge log file in HoodieLogFormatWriter (apache#3912)

Co-authored-by: guanziyue.gzy <guanziyue.gzy@bytedance.com>

* [MINOR] Fix Compile broken (apache#4263)

* [HUDI-2779] Cache BaseDir if HudiTableNotFound Exception thrown (apache#4014)

* [HUDI-2966] Add TaskCompletionListener for HoodieMergeOnReadRDD to close logScaner when the query finished. (apache#4265)

* [HUDI-2966] Add TaskCompletionListener for HoodieMergeOnReadRDD to close logScaner when the query finished.

* [MINOR] FAQ link in SUPPORT_REQUEST template (apache#4266)

* Claiming RFC for data skipping index for updated version (apache#4271)

* Revert "Claiming RFC for data skipping index for updated version (apache#4271)" (apache#4272)

This reverts commit 8321d20.

* [HUDI-2901] Fixed the bug clustering jobs cannot running in parallel (apache#4178)

* [HUDI-2936] Add data count checks in async clustering tests (apache#4236)

* [HUDI-2849] Improve SparkUI job description for write path (apache#4222)

* [HUDI-2952] Fixing metadata table for non-partitioned dataset (apache#4243)

* [HUDI-2912] Fix CompactionPlanOperator typo (apache#4187)

Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>

* Adding verbose output for metadata validate files command (apache#4166)

* [HUDI-2892][BUG] Pending Clustering may stain the ActiveTimeLine and lead to incomplete query results (apache#4172)

Co-authored-by: yuezhang <yuezhang@freewheel.tv>

* [HUDI-2784] Add a hudi-trino-bundle for Trino (apache#4279)

* [HUDI-2814] Make Z-index more generic Column-Stats Index (apache#4106)

* [HUDI-2527] Multi writer test with conflicting async table services (apache#4046)

* [HUDI-2974] Make the prefix for metrics name configurable (apache#4274)

Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-2959] Fix the thread leak of cleaning service (apache#4252)

* [HUDI-2985] Shade jackson for hudi flink bundle jar (apache#4284)

* [HUDI-2906] Add a repair util to clean up dangling data and log files (apache#4278)

* [HUDI-2984] Implement #close for AbstractTableFileSystemView (apache#4285)

* [HUDI-2946] Upgrade maven plugins to be compatible with higher Java versions (apache#4232)

Co-authored-by: Wenning Ding <wenningd@amazon.com>

* [HUDI-2938] Metadata table util to get latest file slices for reader/writers (apache#4218)

* [HUDI-2990] Sync to HMS when deleting partitions (apache#4291)

* [HUDI-2994] Add judgement to existed partitionPath in the catch code block for HU… (apache#4294)

* [HUDI-2994] Add judgement to existed partition path in the catch code block for HUDI-2743

Co-authored-by: wangminchao <wangminchao@asinking.com>

* [HUDI-2996] Flink streaming reader 'skip_compaction' option does not work (apache#4304)

close apache#4304

* [HUDI-2997] Skip the corrupt meta file for pending rollback action (apache#4296)

* [HUDI-2995] Enabling metadata table by default (apache#4295)

- Enabling metadata table by default

* [HUDI-3022] Fix NPE for isDropPartition method (apache#4319)

* [HUDI-3022] Fix NPE for isDropPartition method

* [HUDI-3024] Add explicit write handler for flink (apache#4329)

Co-authored-by: wangminchao <wangminchao@asinking.com>

* [HUDI-3025] Add additional wait time for namenode availability during IT tests initialization (apache#4328)

- Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-3028] Use blob storage to speed up CI downloads (apache#4331)

Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-2998] claiming rfc number for consistent hashing index (apache#4303)

Co-authored-by: xiaoyuwei <xiaoyuwei.yw@alibaba-inc.com>

* [HUDI-3015] Implement #reset and #sync for metadata filesystem view (apache#4307)

* [Minor] Catch and ignore all the exceptions in quietDeleteMarkerDir (apache#4301)

Co-authored-by: yuezhang <yuezhang@freewheel.tv>

* [HUDI-3001] Clean up the marker directory when finish bootstrap operation. (apache#4298)

* [HUDI-3043] Revert async cleaner leak commit to unblock CI failure (apache#4343)

* Revert "[HUDI-2959] Fix the thread leak of cleaning service (apache#4252)"
Reverting to unblock CI failure for now. will revisit this with the right fix

* [HUDI-3037] Add back remote view storage config for flink (apache#4338)

* [HUDI-3046] Claim RFC number for RFC for Compaction / Clustering Service (apache#4347)

Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>

* [HUDI-2958] Automatically set spark.sql.parquet.writelegacyformat, when using bulkinsert to insert data which contains decimalType (apache#4253)

* [HUDI-3043] Adding some test fixes to continuous mode multi writer tests (apache#4356)

* [HUDI-2962] InProcess lock provider to guard single writer process with async table operations (apache#4259)

 - Adding Local JVM process based lock provider implementation

 - This local lock provider can be used by a single writer process with async
   table operations to guard the metadata table against concurrent updates.

* [HUDI-3043] De-coupling multi writer tests (apache#4362)

* [HUDI-3029]  Transaction manager: avoid deadlock when doing begin and end transactions (apache#4363)

* [HUDI-3029] Transaction manager: avoid deadlock when doing begin and end transactions

 - Transaction manager has begin and end transactions as synchronized methods.
   Based on the lock provider implementation, this can lead to a deadlock
   situation when the underlying lock() calls are blocking or have a long timeout.

 - Fixing transaction manager begin and end transactions to not get to deadlock
   and to not assume anything on the lock provider implementation.

* [HUDI-3029]  Transaction manager: avoid deadlock when doing begin and end transactions (apache#4373)

* [HUDI-3064] Fixing a bug in TransactionManager and FileSystemTestLock (apache#4372)

* [HUDI-3054] Fixing default lock configs for FileSystemBasedLock and fixing a flaky test (apache#4374)

* [MINOR] Azure CI IT tasks clean up (apache#4337)

* [HUDI-3052] Fix flaky testJsonKafkaSourceResetStrategy (apache#4381)

* [minor] fix NetworkUtils#getHostname (apache#4355)

* [HUDI-2970] Adding tests for archival of replace commit actions (apache#4268)

* [HUDI-3064][HUDI-3054] FileSystemBasedLockProviderTestClass tryLock fix and TestHoodieClientMultiWriter test fixes (apache#4384)

 - Made FileSystemBasedLockProviderTestClass thread safe and fixed the
   tryLock retry logic.

 - Made TestHoodieClientMultiWriter. testHoodieClientBasicMultiWriter
   deterministic in verifying the HoodieWriteConflictException.

* remove unused import (apache#4349)

* [MINOR] Remove unused method in HoodieActiveTimeline (apache#4401)

* [MINOR] Increasing CI timeout to 90 mins (apache#4407)

* [HUDI-3070] Add rerunFailingTestsCount for flakly testes (apache#4398)



Co-authored-by: yuezhang <yuezhang@freewheel.tv>

* [HUDI-2970] Add test for archiving replace commit (apache#4345)

* [HUDI-3008] Fixing HoodieFileIndex partition column parsing for nested fields

* [HUDI-3027] Update hudi-examples README.md (apache#4330)

* [HUDI-3032] Do not clean the log files right after compaction for metadata table (apache#4336)

* [HUDI-2547] Schedule Flink compaction in service (apache#4254)

Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>

* [HUDI-3011] Adding ability to read entire data with HoodieIncrSource with empty checkpoint (apache#4334)

* Adding ability to read entire data with HoodieIncrSource with empty checkpoint

* Addressing comments

* [HUDI-3060] drop table for spark sql (apache#4364)

* [MINOR] Fix DedupeSparkJob typo (apache#4418)

* [HUDI-3014] Add table option to set utc timezone (apache#4306)

* [MINOR] Remove unused method in HoodieActiveTimeline (apache#4435)

* [HUDI-3101] Excluding compaction instants from pending rollback info (apache#4443)

* [HUDI-3102] Do not store rollback plan in inflight instant (apache#4445)

* [HUDI-3099] Purge drop partition for spark sql (apache#4436)

* [HUDI-2374] Fixing AvroDFSSource does not use the overridden schema to deserialize Avro binaries (apache#4353)

* [HUDI-3093] fix spark-sql query table that write with TimestampBasedKeyGenerator (apache#4416)

* [HUDI-3106] Fix HiveSyncTool not sync schema (apache#4452)

* [HUDI-2811] Support Spark 3.2 (apache#4270)

* Fixing dynamoDbLockConfig required prop check (apache#4422)

* [HUDI-2983] Remove Log4j2 transitive dependencies (apache#4281)

Co-authored-by: Danny Chan <yuzhao.cyz@gmail.com>
Co-authored-by: Genmao Yu <hustyugm@gmail.com>
Co-authored-by: dylonyu <dylonyu@tencent.com>
Co-authored-by: manasaks <manasas2004@gmail.com>
Co-authored-by: Shawy Geng <gengxiaoyu1996@gmail.com>
Co-authored-by: yuzhaojing <32435329+yuzhaojing@users.noreply.github.com>
Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>
Co-authored-by: Sivabalan Narayanan <sivabala@uber.com>
Co-authored-by: Prashant Wason <pwason@uber.com>
Co-authored-by: davehagman <73851873+davehagman@users.noreply.github.com>
Co-authored-by: Sagar Sumit <sagarsumit09@gmail.com>
Co-authored-by: xiarixiaoyao <mengtao0326@qq.com>
Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>
Co-authored-by: Yann Byron <biyan900116@gmail.com>
Co-authored-by: Manoj Govindassamy <manoj.govindassamy@gmail.com>
Co-authored-by: dufeng1010 <dufeng1010@126.com>
Co-authored-by: 闫杜峰 <yandufeng@sinochem.com>
Co-authored-by: zhangyue19921010 <69956021+zhangyue19921010@users.noreply.github.com>
Co-authored-by: yuezhang <yuezhang@freewheel.tv>
Co-authored-by: Alexey Kudinkin <alexey@infinilake.com>
Co-authored-by: 0x574C <761604382@qq.com>
Co-authored-by: 董可伦 <dongkelun01@inspur.com>
Co-authored-by: 卢波 <26039470+lubo212@users.noreply.github.com>
Co-authored-by: lubo <bollu@tencent.com>
Co-authored-by: wenningd <wenningding95@gmail.com>
Co-authored-by: Wenning Ding <wenningd@amazon.com>
Co-authored-by: Udit Mehrotra <udit.mehrotra90@gmail.com>
Co-authored-by: Ron <ldliulsy@163.com>
Co-authored-by: Harsha Teja Kanna <h7kanna@users.noreply.github.com>
Co-authored-by: vinoth chandar <vinothchandar@users.noreply.github.com>
Co-authored-by: rmahindra123 <76502047+rmahindra123@users.noreply.github.com>
Co-authored-by: leesf <490081539@qq.com>
Co-authored-by: Nate Radtke <5672085+nateradtke@users.noreply.github.com>
Co-authored-by: Raymond Xu <2701446+xushiyan@users.noreply.github.com>
Co-authored-by: Y Ethan Guo <ethan.guoyihua@gmail.com>
Co-authored-by: Jimmy.Zhou <zhouyongjin@inspur.com>
Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>
Co-authored-by: garyli1019 <yanjia.gary.li@gmail.com>
Co-authored-by: satishm <84978833+data-storyteller@users.noreply.github.com>
Co-authored-by: mincwang <33626973+mincwang@users.noreply.github.com>
Co-authored-by: wangminchao <wangminchao@asinking.com>
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
Co-authored-by: huleilei <584620569@qq.com>
Co-authored-by: xuzifu666 <1206332514@qq.com>
Co-authored-by: vortual <1039505040@qq.com>
Co-authored-by: zzzhy <candle_1667@163.com>
Co-authored-by: ForwardXu <forwardxu315@gmail.com>
Co-authored-by: 冯健 <fengjian428@gmail.com>
Co-authored-by: jian.feng <fengjian428@gmial.com>
Co-authored-by: Vinoth Govindarajan <vinothg@uber.com>
Co-authored-by: guanziyue <30882822+guanziyue@users.noreply.github.com>
Co-authored-by: guanziyue.gzy <guanziyue.gzy@bytedance.com>
Co-authored-by: RexAn <anh131@126.com>
Co-authored-by: arunkc <arunkc91@gmail.com>
Co-authored-by: Yuwei XIAO <ywxiaozero@gmail.com>
Co-authored-by: Fugle666 <30539368+Fugle666@users.noreply.github.com>
Co-authored-by: xiaoyuwei <xiaoyuwei.yw@alibaba-inc.com>
Co-authored-by: xuzifu666 <xuyu@zepp.com>
Co-authored-by: harshal patil <harshal.j.patil@gmail.com>
Co-authored-by: Aimiyoo <aimiyooo@gmail.com>
@vinishjail97 vinishjail97 mentioned this pull request Jan 24, 2022
val path = getTableLocation(targetTable, sparkSession)
val conf = sparkSession.sessionState.newHadoopConf()
val metaClient = HoodieTableMetaClient.builder()
.setBasePath(path)
.setConf(conf)
.build()
val tableConfig = metaClient.getTableConfig
val primaryColumns = HoodieOptionConfig.getPrimaryColumns(targetTable.storage.properties)

Contributor

@YannByron @xushiyan @danny0405 @leesf: Do we have context around why the case sensitivity was changed here?
Case sensitivity looks broken with spark-sql MERGE INTO as of now.
We are looking to work on a fix, but wanted to make sure we don't unintentionally break something else if this piece of code was intentionally written this way.

Contributor

cc @boneanxs for taking a look if you have time~

Contributor

As far as I know, Hudi currently uses spark.sql.caseSensitive to decide case sensitivity during the analyze stage, and by default it is false, so it might be reasonable to respect that configuration here as well.
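
A rough sketch of what respecting spark.sql.caseSensitive could look like at this call site (variable names follow the snippet above; this is only an assumption, not the actual fix):

// Resolve the configured primary key columns against the target table schema
// using Spark's resolver, which honors spark.sql.caseSensitive.
val resolver = sparkSession.sessionState.conf.resolver
val resolvedPrimaryColumns = primaryColumns.map { key =>
  targetTable.schema.fieldNames
    .find(fieldName => resolver(fieldName, key))
    .getOrElse(key)
}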
