
[HUDI-2814] Addressing issues w/ Z-order Layout Optimization #4060

Merged
merged 59 commits into from Nov 26, 2021

Conversation

alexeykudinkin
Contributor

@alexeykudinkin alexeykudinkin commented Nov 21, 2021


What is the purpose of the pull request

Addressing issues discovered during our extensive preparatory testing (summarized in HUDI-2814)

Brief change log

NOTE: This change is stacked on top of #4026 and currently contains all of its commits as well. As such, I'm marking this PR as WIP so we can advance incrementally, merging the aforementioned PR first before moving on to this one.

  • [critical] Fixed index new/original table merging sequence to always prefer values from the new index-table
  • [critical] Fixed DataSkippingUtils to (conservatively) interrupt pruning in case data filter contains non-indexed column references
  • [critical] Fixed Z-index to properly handle changes of the list of clustered columns
  • [critical] Fixed incorrect data-type conversions (e.g., Decimal to Double)
  • [critical] Fixed race-condition in Parquet's DateStringifier class sharing a SimpleDateFormat object, which is NOT thread-safe (and has been a cause of test flakiness)
  • [major] Properly handle exceptions originating during pruning in HoodieFileIndex
  • [major] Added tests for most of the Z-indexing critical flows (Z-index table creation, merging, (basic) data-skipping)

Verify this pull request

This change added tests and can be verified as follows:

  • Added functional tests for critical aspects of the Z-indexing workflow (index table creation, merging, (basic) data-skipping)

Committer checklist

  • Has a corresponding JIRA in PR title & commit

  • Commit message is descriptive of the change

  • CI is green

  • Necessary doc changes done or have another open PR

  • For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

// indexed data pruning becomes (largely) impossible
if (!checkColIsIndexed(colName, indexSchema)) {
logDebug(s"Filtering expression contains column that is not indexed ($colName)")
throw new AnalysisException(s"Filtering expression contains column that is not indexed ($colName)")
Contributor

This place should not throw an exception. For a filter, if the column it refers to does not exist in the index, just rewrite that filter to Literal.TrueLiteral. See the original logic of rewriteCondition; that way no data is lost during the index query.

Contributor Author

Yeah, this change is not by accident: re-writing the condition as TrueLiteral actually returns incorrect results -- let's consider the following scenario:

We have a Table T with Z-ordered columns A, B

Now, if we query SELECT * FROM T WHERE A = ... OR B = ... OR C = ... this will return incorrect results b/c C is not indexed and we'd do data-pruning based on A and B only.

We can't do data-pruning if the query references columns that aren't indexed*.

*We actually can in some cases, but for the sake of making sure we do things correctly, I'm currently (conservatively) removing any such optimizations
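
To make the conservative rule concrete, here is a minimal, self-contained sketch (not the actual DataSkippingUtils/HoodieFileIndex code; the expression ADT and helper names are hypothetical): if the filter references any column outside the index schema, pruning is skipped and every file stays a candidate.

// Minimal sketch of the conservative fallback described above; names are illustrative.
sealed trait Expr
case class EqualTo(col: String, value: Any) extends Expr
case class Or(left: Expr, right: Expr) extends Expr
case class And(left: Expr, right: Expr) extends Expr

object ConservativePruningSketch {
  // Columns referenced by a filter expression.
  def referencedColumns(e: Expr): Set[String] = e match {
    case EqualTo(c, _) => Set(c)
    case Or(l, r)      => referencedColumns(l) ++ referencedColumns(r)
    case And(l, r)     => referencedColumns(l) ++ referencedColumns(r)
  }

  // If any referenced column is not indexed, skip pruning and scan all files.
  def candidateFiles(filter: Expr,
                     indexedColumns: Set[String],
                     allFiles: Seq[String],
                     pruneWithIndex: Expr => Seq[String]): Seq[String] =
    if (referencedColumns(filter).subsetOf(indexedColumns)) pruneWithIndex(filter)
    else allFiles

  def main(args: Array[String]): Unit = {
    val filter = Or(EqualTo("A", 1), Or(EqualTo("B", 2), EqualTo("C", 3))) // C is not indexed
    val files  = Seq("F1", "F2", "F3")
    // Falls back to all files, since pruning on A/B alone could drop rows matching C = 3.
    println(candidateFiles(filter, Set("A", "B"), files, _ => Seq("F1"))) // List(F1, F2, F3)
  }
}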

Contributor

Sorry, I may not agree with you here. For select * from T where A = ... or B = ... or C = ..., those filters will be rewritten to A = xx or B = xx or Literal.TrueLiteral. Because of that Literal.TrueLiteral we can push those filters to the index table directly; this will not cause any problems.

Contributor Author

It will cause problems: in essence, we are relying on the index to tell us the trimmed set of files that we need to scan, right?

Now, if the query contains a filter on a column that is not indexed, we can't use the index anymore, b/c in the current flow we'd rely on the set of files provided by the index as the only files we will be scanning, therefore missing the rows that match the criteria C = ...

Contributor

Sorry, I still cannot understand why we would miss the rows that match the criteria C = ... Could you give me an example?

Contributor Author

Sure: let's imagine that we have three files F1, F2, F3, all of which are indexed on clustered columns A and B.

Now, imagine that I'm executing a query with WHERE A = ... OR B = ... OR C = ...: if F3 does not satisfy the A = ... OR B = ... condition, we will skip it, even though it could contain rows satisfying the C = ... condition.

Contributor

No, WHERE A = ... OR B = ... OR C = ... will be converted to the A condition OR the B condition OR Literal.TrueLiteral.
Since this is an OR condition, Literal.TrueLiteral subsumes the other filter conditions: F3 will also be included, and no files will be missed.
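
For illustration, a tiny self-contained sketch of that rewrite (the stats, column names, and helper are made up, not the actual rewriteCondition code): with the non-indexed C-leg rewritten to a literal true, the OR evaluates to true for every file, so F3 is never skipped.

// Sketch of "A = a OR B = b OR C = c" translated for the index table, with the
// C-leg (non-indexed column) rewritten to Literal.TrueLiteral; illustrative names only.
object TrueLiteralRewriteSketch {
  case class ColStats(min: Int, max: Int)
  case class FileStats(name: String, stats: Map[String, ColStats])

  // "col = v" translated against per-file min/max stats.
  def mayContain(file: FileStats, col: String, v: Int): Boolean =
    file.stats.get(col).exists(s => s.min <= v && v <= s.max)

  // Translated index predicate: A-leg OR B-leg OR TrueLiteral (for the non-indexed C).
  def candidate(file: FileStats, a: Int, b: Int): Boolean =
    mayContain(file, "A", a) || mayContain(file, "B", b) || true

  def main(args: Array[String]): Unit = {
    val f3 = FileStats("F3", Map("A" -> ColStats(10, 20), "B" -> ColStats(10, 20)))
    // Even though F3's A/B ranges cannot match a = 1 or b = 1, the TrueLiteral leg
    // keeps it as a candidate, so rows matching C = ... are not lost.
    println(candidate(f3, a = 1, b = 1)) // true
  }
}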

Member

My understanding/expectation is also leaning more towards what @xiarixiaoyao mentions.
@alexeykudinkin are you saying that's not how we convert the query predicate to the query on the index table?

Contributor Author

@xiarixiaoyao I see what you were referring to now; it makes sense. Let me stress-test these use-cases in the tests I've added for Data Skipping and reinstate these cases.

Contributor Author

Addressed

reWriteCondition(colName, Or(And(LessThanOrEqual(minValue(colName), v), GreaterThanOrEqual(maxValue(colName), v)) ,
Or(StartsWith(minValue(colName), v), StartsWith(maxValue(colName), v))))
// query filter "colA not in (a, b)" convert it to " (not( colA_minValue = a and colA_maxValue = a)) and (not( colA_minValue = b and colA_maxValue = b)) " for index table
val colName = getTargetColName(attribute, indexSchema)
Contributor

The original logic is: // query filter "colA like xxx" is converted to "(colA_minValue <= xxx and colA_maxValue >= xxx) or (colA_min starts with xxx or colA_max starts with xxx)" for the index table.

Why remove or (colA_min starts with xxx or colA_max starts with xxx)? It should work. Think of the following query:
select count(*) from conn_optimize where src_ip like '157%' and dst_ip like '216.%'
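
A small sketch of the original translation rule quoted above (simplified types, not the actual Hudi helpers): the index-table predicate for colA like 'p%' combines the min/max range check with the two StartsWith legs.

// Sketch of translating "colA like 'p%'" into an index-table lookup over the
// column's min/max string values; field and method names are illustrative.
object LikeTranslationSketch {
  case class ColRange(minValue: String, maxValue: String)

  def likeCandidate(r: ColRange, prefix: String): Boolean =
    (r.minValue <= prefix && r.maxValue >= prefix) ||
      r.minValue.startsWith(prefix) || r.maxValue.startsWith(prefix)

  def main(args: Array[String]): Unit = {
    // e.g. src_ip like '157%'
    println(likeCandidate(ColRange("156.10.0.1", "158.2.3.4"), "157")) // true: file may contain matches
    println(likeCandidate(ColRange("203.0.0.1", "216.58.0.0"), "157")) // false: file can be skipped
  }
}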

Contributor Author

Removed the second leg b/c it's included in the first one -- if column A contains string S that starts w/ prefix P, that entails that min(A) <= P <= max(A)

Contributor

good job

Not(colContainsValuesEqualToLiterals(colName, list))
// Filter "colA != b"
// Translates to "colA_minValue > b OR colA_maxValue < b" (which is an inversion of expr for "colA = b") for index lookup
// NOTE: This is an inversion of `colA = b` expr
Contributor

A NOT-type filter cannot be translated directly this way.
Suppose we have a filter colA != 3,
and file1 contains the three values 1, 2, 4, so colA_minValue = 1 and colA_maxValue = 4.
The translated filter 1 > 3 or 4 < 3 does not hold, so file1 will be excluded -- this is wrong.

NOT operations cannot simply be inverted like this, or data will be lost.
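
The counter-example above as a tiny runnable sketch (values taken from the comment; the predicate shape mirrors the inverted translation it describes):

// Sketch of why naively inverting the "colA = b" stats predicate breaks "colA != 3":
// file1 holds {1, 2, 4}, so colA_minValue = 1 and colA_maxValue = 4.
object NotEqualInversionSketch {
  def main(args: Array[String]): Unit = {
    val (minValue, maxValue) = (1, 4)
    val literal = 3

    // Inverted translation: "colA_minValue > 3 OR colA_maxValue < 3"
    val naiveInversion = minValue > literal || maxValue < literal
    println(naiveInversion) // false -> file1 gets pruned, although all of its rows satisfy colA != 3
  }
}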

Contributor Author

Great catch!
You're right -- we can't simply invert b/c we're looking only at "contained" metrics which we can't invert

Contributor Author

Will address all negations

Contributor

The NOT operation is difficult to convert directly; looking forward to your work.

@alexeykudinkin alexeykudinkin changed the title [HUDI-2814] Addressing issues w/ Z-order Layout Optimization [HUDI-2814][WIP] Addressing issues w/ Z-order Layout Optimization Nov 22, 2021
@vinothchandar vinothchandar self-assigned this Nov 23, 2021
@alexeykudinkin alexeykudinkin changed the title [HUDI-2814][WIP] Addressing issues w/ Z-order Layout Optimization [HUDI-2814][Part 1] Addressing issues w/ Z-order Layout Optimization Nov 24, 2021
@vinothchandar
Member

Reviewing this. But the collaboration here is freakin awesome :) @alexeykudinkin @xiarixiaoyao !

@alexeykudinkin alexeykudinkin changed the title [HUDI-2814][Part 1] Addressing issues w/ Z-order Layout Optimization [HUDI-2814] Addressing issues w/ Z-order Layout Optimization Nov 25, 2021
Member

@vinothchandar vinothchandar left a comment

Skimmed the changes. Can we avoid checking in those parquet files?

* @param context instance of {@link HoodieEngineContext}
* @param instantTime instant of the carried operation triggering the update
*/
public abstract void updateMetadataIndexes(
Member

rename: updateColumnStats? For now, this is not connected to the metadata table at all, and even then it would only update one partition. So it's kind of misleading.

Contributor Author

This method actually calls into updateColumnStats in #4106. Will inline it there

if (config.isDataSkippingEnabled() && config.isLayoutOptimizationEnabled() && !config.getClusteringSortColumns().isEmpty()) {
table.updateStatistics(context, writeStats, clusteringCommitTime, true);
// Update outstanding metadata indexes
if (config.isLayoutOptimizationEnabled()
Member

I had a similar comment before. This is a good change.

Contributor

If you really want to modify it this way, please remove the LAYOUT_OPTIMIZE_DATA_SKIPPING_ENABLE configuration item.
That config was introduced out of fear that data skipping would be unstable; let's remove it.

By the way, I'd suggest removing this judgment condition entirely. That way the cluster + sort operation can also generate indexes for queries.

Member

For now, I think this is okay to be decoupled? I see what you are saying though. I will try adding a new config to just create data skipping indexes decoupled from space curves or linear sorting (time is the only issue)

@@ -275,7 +275,9 @@
<module name="EmptyStatement" />

<!-- Checks for Java Docs. -->
<module name="JavadocStyle"/>
<module name="JavadocStyle">
Member

I prefer these in separate PRs

Contributor Author

You mean a separate PR just for Checkstyle?
This check's false-positives were very annoying, which is what made me switch it off.

Member

Yes. Checkstyle changes are something that needs separate attention. It can be a MINOR PR, but separate nonetheless.

@alexeykudinkin
Contributor Author

@vinothchandar I need to check in fixtures to validate that we're building the index correctly. What would you recommend, given that checking in Parquet is not recommended?

@vinothchandar
Member

All of our tests generate parquet files on the fly.

@alexeykudinkin
Contributor Author

I don't think this is a particularly well-suited approach in this case: not having fixed fixture content will lead to every test run having different outputs, making it a moving target whenever you're debugging, which is what I would like to avoid.

Member

@vinothchandar vinothchandar left a comment

I am going to try and simplify this.

.map(blocks -> getColumnRangeInFile(blocks))
.collect(Collectors.toList()));
// Collect stats from all individual Parquet blocks
Map<String, List<HoodieColumnRangeMetadata<Comparable>>> columnToStatsListMap =
Member

We need to align on this style of formatting at some point. It increases the line count by a lot, and with modern wide monitors it does not make good use of screen width.

Member

I'd prefer if we kept the formatting the same, especially when changing existing code. It just makes review so much harder.

Contributor Author

Agreed that it bloats diffs unnecessarily, but I actually find it hard to read when it's all threaded onto a single line -- it's much harder to understand the nesting and scoping of things. With stacking it's crystal clear what the scope is and where things belong. We also need to keep in mind that not everyone is using wide monitors (I myself use a laptop).

Member

Some of these can be subjective and I still think the threading adds too many lines. But this has been agreed upon in the project before. So I suggest sticking to what is done in the project in the PRs and decouple these discussions on the mailing list to drive consensus, before changing course.

Changing parts of the code you touch in different ways increases maintenance overhead and makes for an inconsistent experience.

Member

This tweak kept most of what you wanted but is 6 lines shorter.

    Map<String, List<HoodieColumnRangeMetadata<Comparable>>> columnToStatsListMap = metadata.getBlocks().stream().sequential()
            .flatMap(blockMetaData -> blockMetaData.getColumns().stream()
                    .filter(f -> cols.contains(f.getPath().toDotString()))
                    .map(columnChunkMetaData ->
                        new HoodieColumnRangeMetadata<Comparable>(
                            parquetFilePath.getName(),
                            columnChunkMetaData.getPath().toDotString(),
                            convertToNativeJavaType(
                                columnChunkMetaData.getPrimitiveType(),
                                columnChunkMetaData.getStatistics().genericGetMin()),
                            convertToNativeJavaType(
                                columnChunkMetaData.getPrimitiveType(),
                                columnChunkMetaData.getStatistics().genericGetMax()),
                            columnChunkMetaData.getStatistics().getNumNulls(),
                            columnChunkMetaData.getPrimitiveType().stringifier()))
            ).collect(Collectors.groupingBy(HoodieColumnRangeMetadata::getColumnName));

// Combine those into file-level statistics
// NOTE: Inlining this var makes javac (1.8) upset (due to its inability to infer
// expression type correctly)
Stream<HoodieColumnRangeMetadata<Comparable>> stream = columnToStatsListMap.values()
Member

why the variable?

Contributor Author

Tried to clarify it with a comment: some weird issues w/ javac not being able to deduce types appropriately, resulting in compilation failures if you inline -- so I either had to cast or introduce a variable.

}

// there are multiple blocks. Compute min(block_mins) and max(block_maxs)
return blockRanges.stream()
Member

Another example: this line is probably better left alone. It adds more review overhead.

minValue = range2.getMinValue();
minValueAsString = range2.getMinValueAsString();
private <T extends Comparable<T>> HoodieColumnRangeMetadata<T> combineRanges(
HoodieColumnRangeMetadata<T> one,
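
For context, a minimal sketch of what such a range-combining step does, per the "Compute min(block_mins) and max(block_maxs)" comment above (a simplified stand-in for HoodieColumnRangeMetadata; summing the null counts here is an illustrative assumption):

// Sketch: combine two per-block column ranges into one file-level range by taking
// min(mins) and max(maxs); the null-count summation is an assumption for illustration.
object CombineRangesSketch {
  case class ColumnRange[T](columnName: String, minValue: T, maxValue: T, numNulls: Long)

  def combineRanges[T](one: ColumnRange[T], another: ColumnRange[T])
                      (implicit ord: Ordering[T]): ColumnRange[T] =
    ColumnRange(
      one.columnName,
      ord.min(one.minValue, another.minValue),
      ord.max(one.maxValue, another.maxValue),
      one.numNulls + another.numNulls)

  def main(args: Array[String]): Unit = {
    val block1 = ColumnRange("colA", 1, 4, 0L)
    val block2 = ColumnRange("colA", 2, 9, 3L)
    println(combineRanges(block1, block2)) // ColumnRange(colA,1,9,3)
  }
}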
Member

why the renaming?

Contributor Author

Have strong preferences regarding numerical suffixes in the variable names

Member

This is kind of minor, but please remember to keep larger renames in separate PRs. Again, subjective preferences need broader agreement.

}

private void updateOptimizeOperationStatistics(HoodieEngineContext context, List<HoodieWriteStat> stats, String instantTime) {
String cols = config.getClusteringSortColumns();
private void updateZIndex(
Member

Wondering if this naming should be tied to zindex. It's probably good to make even the /zindex index path name more generic, given we have Hilbert curves.

// Fetch table schema to appropriately construct Z-index schema
Schema tableWriteSchema =
HoodieAvroUtils.createHoodieWriteSchema(
new TableSchemaResolver(metaClient).getTableAvroSchemaWithoutMetadataFields()
Member

indexing metadata fields is also useful actually.


private static final Logger LOG = LogManager.getLogger(ZOrderingIndexHelper.class);

private static final String SPARK_JOB_DESCRIPTION = "spark.job.description";
Member

This got moved and changed in the same commit, which makes it very hard for the reviewer to understand the delta changes.

@vinothchandar
Member

vinothchandar commented Nov 26, 2021

I don't think this is a particularly well-suited approach in this case: not having fixed content of the fixtures will lead to every test run having different outputs

Checking in parquet files is a non-starter. We need to figure something out

@vinothchandar
Member

@xiarixiaoyao do you have any major concerns with this PR? If not, I'll start revising it.

@xiarixiaoyao
Contributor

@vinothchandar @alexeykudinkin
LGTM, just a little hint: please restore some UTs for TestZOrderLayoutOptimization.testZOrderingLayoutClustering.
We need to check whether the following query results are correct in testZOrderingLayoutClustering (see the sketch below):
case 1: use sort columns and unsorted columns together as filters in the query.
case 2: use only unsorted columns as filters in the query.
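
A quick sketch of the two query shapes described above (the table and column names are hypothetical: table t is assumed to be Z-ordered on columns a and b, while c is not a sort column):

// Hypothetical query shapes for the two data-skipping test cases described above;
// table `t` is assumed Z-ordered on (a, b), while c is not part of the sort columns.
object DataSkippingQueryCases {
  // case 1: sort columns and an unsorted column used together in the filter
  val case1 = "SELECT * FROM t WHERE a = 1 AND c = 'x'"
  // case 2: only an unsorted column used in the filter
  val case2 = "SELECT * FROM t WHERE c = 'x'"

  def main(args: Array[String]): Unit = Seq(case1, case2).foreach(println)
}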

@alexeykudinkin
Contributor Author

Checking in parquet files is a non-starter. We need to figure something out

@vinothchandar can you please elaborate on what the issue is here? We could store fixtures as JSON and then convert them to Parquet, but that seems like overhead which also carries some hidden pitfalls (schema recovery when writing to JSON is lossy compared to Parquet).

Keep in mind that there's #4106 stacked on top that renames the Z-index lingo into neutral Column-Stats Index terms.

@alexeykudinkin
Contributor Author

@xiarixiaoyao these cases are covered by TestDataSkippingUtils now

@vinothchandar
Member

can you please elaborate on what's the issue here?

Generally not a fan of checking in binary files, even as test resources. It bloats test jars and it's not an approach we can sustain.

@xiarixiaoyao
Contributor

How about generating these test files dynamically using Spark? @alexeykudinkin

@vinothchandar
Member

https://github.com/vinothchandar/hudi/pull/new/pull-4060
I have stashed the current PR here as is to revive the parquet files later.

Moved Z-index helper under `hudi.index.zorder` package
Alexey Kudinkin and others added 22 commits November 26, 2021 07:57
… Spark is actually able to perfectly restore schema (given Parquet was previously written by Spark as well)
…`SimpleDataFormat` object which is inherently not thread-safe
Tidying up
…ushing NOT operator down into inner expressions for appropriate handling
…containing expression as `TrueLiteral` instead
@hudi-bot

CI report:

Bot commands: @hudi-bot supports the following commands:
  • @hudi-bot run azure re-run the last Azure build

@vinothchandar vinothchandar merged commit 5755ff2 into apache:master Nov 26, 2021
aditiwari01 added a commit to aditiwari01/hudi that referenced this pull request Dec 29, 2021
* [HUDI-2702] Set up keygen class explicit for write config for flink table upgrade (apache#3931)

* [HUDI-313] bugfix: NPE when select count start from a  realtime table with Tez(apache#3630)

Co-authored-by: dylonyu <dylonyu@tencent.com>

* HUDI-1827 : Add ORC support in Bootstrap Op (apache#3457)

 Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-2679] Fix the TestMergeIntoLogOnlyTable typo. (apache#3918)

* [HUDI-2709] Add more options when initializing table (apache#3939)

* [HUDI-2698] Remove the table source options validation (apache#3940)

Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>

* [HUDI-2595] Fixing metadata table updates such that only regular writes from data table can trigger table services in metadata table (apache#3900)

* [HUDI-2715] The BitCaskDiskMap iterator may cause memory leak (apache#3951)

* [HUDI-2591] Bootstrap metadata table only if upgrade / downgrade is not required. (apache#3836)

* [HUDI-2579] Make deltastreamer checkpoint state merging more explicit (apache#3820)

 Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-1877] Support records staying in same fileId after clustering (apache#3833)

* [HUDI-1877] Support records staying in same fileId after clustering

Add plan strategy

* Ensure same filegroup id and refactor based on comments

* [HUDI-2297] Estimate available memory size for spillable map accurately. (apache#3455)

* [HUDI-2086]redo the logical of mor_incremental_view for hive (apache#3203)

* [HUDI-2442] Change default values for certin clustering configs (apache#3875)

* [HUDI-2730] Move EventTimeAvroPayload into hudi-common module (apache#3959)

Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>

* [HUDI-2685] Support scheduling online compaction plan when there are no commit data (apache#3928)

Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>

* [HUDI-2634] Improved the metadata table bootstrap for very large tables. (apache#3873)

* [HUDI-2634] Improved the metadata table bootstrap for very large tables.

Following improvements are implemented:
1. Memory overhead reduction:
  - Existing code caches FileStatus for each file in memory.
  - Created a new class DirectoryInfo which is used to cache a director's file list with parts of the FileStatus (only filename and file len). This reduces the memory requirements.

2. Improved parallelism:
  - Existing code collects all the listing to the Driver and then creates HoodieRecord on the Driver.
  - This takes a long time for large tables (11million HoodieRecords to be created)
  - Created a new function in SparkRDDWriteClient specifically for bootstrap commit. In it, the HoodieRecord creation is parallelized across executors so it completes fast.

3. Fixed setting to limit the number of parallel listings:
  - Existing code had a bug wherein 1500 executors were hardcoded to perform listing. This leads to exception due to limit in the spark's result memory.
  - Corrected the use of the config.

Result:
Dataset has 1299 partitions and 12Million files.
file listing time=1.5mins
HoodieRecord creation time=13seconds
deltacommit duration=2.6mins

Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-2495] Resolve inconsistent key generation for timestamp types  by GenericRecord and Row (apache#3944)

* [HUDI-2738] Remove the bucketAssignFunction useless context (apache#3972)

Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>

* [HUDI-2746] Do not bootstrap for flink insert overwrite (apache#3980)

* [HUDI-2151] Part1 Setting default parallelism to 200 for some of write configs (apache#3948)

* [HUDI-2718] ExternalSpillableMap payload size re-estimation throws ArithmeticException (apache#3955)

- ExternalSpillableMap does the payload/value size estimation on the first put to
  determine when to spill over to disk map. The payload size re-estimation also
  happens after a minimum threshold of puts. This size re-estimation goes my the
  current in-memory map size for calculating average payload size and does attempts
  divide by zero operation when the map is size is empty. Avoiding the
  ArithmeticException during the payload size re-estimate by checking the map size
  upfront.

* [HUDI-2741] Fixing instantiating metadata table config in HoodieFileIndex (apache#3974)

* [HUDI-2697] Minor changes about hbase index config. (apache#3927)

* [HUDI-2472] Enabling metadata table in TestHoodieIndex and TestMergeOnReadRollbackActionExecutor (apache#3978)

- With rollback after first commit support added to metadata table, these test cases are safe to have metadata table turned on.

* [HUDI-2756] Fix flink parquet writer decimal type conversion (apache#3988)

* [HUDI-2706] refactor spark-sql to make consistent with DataFrame api (apache#3936)

* [HUDI-2589] Claiming RFC-37 for Metadata based bloom index feature. (apache#3995)

* [HUDI-2758] remove redundant code in the hoodieRealtimeInputFormatUitls.getRealtimeSplits (apache#3994)

* [MINOR] Fix typo in IntervalTreeBasedGlobalIndexFileFilter (apache#3993)

Co-authored-by: 闫杜峰 <yandufeng@sinochem.com>

* [HUDI-2744] Fix parsing of metadadata table compaction timestamp when metrics are enabled (apache#3976)

* [HUDI-2683] Parallelize deleting archived hoodie commits (apache#3920)

Co-authored-by: yuezhang <yuezhang@freewheel.tv>

* [HUDI-2712] Fixing a bug with rollback of partially failed commit which has new partitions (apache#3947)

* [HUDI-2769] Fix StreamerUtil#medianInstantTime for very near instant time (apache#4005)

* [MINOR] Fixed checkstyle config to be based off Maven root-dir (requires Maven >=3.3.1 to work properly); (apache#4009)

Updated README

* [HUDI-2753] Ensure list based rollback strategy is used for restore (apache#3983)

* [HUDI-2151] Part3 Enabling marker based rollback as default rollback strategy (apache#3950)

* Enabling timeline server based markers

* Enabling timeline server based markers and marker based rollback

* Removing constraint that timeline server can be enabled only for hdfs

* Fixing tests

* Check --source-avro-schema-path  parameter (apache#3987)

Co-authored-by: 0x3E6 <dragon1996>

* [MINOR] Fix typo,'Hooide' corrected to 'Hoodie' (apache#4007)

* [MINOR] Add the Schema for GooseFS to StorageSchemes (apache#3982)

Co-authored-by: lubo <bollu@tencent.com>

* [HUDI-2314] Add support for DynamoDb based lock provider (apache#3486)

- Co-authored-by: Wenning Ding <wenningd@amazon.com>
- Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-2716] InLineFS support for S3FS logs (apache#3977)

* [HUDI-2734] Setting default metadata enable as false for Java (apache#4003)

* [HUDI-2789] Flink batch upsert for non partitioned table does not work (apache#4028)

* [HUDI-2790] Fix the changelog mode of HoodieTableSource (apache#4029)

* [HUDI-2362] Add external config file support (apache#3416)


Co-authored-by: Wenning Ding <wenningd@amazon.com>

* [HUDI-2641] Avoid deleting all inflight commits heartbeats while rolling back failed writes (apache#3956)

* [HUDI-2791] Allows duplicate files for metadata commit (apache#4033)

* [HUDI-2798] Fix flink query operation fields (apache#4041)

* [HUDI-2731] Make clustering work regardless of whether there are base… (apache#3970)

* [HUDI-2593] Virtual keys support for metadata table (apache#3968)

- Metadata table today has virtual keys disabled, thereby populating the metafields
  for each record written out and increasing the overall storage space used. Hereby
  adding virtual keys support for metadata table so that metafields are disabled
  for metadata table records.

- Adding a custom KeyGenerator for Metadata table so as to not rely on the
  default Base/SimpleKeyGenerators which currently look for record key
  and partition field set in the table config.

- AbstractHoodieLogRecordReader's version of processing next data block and
  createHoodieRecord() will be a generic version and making the derived class
  HoodieMetadataMergedLogRecordReader take care of the special creation of
  records from explictly passed in partition names.

* [HUDI-2472] Enabling metadata table for TestHoodieMergeOnReadTable and TestHoodieCompactor (apache#4023)

* [HUDI-2796] Metadata table support for Restore action to first commit (apache#4039)

 - Adding support for the metadata table to restore to first commit and
   take proper action for the bootstrap on subequent commits.

* [HUDI-2242] Add configuration inference logic for few options (apache#3359)


Co-authored-by: Wenning Ding <wenningd@amazon.com>

* Remove the aws packages from hudi flink bundle jar (apache#4050)

* [HUDI-2742] Added S3 object filter to support multiple S3EventsHoodieIncrSources single S3 meta table (apache#4025)

* [HUDI-2795] Add mechanism to safely update,delete and recover table properties (apache#4038)

* [HUDI-2795] Add mechanism to safely update,delete and recover table properties

  - Fail safe mechanism, that lets queries succeed off a backup file
  - Readers who are not upgraded to this version of code will just fail until recovery is done.
  - Added unit tests that exercises all these scenarios.
  - Adding CLI for recovery, updation to table command.
  - [Pending] Add some hash based verfication to ensure any rare partial writes for HDFS

* Fixing upgrade/downgrade infrastructure to use new updation method

* [MINOR] Claim RFC number for RFC for debezium source for deltastreamer (apache#4047)

* [MINOR] optimize in constructor of inputbatch class (apache#4040)

Co-authored-by: 闫杜峰 <yandufeng@sinochem.com>

* [HUDI-2813] Claim RFC number for RFC for spark datasource V2 Integration (apache#4059)

* [HUDI-2804] Add option to skip compaction instants for streaming read (apache#4051)

* [HUDI-2392] Make flink parquet reader compatible with decimal BINARY encoding (apache#4057)

* [HUDI-1932] Update Hive sync timestamp when change detected (apache#3053)

* Update Hive sync timestamp when change detected

Only update the last commit timestamp on the Hive table when the table schema
has changed or a partition is created/updated.

When using AWS Glue Data Catalog as the metastore for Hive this will ensure
that table versions are substantive (including schema and/or partition
changes). Prior to this change when a Hive sync is performed without schema
or partition changes the table in the Glue Data Catalog would have a new
version published with the only change being the timestamp property.

https://issues.apache.org/jira/browse/HUDI-1932

* add conditional sync flag

* fix testSyncWithoutDiffs

* fix HiveSyncConfig

Co-authored-by: Raymond Xu <2701446+xushiyan@users.noreply.github.com>

* [MINOR] Fix typos (apache#4053)

* [HUDI-2799] Fix the classloader of flink write task (apache#4042)

* [HUDI-1870] Add more Spark CI build tasks  (apache#4022)

* [HUDI-1870] Add more Spark CI build tasks

- build for spark3.0.x
- build for spark-shade-unbundle-avro
- fix build failures
  - delete unnecessary assertion for spark 3.0.x
  - use AvroConversionUtils#convertAvroSchemaToStructType instead of calling SchemaConverters#toSqlType directly to solve the compilation failures with spark-shade-unbundle-avro (apache#5)

Co-authored-by: Yann <biyan900116@gmail.com>

* [HUDI-2533] New option for hoodieClusteringJob to check, rollback and re-execute the last failed clustering job (apache#3765)

* coding finished and need to do uts

* add uts

* code review

* code review

Co-authored-by: yuezhang <yuezhang@freewheel.tv>

* [HUDI-2472] Enabling metadata table for TestHoodieIndex test case (apache#4045)

- Enablng the metadata table for testSimpleGlobalIndexTagLocationWhenShouldUpdatePartitionPath.
   This is more of a test issue.

* [MINOR] Fix instant parsing in HoodieClusteringJob (apache#4071)

* [HUDI-2559] Converting commit timestamp format to millisecs (apache#4024)

- Adds support for generating commit timestamps with millisecs granularity. 
- Older commit timestamps (in secs granularity) will be suffixed with 999 and parsed with millisecs format.

* [HUDI-2599] Make addFilesToview and fetchLatestBaseFiles public (apache#4066)

* [HUDI-2550] Expand File-Group candidates list for appending for MOR tables (apache#3986)

* [HUDI-2737] Use earliest instant by default for async compaction and clustering jobs (apache#3991)

Address review comments

Fix test failures

Co-authored-by: Sagar Sumit <sagarsumit09@gmail.com>

* [MINOR] Fix typo,'multipe' corrected to 'multiple' (apache#4068)

* [HUDI-1937] Rollback unfinished replace commit to allow updates (apache#3869)

* [HUDI-1937] Rollback unfinished replace commit to allow updates while clustering

* Revert and delete requested replacecommit too

* Rollback pending clustering instants transactionally

* No double locking and add a config to enable rollback

* Update config to be clear about rollback only on conflict

* [MINOR] Add more configuration to Kafka setup script (apache#3992)

* [MINOR] Add more configuration to Kafka setup script

* Add option to reuse Kafka topic

* Minor fixes to README

* [HUDI-2743] Assume path exists and defer fs.exists() in AbstractTableFileSystemView (apache#4002)

* [HUDI-2778] Optimize statistics collection related codes and add some docs for z-order add fix some bugs (apache#4013)

* [HUDI-2778] Optimize statistics collection related codes and add more docs for z-order.

* add test code for multi-thread parquet footer read

* [HUDI-2409] Using HBase shaded jars in Hudi presto bundle (apache#3623)

* using hbase-shaded-jars-in-hudi-presto-hundle

* test

* add hudi-common-bundle

* code review

* code review

* code review

* code review

* test

* test

Co-authored-by: yuezhang <yuezhang@freewheel.tv>

* [HUDI-2332] Add clustering and compaction in Kafka Connect Sink (apache#3857)

* [HUDI-2332] Add clustering and compaction in Kafka Connect Sink

* Disable validation check on instant time for compaction and adjust configs

* Add javadocs

* Add clustering and compaction config

* Fix transaction causing missing records in the target table

* Add debugging logs

* Fix kafka offset sync in participant

* Adjust how clustering and compaction are configured in kafka-connect

* Fix clustering strategy

* Remove irrelevant changes from other published PRs

* Update clustering logic and others

* Update README

* Fix test failures

* Fix indentation

* Fix clustering config

* Add JavaCustomColumnsSortPartitioner and make async compaction enabled by default

* Add test for JavaCustomColumnsSortPartitioner

* Add more changes after IDE sync

* Update README with clarification

* Fix clustering logic after rebasing

* Remove unrelated changes

* [MINOR] Fix typo,rename 'HooodieAvroDeserializer' to 'HoodieAvroDeserializer' (apache#4064)

* [HUDI-2325] Add hive sync support to kafka connect (apache#3660)

Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-2831] Securing usages of `SimpleDateFormat` to be thread-safe (apache#4073)

* [HUDI-2818] Fix 2to3 upgrade when set `hoodie.table.keygenerator.class` (apache#4077)

* [HUDI-2838] refresh table after drop partition (apache#4084)

* Revert "[HUDI-2799] Fix the classloader of flink write task (apache#4042)" (apache#4069)

This reverts commit 8281cbf.

* [HUDI-2847] Flink metadata table supports virtual keys (apache#4096)

* [HUDI-2759] extract HoodieCatalogTable to coordinate spark catalog table and hoodie table (apache#3998)

* [HUDI-2688] Claim the next rfc 40 for Hudi connector for Trino (apache#4105)

* [HUDI-2671] Fix kafka offset handling in Kafka Connect protocol (apache#4021)

Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-2443] Hudi KVComparator for all HFile writer usages (apache#3889)

* [HUDI-2443] Hudi KVComparator for all HFile writer usages

- Hudi relies on custom class shading for Hbase's KeyValue.KVComparator to
  avoid versioning and class loading issues. There are few places which are
  still using the Hbase's comparator class directly and version upgrades
  would make them obsolete. Refactoring the HoodieKVComparator and making
  all HFile writer creation using the same shaded class.

* [HUDI-2443] Hudi KVComparator for all HFile writer usages

- Moving HoodieKVComparator from common.bootstrap.index to common.util

* [HUDI-2443] Hudi KVComparator for all HFile writer usages

- Retaining the old HoodieKVComparatorV2 for boostrap case. Adding the
  new comparator as HoodieKVComparatorV2 to differentiate from the old
  one.

* [HUDI-2443] Hudi KVComparator for all HFile writer usages

 - Renamed HoodieKVComparatorV2 to HoodieMetadataKVComparator and moved it
   under the package org.apache.hudi.metadata.

* Make comparator classname configurable

* Revert new config and address other review comments

Co-authored-by: Sagar Sumit <sagarsumit09@gmail.com>

* [HUDI-2788] Fixing issues w/ Z-order Layout Optimization (apache#4026)

* Simplyfying, tidying up

* Fixed packaging for `TestOptimizeTable`

* Cleaned up `HoodiFileIndex` file filtering seq;
Removed optimization manually reading Parquet table circumventing Spark

* Refactored `DataSkippingUtils`:
  - Fixed checks to validate all statistics cols are present
  - Fixed some predicates being constructed incorrectly
  - Rewrote comments for easier comprehension, added more notes
  - Tidying up

* Tidying up tests

* `lint`

* Fixing compilation

* `TestOptimizeTable` > `TestTableLayoutOptimization`;
Added assertions to test data skipping paths

* Fixed tests to properly hit data-skipping path

* Fixed pruned files candidates lookup seq to conservatively included all non-indexed files

* Added java-doc

* Fixed compilation

* [HUDI-2766] Cluster update strategy should not be fenced by write config (apache#4093)

Fix pending clustering rollback test

* [HUDI-2793] Fixing deltastreamer checkpoint fetch/copy over (apache#4034)

- Removed the copy over logic in transaction utils. Deltastreamer will go back to previous commits and get the checkpoint value.

* [HUDI-2853] Add JMX deps in hudi utilities and kafka connect bundles (apache#4108)


Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-2844][CLI] Fixing archived Timeline crashing if timeline contains REPLACE_COMMIT (apache#4091)

* [MINOR] Fix build failure due to checkstyle issues (apache#4111)

* [HUDI-1290] [RFC-39] Deltastreamer avro source for Debezium CDC (apache#4048)

* Add RFC entry for deltastreamer source for debezium

* Add RFC for debezium source

* Add RFC for debezium source

* Add RFC for debezium source

* fix hyperlink issue and rebase

* Update progress

Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-1290] Add Debezium Source for deltastreamer (apache#4063)

* add source for postgres debezium

* Add tests for debezium payload

* Fix test

* Fix test

* Add tests for debezium source

* Add tests for debezium source

* Fix schema for debezium

* Fix checkstyle issues

* Fix config issue for schema registry

* Add mysql source for debezium

* Fix checkstyle issues an tests

* Improve code for merging toasted values

* Improve code for merging toasted values

Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-2792] Configure metadata payload consistency check (apache#4035)

- Relax metadata payload consistency check to consider spark task failures with spurious deletes

* [HUDI-2855] Change the default value of 'PAYLOAD_CLASS_NAME' to 'DefaultHoodieRecordPayload' (apache#4115)

* [HUDI-2480] FileSlice after pending compaction-requested instant-time… (apache#3703)

* [HUDI-2480] FileSlice after pending compaction-requested instant-time is ignored by MOR snapshot reader

* include file slice after a pending compaction for spark reader

Co-authored-by: garyli1019 <yanjia.gary.li@gmail.com>

* [HUDI-1290] fixing mysql debezium source (apache#4119)

* [HUDI-2800] Remove rdd.isEmpty() validation to prevent CreateHandle being called twice (apache#4121)

* [HUDI-2794] Guarding table service commits within a single lock to commit to both data table and metadata table (apache#4037)

* Fixing a single lock to commit table services across metadata table and data table

* Addressing comments

* rebasing with master

* [HUDI-2671] Making error -> warn logs from timeline server with concurrent writers for inconsistent state (apache#4088)

* Making error -> warn logs from timeline server with concurrent writers for inconsistent state

* Fixing bad request response exception for timeline out of sync

* Addressing feedback. removed write concurrency mode depedency

* [HUDI-2858] Fixing handling of cluster update reject exception in deltastreamer (apache#4120)

* [HUDI-2841] Fixing lazy rollback for MOR with list based strategy (apache#4110)

* [HUDI-2801] Add Amazon CloudWatch metrics reporter (apache#4081)

* [HUDI-2840] Fixed DeltaStreaemer to properly respect configuration passed t/h properties file (apache#4090)

* Rebased `DFSPropertiesConfiguration` to access Hadoop config in liue of FS to avoid confusion

* Fixed `readConfig` to take Hadoop's `Configuration` instead of FS;
Fixing usages

* Added test for local FS access

* Rebase to use `FSUtils.getFs`

* Combine properties provided as a file along w/ overrides provided from the CLI

* Added helper utilities to `HoodieClusteringConfig`;
Make sure corresponding config methods fallback to defaults;

* Fixed DeltaStreamer usage to respect properly combined configuration;
Abstracted `HoodieClusteringConfig.from` convenience utility to init Clustering config from `Properties`

* Tidying up

* `lint`

* Reverting changes to `HoodieWriteConfig`

* Tdiying up

* Fixed incorrect merge of the props

* Converted `HoodieConfig` to wrap around `Properties` into `TypedProperties`

* Fixed compilation

* Fixed compilation

* [HUDI-2005] Removing direct fs call in HoodieLogFileReader (apache#3865)

* [HUDI-2851] Shade org.apache.hadoop.hive.ql.optimizer package for flink bundle jar (apache#4104)

* [MINOR] Include hudi-aws in flink bundle jar (apache#4127)

HUDI-2801 makes this jar as required.

* [HUDI-2852] Table metadata returns empty for non-exist partition (apache#4117)

* [HUDI-2852] Table metadata returns empty for non-exist partition

* add unit test

* fix code checkstyle

Co-authored-by: wangminchao <wangminchao@asinking.com>

* [HUDI-2863] Rename option 'hoodie.parquet.page.size' to 'write.parquet.page.size' (apache#4128)

* [HUDI-2850] Fixing Clustering CLI - schedule and run command fixes to avoid NumberFormatException (apache#4101)

* [HUDI-2814] Addressing issues w/ Z-order Layout Optimization (apache#4060)

* `ZCurveOptimizeHelper` > `ZOrderingIndexHelper`;
Moved Z-index helper under `hudi.index.zorder` package

* Tidying up `ZOrderingIndexHelper`

* Fixing compilation

* Fixed index new/original table merging sequence to always prefer values from new index;
Cleaned up `HoodieSparkUtils`

* Added test for `mergeIndexSql`

* Abstracted Z-index name composition w/in `ZOrderingIndexHelper`;

* Fixed `DataSkippingUtils` to interrupt prunning in case data filter contains non-indexed column reference

* Properly handle exceptions origination during pruning in `HoodieFileIndex`

* Make sure no errors are logged upon encountering `AnalysisException`

* Cleaned up Z-index updating sequence;
Tidying up comments, java-docs;

* Fixed Z-index to properly handle changes of the list of clustered columns

* Tidying up

* `lint`

* Suppressing `JavaDocStyle` first sentence check

* Fixed compilation

* Fixing incorrect `DecimalType` conversion

* Refactored test `TestTableLayoutOptimization`
  - Added Z-index table composition test (against fixtures)
  - Separated out GC test;
Tidying up

* Fixed tests re-shuffling column order for Z-Index table `DataFrame` to align w/ the one by one loaded from JSON

* Scaffolded `DataTypeUtils` to do basic checks of Spark types;
Added proper compatibility checking b/w old/new index-tables

* Added test for Z-index tables merging

* Fixed import being shaded by creating internal `hudi.util` package

* Fixed packaging for `TestOptimizeTable`

* Revised `updateMetadataIndex` seq to provide Z-index updating process w/ source table schema

* Make sure existing Z-index table schema is sync'd to source table's one

* Fixed shaded refs

* Fixed tests

* Fixed type conversion of Parquet provided metadata values into Spark expected schemas

* Fixed `composeIndexSchema` utility to propose proper schema

* Added more tests for Z-index:
  - Checking that Z-index table is built correctly
  - Checking that Z-index tables are merged correctly (during update)

* Fixing source table

* Fixing tests to read from Parquet w/ proper schema

* Refactored `ParquetUtils` utility reading stats from Parquet footers

* Fixed incorrect handling of Decimals extracted from Parquet footers

* Worked around issues in javac failign to compile stream's collection

* Fixed handling of `Date` type

* Fixed handling of `DateType` to be parsed as `LocalDate`

* Updated fixture;
Make sure test loads Z-index fixture using proper schema

* Removed superfluous scheme adjusting when reading from Parquet, since Spark is actually able to perfectly restore schema (given Parquet was previously written by Spark as well)

* Fixing race-condition in Parquet's `DateStringifier` trying to share `SimpleDataFormat` object which is inherently not thread-safe

* Tidying up

* Make sure schema is used upon reading to validate input files are in the appropriate format;
Tidying up;

* Worked around javac (1.8) inability to infer expression type properly

* Updated fixtures;
Tidying up

* Fixing compilation after rebase

* Assert clustering have in Z-order layout optimization testing

* Tidying up exception messages

* XXX

* Added test validating Z-index lookup filter correctness

* Added more test-cases;
Tidying up

* Added tests for string expressions

* Fixed incorrect Z-index filter lookup translations

* Added more test-cases

* Added proper handling on complex negations of AND/OR expressions by pushing NOT operator down into inner expressions for appropriate handling

* Added `-target:jvm-1.8` for `hudi-spark` module

* Adding more tests

* Added tests for non-indexed columns

* Properly handle non-indexed columns by falling back to a re-write of containing expression as  `TrueLiteral` instead

* Fixed tests

* Removing the parquet test files and disabling corresponding tests

Co-authored-by: Vinoth Chandar <vinoth@apache.org>

* [MINOR] Fixing test failure to fix CI build failure (apache#4132)

* [HUDI-2861] Re-use same rollback instant time for failed rollbacks (apache#4123)

* [HUDI-2767] Enabling timeline-server-based marker as default (apache#4112)

- Changes the default config of marker type (HoodieWriteConfig.MARKERS_TYPE or hoodie.write.markers.type) from DIRECT to TIMELINE_SERVER_BASED for Spark Engine.
- Adds engine-specific marker type configs: Spark -> TIMELINE_SERVER_BASED, Flink -> DIRECT, Java -> DIRECT.
- Uses DIRECT markers as well for Spark structured streaming due to timeline server only available for the first mini-batch.
- Fixes the marker creation method for non-partitioned table in TimelineServerBasedWriteMarkers.
- Adds the fallback to direct markers even when TIMELINE_SERVER_BASED is configured, in WriteMarkersFactory: when HDFS is used, or embedded timeline server is disabled, the fallback to direct markers happens.
- Fixes the closing of timeline service.
- Fixes tests that depend on markers, mainly by starting the timeline service for each test.

* [HUDI-2845] Metadata CLI - files/partition file listing fix and new validate option (apache#4092)

- Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-2848] Excluse guava from hudi-cli pom (apache#4100)

* [HUDI-2864] Fix README and scripts with current limitations of hive sync (apache#4129)

* Fix README with current limitations of hive sync

* Fix README with current limitations of hive sync

* Fix dep issue

* Fix Copy on Write flow

Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-2856] Bit cask disk map delete modified (apache#4116)

* modified BitCaskDiskMap_close_function

* change iterators location to finally

* Update BitCaskDiskMap.java

* [MINOR] Follow ups from HUDI-2861 (re-use same rollback instant for failed rollback) (apache#4133)

* [HUDI-2868] Fix skipped HoodieSparkSqlWriterSuite (apache#4125)

- Co-authored-by: Yann Byron <biyan900116@gmail.com>

* [HUDI-2475] [HUDI-2862] Metadata table creation and avoid bootstrapping race for write client & add locking for upgrade (apache#4114)

Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-2102] Support hilbert curve for hudi (apache#3952)

Co-authored-by: Y Ethan Guo <ethan.guoyihua@gmail.com>

* Moving to 0.11.0-SNAPSHOT on master branch.

* [MINOR] fix typo (apache#4140)

* [MINOR] Fixing integ test suite for hudi-aws and archival validation (apache#4142)

* Removing rfc from release package and fixing release validation script (apache#4147)

* [MINOR] Fix syntax error in create_source_release.sh (apache#4150)

* [MINOR] Fix typo,rename 'getUrlEncodePartitoning' to 'getUrlEncodePartitioning' (apache#4130)

* [HUDI-2642] Add support ignoring case in update sql operation (apache#3882)

* [HUDI-2891] Fix write configs for Java engine in Kafka Connect Sink (apache#4161)

* Revert "[HUDI-2855] Change the default value of 'PAYLOAD_CLASS_NAME' to 'DefaultHoodieRecordPayload' (apache#4115)" (apache#4169)

This reverts commit 88067f5.

* Revert "[HUDI-2856] Bit cask disk map delete modified (apache#4116)" (apache#4171)

This reverts commit 257a6a7.

* [HUDI-2880] Fixing loading of props from default dir (apache#4167)

* Fixing loading of props from default dir

* addressing comments

* [HUDI-2881] Compact the file group with larger log files to reduce write amplification (apache#4152)

* Fixed partitions produced by layout optimization in case order-by key is composed of a single column (apache#4183)

* [MINOR] Fix the wrong usage of timestamp length variable bug (apache#4179)

Signed-off-by: zzzhy <candle_1667@163.com>

* [HUDI-2904] Fix metadata table archival overstepping between regular writers and table services (apache#4186)

- Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>
- Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-2914] Fix remote timeline server config for flink (apache#4191)

* [minor] Refactor write profile to always generate fs view (apache#4198)

* [HUDI-2924] Refresh the fs view on successful checkpoints for write profile (apache#4199)

* [MINOR] use catalog schema if can not find table schema (apache#4182)

* [HUDI-2902] Fixing populate meta fields with Hfile writers and Disabling virtual keys by default for metadata table (apache#4194)

* [HUDI-2911] Removing default value for `PARTITIONPATH_FIELD_NAME` resulting in incorrect `KeyGenerator` configuration (apache#4195)

* Revert "[HUDI-2495] Resolve inconsistent key generation for timestamp types  by GenericRecord and Row (apache#3944)" (apache#4201)

* [HUDI-2894][HUDI-2905] Metadata table - avoiding key lookup failures on base files over S3 (apache#4185)

- Fetching partition files or all partitions from the metadata table is failing
   when run over S3. Metadata table uses HFile format for the base files and the
   record lookup uses HFile.Reader and HFileScanner interfaces to get records by
   partition keys. When the backing storage is S3, this record lookup from HFiles
   is failing with IOException, in turn failing the caller commit/update operations.

 - Metadata table looks up HFile records with positional read enabled so as to
   perform better for random lookups. But this positional read key lookup is
   returning with partial read sizes over S3 leading to HFile scanner throwing
   IOException. This doesn't happen over HDFS. Metadata table though uses the HFile
   for random key lookups, the positional read is not mandatory as we sort the keys
   when doing a lookup for multiple keys.

 - The fix is to disable HFile positional read for all HFile scanner based
   key lookups.

* Revert "[HUDI-2489]Tuning HoodieROTablePathFilter by caching hoodieTableFileSystemView, aiming to reduce unnecessary list/get requests"

Co-authored-by: yuezhang <yuezhang@freewheel.tv>

* [MINOR] Mitigate CI jobs timeout issues (apache#4173)

* skip shutdown zookeeper in `@AfterAll` in TestHBaseIndex

* rebalance CI tests

* [HUDI-2933] DISABLE Metadata table by default (apache#4213)

* [HUDI-2890] Kafka Connect: Fix failed writes and avoid table service concurrent operations (apache#4211)

* Fix kafka connect readme

* Fix handling of errors in write records for kafka connect

* By default, ensure we skip error records and keep the pipeline alive

* Fix indentation

Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-2923] Fixing metadata table reader when metadata compaction is inflight (apache#4206)

* [HUDI-2923] Fixing metadata table reader when metadata compaction is inflight

* Fixing retry of pending compaction in metadata table and enhancing tests

* [HUDI-2934] Optimize RequestHandler code style

close apache#4215

* [HUDI-2935] Remove special casing of clustering in deltastreamer checkpoint retrieval (apache#4216)

- We now seek backwards to find the checkpoint
 - No need to return empty anymore

* [HUDI-2877] Support flink catalog to help users use flink tables conveniently (apache#4153)

* [HUDI-2877] Support flink catalog to help users use flink tables conveniently

* Fix comment

* fix comment2

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit … (apache#4217)

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit callback

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit callback

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit callback

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit callback

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit callback

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit callback

* [HUDI-2937] Introduce a pulsar implementation of hoodie write commit callback

* [HUDI-2418] Support HiveSchemaProvider (apache#3671)


Co-authored-by: jian.feng <fengjian428@gmial.com>

* [HUDI-2916] Add IssueNavigationLink for IDEA (apache#4192)

* [HUDI-2900] Fix corrupt block end position (apache#4181)

* [HUDI-2900] Fix corrupt block end position

* add a test

* [HUDI-2876] For hive/presto, hudi should remove the temp file created by HoodieMergedLogRecordScanner when the query finishes. (apache#4139)

* [MINOR] Fix partition path formatting in error log (apache#4168)

* [MINOR] Use maven-shade-plugin version for hudi-timeline-server-bundle from main pom.xml (apache#4209)

Co-authored-by: Wenning Ding <wenningd@amazon.com>

* [MINOR] Remove redundant and conflicting spark-hive dependency (apache#4228)

Disable TestHiveSchemaProvider

* [HUDI-2951] Disable remote view storage config for flink (apache#4237)

* [HUDI-2942] add error message log in HoodieCombineHiveInputFormat (apache#4224)

* [MINOR] Update DOAP with 0.10.0 Release (apache#4246)

* [HUDI-2832][RFC-41] Proposal to integrate Hudi on Snowflake platform (apache#4074)

* [HUDI-2832][RFC-40] Proposal to integrate Hudi on Snowflake platform

* rebased and addressed review comments

* [HUDI-2964] Fixing aws lock configs to inherit from HoodieConfig (apache#4258)

* [HUDI-2957] Shade kryo jar for flink bundle jar (apache#4251)

* [HUDI-2665] Fix overflow of huge log file in HoodieLogFormatWriter (apache#3912)

Co-authored-by: guanziyue.gzy <guanziyue.gzy@bytedance.com>

* [MINOR] Fix Compile broken (apache#4263)

* [HUDI-2779] Cache BaseDir if HudiTableNotFound Exception thrown (apache#4014)

* [HUDI-2966] Add TaskCompletionListener for HoodieMergeOnReadRDD to close the log scanner when the query finishes. (apache#4265)

* [HUDI-2966] Add TaskCompletionListener for HoodieMergeOnReadRDD to close the log scanner when the query finishes.

* [MINOR] FAQ link in SUPPORT_REQUEST template (apache#4266)

* Claiming RFC for data skipping index for updated version (apache#4271)

* Revert "Claiming RFC for data skipping index for updated version (apache#4271)" (apache#4272)

This reverts commit 8321d20.

* [HUDI-2901] Fixed the bug that clustering jobs cannot run in parallel (apache#4178)

* [HUDI-2936] Add data count checks in async clustering tests (apache#4236)

* [HUDI-2849] Improve SparkUI job description for write path (apache#4222)

* [HUDI-2952] Fixing metadata table for non-partitioned dataset (apache#4243)

* [HUDI-2912] Fix CompactionPlanOperator typo (apache#4187)

Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>

* Adding verbose output for metadata validate files command (apache#4166)

* [HUDI-2892][BUG] Pending Clustering may stain the ActiveTimeLine and lead to incomplete query results (apache#4172)

Co-authored-by: yuezhang <yuezhang@freewheel.tv>

* [HUDI-2784] Add a hudi-trino-bundle for Trino (apache#4279)

* [HUDI-2814] Make Z-index more generic Column-Stats Index (apache#4106)

* [HUDI-2527] Multi writer test with conflicting async table services (apache#4046)

* [HUDI-2974] Make the prefix for metrics name configurable (apache#4274)

Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>

* [HUDI-2959] Fix the thread leak of cleaning service (apache#4252)

* [HUDI-2985] Shade jackson for hudi flink bundle jar (apache#4284)

* [HUDI-2906] Add a repair util to clean up dangling data and log files (apache#4278)

* [HUDI-2984] Implement #close for AbstractTableFileSystemView (apache#4285)

* [HUDI-2946] Upgrade maven plugins to be compatible with higher Java versions (apache#4232)

Co-authored-by: Wenning Ding <wenningd@amazon.com>

* [HUDI-2938] Metadata table util to get latest file slices for reader/writers (apache#4218)

* [HUDI-2990] Sync to HMS when deleting partitions (apache#4291)

* [HUDI-2994] Add a check for an existing partitionPath in the catch code block for HU… (apache#4294)

* [HUDI-2994] Add a check for an existing partition path in the catch code block for HUDI-2743

Co-authored-by: wangminchao <wangminchao@asinking.com>

* [HUDI-2996] Flink streaming reader 'skip_compaction' option does not work (apache#4304)

close apache#4304

* [HUDI-2997] Skip the corrupt meta file for pending rollback action (apache#4296)

* [HUDI-2995] Enabling metadata table by default (apache#4295)

- Enabling metadata table by default

* [HUDI-3022] Fix NPE for isDropPartition method (apache#4319)

* [HUDI-3022] Fix NPE for isDropPartition method

* [HUDI-3024] Add explicit write handler for flink (apache#4329)

Co-authored-by: wangminchao <wangminchao@asinking.com>

* [HUDI-3025] Add additional wait time for namenode availability during IT test initialization (apache#4328)

- Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-3028] Use blob storage to speed up CI downloads (apache#4331)

Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>

* [HUDI-2998] claiming rfc number for consistent hashing index (apache#4303)

Co-authored-by: xiaoyuwei <xiaoyuwei.yw@alibaba-inc.com>

* [HUDI-3015] Implement #reset and #sync for metadata filesystem view (apache#4307)

* [Minor] Catch and ignore all the exceptions in quietDeleteMarkerDir (apache#4301)

Co-authored-by: yuezhang <yuezhang@freewheel.tv>

* [HUDI-3001] Clean up the marker directory when finishing the bootstrap operation. (apache#4298)

* [HUDI-3043] Revert async cleaner leak commit to unblock CI failure (apache#4343)

* Revert "[HUDI-2959] Fix the thread leak of cleaning service (apache#4252)"
Reverting to unblock CI for now. Will revisit this with the right fix.

* [HUDI-3037] Add back remote view storage config for flink (apache#4338)

* [HUDI-3046] Claim RFC number for RFC for Compaction / Clustering Service (apache#4347)

Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>

* [HUDI-2958] Automatically set spark.sql.parquet.writeLegacyFormat when using bulk insert to write data that contains DecimalType (apache#4253)

* [HUDI-3043] Adding some test fixes to continuous mode multi writer tests (apache#4356)

* [HUDI-2962] InProcess lock provider to guard single writer process with async table operations (apache#4259)

 - Adding a local JVM-process-based lock provider implementation

 - This local lock provider can be used by a single writer process with async
   table operations to guard the metadata table against concurrent updates
   (see the config sketch below).
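
As a rough usage sketch (not from this PR): the keys below are Hudi's standard lock/cleaner configs, and the provider class name is assumed from the commit description.

```java
import java.util.Properties;

public class InProcessLockConfigSketch {
  // Writer properties for a single-writer job that also runs async table services.
  public static Properties writerProps() {
    Properties props = new Properties();
    // Guard metadata-table updates with the JVM-local lock provider (FQCN assumed).
    props.setProperty("hoodie.write.lock.provider",
        "org.apache.hudi.client.transaction.lock.InProcessLockProvider");
    // Lazy cleaning of failed writes is the usual companion setting once locking is on.
    props.setProperty("hoodie.cleaner.policy.failed.writes", "LAZY");
    return props;
  }
}
```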

* [HUDI-3043] De-coupling multi writer tests (apache#4362)

* [HUDI-3029]  Transaction manager: avoid deadlock when doing begin and end transactions (apache#4363)

* [HUDI-3029] Transaction manager: avoid deadlock when doing begin and end transactions

 - The transaction manager exposes begin and end transaction as synchronized methods.
   Depending on the lock provider implementation, this can lead to a deadlock
   when the underlying lock() calls block or use a long timeout.

 - Fixing the transaction manager's begin and end transactions so that they cannot
   deadlock and make no assumptions about the lock provider implementation
   (see the sketch below).
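
To make the failure mode concrete, here is a deliberately naive sketch of the pattern described above (class and method names are invented; this is not Hudi's TransactionManager):

```java
import java.util.concurrent.locks.ReentrantLock;

// Thread A: begin() takes the monitor, acquires underlyingLock, returns.
// Thread B: begin() takes the monitor, then blocks inside underlyingLock.lock().
// Thread A: end() now needs the monitor held by B to release the lock -> deadlock.
class NaiveTransactionManager {
  private final ReentrantLock underlyingLock = new ReentrantLock();

  public synchronized void begin() { underlyingLock.lock(); }
  public synchronized void end()   { underlyingLock.unlock(); }
}
```

The fix boils down to not holding the object monitor across the potentially blocking lock-provider call, so an end-transaction can always run and release the underlying lock.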

* [HUDI-3029]  Transaction manager: avoid deadlock when doing begin and end transactions (apache#4373)

* [HUDI-3064] Fixing a bug in TransactionManager and FileSystemTestLock (apache#4372)

* [HUDI-3054] Fixing default lock configs for FileSystemBasedLock and fixing a flaky test (apache#4374)

* [MINOR] Azure CI IT tasks clean up (apache#4337)

* [HUDI-3052] Fix flaky testJsonKafkaSourceResetStrategy (apache#4381)

* [minor] fix NetworkUtils#getHostname (apache#4355)

* [HUDI-2970] Adding tests for archival of replace commit actions (apache#4268)

* [HUDI-3064][HUDI-3054] FileSystemBasedLockProviderTestClass tryLock fix and TestHoodieClientMultiWriter test fixes (apache#4384)

 - Made FileSystemBasedLockProviderTestClass thread safe and fixed the
   tryLock retry logic.

 - Made TestHoodieClientMultiWriter#testHoodieClientBasicMultiWriter
   deterministic in verifying the HoodieWriteConflictException.

* remove unused import (apache#4349)

* [MINOR] Remove unused method in HoodieActiveTimeline (apache#4401)

* [MINOR] Increasing CI timeout to 90 mins (apache#4407)

* [HUDI-3070] Add rerunFailingTestsCount for flaky tests (apache#4398)



Co-authored-by: yuezhang <yuezhang@freewheel.tv>

* [HUDI-2970] Add test for archiving replace commit (apache#4345)

* [HUDI-3008] Fixing HoodieFileIndex partition column parsing for nested fields

* [HUDI-3027] Update hudi-examples README.md (apache#4330)

* [HUDI-3032] Do not clean the log files right after compaction for metadata table (apache#4336)

* [HUDI-2547] Schedule Flink compaction in service (apache#4254)

Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>

* [HUDI-3011] Adding ability to read entire data with HoodieIncrSource with empty checkpoint (apache#4334)

* Adding ability to read entire data with HoodieIncrSource with empty checkpoint

* Addressing comments

* [HUDI-3060] drop table for spark sql (apache#4364)

* [MINOR] Fix DedupeSparkJob typo (apache#4418)

* [HUDI-3014] Add table option to set utc timezone (apache#4306)

* [MINOR] Remove unused method in HoodieActiveTimeline (apache#4435)

* [HUDI-3101] Excluding compaction instants from pending rollback info (apache#4443)

* [HUDI-3102] Do not store rollback plan in inflight instant (apache#4445)

* [HUDI-3099] Purge drop partition for spark sql (apache#4436)

* [HUDI-2374] Fixing AvroDFSSource not using the overridden schema to deserialize Avro binaries (apache#4353)

* [HUDI-3093] Fix spark-sql queries on tables written with TimestampBasedKeyGenerator (apache#4416)

* [HUDI-3106] Fix HiveSyncTool not syncing the schema (apache#4452)

* [HUDI-2811] Support Spark 3.2 (apache#4270)

* Fixing dynamoDbLockConfig required prop check (apache#4422)

* [HUDI-2983] Remove Log4j2 transitive dependencies (apache#4281)

Co-authored-by: Danny Chan <yuzhao.cyz@gmail.com>
Co-authored-by: Genmao Yu <hustyugm@gmail.com>
Co-authored-by: dylonyu <dylonyu@tencent.com>
Co-authored-by: manasaks <manasas2004@gmail.com>
Co-authored-by: Shawy Geng <gengxiaoyu1996@gmail.com>
Co-authored-by: yuzhaojing <32435329+yuzhaojing@users.noreply.github.com>
Co-authored-by: yuzhaojing <yuzhaojing@bytedance.com>
Co-authored-by: Sivabalan Narayanan <sivabala@uber.com>
Co-authored-by: Prashant Wason <pwason@uber.com>
Co-authored-by: davehagman <73851873+davehagman@users.noreply.github.com>
Co-authored-by: Sagar Sumit <sagarsumit09@gmail.com>
Co-authored-by: xiarixiaoyao <mengtao0326@qq.com>
Co-authored-by: Sivabalan Narayanan <n.siva.b@gmail.com>
Co-authored-by: Yann Byron <biyan900116@gmail.com>
Co-authored-by: Manoj Govindassamy <manoj.govindassamy@gmail.com>
Co-authored-by: dufeng1010 <dufeng1010@126.com>
Co-authored-by: 闫杜峰 <yandufeng@sinochem.com>
Co-authored-by: zhangyue19921010 <69956021+zhangyue19921010@users.noreply.github.com>
Co-authored-by: yuezhang <yuezhang@freewheel.tv>
Co-authored-by: Alexey Kudinkin <alexey@infinilake.com>
Co-authored-by: 0x574C <761604382@qq.com>
Co-authored-by: 董可伦 <dongkelun01@inspur.com>
Co-authored-by: 卢波 <26039470+lubo212@users.noreply.github.com>
Co-authored-by: lubo <bollu@tencent.com>
Co-authored-by: wenningd <wenningding95@gmail.com>
Co-authored-by: Wenning Ding <wenningd@amazon.com>
Co-authored-by: Udit Mehrotra <udit.mehrotra90@gmail.com>
Co-authored-by: Ron <ldliulsy@163.com>
Co-authored-by: Harsha Teja Kanna <h7kanna@users.noreply.github.com>
Co-authored-by: vinoth chandar <vinothchandar@users.noreply.github.com>
Co-authored-by: rmahindra123 <76502047+rmahindra123@users.noreply.github.com>
Co-authored-by: leesf <490081539@qq.com>
Co-authored-by: Nate Radtke <5672085+nateradtke@users.noreply.github.com>
Co-authored-by: Raymond Xu <2701446+xushiyan@users.noreply.github.com>
Co-authored-by: Y Ethan Guo <ethan.guoyihua@gmail.com>
Co-authored-by: Jimmy.Zhou <zhouyongjin@inspur.com>
Co-authored-by: Rajesh Mahindra <rmahindra@Rajeshs-MacBook-Pro.local>
Co-authored-by: garyli1019 <yanjia.gary.li@gmail.com>
Co-authored-by: satishm <84978833+data-storyteller@users.noreply.github.com>
Co-authored-by: mincwang <33626973+mincwang@users.noreply.github.com>
Co-authored-by: wangminchao <wangminchao@asinking.com>
Co-authored-by: Vinoth Chandar <vinoth@apache.org>
Co-authored-by: huleilei <584620569@qq.com>
Co-authored-by: xuzifu666 <1206332514@qq.com>
Co-authored-by: vortual <1039505040@qq.com>
Co-authored-by: zzzhy <candle_1667@163.com>
Co-authored-by: ForwardXu <forwardxu315@gmail.com>
Co-authored-by: 冯健 <fengjian428@gmail.com>
Co-authored-by: jian.feng <fengjian428@gmial.com>
Co-authored-by: Vinoth Govindarajan <vinothg@uber.com>
Co-authored-by: guanziyue <30882822+guanziyue@users.noreply.github.com>
Co-authored-by: guanziyue.gzy <guanziyue.gzy@bytedance.com>
Co-authored-by: RexAn <anh131@126.com>
Co-authored-by: arunkc <arunkc91@gmail.com>
Co-authored-by: Yuwei XIAO <ywxiaozero@gmail.com>
Co-authored-by: Fugle666 <30539368+Fugle666@users.noreply.github.com>
Co-authored-by: xiaoyuwei <xiaoyuwei.yw@alibaba-inc.com>
Co-authored-by: xuzifu666 <xuyu@zepp.com>
Co-authored-by: harshal patil <harshal.j.patil@gmail.com>
Co-authored-by: Aimiyoo <aimiyooo@gmail.com>