
Use the default estimated record size in averageBytesPerRecord() when the estimation threshold is less than 0 #15586

@hudi-bot

Description


Currently, Hudi obtains the average record size from the records written in previous commits. This value is used to estimate how many records to pack into one file; the relevant code is UpsertPartitioner.averageBytesPerRecord().
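A minimal sketch of that estimation, with names and the default value chosen for illustration (the real method reads HoodieCommitMetadata from the timeline):

```java
// Simplified model of averageBytesPerRecord(): divide total bytes written by
// total records written in the last commit, falling back to a configured
// default estimate when there is no usable history.
public class AvgRecordSize {
    static final long DEFAULT_ESTIMATE = 1024; // hypothetical default, in bytes

    static long averageBytesPerRecord(long totalBytesWritten, long totalRecordsWritten) {
        if (totalBytesWritten > 0 && totalRecordsWritten > 0) {
            return (long) Math.ceil((double) totalBytesWritten / totalRecordsWritten);
        }
        return DEFAULT_ESTIMATE; // no history: use the default estimate
    }

    public static void main(String[] args) {
        // 10 MB written across 100k records -> 100 bytes per record
        System.out.println(averageBytesPerRecord(10_000_000, 100_000)); // 100
        System.out.println(averageBytesPerRecord(0, 0));                // 1024
    }
}
```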

But we found that a single data file could grow to 600~700 MB while most other files are smaller than 200 MB.

  •  Reason
    ** the result of totalBytesWritten/totalRecordsWritten from the last commit can be very small; when the records in the next commit are much larger, the data files written for that commit grow far beyond the target size.
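A worked example of the failure mode, with illustrative numbers (file size target and record sizes are assumptions, not values from our tables):

```java
// A stale, small average from the last commit budgets far too many records
// into one file when the next commit's records are larger.
public class FileSizeBlowup {
    public static void main(String[] args) {
        long maxFileSizeBytes = 120L * 1024 * 1024;  // target max file size: 120 MB
        long staleAvgBytes = 40;                     // avg from last commit (small records)
        long recordsPerFile = maxFileSizeBytes / staleAvgBytes; // ~3.1M records budgeted

        long realAvgBytes = 200;                     // actual size of incoming records
        long actualFileBytes = recordsPerFile * realAvgBytes;
        // the file ends up 5x the target: 600 MB instead of 120 MB
        System.out.println(actualFileBytes / (1024 * 1024) + " MB"); // 600 MB
    }
}
```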

  • Solve plan
    ** Plan 1: calculate avgSize over the past several commits instead of only the last one. However, getCommitMetadata is expensive, so this function would become slow; we did not choose this.
    ** Plan 2: use a fixed estimated record size, which fits our data since its record size is roughly constant. The Hudi community also discouraged adding another boolean flag to control whether to use the last commit's avgSize, so we reuse the estimation threshold: when it is less than 0, the default estimated record size is used.
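Plan 2 can be sketched as follows; the method signature and parameter names are illustrative, not the actual patch:

```java
// Sketch of the proposed behavior: a threshold configured below 0 opts out of
// history-based estimation entirely and uses the default estimate.
public class EstimateWithThreshold {
    static long averageBytesPerRecord(double estimationThreshold,
                                      long lastCommitBytes,
                                      long lastCommitRecords,
                                      long defaultEstimate) {
        if (estimationThreshold < 0) {
            return defaultEstimate; // user opted out of last-commit estimation
        }
        if (lastCommitBytes > 0 && lastCommitRecords > 0) {
            return lastCommitBytes / lastCommitRecords; // last-commit average
        }
        return defaultEstimate; // no usable history
    }

    public static void main(String[] args) {
        // threshold < 0 -> default estimate wins even though history exists
        System.out.println(averageBytesPerRecord(-1, 10_000_000, 100_000, 1024)); // 1024
        // threshold >= 0 -> last-commit average is used
        System.out.println(averageBytesPerRecord(0.1, 10_000_000, 100_000, 1024)); // 100
    }
}
```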

 

JIRA info
