Open
Labels: from-jira, priority:high (Significant impact; potential bugs), status:pr-available (Pull request available), type:improvement (Improvements to existing functionality)
Description
Currently, Hudi obtains the average record size from the records written during previous commits. This estimate is used to decide how many records to pack into one file; the relevant code is UpsertPartitioner.averageBytesPerRecord().
However, we found that a single data file could grow to 600-700 MB while most other files are smaller than 200 MB.
Reason
- When the last commit's totalBytesWritten / totalRecordsWritten is very small but the records written in the next commit are much larger, the per-file record count is overestimated and the resulting data files become very large.
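A minimal sketch of how a last-commit average can blow up file sizes. This is not Hudi's actual implementation; the fallback value, the 120 MB target, and all numbers are illustrative assumptions:

```java
public class RecordSizeEstimate {

    // Average bytes per record derived from one commit's write stats,
    // conceptually what UpsertPartitioner.averageBytesPerRecord() computes.
    static long averageBytesPerRecord(long totalBytesWritten, long totalRecordsWritten) {
        if (totalRecordsWritten <= 0) {
            return 1024; // assumed default when no usable stats exist
        }
        return totalBytesWritten / totalRecordsWritten;
    }

    public static void main(String[] args) {
        // Last commit wrote many tiny records -> estimate is only 10 bytes/record.
        long avg = averageBytesPerRecord(10_000_000L, 1_000_000L);

        // Pack records toward an assumed 120 MB target file size.
        long targetFileSize = 120L * 1024 * 1024;
        long recordsPerFile = targetFileSize / avg;

        // If the next commit's records are actually ~1 KB each,
        // the file ends up around 12 GB instead of 120 MB.
        long actualFileSize = recordsPerFile * 1024L;
        System.out.println(avg + " " + recordsPerFile + " " + actualFileSize);
    }
}
```

The skew is purely a mismatch between the commit used for the estimate and the commit being written; the partitioner itself behaves correctly given its input.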
Solve plan
- Plan 1: calculate the average size over the past several commits instead of only the last one. However, getCommitMetadata is expensive, so this function could become slow; we did not choose this.
- Plan 2: use a configured estimated record size, since our data size is roughly fixed. Moreover, the Hudi community discouraged adding yet another boolean flag to control whether the last commit's average size is used, so we rely on an estimation threshold instead: when the threshold is less than 0, the default estimated record size is used.
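Plan 2's threshold switch could be sketched as below. The method and parameter names are hypothetical, not Hudi's actual API; the sketch only shows the selection rule, assuming a negative threshold means "never trust the commit-based average":

```java
public class AvgSizeChooser {

    // Pick the record-size estimate for packing files.
    // estimateThreshold < 0 disables the commit-based average entirely;
    // otherwise the commit-based average is trusted only at or above the threshold.
    static long chooseAvgRecordSize(long lastCommitAvg,
                                    long estimateThreshold,
                                    long estimatedRecordSize) {
        if (estimateThreshold < 0) {
            return estimatedRecordSize;
        }
        return lastCommitAvg >= estimateThreshold ? lastCommitAvg : estimatedRecordSize;
    }
}
```

With this shape, no extra boolean config is needed: the sign of the existing threshold value doubles as the on/off switch.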
JIRA info
- Link: https://issues.apache.org/jira/browse/HUDI-5250
- Type: Improvement