[MINOR] Change MINI_BATCH_SIZE to 2048 #4862

Merged
danny0405 merged 1 commit into apache:master from cuibo01:minor-default-value on Feb 28, 2022

Conversation

@cuibo01 (Contributor) commented Feb 21, 2022

ParquetColumnarRowSplitReader#batchSize is 2048, so changing MINI_BATCH_SIZE to 2048 will reduce the amount of data cached in memory.
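[Editor's note] A minimal sketch of the arithmetic behind this claim, as a standalone demo. The class name, constant name, and helper below are illustrative, not Hudi's actual code; the assumption is that MINI_BATCH_SIZE caps how many rows the consumer drains per pass. When the mini-batch is smaller than the reader's 2048-row batch, each reader batch must be drained in several passes, so loaded rows linger in memory between passes; aligning the two drains each batch in a single pass.

// Illustrative demo; not Hudi code. Shows how many mini-batch passes it
// takes to drain one reader batch for the old and new MINI_BATCH_SIZE.
public class BatchAlignmentDemo {
    static final int READER_BATCH_SIZE = 2048; // ParquetColumnarRowSplitReader#batchSize

    // Ceiling division: mini-batch passes needed to drain one reader batch.
    static int drainsPerReaderBatch(int miniBatchSize) {
        return (READER_BATCH_SIZE + miniBatchSize - 1) / miniBatchSize;
    }

    public static void main(String[] args) {
        System.out.println("MINI_BATCH_SIZE = 1000 -> " + drainsPerReaderBatch(1000) + " passes"); // 3 (1000 + 1000 + 48)
        System.out.println("MINI_BATCH_SIZE = 2048 -> " + drainsPerReaderBatch(2048) + " passes"); // 1
    }
}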

Tips

What is the purpose of the pull request

(For example: This pull request adds a quick-start document.)

Brief change log

(for example:)

  • Modify AnnotationLocation checkstyle rule in checkstyle.xml

Verify this pull request

(Please pick either of the following options)

This pull request is a trivial rework / code cleanup without any test coverage.

(or)

This pull request is already covered by existing tests, such as (please describe tests).

(or)

This change added tests and can be verified as follows:

(example:)

  • Added integration tests for end-to-end.
  • Added HoodieClientWriteTest to verify the change.
  • Manually verified the change by running a job locally.

Committer checklist

  • Has a corresponding JIRA in PR title & commit

  • Commit message is descriptive of the change

  • CI is green

  • Necessary doc changes done or have another open PR

  • For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

@cuibo01 (Contributor, Author) commented Feb 21, 2022

@hudi-bot run azure

1 similar comment
@cuibo01 (Contributor, Author) commented Feb 22, 2022

@hudi-bot run azure

@hudi-bot (Collaborator) commented:

CI report:

Bot commands: @hudi-bot supports the following commands:
  • @hudi-bot run azure: re-run the last Azure build

@cuibo01 (Contributor, Author) commented Feb 22, 2022

@danny0405 pls review :)


- private static final int MINI_BATCH_SIZE = 1000;
+ private static final int MINI_BATCH_SIZE = 2048;
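
[Editor's note] For context, a hedged sketch of the mini-batch pattern a constant like this typically gates. This is a self-contained, illustrative demo, not the actual Hudi/Flink operator code; the assumption is that the operator processes at most MINI_BATCH_SIZE records per invocation so it can yield control (e.g. for checkpoints) between chunks. MiniBatchConsumerDemo and consumeChunk are hypothetical names.

import java.util.Iterator;
import java.util.PrimitiveIterator;
import java.util.function.Consumer;
import java.util.stream.IntStream;

// Illustrative mini-batch consume loop; not Hudi code.
public class MiniBatchConsumerDemo {
    static final int MINI_BATCH_SIZE = 2048;

    // Drains at most one mini-batch; returns true if more records remain.
    static <T> boolean consumeChunk(Iterator<T> reader, Consumer<T> out) {
        int emitted = 0;
        while (reader.hasNext() && emitted < MINI_BATCH_SIZE) {
            out.accept(reader.next());
            emitted++;
        }
        return reader.hasNext();
    }

    public static void main(String[] args) {
        PrimitiveIterator.OfInt reader = IntStream.range(0, 5000).iterator();
        int chunks = 0;
        boolean more = true;
        while (more) {
            more = consumeChunk(reader, r -> { });
            chunks++;
        }
        System.out.println("chunks: " + chunks); // 3 chunks: 2048 + 2048 + 904
    }
}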

@danny0405 (Contributor) commented:

I see your comment:

  "ParquetColumnarRowSplitReader#batchSize is 2048, so changing MINI_BATCH_SIZE to 2048 will reduce the amount of data cached in memory."

Thanks. Do we have some metrics to illustrate the benefit?

@cuibo01 (Contributor, Author) commented:

I don't have any metrics right now, but from the code, the time that data is stored in memory should be reduced, which helps avoid it entering the Old Generation.

int num = (int) Math.min(batchSize, totalCountLoadedSoFar - rowsReturned);
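
[Editor's note] To make the quoted expression concrete, a small standalone demo. The variable names follow the quoted line, but the loop around it is a simplified assumption, not the reader's actual code, and totalCountLoadedSoFar = 5000 is just an example value. `num` is the size of the next batch handed to the caller: a full batchSize, or the remaining tail of what has been loaded.

// Illustrative demo of the batch-sizing arithmetic; not the actual reader class.
public class BatchSizeDemo {
    public static void main(String[] args) {
        final int batchSize = 2048;              // ParquetColumnarRowSplitReader#batchSize
        final long totalCountLoadedSoFar = 5000; // rows loaded so far (example value)
        long rowsReturned = 0;                   // rows already returned to the caller

        while (rowsReturned < totalCountLoadedSoFar) {
            int num = (int) Math.min(batchSize, totalCountLoadedSoFar - rowsReturned);
            System.out.println("next batch: " + num + " rows");
            rowsReturned += num;
        }
        // prints: 2048, 2048, 904
    }
}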

@cuibo01 (Contributor, Author) commented:

In our production environment, Flink Hudi jobs require a lot of memory, and I'm checking memory usage.

@cuibo01 (Contributor, Author) commented:

@danny0405 from the memory dump:

[memory dump screenshot]

danny0405 merged commit 1932152 into apache:master on Feb 28, 2022
rkkalluri pushed a commit to rkkalluri/hudi that referenced this pull request on Mar 6, 2022
vingov pushed a commit to vingov/hudi that referenced this pull request on Apr 3, 2022
stayrascal pushed a commit to stayrascal/hudi that referenced this pull request on Apr 12, 2022