
[FLINK-23707][streaming-java] Use consistent managed memory weights for StreamNode #16771

Closed
wants to merge 2 commits

Conversation

twalthr
Contributor

@twalthr twalthr commented Aug 10, 2021

What is the purpose of the change

This synchronizes the weights between Table API and DataStream API for managed memory. Otherwise, keyed operators in DataStream API that are used in a unified pipeline would not get enough resources when using a weight of 1. The weight is declared as a kibibyte value.

Brief change log

  • Use the same default value in keyed operators as for Table API sorting.
  • Allow advanced users to change the default value via option execution.sorted-inputs.memory.
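
The conversion this change describes (memory option value to kibibyte weight) can be sketched as follows. The class and method names below are hypothetical, not Flink's actual API; only the clamping arithmetic mirrors the PR:

```java
// Illustrative sketch: translate a configured memory size in bytes into a
// kibibyte weight, clamped to a minimum of 1 so that even very small
// configurations still receive a share of managed memory.
class ManagedMemoryWeights {

    /** Converts a memory size in bytes into a kibibyte weight, at least 1. */
    static int toKibiByteWeight(long memoryBytes) {
        // Right-shifting by 10 divides by 1024 (bytes -> kibibytes).
        return (int) Math.max(1, memoryBytes >> 10);
    }
}
```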

Verifying this change

This change added tests and can be verified as follows: StreamGraphGeneratorBatchExecutionTest

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): no
  • The public API, i.e., is any changed class annotated with @Public(Evolving): yes
  • The serializers: no
  • The runtime per-record code paths (performance sensitive): no
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
  • The S3 file system connector: no

Documentation

  • Does this pull request introduce a new feature? no
  • If yes, how is the feature documented? not applicable

@flinkbot
Collaborator

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit 1ccaf39 (Tue Aug 10 14:57:45 UTC 2021)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands

The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@flinkbot
Collaborator

flinkbot commented Aug 10, 2021

CI report:

Bot commands

The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build
  • @flinkbot run azure re-run the last Azure build

Comment on lines 74 to 79

```java
long memoryBytes = configuration.get(ExecutionOptions.SORTED_INPUTS_MEMORY).getBytes();
if (memoryBytes <= 0) {
    memoryBytes = ExecutionOptions.SORTED_INPUTS_MEMORY.defaultValue().getBytes();
}
// convert to kibibytes
return (int) Math.max(1, memoryBytes >> 10);
```
Contributor


I think this can be simplified as follows:

  • Memory-type config options do not accept negative values.
  • If the option is not specified, it automatically falls back to its default value.
  • The kibibyte value can be read directly from a MemorySize.
Suggested change

```diff
-long memoryBytes = configuration.get(ExecutionOptions.SORTED_INPUTS_MEMORY).getBytes();
-if (memoryBytes <= 0) {
-    memoryBytes = ExecutionOptions.SORTED_INPUTS_MEMORY.defaultValue().getBytes();
-}
-// convert to kibibytes
-return (int) Math.max(1, memoryBytes >> 10);
+return (int)
+        Math.max(
+                1, configuration.get(ExecutionOptions.SORTED_INPUTS_MEMORY).getKibiBytes());
```

```diff
@@ -273,7 +273,8 @@ public ResourceSpec getPreferredResources() {
      * @param managedMemoryUseCase The use case that this transformation declares needing managed
      *     memory for.
      * @param weight Use-case-specific weights for this transformation. Used for sharing managed
-     *     memory across transformations for OPERATOR scope use cases.
+     *     memory across transformations for OPERATOR scope use cases. For consistency, the APIs
+     *     declare their weights as a kibibyte value.
```
Contributor


I'm not entirely sure that 1 KB is a good unit for weights in all use cases. E.g., our internal Gemini state backend also uses managed memory on a per-operator basis, and it sets the weight to the number of states the operator maintains.

Not saying we should change things for a special internal use case. Admittedly, having a consistent unit for all use cases is not causing problems at the moment, because there's currently only one operator-scope managed memory use case. But thinking of future flexibility, maybe we should avoid unnecessarily strong assumptions. What we really need here is a consistent weight unit for one specific managed memory use case, rather than for all use cases.

In particular, I'd suggest adding the 1 KB definition of the weight to the JavaDoc of ManagedMemoryUseCase#OPERATOR, and adding a pointer here to remind callers to check the weight definition of the declared use case.
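
The JavaDoc change suggested above could look roughly like the following. This is a self-contained model loosely based on Flink's ManagedMemoryUseCase enum; the exact wording of the JavaDoc and the enum layout are assumptions for illustration, not the merged text:

```java
// Self-contained sketch; modeled on Flink's ManagedMemoryUseCase, but the
// JavaDoc wording here is a hypothetical illustration.
enum ManagedMemoryUseCase {
    /**
     * Managed memory scoped to a single operator.
     *
     * <p>Weights declared for this use case are interpreted as kibibyte
     * values, so that all APIs share a consistent unit.
     */
    OPERATOR(Scope.OPERATOR),

    /** Managed memory shared by all operators of a slot (e.g. state backends). */
    STATE_BACKEND(Scope.SLOT);

    final Scope scope;

    ManagedMemoryUseCase(Scope scope) {
        this.scope = scope;
    }

    enum Scope {
        OPERATOR,
        SLOT
    }
}
```

Callers declaring a weight would then be pointed from the Transformation JavaDoc to the weight definition of the use case they declare.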

Contributor Author


@xintongsong I'm also not happy with this PR. It is only the minimal solution to make DataStream API batch mode compatible with Table API's batch mode. A KB unit also has the downside of a potential overflow if the value is set too high. Also, casting MemorySize.getKibiBytes() to int is not very nice. Shall we change this to MB instead?

Contributor


I'm ok with either KB or MB.

  • For KB, it needs 2^31 total weight units to get an overflow. That's roughly 2 TiB per slot, which is not a likely case.
  • For MB, I also don't see much chance that an operator needs to tune its memory more fine-grained than 1 MB.
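
Checking the arithmetic behind the KB bound above: with kibibyte weights summed in a signed int, an overflow requires 2^31 weight units, i.e. 2^41 bytes, which is about 2 TiB of managed memory in a single slot. A minimal sketch (class and method names are illustrative):

```java
// Illustrative computation of the managed-memory volume at which a signed
// int sum of kibibyte weights would overflow.
class OverflowBound {
    // 2^31 weight units overflow a signed 32-bit int.
    static final long OVERFLOW_WEIGHTS = 1L << 31;

    /** Managed memory in bytes represented by 2^31 kibibyte weights. */
    static long bytesAtOverflow() {
        return OVERFLOW_WEIGHTS << 10; // each weight unit is 1024 bytes
    }
}
```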

@twalthr
Contributor Author

twalthr commented Aug 11, 2021

@xintongsong I updated the PR. Let me know what you think.

@xintongsong
Contributor

@twalthr Azure failures seem to be related.

@twalthr
Contributor Author

twalthr commented Aug 12, 2021

@xintongsong the build should succeed now. I reverted some of the changes. In the end the PR is rather minimal.

Contributor

@xintongsong xintongsong left a comment


Thanks @twalthr. LGTM. +1 for merging once AZP gives green light.

twalthr added a commit to twalthr/flink that referenced this pull request Aug 12, 2021
twalthr added a commit to twalthr/flink that referenced this pull request Aug 12, 2021
@twalthr twalthr closed this in 3e62364 Aug 12, 2021
hhkkxxx133 pushed a commit to hhkkxxx133/flink that referenced this pull request Aug 25, 2021