
[FLINK-21915] Optimize Execution#finishPartitionsAndUpdateConsumers #15382

Closed
wants to merge 1 commit

Conversation

Thesharing (Contributor)

What is the purpose of the change

Based on the scheduler benchmark PartitionReleaseInBatchJobBenchmark introduced in FLINK-20612, we found another procedure with O(N^2) computational complexity: Execution#finishPartitionsAndUpdateConsumers.

Once an execution is finished, it finishes all its BLOCKING partitions and updates the partition info for all consumer vertices. The procedure can be illustrated by the following pseudo code:

for all Execution in ExecutionGraph:
  for all produced IntermediateResultPartition of the Execution:
    for all consumer ExecutionVertex of the IntermediateResultPartition:
      update or cache partition info

This procedure has O(N^2) complexity in total.

Based on FLINK-21326, consumed partitions are grouped if they are connected to the same consumer vertices. Therefore, we can update the partition info of an entire ConsumedPartitionGroup in one batch, rather than one partition at a time. This decreases the complexity from O(N^2) to O(N).
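
To make the difference concrete, here is a small self-contained Java toy. It is only an illustration, not the actual Flink implementation: the String values stand in for IntermediateResultPartition, ConsumerVertexGroup, and PartitionInfo, and all method names are invented for this sketch.

import java.util.List;
import java.util.Map;

// Toy illustration of the optimization idea, not the actual Flink code.
public class PartitionUpdateSketch {

    // Before: every finished partition notifies every consumer vertex individually,
    // i.e. O(partitions * consumers) update calls for an all-to-all edge pattern.
    static void updateOneByOne(Map<String, List<String>> consumersPerPartition) {
        consumersPerPartition.forEach(
                (partition, consumers) ->
                        consumers.forEach(consumer -> send(consumer, List.of(partition))));
    }

    // After: the partitions shared by one consumer group are sent to each consumer in a
    // single batch, so the number of update calls grows linearly with the consumers.
    static void updateInBatch(List<String> consumerGroup, List<String> partitionInfos) {
        consumerGroup.forEach(consumer -> send(consumer, partitionInfos));
    }

    static void send(String consumerVertex, List<String> partitionInfos) {
        System.out.println(consumerVertex + " <- " + partitionInfos);
    }

    public static void main(String[] args) {
        updateOneByOne(Map.of("p0", List.of("v0", "v1"), "p1", List.of("v0", "v1")));
        updateInBatch(List.of("v0", "v1"), List.of("p0", "p1"));
    }
}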

Brief change log

  • Make Execution#updatePartitionConsumers update the partition info of IntermediateResultPartitions in batch
  • Optimize Execution#finishPartitionsAndUpdateConsumers so that it first calculates the connections between ConsumerVertexGroups and IntermediateResultPartitions, and then updates the partition info for each pair (sketched below)
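
A rough sketch of the second item, again with simplified stand-in Strings rather than the real ConsumerVertexGroup and IntermediateResultPartition classes: the connections between consumer groups and finished partitions are computed first, and the update then happens once per (group, partitions) pair.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only; not taken from the actual change.
public class GroupedFinishSketch {
    public static void main(String[] args) {
        // Stand-in input: each newly finished partition and the consumer group that reads it.
        Map<String, String> consumerGroupOfPartition = new HashMap<>();
        consumerGroupOfPartition.put("partition-0", "group-A");
        consumerGroupOfPartition.put("partition-1", "group-A");
        consumerGroupOfPartition.put("partition-2", "group-B");

        // Step 1: collect the finished partitions per consumer group.
        Map<String, List<String>> partitionsPerGroup = new HashMap<>();
        consumerGroupOfPartition.forEach(
                (partition, group) ->
                        partitionsPerGroup
                                .computeIfAbsent(group, g -> new ArrayList<>())
                                .add(partition));

        // Step 2: one batched update per (consumer group, partition list) pair.
        partitionsPerGroup.forEach(
                (group, partitions) ->
                        System.out.println("update " + group + " with " + partitions));
    }
}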

Verifying this change

Since this optimization does not change the original logic of Execution#finishPartitionsAndUpdateConsumers, we believe that this change is already covered by existing tests, such as ExecutionPartitionLifecycleTest and DefaultExecutionGraphDeploymentTest.

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (yes / no)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (yes / no)
  • The serializers: (yes / no / don't know)
  • The runtime per-record code paths (performance sensitive): (yes / no / don't know)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't know)
  • The S3 file system connector: (yes / no / don't know)

Documentation

  • Does this pull request introduce a new feature? (yes / no)
  • If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)


flinkbot commented Mar 26, 2021

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit 0e09d89 (Sat Aug 28 13:07:31 UTC 2021)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!
  • This pull request references an unassigned Jira ticket. According to the code contribution guide, tickets need to be assigned before starting with the implementation work.

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands

The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier


flinkbot commented Mar 26, 2021

CI report:

Bot commands

The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build
  • @flinkbot run azure re-run the last Azure build

zhuzhurk (Contributor) left a comment

@Thesharing Could you share the numbers of the improvement brought by this change?

for (IntermediateResultPartition finishedPartition : newlyFinishedResults) {
    final IntermediateResultPartition[] allPartitionsOfNewlyFinishedResults =
            finishedPartition.getIntermediateResult().getPartitions();

    for (IntermediateResultPartition partition : allPartitionsOfNewlyFinishedResults) {
        updatePartitionConsumers(partition);
        for (ConsumerVertexGroup consumerVertexGroup : partition.getConsumers()) {
It may be out of the scope of this PR. But I think it's better to rename partition.getConsumers() to partition.getConsumerGroups(). Also ExecutionVertex#getConsumedPartitions(...) should be renamed to ExecutionVertex#getConsumedPartitionGroups(...). This will make their invocations easier to understand.

@@ -486,8 +487,8 @@ void notifyPartitionDataAvailable(ResultPartitionID partitionId) {
        partition.markDataProduced();
    }

    void cachePartitionInfo(PartitionInfo partitionInfo) {
        getCurrentExecutionAttempt().cachePartitionInfo(partitionInfo);
    void cachePartitionInfo(Collection<PartitionInfo> partitionInfos) {
cachePartitionInfo -> cachePartitionInfos

@@ -1025,8 +1035,8 @@ private void finishCancellation(boolean releasePartitions) {
        handlePartitionCleanup(releasePartitions, releasePartitions);
    }

    void cachePartitionInfo(PartitionInfo partitionInfo) {
        partitionInfos.add(partitionInfo);
    void cachePartitionInfo(Collection<PartitionInfo> partitionInfos) {
cachePartitionInfo -> cachePartitionInfos

Thesharing (Contributor, Author)

Due to FLINK-22017, blocking partitions are now individually consumable as soon as each of them finishes, so finishPartitionsAndUpdateConsumers will be called every time a single partition is finished, and the proposed optimization is no longer valid. Furthermore, this method is only called when there are intra-region edges in the graph, in which case the downstream vertices are already DEPLOYING/RUNNING when the upstream vertices are FINISHED. That scenario is rare. Thus, for now we just close this PR. If there's a new idea about it, we'd like to reopen it.

Thesharing closed this Aug 3, 2021