Conversation

@zhuzhurk (Contributor) commented Apr 9, 2020

What is the purpose of the change

This PR notifies the SchedulingStrategy of all kinds of consumable partitions. This way, the LazyFromSourcesSchedulingStrategy can be simplified a lot, since it no longer needs to maintain result status itself or be aware of result partition types.

Brief change log

  • Notified the SchedulingStrategy of all kinds of consumable partitions
  • Simplified LazyFromSourcesSchedulingStrategy and InputDependencyConstraintChecker
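The notification flow described above can be sketched as follows (a toy model; the class and method names are illustrative assumptions, not Flink's actual SchedulingStrategy API):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the PR's idea (names are illustrative, not Flink's real API):
// the scheduler forwards EVERY partition that becomes consumable -- blocking
// as well as pipelined -- so the strategy needs no result-status bookkeeping.
class RecordingStrategy {
    final List<String> notified = new ArrayList<>();

    // Counterpart of a scheduling strategy's consumable-partition callback.
    void onPartitionConsumable(String partitionId) {
        notified.add(partitionId);
    }
}

class Scheduler {
    private final RecordingStrategy strategy;

    Scheduler(RecordingStrategy strategy) {
        this.strategy = strategy;
    }

    // Called for both PIPELINED and BLOCKING partitions once they are consumable.
    void markConsumable(String partitionId) {
        strategy.onPartitionConsumable(partitionId);
    }
}
```

Because the strategy is told about every consumable partition regardless of its type, it can react uniformly instead of tracking result status per partition type itself.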

Verifying this change

  • Added unit tests for consumable partition notification
  • Updated tests of LazyFromSourcesSchedulingStrategy and InputDependencyConstraintChecker

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (yes / no)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (yes / no)
  • The serializers: (yes / no / don't know)
  • The runtime per-record code paths (performance sensitive): (yes / no / don't know)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't know)
  • The S3 file system connector: (yes / no / don't know)

Documentation

  • Does this pull request introduce a new feature? (yes / no)
  • If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)

@zhuzhurk zhuzhurk requested a review from GJL April 9, 2020 12:16
@flinkbot (Collaborator) commented Apr 9, 2020

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit 2b5a49d (Wed Apr 15 11:39:56 UTC 2020)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.

Details
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands

The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@GJL GJL self-assigned this Apr 9, 2020
@flinkbot (Collaborator) commented Apr 9, 2020

CI report:

Bot commands

The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build
  • @flinkbot run azure re-run the last Azure build


final Set<IntermediateResultPartitionID> consumableResultPartitions = new HashSet<>();
for (IntermediateResultPartition resultPartition : executionVertex.getProducedPartitions().values()) {
    if (resultPartition.getResultType().isBlocking() && resultPartition.isConsumable()) {
        consumableResultPartitions.add(resultPartition.getPartitionId());  // loop body and closing braces reconstructed from context
    }
}
@GJL (Member) commented Apr 9, 2020

We currently define an IntermediateResultPartition to be consumable if all of the partitions in the intermediate result are consumable. I think one of the reasons LazyFromSourcesSchedulingStrategy is so bloated is that we wanted to enable the strategy to (re-)define consumability. That is, a SchedulingStrategy could decide to schedule downstream operators as soon as one partition is finished (instead of waiting for the entire intermediate result). Now my questions are:

  1. Is it already possible to implement the above requirement in Flink, or does something impede us from consuming partitions of an incomplete intermediate result?
  2. How likely is it that we have to implement the above requirement? If we merge this PR and later change the contract of notifyPartitionConsumable(), LazyFromSourcesSchedulingStrategy will break. Can we postpone the decision whether LazyFromSourcesSchedulingStrategy should be simplified?
  3. What should be the behavior (consumability of partitions) for Pipelined Region Scheduling?

@zhuzhurk (Contributor, Author) commented Apr 10, 2020

  1. I think a blocking partition can be consumed once it finishes. Actually, we already refined it like this in Blink.
    Waiting for the entire intermediate result to finish is not a must. I feel it was done this way because Flink wanted to schedule batch jobs stage by stage, i.e. finishing one JobVertex and then scheduling its consumer JobVertex.

  2. If we later redefine blocking partition consumability, I think it's better to do it in SchedulerNG instead of in the SchedulingStrategy. This would make the concepts of PIPELINED and BLOCKING much clearer: a PIPELINED result partition means it can be consumed once any data has been produced in it, while a BLOCKING result partition means it can be consumed only after all its data has been produced. The LazyFromSourcesSchedulingStrategy would not break in this way.
    The LazyFromSourcesSchedulingStrategy tests are really complex at the moment, and some of them behave strangely. For example, all pipelined partitions are consumable initially while all blocking partitions are not. It has been a pain each time I had to touch them, so I hope we can simplify it sooner rather than later.

  3. Each time a partition becomes consumable, PipelinedRegionSchedulingStrategy finds all its consumer regions and schedules those whose inputs are all consumable.
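Point 3 can be sketched as follows (a minimal toy model; Region, RegionScheduler, and all field names are illustrative assumptions, not Flink's actual PipelinedRegionSchedulingStrategy):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// A region that consumes a set of input partitions (illustrative names).
class Region {
    final String name;
    final Set<String> inputPartitions;

    Region(String name, Set<String> inputPartitions) {
        this.name = name;
        this.inputPartitions = inputPartitions;
    }
}

class RegionScheduler {
    private final Set<String> consumable = new HashSet<>();
    private final Map<String, List<Region>> consumersByPartition = new HashMap<>();
    final List<String> scheduled = new ArrayList<>();

    void register(Region region) {
        for (String p : region.inputPartitions) {
            consumersByPartition.computeIfAbsent(p, k -> new ArrayList<>()).add(region);
        }
    }

    // Each time a partition becomes consumable, look only at its consumer
    // regions and schedule those whose inputs are now all consumable.
    void onPartitionConsumable(String partitionId) {
        consumable.add(partitionId);
        for (Region r : consumersByPartition.getOrDefault(partitionId, List.of())) {
            if (consumable.containsAll(r.inputPartitions) && !scheduled.contains(r.name)) {
                scheduled.add(r.name);
            }
        }
    }
}
```

A region is scheduled exactly when its last input partition becomes consumable; earlier notifications only update the consumable set.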

…ceives consumable partitions in bulks

This is needed to improve the performance of the consumer input check (see FLINK-14735) once we change all consumer scheduling to be triggered by consumable partitions.
When a vertex finishes an ALL-to-ALL intermediate result, all of its partitions become consumable, and each of them has the same consumers. If we notify the scheduling strategy of the consumable partitions one by one, the consumer vertices can be checked multiple times for input status. This is quadratic (O(V^2), where V is the number of vertices) and can be very slow for large-scale jobs.
Passing consumable partitions in bulk makes it possible to deduplicate consumer vertices. This way, the complexity can be reduced to O(V).
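The deduplication argument can be sketched as follows (a toy model; BulkNotifier and its names are illustrative, not the PR's actual code):

```java
import java.util.Collection;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy illustration of the bulk-notification idea (names are illustrative).
// Notifying N partitions one by one re-runs the input-status check for the
// same consumer vertices up to N times; notifying them as one bulk lets us
// collect the distinct consumers first and check each of them exactly once.
class BulkNotifier {
    int inputChecks = 0;

    void notifyBulk(Map<String, List<String>> consumersByPartition,
                    Collection<String> consumablePartitions) {
        Set<String> distinctConsumers = new LinkedHashSet<>();
        for (String partition : consumablePartitions) {
            distinctConsumers.addAll(
                consumersByPartition.getOrDefault(partition, List.of()));
        }
        // One input-status check per distinct consumer vertex, not per edge.
        inputChecks += distinctConsumers.size();
    }
}
```

For an ALL-to-ALL result where every partition has the same consumers, the bulk path performs one check per consumer vertex instead of one per partition-consumer pair.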
…hedule consumer vertices on consumable result partitions
@zhuzhurk zhuzhurk force-pushed the FLINK_14234_notify_consumable branch from 88a83bd to 2b5a49d Compare April 14, 2020 09:54
@zhuzhurk (Contributor, Author) commented:
Closing, since this change is not a must at the moment.

@zhuzhurk zhuzhurk closed this Apr 16, 2020
4 participants