[FLINK-14234][runtime] Notifies all kinds of consumable partitions to SchedulingStrategy #11691
Conversation
Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. Automated checks: last check on commit 2b5a49d (Wed Apr 15 11:39:56 UTC 2020).
```java
final Set<IntermediateResultPartitionID> consumableResultPartitions = new HashSet<>();
for (IntermediateResultPartition resultPartition : executionVertex.getProducedPartitions().values()) {
	if (resultPartition.getResultType().isBlocking() && resultPartition.isConsumable()) {
```
We currently define an IntermediateResultPartition to be consumable if all of the partitions in the intermediate result are consumable. I think one of the reasons LazyFromSourcesSchedulingStrategy is so bloated is that we wanted to enable the strategy to (re-)define consumability. That is, a SchedulingStrategy could decide to schedule downstream operators as soon as one partition is finished (instead of waiting for the entire intermediate result). Now my questions are:
- Is it already possible to implement the above requirement in Flink, or does something impede us from consuming partitions of an incomplete intermediate result?
- How likely is it that we have to implement the above requirement? If we merge this PR and later change the contract of `notifyPartitionConsumable()`, `LazyFromSourcesSchedulingStrategy` will break. Can we postpone the decision whether `LazyFromSourcesSchedulingStrategy` should be simplified?
- What should be the behavior (consumability of partitions) for Pipelined Region Scheduling?
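To make the two definitions concrete, here is a minimal sketch (toy types and method names, not Flink's actual API) contrasting the current all-or-nothing rule, where a blocking partition only becomes consumable once every partition of its intermediate result has finished, with the per-partition alternative raised above:

```java
import java.util.Arrays;
import java.util.List;

public class ConsumabilityRules {

    // Current rule: the result (and thus each of its blocking partitions) is
    // consumable only when ALL partitions of the intermediate result finished.
    static boolean resultLevelConsumable(List<Boolean> partitionFinished) {
        return partitionFinished.stream().allMatch(f -> f);
    }

    // Alternative rule discussed above: a blocking partition is consumable as
    // soon as that single partition finishes.
    static boolean partitionLevelConsumable(boolean partitionFinished) {
        return partitionFinished;
    }

    public static void main(String[] args) {
        List<Boolean> partiallyFinished = Arrays.asList(true, false, true);
        System.out.println(resultLevelConsumable(partiallyFinished)); // false
        System.out.println(partitionLevelConsumable(true));           // true
    }
}
```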
- I think a blocking partition can be consumed once it finishes. Actually we already refined it like this in Blink. Waiting for the entire intermediate result to finish is not a must. I feel that it was so because Flink had wanted to schedule batch jobs stage by stage, i.e. finish one JobVertex and then schedule its consumer JobVertex.
- If we later redefine blocking partition consumability, I think it's better to do it in SchedulerNG instead of in the SchedulingStrategy. This would make the concepts of PIPELINED and BLOCKING much clearer: a PIPELINED result partition means it can be consumed once any data has been produced in it, while a BLOCKING result partition means it can be consumed only after all its data has been produced. The LazyFromSourcesSchedulingStrategy would not break this way. The LazyFromSourcesSchedulingStrategy tests are really complex at the moment and some tests behave strangely. For example, all pipelined partitions are consumable initially while all blocking partitions are not. It has been a pain each time I had to touch them, so I hope we can simplify it sooner.
- Each time a partition becomes consumable, PipelinedRegionSchedulingStrategy finds all its consumer regions and schedules those whose inputs are all consumable.
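The region-scheduling behavior described in the last point can be sketched as follows. This uses hypothetical toy types (Region, onPartitionConsumable), not Flink's real PipelinedRegionSchedulingStrategy interfaces: when a partition becomes consumable, look up its consumer regions and schedule each region whose input partitions are all consumable.

```java
import java.util.*;

public class RegionSchedulingSketch {

    // Toy stand-in for a pipelined region: a name plus the partitions it consumes.
    static class Region {
        final String name;
        final Set<String> inputPartitions;
        Region(String name, Set<String> inputPartitions) {
            this.name = name;
            this.inputPartitions = inputPartitions;
        }
    }

    // Called when `partition` becomes consumable; returns the regions that can
    // now be scheduled (all of their inputs are consumable).
    static List<String> onPartitionConsumable(
            String partition,
            Set<String> consumable,
            Map<String, List<Region>> consumersByPartition) {
        consumable.add(partition);
        List<String> scheduled = new ArrayList<>();
        for (Region region : consumersByPartition.getOrDefault(partition, Collections.emptyList())) {
            if (consumable.containsAll(region.inputPartitions)) {
                scheduled.add(region.name); // all inputs consumable -> schedule
            }
        }
        return scheduled;
    }

    public static void main(String[] args) {
        Region sink = new Region("sink", new HashSet<>(Arrays.asList("p1", "p2")));
        Map<String, List<Region>> consumers = new HashMap<>();
        consumers.put("p1", Collections.singletonList(sink));
        consumers.put("p2", Collections.singletonList(sink));

        Set<String> consumable = new HashSet<>();
        System.out.println(onPartitionConsumable("p1", consumable, consumers)); // []
        System.out.println(onPartitionConsumable("p2", consumable, consumers)); // [sink]
    }
}
```

The sink region is scheduled only on the second notification, once both of its input partitions are consumable.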
…ceives consumable partitions in bulk. This is needed to improve the performance of consumer input checking (see FLINK-14735) when we later change all consumer scheduling to be triggered by consumable partitions. When a finished vertex finishes an ALL-to-ALL intermediate result, all the partitions become consumable and they all share the same consumers. If we notify the scheduling strategy of consumable partitions one by one, the consumer vertices can be checked multiple times for input status. This is quadratic complexity (O(V^2), where V is the number of vertices) and can be very slow for large-scale jobs. Passing in consumable partitions in bulk makes it possible to deduplicate consumer vertices, reducing the complexity to O(V).
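The deduplication argument above can be illustrated with a small sketch (hypothetical helper names, not the PR's actual code): with an ALL-to-ALL result, one-by-one notification re-checks the same consumer set once per partition, while a bulk notification can collect the distinct consumer vertices into a set and check each one once.

```java
import java.util.*;

public class BulkNotificationSketch {

    // One-by-one notification: every partition triggers an input-status check
    // of the same shared consumer set, so checks grow as partitions * consumers.
    static int checksOneByOne(int partitionCount, List<String> sharedConsumers) {
        int checks = 0;
        for (int p = 0; p < partitionCount; p++) {
            checks += sharedConsumers.size(); // same consumers re-checked each time
        }
        return checks;
    }

    // Bulk notification: deduplicate the consumers of the whole bulk first,
    // then check each distinct consumer vertex exactly once.
    static int checksInBulk(List<String> sharedConsumers) {
        Set<String> distinctConsumers = new HashSet<>(sharedConsumers);
        return distinctConsumers.size();
    }

    public static void main(String[] args) {
        // 4 partitions of an ALL-to-ALL result, each with the same 4 consumers
        List<String> consumers = Arrays.asList("v1", "v2", "v3", "v4");
        System.out.println(checksOneByOne(4, consumers)); // 16, i.e. O(V^2)
        System.out.println(checksInBulk(consumers));      // 4,  i.e. O(V)
    }
}
```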
… SchedulingStrategy
…hedule consumer vertices on consumable result partitions
88a83bd to
2b5a49d
closing since this change is not a must at the moment
What is the purpose of the change
This PR notifies the SchedulingStrategy of all kinds of consumable partitions. In this way, LazyFromSourcesSchedulingStrategy can be simplified a lot, since it no longer needs to maintain result status itself or be aware of result partition types.
Brief change log
Verifying this change
Does this pull request potentially affect one of the following parts:
@Public(Evolving): (yes / no)
Documentation