
[FLINK-18832][datastream] Add compatible check for blocking partition with buffer timeout #13209

Merged: 2 commits into apache:master on Sep 7, 2020

Conversation

zhijiangW (Contributor)

What is the purpose of the change

By design, the current BoundedBlockingSubpartition in the runtime does not support a positive flush (buffer) timeout, given the current scheduler strategy. It is therefore useful to check this compatibility during job graph generation and give users a helpful message, which avoids potential concurrency issues in the runtime stack.
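To make the intent concrete, here is a minimal, self-contained sketch of the kind of validation pass this adds. All names below (BlockingPartitionCompatibilityCheck, EdgeConfig, checkBlockingPartitionCompatibility) are hypothetical stand-ins for illustration and are not the actual StreamingJobGraphGenerator API.

```java
import java.util.Arrays;
import java.util.List;

/**
 * Hypothetical sketch of the compatibility check: reject any blocking (batch) edge
 * whose producer is configured with a positive buffer (flush) timeout.
 */
public class BlockingPartitionCompatibilityCheck {

    /** Simplified stand-in for a stream edge plus the producer's buffer timeout. */
    public static final class EdgeConfig {
        final String edgeName;
        final boolean isBlockingPartition;
        final long bufferTimeoutMillis;

        public EdgeConfig(String edgeName, boolean isBlockingPartition, long bufferTimeoutMillis) {
            this.edgeName = edgeName;
            this.isBlockingPartition = isBlockingPartition;
            this.bufferTimeoutMillis = bufferTimeoutMillis;
        }
    }

    /** Fails job graph generation early instead of hitting concurrency issues at runtime. */
    public static void checkBlockingPartitionCompatibility(List<EdgeConfig> edges) {
        for (EdgeConfig edge : edges) {
            if (edge.isBlockingPartition && edge.bufferTimeoutMillis > 0) {
                throw new UnsupportedOperationException(
                        "Blocking partition '" + edge.edgeName + "' does not support a positive "
                                + "buffer timeout (" + edge.bufferTimeoutMillis + " ms). "
                                + "Disable the buffer timeout or use a pipelined shuffle mode.");
            }
        }
    }

    public static void main(String[] args) {
        // A pipelined edge may keep a positive flush timeout.
        checkBlockingPartitionCompatibility(
                Arrays.asList(new EdgeConfig("map -> sink", false, 100)));
        // The same timeout on a blocking edge is rejected at graph generation time.
        checkBlockingPartitionCompatibility(
                Arrays.asList(new EdgeConfig("map -> sink", true, 100)));
    }
}
```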

Brief change log

  • Remove default buffer timeout setting in StreamGraphGenerator
  • Add checkCompatible method in StreamingJobGraphGenerator

Verifying this change

Added the new tests testNormalShuffleModeWithBufferTimeout and testConflictShuffleModeWithBufferTimeout to verify the behavior (a simplified sketch of their intent follows).
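For illustration only, these JUnit-style tests mirror the intent of those two cases; they exercise the simplified checker sketched above rather than the real StreamingJobGraphGenerator and its test harness.

```java
import static org.junit.Assert.assertThrows;

import java.util.Collections;
import org.junit.Test;

/** Hypothetical tests mirroring the normal and conflicting shuffle-mode cases. */
public class BlockingPartitionCompatibilityCheckTest {

    @Test
    public void pipelinedShuffleWithBufferTimeoutIsAccepted() {
        // Pipelined (non-blocking) edges may keep a positive flush timeout.
        BlockingPartitionCompatibilityCheck.checkBlockingPartitionCompatibility(
                Collections.singletonList(
                        new BlockingPartitionCompatibilityCheck.EdgeConfig("map -> sink", false, 100)));
    }

    @Test
    public void blockingShuffleWithBufferTimeoutIsRejected() {
        // A blocking (batch) edge combined with a positive flush timeout must be rejected.
        assertThrows(
                UnsupportedOperationException.class,
                () -> BlockingPartitionCompatibilityCheck.checkBlockingPartitionCompatibility(
                        Collections.singletonList(
                                new BlockingPartitionCompatibilityCheck.EdgeConfig("map -> sink", true, 100))));
    }
}
```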

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (yes / no)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (yes / no)
  • The serializers: (yes / no / don't know)
  • The runtime per-record code paths (performance sensitive): (yes / no / don't know)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't know)
  • The S3 file system connector: (yes / no / don't know)

Documentation

  • Does this pull request introduce a new feature? (yes / no)
  • If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)

@flinkbot (Collaborator)

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit 92b731b (Thu Aug 20 15:32:56 UTC 2020)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@flinkbot (Collaborator)

flinkbot commented Aug 20, 2020

CI report:

Bot commands

The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build
  • @flinkbot run azure re-run the last Azure build

@rkhachatryan (Contributor) left a comment


Thanks for the PR @zhijiangW
Please see my comments below.

zhijiangW force-pushed the batchFlush branch 13 times, most recently from 1133139 to 5c5e70c on September 3, 2020 10:35
@zhijiangW (Contributor, Author)

@rkhachatryan, thanks for the review; I have updated the code to address the issues.

@rkhachatryan (Contributor) left a comment


Thanks for updating the PR @zhijiangW
LGTM in general (approving).
I commented on some nits.

zhijiangW added a commit to zhijiangW/flink that referenced this pull request Sep 7, 2020
[FLINK-18832][datastream] Add compatible check for blocking partition with buffer timeout

There is no need to enable the buffer timeout for batch jobs, since the downstream can only consume data once the upstream finishes. Furthermore, the current implementation of BoundedBlockingSubpartition does not account for the concurrency issues that a flusher thread would introduce if the buffer timeout were enabled. It is therefore useful to check this compatibility in advance, during job graph generation, and give users a helpful message.

This closes apache#13209.
zhijiangW merged commit 13e0b35 into apache:master on Sep 7, 2020