
[FLINK-23457][network] The upstream sends the buffer of the right size for broadcast case #17024

Closed · wants to merge 2 commits

Conversation

lometheus
Contributor

What is the purpose of the change

This PR applies the new, dynamically calculated buffer size in the broadcast case.
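
In outline, the idea (a condensed sketch based on the diff and review discussion below, not the exact merged code) is to let each subpartition report its desirable buffer size when the broadcast buffer consumer is added, take the minimum across all subpartitions, and trim the broadcast buffer to that size. Here, consumer is the broadcast BufferConsumer inside createBroadcastBufferConsumers:

    int desirableBufferSize = Integer.MAX_VALUE;
    for (ResultSubpartition subpartition : subpartitions) {
        // each add() reports the buffer size this subpartition currently wants
        int subPartitionBufferSize = subpartition.add(consumer.copy(), partialRecordBytes);
        desirableBufferSize = Math.min(desirableBufferSize, subPartitionBufferSize);
    }
    // trim the broadcast buffer to the smallest size any subpartition asked for
    buffer.trim(desirableBufferSize);

The handling of failed adds (negative return values) was refined during the review, as discussed below.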

Brief change log

  • Support creating broadcast buffer consumers with the desired size

Verifying this change

  • Added unit tests for each class whose behaviour was changed

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (no)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
  • The serializers: (no)
  • The runtime per-record code paths (performance sensitive): (no)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
  • The S3 file system connector: (no)

Documentation

  • Does this pull request introduce a new feature? (no)
  • If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)

@lometheus lometheus changed the title FLINK-23973 The upstream sends the buffer of the right size for broadcast case [FLINK-23973] The upstream sends the buffer of the right size for broadcast case Aug 27, 2021
@lometheus
Contributor Author

cc @pnowojski

@flinkbot
Collaborator

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit e2ca022 (Fri Aug 27 12:05:36 UTC 2021)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!
  • This pull request references an unassigned Jira ticket. According to the code contribution guide, tickets need to be assigned before starting with the implementation work.

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@lometheus lometheus changed the title [FLINK-23973] The upstream sends the buffer of the right size for broadcast case [FLINK-23973](https://issues.apache.org/jira/browse/FLINK-23457). The upstream sends the buffer of the right size for broadcast case Aug 27, 2021
@lometheus lometheus changed the title [FLINK-23973](https://issues.apache.org/jira/browse/FLINK-23457). The upstream sends the buffer of the right size for broadcast case [FLINK-23973](https://issues.apache.org/jira/browse/FLINK-23457) The upstream sends the buffer of the right size for broadcast case Aug 27, 2021
@lometheus lometheus changed the title [FLINK-23973](https://issues.apache.org/jira/browse/FLINK-23457) The upstream sends the buffer of the right size for broadcast case [FLINK-23973][https://issues.apache.org/jira/browse/FLINK-23457] The upstream sends the buffer of the right size for broadcast case Aug 27, 2021
@lometheus lometheus changed the title [FLINK-23973][https://issues.apache.org/jira/browse/FLINK-23457] The upstream sends the buffer of the right size for broadcast case [FLINK-23973](https://issues.apache.org/jira/browse/FLINK-23457) The upstream sends the buffer of the right size for broadcast case Aug 27, 2021
@lometheus lometheus changed the title [FLINK-23973](https://issues.apache.org/jira/browse/FLINK-23457) The upstream sends the buffer of the right size for broadcast case [FLINK-23973] The upstream sends the buffer of the right size for broadcast case Aug 27, 2021
@flinkbot
Collaborator

flinkbot commented Aug 27, 2021

CI report:

Bot commands

The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build
  • @flinkbot run azure re-run the last Azure build

@pnowojski pnowojski changed the title [FLINK-23973] The upstream sends the buffer of the right size for broadcast case [FLINK-23457] The upstream sends the buffer of the right size for broadcast case Aug 27, 2021
@pnowojski pnowojski changed the title [FLINK-23457] The upstream sends the buffer of the right size for broadcast case [FLINK-23457][network] The upstream sends the buffer of the right size for broadcast case Aug 27, 2021
@lometheus
Contributor Author

@flinkbot run azure

@lometheus
Contributor Author

I've updated, please have a look again. Thanks @pnowojski

@akalash akalash (Contributor) left a comment

@lometheus, thanks for your changes. I left a couple of comments in the PR. Also, a reminder that the correct format for the commit message is the same as the one you use for the PR title ([FLINK-task][component] Comment), so don't forget to update it.

for (ResultSubpartition subpartition : subpartitions) {
    subpartition.add(consumer.copy(), partialRecordBytes);
    int subPartitionBufferSize = subpartition.add(consumer.copy(), partialRecordBytes);
@akalash (Contributor):

If subpartition#add fails, it returns a negative value (-1), so we should take this case into account and, for example, ignore such a value.

@lometheus (Contributor Author):

Thank you very much for the review. I added special handling for negative values.

@akalash (Contributor):

I believe it is more correct to ignore the value when calculating the desirable buffer size, rather than ignoring the whole result. I mean:

if (subPartitionBufferSize > 0) {
   desirableBufferSize = Math.min(desirableBufferSize, subPartitionBufferSize);
}

I think this is better because if one of the subpartitions fails, we are still able to send the data to the other ones. As I understand it, they will all eventually be closed if at least one was closed, so maybe it is not so important, but it is still better to follow the current semantics, which do not forbid sending data to subpartitions even if one of them is closed.
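
Integrated into the loop from the diff above, the suggestion reads like this (a sketch, not necessarily the merged code):

    int desirableBufferSize = Integer.MAX_VALUE;
    for (ResultSubpartition subpartition : subpartitions) {
        int subPartitionBufferSize = subpartition.add(consumer.copy(), partialRecordBytes);
        // -1 signals a failed add; skip it so one closed subpartition does not
        // distort the desirable size computed for the healthy ones
        if (subPartitionBufferSize > 0) {
            desirableBufferSize = Math.min(desirableBufferSize, subPartitionBufferSize);
        }
    }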


// then: The buffer is less than or equal to the configured size.
assertEquals(5, subpartition0.pollBuffer().buffer().getSize());
assertEquals(5, subpartition1.pollBuffer().buffer().getSize());
@akalash (Contributor):

Please take a more careful look at this test (testDifferentBufferSizeForSubpartitions); several scenarios are tested there (send a buffer less than/greater than/equal to the buffer size, then change the buffer size and send again), but you have only one test case, which is not enough. So please increase your test coverage.
Also, I suggest extracting your test into a separate method like testDynamicBufferSizeForBroadcast or something similar.

@lometheus (Contributor Author):

I added testDynamicBufferSizeForBroadcast for the less/greater/equal cases.
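
A hedged sketch of what such a test might look like (the setup is assumed, not taken from the merged test; only the pollBuffer()/getSize() assertion pattern and the trimming semantics come from this PR):

    @Test
    public void testDynamicBufferSizeForBroadcast() throws Exception {
        // given: a broadcast partition with two subpartitions, buffer size set to 6
        // when: a 5-byte record is broadcast (less than the configured size)
        assertEquals(5, subpartition0.pollBuffer().buffer().getSize());
        assertEquals(5, subpartition1.pollBuffer().buffer().getSize());
        // when: a 6-byte record is broadcast (equal to the configured size)
        assertEquals(6, subpartition0.pollBuffer().buffer().getSize());
        assertEquals(6, subpartition1.pollBuffer().buffer().getSize());
        // when: a 10-byte record is broadcast (greater than the configured size):
        // the first buffer is trimmed to 6 and the remaining 4 bytes arrive in a
        // continuation buffer (see the trimming discussion below)
        assertEquals(6, subpartition0.pollBuffer().buffer().getSize());
        assertEquals(4, subpartition0.pollBuffer().buffer().getSize());
    }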

@akalash akalash (Contributor) left a comment

@lometheus, I left one more comment. You can check it, and then you can squash your commits into one for the final review and test.

@lometheus
Contributor Author

@lometheus, I left one more comment. You can check it, and then you can squash your commits into one for the final review and test.

Wonderful idea, I just fixed it by selecting a desirable buffer size.
P.S. A question: when I use PipelinedSubpartition.bufferSize($size) to reset the subpartition buffer size, it seems that only the first buffer is affected by $size. Is this normal?

@akalash
Contributor

akalash commented Sep 1, 2021

A question: when I use PipelinedSubpartition.bufferSize($size) to reset the subpartition buffer size, it seems that only the first buffer is affected by $size. Is this normal?

If I understood your question correctly: if one record allocates more than one buffer, then the first buffer is trimmed, but the second one will be equal to the size of the rest of the record. In an ideal world we would trim the second buffer too, but in reality that requires serious changes in the code, which doesn't really make sense, because it is a big mistake to configure a buffer size smaller than one record.
So the answer is yes, it trims only the first buffer for one record, but it will trim for each next record too. You can take a look at the comment inside BufferWritingResultPartition#addToSubpartition.
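
A worked example of these semantics (plain arithmetic, not Flink API; the numbers are hypothetical, and the remainder is assumed to fit into a single network buffer):

    // bufferSize(100) is in effect and a single 300-byte record is written
    int configuredBufferSize = 100;
    int recordBytes = 300;
    int firstBuffer = Math.min(recordBytes, configuredBufferSize); // 100, trimmed
    int continuationBuffer = recordBytes - firstBuffer;            // 200, NOT trimmed to 100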

P.S. Please, squash your commits into one(and you can rebase onto the fresh master).

        ? Math.min(desirableBufferSize, subPartitionBufferSize)
        : desirableBufferSize;
}
if (desirableBufferSize != Integer.MAX_VALUE) {
@akalash (Contributor):

Just a minor comment, and it is up to you, but this condition is not really needed here: buffer.trim(Integer.MAX_VALUE) is not a problem, it just sets the buffer size to the maximum possible value, which is ok. But it is not a mistake if you leave this condition here.
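
The simplification being suggested, as a sketch (assuming, per the comment above, that trim() simply clamps to the maximum possible buffer size):

    // before: guard against the case where no subpartition reported a size
    if (desirableBufferSize != Integer.MAX_VALUE) {
        buffer.trim(desirableBufferSize);
    }
    // after: trim unconditionally; trim(Integer.MAX_VALUE) just keeps the
    // maximum possible buffer size, which is fine
    buffer.trim(desirableBufferSize);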

@lometheus (Contributor Author):

You are correct, the code should be as concise as possible. I dropped this condition.

@lometheus
Contributor Author

I squashed all my commits into one. Thanks again for your review and your patient answers.

P.S. Please, squash your commits into one(and you can rebase onto the fresh master).

@akalash akalash (Contributor) left a comment

LGTM. @pnowojski, can you help with the merge, please?

@pnowojski pnowojski (Contributor) left a comment

LGTM; there is an unrelated test failure that has already been fixed in FLINK-24036.

@pnowojski
Contributor

@flinkbot run azure

@pnowojski
Contributor

@lometheus , could you rebase this PR on top of the latest master to pull in the fix for this failure so that we could have a green build?

@lometheus lometheus closed this Sep 2, 2021
@lometheus lometheus deleted the dev_20210827 branch September 2, 2021 11:38
@lometheus lometheus restored the dev_20210827 branch September 2, 2021 11:38
@lometheus lometheus deleted the dev_20210827 branch September 2, 2021 11:39
@lometheus lometheus restored the dev_20210827 branch September 2, 2021 11:40
@lometheus
Contributor Author

restored dev_20210827

@lometheus lometheus reopened this Sep 2, 2021
@lometheus
Contributor Author

@flinkbot run azure

@lometheus
Contributor Author

@flinkbot run azure

@lometheus
Contributor Author

@flinkbot run azure

@lometheus
Contributor Author

@pnowojski flinkbot doesn't seem to work anymore; how can I deal with this situation?

@pnowojski
Contributor

It takes some time for the flinkbot to pick up a PR. It looks like it did just that, and the most recent version of this PR seems to have some failures?

bfeab81 Azure: FAILURE

@@ -330,9 +330,16 @@ private BufferBuilder appendBroadcastDataForRecordContinuation(
private void createBroadcastBufferConsumers(BufferBuilder buffer, int partialRecordBytes)
        throws IOException {
    try (final BufferConsumer consumer = buffer.createBufferConsumerFromBeginning()) {
        int desirableBufferSize = Integer.MAX_VALUE;
        for (ResultSubpartition subpartition : subpartitions) {
            subpartition.add(consumer.copy(), partialRecordBytes);
@pnowojski pnowojski (Contributor) commented Sep 3, 2021

You have duplicated:

subpartition.add(consumer.copy(), partialRecordBytes);

which is causing Azure failures.

(edit: I have already fixed it)
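
The fix presumably keeps a single add() call per subpartition, the one whose return value feeds the size calculation (a sketch of the corrected loop, assuming the guard discussed earlier):

    for (ResultSubpartition subpartition : subpartitions) {
        // a single add() per subpartition; the duplicated call enqueued the
        // broadcast consumer twice, which is what caused the Azure failures
        int subPartitionBufferSize = subpartition.add(consumer.copy(), partialRecordBytes);
        if (subPartitionBufferSize > 0) {
            desirableBufferSize = Math.min(desirableBufferSize, subPartitionBufferSize);
        }
    }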

…k/partition/BufferWritingResultPartition.java
@pnowojski
Contributor

Azure was green, merged to master manually after squashing the commits. Thanks @lometheus for your contribution :)
