[FLINK-19766][table-runtime] Introduce File streaming compaction operators #13744
Conversation
@flinkbot:
Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community review pull requests. Automated checks last ran on commit 73d3642 (Thu Oct 22 09:13:15 UTC 2020); mention the bot in a comment to re-run them. Please see the Pull Request Review Guide for a full explanation of the review process. The bot tracks review progress through labels, applied in the order of the review items; for consensus, approval by a Flink committer or PMC member is required.
/**
 * A partitioned input file.
 */
public static class InputFile implements CoordinatorInput {
Missing serialVersionUID
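A minimal sketch of the suggested fix; the marker interface, fields, and accessors here are illustrative stand-ins, not the PR's exact code:

// Hypothetical stand-in for the PR's CoordinatorInput marker interface.
interface CoordinatorInput extends java.io.Serializable {}

/** A partitioned input file (sketch). */
class InputFile implements CoordinatorInput {

    // Explicit serialVersionUID, as the reviewer suggests, so serialized
    // instances stay compatible as the class evolves.
    private static final long serialVersionUID = 1L;

    private final String partition;
    private final String file;

    InputFile(String partition, String file) {
        this.partition = partition;
        this.file = file;
    }

    String getPartition() {
        return partition;
    }

    String getFile() {
        return file;
    }
}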
/**
 * A flag to end file input.
 */
public static class EndInputFile implements CoordinatorInput {
Missing serialVersionUID
        .toArray(String[]::new);
}

public boolean isTaskMessage(int taskId) {
What's the purpose of this method?
* {@link CompactionUnit} and {@link EndCompaction} must be sent to the downstream in an orderly
* manner. Since {@link EndCompaction} is emitted as a broadcast, the units use the broadcast
* emitting mechanism as well. Because a unit is broadcast to every task but should be processed
* by a single task, it carries a task ID and each downstream task selects its own units.
Good catch, there is a bug here; it should be:
public boolean isTaskMessage(int taskNumber, int taskId) {
return unitId % taskNumber == taskId;
}
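For context, a sketch of how a downstream compact task could use the corrected method to pick its own units out of the broadcast stream; the class shape and field names are assumptions, not the PR's actual code:

import java.io.Serializable;
import java.util.Arrays;
import java.util.List;

// Simplified stand-in for the coordinator's broadcast output.
class CompactionUnit implements Serializable {
    private static final long serialVersionUID = 1L;

    private final int unitId;
    private final String partition;
    private final List<String> files;

    CompactionUnit(int unitId, String partition, List<String> files) {
        this.unitId = unitId;
        this.partition = partition;
        this.files = files;
    }

    // Corrected version from the discussion: each unit belongs to exactly one
    // of the taskNumber parallel compact tasks.
    boolean isTaskMessage(int taskNumber, int taskId) {
        return unitId % taskNumber == taskId;
    }

    @Override
    public String toString() {
        return "unit-" + unitId + " " + partition + " " + files;
    }
}

class BroadcastSelection {
    public static void main(String[] args) {
        List<CompactionUnit> broadcast = Arrays.asList(
                new CompactionUnit(0, "p0", Arrays.asList("f0", "f1")),
                new CompactionUnit(1, "p1", Arrays.asList("f2")));

        int taskNumber = 2; // parallelism of the compact operator
        for (int taskId = 0; taskId < taskNumber; taskId++) {
            for (CompactionUnit unit : broadcast) {
                // Every task sees every unit, but only processes its own.
                if (unit.isTaskMessage(taskNumber, taskId)) {
                    System.out.println("task " + taskId + " compacts " + unit);
                }
            }
        }
    }
}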
    if (triggerCommit) {
        commitUpToCheckpoint(endInputFile.getCheckpointId());
    }
}
Throw an exception for unknown elements?
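One possible shape for that, sketched with simplified message types rather than the PR's actual classes:

// Sketch: dispatch on the coordinator's input messages and fail fast on
// anything unexpected instead of silently ignoring it.
class CoordinatorDispatch {

    interface CoordinatorInput {}
    static class InputFile implements CoordinatorInput {}
    static class EndInputFile implements CoordinatorInput {}

    void processElement(CoordinatorInput element) {
        if (element instanceof InputFile) {
            handleInputFile((InputFile) element);
        } else if (element instanceof EndInputFile) {
            handleEndInputFile((EndInputFile) element);
        } else {
            throw new UnsupportedOperationException(
                    "Unsupported input message: " + element.getClass().getName());
        }
    }

    private void handleInputFile(InputFile file) {
        // buffer the file for the next round of coordination
    }

    private void handleEndInputFile(EndInputFile end) {
        // trigger the commit up to the corresponding checkpoint
    }
}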
}

/**
 * A flag to end file input.
It seems more of a flag to end checkpoint rather than file input?
}

@Override
public void notifyCheckpointComplete(long checkpointId) throws Exception {
If something goes wrong in this method and the job fails over, will this method be called again for the same checkpointId?
No, it will wait for the next checkpoint notification.
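That answer relies on the commit being cumulative: a notification lost during failover is simply caught up by the next one. A simplified sketch of that idea, not the PR's actual state handling:

import java.util.List;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

class CheckpointCatchUp {

    // Pending files, keyed by the checkpoint in which they were produced.
    private final TreeMap<Long, List<String>> pendingFiles = new TreeMap<>();

    void addPending(long checkpointId, List<String> files) {
        pendingFiles.put(checkpointId, files);
    }

    // Commits everything up to and including checkpointId. If a notification is
    // missed because the job failed over, the skipped checkpoints are picked up
    // by the next notification, since the commit is cumulative.
    void commitUpToCheckpoint(long checkpointId) {
        NavigableMap<Long, List<String>> ready = pendingFiles.headMap(checkpointId, true);
        for (Map.Entry<Long, List<String>> entry : ready.entrySet()) {
            commit(entry.getKey(), entry.getValue());
        }
        ready.clear();
    }

    private void commit(long checkpointId, List<String> files) {
        System.out.println("committing " + files + " for checkpoint " + checkpointId);
    }
}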
Assert.assertEquals(7, outputs.size());

assertUnit(outputs.get(0), 0, "p0", Arrays.asList("f0", "f1", "f4"));
I think CompactCoordinator doesn't guarantee to generate CompactionUnit in any specific order of partitions, right?
You mean the order of partitions? There is no relationship between partitions, so there is no need to guarantee this.
Yeah... but then how could we assert the first output is for p0?
You mean we should sort it before asserting?
OK, I'll do it
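The fix the thread converges on could look roughly like this in the test; the record shape and helper are illustrative, not the test's actual code:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

class SortBeforeAssert {

    // Simplified stand-in for the coordinator's output record.
    static class Unit {
        final int unitId;
        final String partition;
        final List<String> files;

        Unit(int unitId, String partition, List<String> files) {
            this.unitId = unitId;
            this.partition = partition;
            this.files = files;
        }
    }

    // Sort by partition (and unit id within a partition) so assertions do not
    // depend on the order in which the coordinator happened to emit units.
    static List<Unit> sortByPartition(List<Unit> outputs) {
        List<Unit> sorted = new ArrayList<>(outputs);
        sorted.sort(Comparator.comparing((Unit u) -> u.partition)
                .thenComparingInt(u -> u.unitId));
        return sorted;
    }

    public static void main(String[] args) {
        List<Unit> outputs = Arrays.asList(
                new Unit(1, "p1", Arrays.asList("f2", "f3")),
                new Unit(0, "p0", Arrays.asList("f0", "f1", "f4")));

        List<Unit> sorted = sortByPartition(outputs);
        // After sorting, the first output is deterministically for p0.
        System.out.println(sorted.get(0).partition); // prints p0
    }
}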
// check all compacted file generated
Assert.assertTrue(fs.exists(new Path(folder, "compacted-f0")));
Assert.assertTrue(fs.exists(new Path(folder, "compacted-f2")));
Also verify f3 and f6 are not compacted at this point.
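The suggested negative check might read, continuing the quoted test above and assuming f3 and f6 belong to a checkpoint whose completion has not been notified yet:

// f3 and f6 must not have been compacted yet at this point.
Assert.assertFalse(fs.exists(new Path(folder, "compacted-f3")));
Assert.assertFalse(fs.exists(new Path(folder, "compacted-f6")));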
lirui-apache left a comment:
LGTM
The branch was force-pushed from c8f3503 to 164c8b5, and later from a01ecf0 to 039f49a.
What is the purpose of the change & Brief change log
Introduce Compaction operators:
The compaction operator graph is:
TempFileWriter|parallel ---(InputFile&EndInputFile)---> CompactCoordinator|non-parallel
---(CompactionUnit&EndCompaction)--->CompactOperator|parallel---(PartitionCommitInfo)--->
PartitionCommitter|non-parallel
Because the end message is a kind of barrier for the record messages, it can only be transmitted as a full broadcast on the link from the coordinator to the compact operator.
Introduce CompactCoordinator
This is the single (non-parallel) monitoring task which coordinates input files into compaction units. After the checkpoint completes successfully, it starts coordination.
NOTE: The coordination is a stable algorithm, which ensures that the downstream can perform compaction at any time without worrying about failover.
STATE: This operator stores input files in state; after the checkpoint completes successfully, the input files are taken out of state for coordination.
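A rough sketch of that checkpoint-gated handover plus a simple size-based packing step; the field names, the greedy packing policy, and the target-size threshold are assumptions for illustration, not the PR's exact algorithm:

import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

class CoordinatorSketch {

    static class FileEntry {
        final String path;
        final long size;

        FileEntry(String path, long size) {
            this.path = path;
            this.size = size;
        }
    }

    // Files reported by the writers, keyed by the checkpoint that produced them.
    // In the real operator this bookkeeping lives in checkpointed state.
    private final TreeMap<Long, List<FileEntry>> inputFiles = new TreeMap<>();
    private final long targetFileSize;

    CoordinatorSketch(long targetFileSize) {
        this.targetFileSize = targetFileSize;
    }

    void onInputFile(long checkpointId, FileEntry file) {
        inputFiles.computeIfAbsent(checkpointId, k -> new ArrayList<>()).add(file);
    }

    // Called only after the checkpoint has completed successfully: the buffered
    // files are taken out of state and packed into compaction units, so a
    // failover can neither lose a file nor coordinate it twice.
    List<List<FileEntry>> coordinateUpTo(long checkpointId) {
        List<FileEntry> ready = new ArrayList<>();
        NavigableMap<Long, List<FileEntry>> done = inputFiles.headMap(checkpointId, true);
        done.values().forEach(ready::addAll);
        done.clear();
        return binPack(ready);
    }

    // Simple greedy packing: keep adding files to a unit until the target size is reached.
    private List<List<FileEntry>> binPack(List<FileEntry> files) {
        List<List<FileEntry>> units = new ArrayList<>();
        List<FileEntry> current = new ArrayList<>();
        long currentSize = 0;
        for (FileEntry f : files) {
            current.add(f);
            currentSize += f.size;
            if (currentSize >= targetFileSize) {
                units.add(current);
                current = new ArrayList<>();
                currentSize = 0;
            }
        }
        if (!current.isEmpty()) {
            units.add(current);
        }
        return units;
    }
}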
Introduce CompactOperator
Receives compaction units and performs the compaction, then sends partition commit information after the compaction finishes.
Uses BulkFormat to read and BucketWriter to write.
STATE: This operator stores expired files in state; after the checkpoint completes successfully, we can be sure that these files will not be used again and they can be deleted from the file system.
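A small sketch of that cleanup, simplified to java.nio paths instead of Flink's FileSystem, with the per-checkpoint bookkeeping assumed rather than taken from the PR:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

class ExpiredFileCleanup {

    // Original small files replaced by a compacted file, keyed by the checkpoint
    // in which the compaction happened; held in checkpointed state in the operator.
    private final TreeMap<Long, List<Path>> expiredFiles = new TreeMap<>();

    void recordExpired(long checkpointId, Path original) {
        expiredFiles.computeIfAbsent(checkpointId, k -> new ArrayList<>()).add(original);
    }

    // Once the checkpoint completes, the compacted result is durable and nobody
    // reads the original files any more, so it is safe to delete them.
    void notifyCheckpointComplete(long checkpointId) throws IOException {
        NavigableMap<Long, List<Path>> deletable = expiredFiles.headMap(checkpointId, true);
        for (List<Path> files : deletable.values()) {
            for (Path file : files) {
                Files.deleteIfExists(file);
            }
        }
        deletable.clear();
    }

    public static void main(String[] args) throws IOException {
        ExpiredFileCleanup cleanup = new ExpiredFileCleanup();
        cleanup.recordExpired(1L, Paths.get("/tmp/part-0-f0"));
        cleanup.notifyCheckpointComplete(1L);
    }
}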
Verifying this change
CompactOperatorsTest
BinPackingTest
Does this pull request potentially affect one of the following parts:
@Public(Evolving): no
Documentation