[FLINK-11858][table-runtime-blink] Introduce block compression to batch table runtime #7941
Conversation
Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot; I help the community track review progress.
Please see the Pull Request Review Guide for a full explanation of the review process. The bot tracks review progress through labels, which are applied in the order of the review items. For consensus, approval by a Flink committer or PMC member is required. The @flinkbot bot also supports a set of review commands.
cc @JingsongLi
JingsongLi left a comment:
Thanks for working on this. LGTM, left some comments.
    @Override
    public int getMaxCompressedSize(int srcSize) {
        return Lz4BlockCompressionFactory.HEADER_LENGTH + compressor.maxCompressedLength(srcSize);
Statically import Lz4BlockCompressionFactory.HEADER_LENGTH?
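For intuition, the method under review computes the worst-case output size for a block. The sketch below reproduces that arithmetic, assuming an 8-byte block header (4 bytes compressed length plus 4 bytes original length) and LZ4's documented worst-case bound of srcSize + srcSize/255 + 16; the class name and header size here are illustrative assumptions, not the PR's actual code:

```java
public class MaxCompressedSizeSketch {
    // Assumed header layout: 4-byte compressed length + 4-byte original length.
    static final int HEADER_LENGTH = 8;

    // LZ4's documented worst-case compressed size for incompressible input.
    static int maxCompressedLength(int srcSize) {
        return srcSize + srcSize / 255 + 16;
    }

    static int getMaxCompressedSize(int srcSize) {
        // Header overhead plus the codec's own worst-case bound.
        return HEADER_LENGTH + maxCompressedLength(srcSize);
    }

    public static void main(String[] args) {
        // For a 64 KiB block: 65536 + 257 + 16 + 8 = 65817.
        System.out.println(getMaxCompressedSize(64 * 1024)); // prints 65817
    }
}
```

The caller can size its destination buffer from this bound up front, so compression never needs to reallocate mid-block.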
     */
    public interface BlockCompressionFactory {

        void setConfiguration(Configuration configuration);
I don't know what this is for. Remove it?
        throw new InsufficientBufferException("Buffer length too small");
    }

    if (src.limit() - prevSrcOff - Lz4BlockCompressionFactory.HEADER_LENGTH < compressedLen) {
Statically import Lz4BlockCompressionFactory.HEADER_LENGTH?
    public class BlockCompressionTest {

        @Test
        public void testLz4() {
Also test the DataCorruptionException and InsufficientBufferException paths?
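A sketch of what such a negative-path test could exercise. The exception name mirrors the PR, but the tiny codec below is a hypothetical stand-in (with an assumed 8-byte header), not Flink's implementation:

```java
public class NegativePathSketch {
    // Mirrors the exception name seen in the diff; simplified stand-in.
    static class InsufficientBufferException extends RuntimeException {
        InsufficientBufferException(String msg) { super(msg); }
    }

    // Hypothetical compressor entry point: rejects undersized destinations
    // before writing anything, as the reviewed code does.
    static void compress(byte[] src, byte[] dst) {
        if (dst.length < src.length + 8) { // assumed 8-byte block header
            throw new InsufficientBufferException("Buffer length too small");
        }
        // ... a real codec would write header + compressed payload here
    }

    public static void main(String[] args) {
        try {
            compress(new byte[1024], new byte[16]); // deliberately too small
            System.out.println("no exception");
        } catch (InsufficientBufferException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

A corruption-path test would analogously feed a block whose header advertises a length inconsistent with the actual payload and assert that decompression raises DataCorruptionException rather than returning garbage.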
Hi @JingsongLi, thanks for the review. I have addressed your comments.
+1 LGTM
        <version>${janino.version}</version>
    </dependency>

    <!-- Lz4 compression library -->
@KurtYoung please also update the LICENSE file. Here is a rough description: https://cwiki.apache.org/confluence/display/FLINK/Licensing
OK, will do.
…ch table runtime. This closes apache#7941
What is the purpose of the change
Introduce BlockCompressor and BlockDecompressor to batch table runtime for future usage.
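To illustrate the intended block layout, here is a minimal round-trip sketch in the spirit of BlockCompressor/BlockDecompressor. It substitutes java.util.zip's Deflater for LZ4 and assumes an 8-byte header (compressed length, then original length); the names and header layout are assumptions for illustration, not the PR's actual interfaces:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class BlockCodecSketch {
    static final int HEADER_LENGTH = 8; // assumed: 4B compressed len + 4B original len

    static byte[] compress(byte[] src) {
        Deflater d = new Deflater();
        d.setInput(src);
        d.finish();
        // Generous scratch space; a real codec would use getMaxCompressedSize.
        byte[] dst = new byte[HEADER_LENGTH + src.length + 64];
        int n = d.deflate(dst, HEADER_LENGTH, dst.length - HEADER_LENGTH);
        d.end();
        writeInt(dst, 0, n);          // compressed payload length
        writeInt(dst, 4, src.length); // original length, for exact-size decompress
        return Arrays.copyOf(dst, HEADER_LENGTH + n);
    }

    static byte[] decompress(byte[] block) throws Exception {
        int compressedLen = readInt(block, 0);
        int originalLen = readInt(block, 4);
        Inflater inf = new Inflater();
        inf.setInput(block, HEADER_LENGTH, compressedLen);
        byte[] dst = new byte[originalLen];
        inf.inflate(dst);
        inf.end();
        return dst;
    }

    // Little-endian int helpers for the header fields.
    static void writeInt(byte[] b, int off, int v) {
        b[off] = (byte) v; b[off + 1] = (byte) (v >>> 8);
        b[off + 2] = (byte) (v >>> 16); b[off + 3] = (byte) (v >>> 24);
    }
    static int readInt(byte[] b, int off) {
        return (b[off] & 0xff) | (b[off + 1] & 0xff) << 8
             | (b[off + 2] & 0xff) << 16 | (b[off + 3] & 0xff) << 24;
    }

    public static void main(String[] args) throws Exception {
        byte[] data = "hello block compression".getBytes(StandardCharsets.UTF_8);
        byte[] roundTrip = decompress(compress(data));
        System.out.println(Arrays.equals(data, roundTrip)); // prints true
    }
}
```

Storing the original length in the header lets the decompressor allocate its output buffer exactly, which is what makes fixed-size block decompression cheap in a batch runtime.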
Brief change log
Verifying this change
Added unit tests
Does this pull request potentially affect one of the following parts:
@Public(Evolving): (no)
Documentation