[FLINK-25820] Introduce Table Store Flink Source #15
Conversation
```java
private static void incrementCharArrayByOne(char[] array, int pos) {
    char c = array[pos];
    c++;
    if (c > '9') {
        c = '0';
        incrementCharArrayByOne(array, pos - 1);
    }
    array[pos] = c;
}
```
pos might be -1 if this method is called too many times. In that case we should recreate the array and increase its length by 1.
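A minimal sketch of this suggestion, with hypothetical names (not the actual patch): when the carry propagates past the leftmost digit, allocate a new array one element longer, so e.g. "999" becomes "1000".

```java
// Hedged sketch of the "recreate the array" idea; class and method names
// are illustrative, not from the PR.
public class GrowingIncrement {

    static char[] increment(char[] array) {
        int pos = array.length - 1;
        while (pos >= 0) {
            if (array[pos] < '9') {
                array[pos]++;
                return array;
            }
            array[pos] = '0'; // carry into the next digit to the left
            pos--;
        }
        // Carry out of the leftmost digit: grow the array by one element.
        char[] grown = new char[array.length + 1];
        grown[0] = '1';
        System.arraycopy(array, 0, grown, 1, array.length);
        return grown;
    }

    public static void main(String[] args) {
        System.out.println(new String(increment("9".toCharArray())));   // 10
        System.out.println(new String(increment("099".toCharArray()))); // 100
    }
}
```

The trade-off against a fixed width is that callers can no longer assume split ids have a constant length.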
Just throwing a readable exception is OK... it means there are too many splits...
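A sketch of this alternative, under a hypothetical `SplitIdGenerator` wrapper (not the actual PR code): fail fast with a readable message once every digit is already '9', instead of letting `pos` underflow to -1.

```java
// Illustrative only: fail fast when the fixed-width split id overflows.
public class SplitIdGenerator {

    static void incrementCharArrayByOne(char[] array, int pos) {
        if (pos < 0) {
            // Too many carries: every digit was already '9'.
            throw new IllegalStateException(
                    "Too many splits: split id overflow for width " + array.length);
        }
        char c = array[pos];
        c++;
        if (c > '9') {
            c = '0';
            array[pos] = c;
            incrementCharArrayByOne(array, pos - 1);
        } else {
            array[pos] = c;
        }
    }

    public static void main(String[] args) {
        char[] id = "0009".toCharArray();
        incrementCharArrayByOne(id, id.length - 1);
        System.out.println(new String(id)); // carries into the tens digit: 0010
    }
}
```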
```java
public RecordsWithSplitIds<RecordAndPosition<RowData>> fetch() throws IOException {
    checkSplitOrStartNext();

    // pool first, avoid thread safety issues
```
It is not about thread safety. Batches can only be fetched one by one, so we use a pool to restrict the reader thread from fetching too many batches at the same time.
pool first; the pool size is 1, and the underlying implementation does not allow multiple batches to be read at the same time.
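The pool-of-size-1 pattern described here can be sketched with a `java.util.concurrent.ArrayBlockingQueue` (this is an illustration of the idea, not the table store's actual implementation): the reader thread must take the single buffer before fetching a batch and return it afterwards, so at most one batch is in flight.

```java
import java.util.concurrent.ArrayBlockingQueue;

// Illustrative sketch: a pool holding exactly one reusable batch buffer.
// take() blocks until the previous batch has been recycled, which throttles
// the reader thread to one outstanding batch at a time.
public class BatchPoolDemo {
    private final ArrayBlockingQueue<byte[]> pool = new ArrayBlockingQueue<>(1);

    public BatchPoolDemo(int batchSize) {
        pool.add(new byte[batchSize]); // the single reusable buffer
    }

    /** Blocks until the buffer is available, then hands it out for a fetch. */
    public byte[] takeBatchBuffer() throws InterruptedException {
        return pool.take();
    }

    /** Returns the buffer so the next fetch can proceed. */
    public void recycle(byte[] buffer) {
        pool.add(buffer);
    }
}
```

A blocking queue of capacity 1 is a simple way to get this back-pressure without explicit locks.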
```java
assertSplit(splits.get(0), "0000000001", 2, 0, Arrays.asList("f3", "f4", "f5"));
assertSplit(splits.get(1), "0000000002", 2, 1, Collections.singletonList("f6"));
assertSplit(splits.get(2), "0000000003", 1, 0, Arrays.asList("f0", "f1"));
assertSplit(splits.get(3), "0000000004", 1, 1, Collections.singletonList("f2"));
```
Add cases to test carrying.
What do you mean?
I mean, add cases such as 9 -> 10 or 99 -> 100. See https://en.wikipedia.org/wiki/Carry_(arithmetic)
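The requested carry cases could look like the sketch below, using a local copy of the increment helper (the real one is private to the source class, so the names here are illustrative):

```java
// Sketch of the requested carry test cases against a local copy of the
// helper from the diff above.
public class CarryTest {

    static void incrementCharArrayByOne(char[] array, int pos) {
        char c = array[pos];
        c++;
        if (c > '9') {
            c = '0';
            incrementCharArrayByOne(array, pos - 1);
        }
        array[pos] = c;
    }

    static String incremented(String s) {
        char[] a = s.toCharArray();
        incrementCharArrayByOne(a, a.length - 1);
        return new String(a);
    }

    public static void main(String[] args) {
        // Carry into the next digit (fixed width): 09 -> 10.
        System.out.println(incremented("09"));  // 10
        // Carry across two digits: 099 -> 100.
        System.out.println(incremented("099")); // 100
    }
}
```

Note these inputs keep a leading zero so the carry stays within the fixed width; the overflow case (all digits '9') is the one discussed earlier in the thread.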
force-pushed from 916136c to e1ebf5d
force-pushed from e1ebf5d to aeff369
force-pushed from aeff369 to 9965a9b
parent 2a62022
author Alibaba-HZY <daowu.hzy@cainiao.com> 1681226598 +0800
committer Alibaba-HZY <daowu.hzy@cainiao.com> 1681226783 +0800
tree 46928987599714b071489a7b6d4957049e6ded7a

# This is a combination of 13 commits.
[core] Add streaming read from option (apache#778)
# Commit message #8:
[core] add test (apache#778)
# Commit message apache#11:
Update paimon-core/src/main/java/org/apache/paimon/CoreOptions.java
Co-authored-by: Nicholas Jiang <programgeek@163.com>
# Commit message apache#12:
Update paimon-core/src/main/java/org/apache/paimon/CoreOptions.java
Co-authored-by: Nicholas Jiang <programgeek@163.com>
# Commit message apache#13:
Update paimon-core/src/main/java/org/apache/paimon/CoreOptions.java
Co-authored-by: Nicholas Jiang <programgeek@163.com>
# Commit message apache#15:
Update paimon-flink/paimon-flink-common/src/main/java/org/apache/paimon/flink/AbstractFlinkTableFactory.java
Co-authored-by: Nicholas Jiang <programgeek@163.com>
# Commit message apache#16:
Update paimon-flink/paimon-flink-common/src/main/java/org/apache/paimon/flink/AbstractFlinkTableFactory.java
Co-authored-by: Nicholas Jiang <programgeek@163.com>
# Commit message apache#17:
Update paimon-flink/paimon-flink-common/src/main/java/org/apache/paimon/flink/AbstractFlinkTableFactory.java
Co-authored-by: Nicholas Jiang <programgeek@163.com>
# Commit message apache#18:
[core] add test (apache#778)
# Commit message apache#19:
merger sdda
# This is a combination of 3 commits.
[core] commit1 (apache#778)
[core] commit2 (apache#778)
[core] commit3 (apache#778)
[core] do commit 1 (apache#778)
[core] do commit 2 (apache#778)
# Commit message apache#20:
sdda
# This is a combination of 3 commits.
[core] commit1 (apache#778)
[core] commit2 (apache#778)
[core] commit3 (apache#778)
[apache#15] when creating paimon table encountered with a duplicated managed hive... See merge request realtime/paimon!6
[apache#15][IDEV-REALTM-44] if table exists but is not paimon, this exception should... See merge request realtime/paimon!7
…ry_main PRODENAB-149 fix infinite loop when a field is an unparseable value us…
Introduce FLIP-27 source implementation for table file store.