Fallback to parsing big integer from String instead of exception in Parquet format #50873
@@ -0,0 +1 @@
424242424242424242424242424242424242424242424242424242

@@ -0,0 +1,9 @@
#!/usr/bin/env bash
# Tags: no-fasttest

CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CUR_DIR"/../shell_config.sh

$CLICKHOUSE_LOCAL -q "select toString(424242424242424242424242424242424242424242424242424242::UInt256) as x format Parquet" | $CLICKHOUSE_LOCAL --input-format=Parquet --structure='x UInt256' -q "select * from table"
It looks suspicious — what is `table` here?

It's the default table name in clickhouse-local.

But what is the data here? I suppose that request is reading from stdin.

Command …

Ok, there is some magic inside. I didn't know.

Thanks for the explanation!
Just in case, to be sure: what happens if it were 'toString(42424242)'?
sizeof would be 8, and, as I presume, the chunk.value_length(i) != sizeof(ValueType) check will not pass; however, the type is a string in your test case.
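The length check being discussed can be sketched as follows. This is an illustrative Python sketch, not ClickHouse's actual C++ code; the helper name `read_uint256` and the little-endian assumption are mine:

```python
# Sketch of the check under discussion: a Parquet byte-array value is
# treated as the raw binary of the big integer only when its length
# matches sizeof(ValueType); otherwise we fall back to parsing it as a
# decimal string. Hypothetical helper, not ClickHouse code.

SIZEOF_UINT256 = 32  # sizeof(UInt256) in bytes

def read_uint256(value: bytes) -> int:
    if len(value) == SIZEOF_UINT256:
        # Length matches: interpret the bytes as the binary representation.
        return int.from_bytes(value, byteorder="little")
    # Length differs (e.g. the column was written as decimal strings):
    # fall back to parsing the text representation.
    return int(value.decode("ascii"))

# The big integer from the test, written as its decimal string, is
# 54 bytes long, so the string fallback is taken:
print(read_uint256(b"424242424242424242424242424242424242424242424242424242"))
```

A string like "42424242" (8 bytes) would also take the string branch here, since 8 != 32; the reviewer's concern applies to the narrower type whose sizeof is 8.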
We will parse it as the binary representation of an integer. I think it's ok and will be rare in real data (all integer values should have size sizeof(ValueType)). The problem is that previously, before #48126, we didn't support writing/reading big integers at all. But because of the implementation, we could read big integers from strings, because we do a cast to the result type. And someone relied on this side effect to read big integers without using a cast in the query, so I decided to add this fallback to the previous behaviour.
If you think it's not ok, I can add a new setting that will control the switch to the previous behaviour (so there will still be a fallback in the new implementation, but we will additionally check the setting).
I have in mind a case like this: the client writes all his ints, which might or might not be big ints (according to sizeof()), as strings.
And if I understand you right, this is ok until all sizeof(values) == 32, i.e. sizeof(UInt256). If all values by accident are 32 bytes in size, we will parse them as binary representations.
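The ambiguity raised here can be demonstrated concretely. An illustrative Python sketch (the helper mirrors the length-check logic discussed in this thread; it is not ClickHouse code):

```python
# If a value written as a decimal string happens to be exactly 32 bytes
# long (== sizeof(UInt256)), a length check alone cannot tell a string
# from raw binary, and the value is misread as binary.

def read_uint256(value: bytes) -> int:
    if len(value) == 32:
        return int.from_bytes(value, "little")  # taken for raw binary
    return int(value.decode("ascii"))           # string fallback

s = b"4" * 32                 # a 32-character decimal string
as_binary = read_uint256(s)   # what the length check concludes
as_text = int(s)              # what the writer actually meant
print(as_binary == as_text)   # False: the two interpretations differ
```

This is exactly the accidental-collision case: the fallback is only reached when the length differs, so 32-byte strings silently take the binary path.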
Maybe I'm missing the point that the client has to follow the rule:
"write small ints as ints, write only big ints as strings".
If so, then there is no problem.
My case applies only when the client just writes all ints (which might or might not be big) as strings.
I don't understand what you mean. We are talking only about big integers, when the client specified column type Int128/UInt128/Int256/UInt256. If the client specified another integer type like Int32/UInt64..., there will be no problem at all.
So, the problem can occur only when the client has a String column in a Parquet file, specified column type Int128/UInt128/Int256/UInt256 (so he expects that the data contains big integers), and the column contains only values with size sizeof(Int128/UInt128/Int256/UInt256). And it shouldn't be a real problem.
I'm OK with these changes.
Basically, we are tuning here a specific ill-formed conversion:
select toString(42::UInt256) as x format Parquet
That data has to be read as String.
The case when the user tries to read it as UInt256 is just wrong. Here we are trying to eliminate this error by guessing a cast function. In general, we have to follow the provided read schema, and only when we notice that the schema is wrong do we try to guess.
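That "follow the schema first, guess only on mismatch" policy can be sketched in Python. The helper name `read_with_schema` and the little-endian layout are my assumptions for illustration; this is not ClickHouse's implementation:

```python
# Policy sketch: trust the declared read schema first; only when a value
# clearly cannot match it (wrong byte length for a fixed-size integer)
# do we guess a cast from String to the requested integer type.

def read_with_schema(value: bytes, declared_size: int) -> int:
    if len(value) == declared_size:
        # Schema-conformant read: bytes are the integer's binary form.
        return int.from_bytes(value, "little")
    # The schema is evidently wrong for this value:
    # guess the String -> integer cast instead of throwing.
    return int(value.decode("ascii"))

# Data produced by `select toString(42::UInt256) as x format Parquet`
# is the two-byte string b"42"; reading it as UInt256 (32 bytes expected)
# falls back to the guessed cast:
print(read_with_schema(b"42", 32))  # 42
```

The guess is a recovery path, not the primary behaviour: values whose length matches the declared type never reach it.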