Fallback to parsing big integer from String instead of exception in Parquet format #50873
Conversation
This is an automated comment for commit 5cec4c3 with a description of existing statuses. It's updated for the latest CI run.
# shellcheck source=../shell_config.sh
. "$CUR_DIR"/../shell_config.sh

$CLICKHOUSE_LOCAL -q "select toString(424242424242424242424242424242424242424242424242424242::UInt256) as x format Parquet" | $CLICKHOUSE_LOCAL --input-format=Parquet --structure='x UInt256' -q "select * from table"
It looks suspicious: what is `table` here?
It's the default table name in clickhouse-local.
But what is the data here? I suppose that query is reading from stdin.
The command clickhouse-local --input-format=... --structure=... creates a table with the specified name (table is the default name) and the specified structure, and inserts data into it from stdin in the specified format. This table can then be used in a query, even in non-interactive mode.
Ok, there is some magic inside. I didn't know.
Thanks for the explanation!
column_type->getName(),
sizeof(ValueType),
chunk.value_length(i));
return readColumnWithStringData<arrow::BinaryArray>(arrow_column, column_name);
Just in case, to be sure: what happens if it were 'toString(42424242)'? sizeof would be 8, and as I presume the check chunk.value_length(i) != sizeof(ValueType) would not pass, even though the type is a string in your test case.
We will parse it as the binary representation of an integer. I think it's ok and will be rare in real data (all integer values should have size sizeof(ValueType)). The problem is that previously, before #48126, we didn't support writing/reading big integers at all. But because of the implementation, we could read big integers from strings, since we cast to the result type. And someone relied on this side effect to read big integers without using a cast in the query, so I decided to add this fallback to the previous behaviour.
If you think it's not ok, I can add a new setting that controls the switch to the previous behaviour (so there will still be a fallback in the new implementation, but we will additionally check the setting).
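The fallback being discussed can be sketched roughly like this (a hypothetical Python model of the logic, not the actual ClickHouse C++ code; the function name is made up for illustration):

```python
# Hypothetical sketch of the fallback discussed above, NOT the real
# ClickHouse implementation. Each raw string value from the Parquet
# column is turned into a UInt256: if the raw value is exactly
# sizeof(ValueType) bytes long, it is taken as the binary
# (little-endian) representation of the integer; otherwise we fall
# back to parsing the decimal string.

SIZEOF_UINT256 = 32  # sizeof(ValueType) for UInt256

def read_uint256_from_string_column(raw: bytes) -> int:
    if len(raw) == SIZEOF_UINT256:
        # Value length matches sizeof(ValueType): treat the bytes as
        # the binary representation of the integer.
        return int.from_bytes(raw, "little")
    # Fallback: parse the big integer from its decimal string form.
    return int(raw.decode())

print(read_uint256_from_string_column(b"42424242"))  # -> 42424242 (parsed from string)
```

This is the side-effect compatibility path: an 8-byte string like b"42424242" takes the string-parsing branch, because its length does not match sizeof(UInt256).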
I have in mind a case like this:
clickhouse local -q "select x from
(select toString(42424242424242424242424242424242::UInt256) as x)
UNION ALL
(select toString(42424242424242424242424242424242424242424242::UInt256) as x)
format Parquet" |\
clickhouse local --input-format=Parquet --structure='x UInt256' -q "select * from table"
That is, the client writes all their ints, which might or might not be big (according to sizeof()), as strings. And if I understand you right, this is ok as long as all sizeof(values) != 32, i.e. sizeof(UInt256). If all values by accident are 32 bytes in size, we will parse them as binary representations. Like this:
clickhouse local -q "
select
toString(42424242424242424242424242424242::UInt256) as x
format Parquet" |\
clickhouse local --input-format=Parquet --structure='x UInt256' -q "select * from table"
22707864971053448441042714569797161695738549521977760418632926980540162388532
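For what it's worth, the surprising number above is just the 32 ASCII bytes of the decimal string reinterpreted as a little-endian 256-bit integer; a small Python check (my own illustration, not part of the patch) reproduces it:

```python
# The string form of 42424242424242424242424242424242 is exactly 32
# characters, i.e. sizeof(UInt256) bytes, so the fallback treats it as
# a binary little-endian UInt256 instead of parsing the decimal digits.
s = b"42424242424242424242424242424242"
assert len(s) == 32  # matches sizeof(UInt256)

misread = int.from_bytes(s, "little")
print(misread)
# -> 22707864971053448441042714569797161695738549521977760418632926980540162388532
```

Any string whose byte length happens to equal sizeof(ValueType) is ambiguous in this scheme, which is exactly the edge case raised in this thread.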
Maybe I'm missing the point that the client has to follow the rule:
"write small ints as ints, write only big ints as strings".
If so, then there is no problem.
My case applies only when the client just writes all ints (which might or might not be big) as strings.
Maybe I'm missing the point that the client has to follow the rule:
"write small ints as ints, write only big ints as strings".

I don't understand what you mean. We are talking only about big integers, when the client specified the column type Int128/UInt128/Int256/UInt256. If the client specified another integer type like Int32/UInt64, there will be no problem at all.
So the problem can arise only when the client has a String column in a Parquet file, specified the column type Int128/UInt128/Int256/UInt256 (so they expect the data to contain big integers), and the column contains only values with size sizeof(Int128/UInt128/Int256/UInt256). And it shouldn't be a real problem.
I'm OK with these changes.
Basically we are tuning some specific ill-formed conversion here:
select toString(42::UInt256) as x format Parquet
That data has to be read as String.
The case when the user tries to read it as UInt256 is just wrong. Here we are trying to eliminate this error by guessing a cast function. In general we have to follow the provided read schema, and only when we notice that the schema is wrong do we try to guess.
…instead of exception in Parquet format
Backport #50873 to 23.4: Fallback to parsing big integer from String instead of exception in Parquet format
Backport #50873 to 23.5: Fallback to parsing big integer from String instead of exception in Parquet format
Changelog category (leave one):
Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):
Fallback to parsing big integer from String instead of exception in Parquet format to fix compatibility with older versions.
Documentation entry for user-facing changes