
Fallback to parsing big integer from String instead of exception in Parquet format #50873

Merged

merged 1 commit into ClickHouse:master on Jun 20, 2023

Conversation

Avogar
Member

@Avogar Avogar commented Jun 12, 2023

Changelog category (leave one):

  • Bug Fix (user-visible misbehavior in an official stable release)

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Fallback to parsing big integer from String instead of exception in Parquet format to fix compatibility with older versions.

Documentation entry for user-facing changes

  • Documentation is written (mandatory for new features)

Information about CI checks: https://clickhouse.com/docs/en/development/continuous-integration/

@Avogar Avogar added the pr-must-backport Pull request should be backported intentionally. Use this label with great care! label Jun 12, 2023
@robot-ch-test-poll3 robot-ch-test-poll3 added the pr-bugfix Pull request with bugfix, not backported by default label Jun 12, 2023
@robot-ch-test-poll3
Contributor

robot-ch-test-poll3 commented Jun 12, 2023

This is an automated comment for commit 5cec4c3 with a description of the existing statuses. It is updated for the latest CI run.
The full report is available here
The overall status of the commit is 🔴 failure

Check name | Description | Status
AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parentheses. If it fails, ask a maintainer for help. | 🟢 success
Bugfix validate check | Checks that either a new test (functional or integration) exists, or that some changed tests fail with the binary built on the master branch. | 🟢 success
CI running | A meta-check that indicates the running CI. Normally it is in the success or pending state. A failed status indicates some problems with the PR. | 🟢 success
ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log by grepping for cmake. Use these options and follow the general build process. | 🟢 success
Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help. | 🟢 success
Docker image for servers | Builds and optionally pushes the mentioned image to Docker Hub. | 🟢 success
Fast test | Normally the first check run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here. | 🟢 success
Flaky tests | Checks whether newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with the address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test fails even once, or runs too long, this check turns red. We don't allow flaky tests; read the doc. | 🟢 success
Install packages | Checks that the built packages are installable in a clean environment. | 🟢 success
Integration tests | The integration tests report. The package type is given in parentheses, and the optional part/total tests in square brackets. | 🔴 failure
Mergeable Check | Checks that all other necessary checks are successful. | 🟢 success
Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. The optional part/total tests are given in square brackets. | 🟢 success
Push to Dockerhub | Builds and pushes the CI-related Docker images to Docker Hub. | 🟢 success
SQLancer | Fuzzing tests that detect logical bugs with the SQLancer tool. | 🟢 success
Sqllogic | Runs clickhouse on the sqllogic test set against sqlite and checks that all statements pass. | 🟢 success
Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations: release, debug, with sanitizers, etc. | 🟢 success
Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations: release, debug, with sanitizers, etc. | 🔴 failure
Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors. | 🟢 success
Style Check | Runs a set of checks to keep the code style clean. If some of them fail, see the related log from the report. | 🟢 success
Unit tests | Runs the unit tests for different release types. | 🟢 success
Upgrade check | Runs stress tests on the server version from the last release and then tries to upgrade it to the version from the PR. Checks that the new server can start up without errors, crashes, or sanitizer asserts. | 🟢 success

# shellcheck source=../shell_config.sh
. "$CUR_DIR"/../shell_config.sh

$CLICKHOUSE_LOCAL -q "select toString(424242424242424242424242424242424242424242424242424242::UInt256) as x format Parquet" | $CLICKHOUSE_LOCAL --input-format=Parquet --structure='x UInt256' -q "select * from table"
Member

This looks suspicious. What table is this?

Member Author

It's the default table name in clickhouse-local.

Member

But what data is here? I suppose that the query reads from stdin.

Member Author

The command clickhouse-local --input-format=... --structure=... creates a table with the specified name (table is the default) and the specified structure, and inserts data into it from stdin in the specified format. This table can then be used in a query, even in non-interactive mode.

Member

OK, there is some magic inside. I didn't know.

Member

Thanks for the explanation!

column_type->getName(),
sizeof(ValueType),
chunk.value_length(i));
return readColumnWithStringData<arrow::BinaryArray>(arrow_column, column_name);
Member

Just to be sure: what happens if the value is toString(42424242)? Its size would be 8, and I presume the chunk.value_length(i) != sizeof(ValueType) check will not pass; however, the type is a String in your test case.

Member Author

@Avogar Avogar Jun 12, 2023

We will parse it as the binary representation of an integer. I think it's OK and will be rare in real data (all integer values should have size sizeof(ValueType)). The problem is that before #48126 we didn't support writing/reading big integers at all. But because of the implementation, we could read big integers from strings, because we cast to the result type. And someone relied on this side effect to read big integers without using a cast in the query, so I decided to add this fallback to the previous behaviour.
If you think it's not OK, I can add a new setting that controls the switch to the previous behaviour (so there will still be a fallback in the new implementation, but we will additionally check the setting).

Member

@CheSema CheSema Jun 12, 2023

I have in mind a case like this:

clickhouse local -q "select x from 
(select toString(42424242424242424242424242424242::UInt256) as x) 
UNION ALL 
(select toString(42424242424242424242424242424242424242424242::UInt256) as x) 
format Parquet" |\
 clickhouse local --input-format=Parquet --structure='x UInt256' -q "select * from table"

where the client writes all their ints, which might or might not be big (according to sizeof()), as strings.
If I understand you right, this is OK as long as not every value's size equals 32, i.e. sizeof(UInt256). If all values by accident are 32 bytes long, we will parse them as a binary representation. Like this:

clickhouse local -q "
select 
toString(42424242424242424242424242424242::UInt256) as x 
format Parquet" |\
 clickhouse local --input-format=Parquet --structure='x UInt256' -q "select * from table"
22707864971053448441042714569797161695738549521977760418632926980540162388532
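The surprising number above can be reproduced outside ClickHouse. A minimal Python sketch (illustrative, not the actual reader code; it assumes the little-endian byte order that the printed output implies): since the string is exactly 32 bytes, the same size as sizeof(UInt256), the fallback never triggers and the ASCII bytes are reinterpreted as a 256-bit integer.

```python
# The 32-character output of toString(42424242424242424242424242424242::UInt256)
s = "42424242424242424242424242424242"
assert len(s.encode()) == 32  # exactly sizeof(UInt256), so no fallback is taken

# Reinterpret the ASCII bytes as a little-endian 256-bit integer,
# the way a fixed-size binary read of the column would.
misread = int.from_bytes(s.encode(), byteorder="little")
print(misread)
# 22707864971053448441042714569797161695738549521977760418632926980540162388532
```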

Member

@CheSema CheSema Jun 12, 2023

Maybe I'm missing the point that the client has to follow the rule
"write small ints as ints, write only big ints as strings".
If so, then there is no problem.

My case applies only when the client simply writes all ints (which might or might not be big) as strings.

Member Author

@Avogar Avogar Jun 13, 2023

> Maybe I'm missing the point that the client has to follow the rule
> "write small ints as ints, write only big ints as strings".

I don't understand what you mean. We are talking only about big integers, when the client specified the column type Int128/UInt128/Int256/UInt256. If the client specified another integer type like Int32/UInt64, there is no problem at all.

So the problem can occur only when the client has a String column in a Parquet file, specified the column type Int128/UInt128/Int256/UInt256 (so they expect the data to contain big integers), and the column contains only values with size sizeof(Int128/UInt128/Int256/UInt256). That shouldn't be a problem in practice.
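The rule being discussed can be sketched as follows. This is a hypothetical, simplified per-value version in Python (the names are illustrative; the actual reader works on Arrow columns and checks lengths across the whole chunk, as the diff above shows):

```python
def read_big_int_from_string(raw: bytes, type_size: int) -> int:
    """Sketch of the fallback rule: a String value whose length matches the
    binary size of the requested big-integer type is read as raw bytes;
    anything else falls back to parsing the text, as before #48126."""
    if len(raw) == type_size:
        # Same length as sizeof(ValueType): treat it as the binary
        # (little-endian) representation of the integer.
        return int.from_bytes(raw, byteorder="little")
    # Length mismatch: fall back to parsing the textual representation.
    return int(raw.decode())

# A 54-character string is not sizeof(UInt256) == 32 bytes,
# so it is parsed as text, not misread as binary.
print(read_big_int_from_string(("42" * 27).encode(), 32))
```

With this rule, only a String column where every value happens to be exactly 32 (or 16) bytes long can be misread, which matches the edge case described in the comments above.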

Member

I'm OK with these changes.

Basically, we are tuning a specific ill-formed conversion here:
select toString(42::UInt256) as x format Parquet
That data has to be read as String.
The case when the user tries to read it as UInt256 is just wrong. Here we are trying to eliminate this error by guessing a cast function. In general we have to follow the provided read schema, and only when we notice that the schema is wrong do we try to guess.

@CheSema CheSema self-assigned this Jun 12, 2023
@Avogar Avogar merged commit 0edfbb4 into ClickHouse:master Jun 20, 2023
280 of 284 checks passed
robot-clickhouse added a commit that referenced this pull request Jun 20, 2023
robot-clickhouse added a commit that referenced this pull request Jun 20, 2023
Avogar added a commit that referenced this pull request Jun 21, 2023
Backport #50873 to 23.4: Fallback to parsing big integer from String instead of exception in Parquet format
Avogar added a commit that referenced this pull request Jun 21, 2023
Backport #50873 to 23.5: Fallback to parsing big integer from String instead of exception in Parquet format
@robot-ch-test-poll robot-ch-test-poll added the pr-backports-created Backport PRs are successfully created, it won't be processed by CI script anymore label Jun 21, 2023
4 participants