
Fix distributed table with a constant sharding key #59606

Conversation

vitlibar (Member) commented Feb 5, 2024

Changelog category:

  • Bug Fix

Changelog entry:

Fix distributed table with a constant sharding key.

Closes #59589


CREATE TABLE shard_0.t_local (a Int) ENGINE = Memory;
CREATE TABLE shard_1.t_local (a Int) ENGINE = Memory;
-- The sharding key here is the constant 1000 (the last engine argument).
CREATE TABLE t_distr (a Int) ENGINE = Distributed(test_cluster_two_shards_different_databases, '', 't_local', 1000);
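
The linked issue's exact failing statement is not quoted in this thread, so the following trigger is an assumption: any multi-row write routed through the constant sharding key would exercise the broken selector, because the selector built from the constant key had fewer elements than the block had rows.

-- Hypothetical reproduction: before this fix, a multi-row INSERT through
-- the distributed table could crash the server.
INSERT INTO t_distr VALUES (1), (2), (3);
SELECT * FROM t_distr;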
vitlibar (Member, Author) commented:

A constant sharding key is not very useful but still valid. We must not crash on it.

robot-ch-test-poll added the pr-bugfix label (Pull request with bugfix, not backported by default) Feb 5, 2024
robot-ch-test-poll (Contributor) commented Feb 5, 2024

This is an automated comment for commit 5962ed0 with a description of existing statuses. It's updated for the latest CI run.


Successful checks
Check name | Description | Status
AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parentheses. If it fails, ask a maintainer for help | ✅ success
Bugfix validate check | Checks that either a new test (functional or integration) is added, or there are changed tests that fail with the binary built on the master branch | ✅ success
ClickBench | Runs [ClickBench](https://github.com/ClickHouse/ClickBench/) with an instant-attach table | ✅ success
ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log by grepping for cmake. Use these options and follow the general build process | ✅ success
Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | ✅ success
Docker keeper image | The check to build and optionally push the mentioned image to Docker Hub | ✅ success
Docker server image | The check to build and optionally push the mentioned image to Docker Hub | ✅ success
Docs check | Builds and tests the documentation | ✅ success
Fast test | Normally this is the first check that is run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | ✅ success
Flaky tests | Checks whether newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with the address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test fails at least once, or runs too long, this check will be red. We don't allow flaky tests; read the doc | ✅ success
Install packages | Checks that the built packages are installable in a clean environment | ✅ success
Integration tests | The integration tests report. The package type is given in parentheses, and the optional part/total tests are in square brackets | ✅ success
Mergeable Check | Checks if all other necessary checks are successful | ✅ success
Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. The optional part/total tests are in square brackets | ✅ success
SQLTest | There's no description for the check yet; please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success
SQLancer | Fuzzing tests that detect logical bugs with the SQLancer tool | ✅ success
Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc. | ✅ success
Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | ✅ success
Style check | Runs a set of checks to keep the code style clean. If some of the tests failed, see the related log from the report | ✅ success
Unit tests | Runs the unit tests for different release types | ✅ success
Upgrade check | Runs stress tests on the server version from the last release and then tries to upgrade it to the version from the PR. It checks whether the new server can start up successfully without errors, crashes, or sanitizer asserts | ✅ success
Pending and failed checks
Check name | Description | Status
CI running | A meta-check that indicates the running CI. Normally it's in a success or pending state. A failed status indicates some problems with the PR | ⏳ pending
Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc. | ❌ failure

const IColumn * column = result.column.get();
if (const auto * col_const = typeid_cast<const ColumnConst *>(column))
column = &col_const->getDataColumn();
vitlibar (Member, Author) commented on this change Feb 5, 2024:

The function StorageDistributed::createSelector() must return a vector specifying how to split the input block into per-shard blocks according to the values of the sharding key. The returned vector must contain one element per row of the input block, i.e. its size must equal the number of rows in result.column.

The replacement column = &col_const->getDataColumn() was wrong because it caused the returned selector to have only one element. It was also unnecessary: later in this function, createBlockSelector() is called, and it already knows how to deal with const columns. Thus in this PR I'm just removing this unnecessary and wrong code.
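
To make the size mismatch concrete, here is a minimal, self-contained C++ sketch. It does not use the real ClickHouse column classes; buildSelector() is a hypothetical stand-in for createBlockSelector(), and the point is only that a constant key must be expanded to the full row count rather than collapsed to its single nested value.

#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// A selector assigns each input row to a shard, so it must have
// exactly one entry per row of the input block.
using Selector = std::vector<std::uint64_t>;

// Hypothetical stand-in for createBlockSelector(): maps each row's
// sharding-key value onto one of num_shards shards.
static Selector buildSelector(const std::vector<std::int64_t> & key_values, std::size_t num_shards)
{
    Selector selector(key_values.size());
    for (std::size_t row = 0; row < key_values.size(); ++row)
        selector[row] = static_cast<std::uint64_t>(key_values[row]) % num_shards;
    return selector;
}

int main()
{
    const std::size_t num_rows = 3;

    // Correct handling of the constant key 1000: the const column is
    // expanded to num_rows values, so the selector matches the block.
    std::vector<std::int64_t> materialized(num_rows, 1000);
    assert(buildSelector(materialized, 2).size() == num_rows);

    // The buggy path effectively replaced the const column with its
    // single nested value, yielding a one-element selector for a
    // three-row block -- the size mismatch that crashed downstream.
    std::vector<std::int64_t> unwrapped{1000};
    assert(buildSelector(unwrapped, 2).size() != num_rows);
}

Under this sketch's assumptions, the fix is simply to hand the column to the existing machinery unchanged and let it expand constants itself, which is what removing the three quoted lines achieves.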

alexey-milovidov added the pr-must-backport (Pull request should be backported intentionally. Use this label with great care!) and pr-must-backport-cloud labels Feb 5, 2024
vitlibar force-pushed the fix-distributed-table-with-const-sharding-key branch from f7f2dbb to 5962ed0 on February 6, 2024 08:43
antonio2368 self-assigned this Feb 6, 2024
alexey-milovidov merged commit 01c8542 into ClickHouse:master Feb 6, 2024
260 of 267 checks passed
robot-clickhouse-ci-1 added a commit that referenced this pull request Feb 6, 2024: Cherry pick #59606 to 23.11: Fix distributed table with a constant sharding key (…4bfdd8afa78fca304ad823a63d0f49)
robot-clickhouse-ci-1 added a commit that referenced this pull request Feb 6, 2024: Cherry pick #59606 to 23.12: Fix distributed table with a constant sharding key (…4bfdd8afa78fca304ad823a63d0f49)
robot-clickhouse-ci-1 added a commit that referenced this pull request Feb 6, 2024: Cherry pick #59606 to 24.1: Fix distributed table with a constant sharding key (…bfdd8afa78fca304ad823a63d0f49)
robot-ch-test-poll1 added the pr-backports-created label (Backport PRs are successfully created, it won't be processed by the CI script anymore) Feb 6, 2024
robot-ch-test-poll1 added a commit that referenced this pull request Feb 7, 2024: Backport #59606 to 23.12: Fix distributed table with a constant sharding key
Algunenano (Member) commented:

This has broken multiple tests with the analyzer (or at least they've become flaky):

[image]

vitlibar deleted the fix-distributed-table-with-const-sharding-key branch February 8, 2024 23:29
alexey-milovidov added a commit that referenced this pull request Feb 11, 2024: Backport #59606 to 24.1: Fix distributed table with a constant sharding key
alexey-milovidov added a commit that referenced this pull request Feb 11, 2024: Backport #59606 to 23.11: Fix distributed table with a constant sharding key
Labels
pr-backports-created (Backport PRs are successfully created, it won't be processed by CI script anymore), pr-backports-created-cloud, pr-bugfix (Pull request with bugfix, not backported by default), pr-must-backport (Pull request should be backported intentionally. Use this label with great care!), pr-must-backport-cloud
Development

Successfully merging this pull request may close these issues.

Fatal after update to 24.1.1.2048
7 participants