
Skip unavailable replicas in parallel distributed insert select #58931

Merged
merged 5 commits into master from tavplubix-patch-10 on Jan 22, 2024

Conversation

@tavplubix (Member) commented Jan 17, 2024

Changelog category (leave one):

  • Improvement

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Skip unavailable replicas when executing parallel distributed INSERT SELECT

@tavplubix tavplubix marked this pull request as draft January 17, 2024 22:32
@robot-clickhouse robot-clickhouse added the pr-improvement Pull request with some product improvements label Jan 17, 2024
@robot-clickhouse (Member) commented Jan 17, 2024

This is an automated comment for commit 21e089b with a description of existing statuses. It is updated for the latest CI run.

❌ Click here to open a full report in a separate page

Successful checks

| Check name | Description | Status |
|---|---|---|
| AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parentheses. If it fails, ask a maintainer for help | ✅ success |
| ClickBench | Runs [ClickBench](https://github.com/ClickHouse/ClickBench/) with an instant-attach table | ✅ success |
| ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log by grepping for cmake. Use these options and follow the general build process | ✅ success |
| Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | ✅ success |
| Docker image for servers | The check to build and optionally push the mentioned image to Docker Hub | ✅ success |
| Docs check | There's no description for the check yet; please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success |
| Fast test | Normally this is the first check that is run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | ✅ success |
| Flaky tests | Checks whether newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test failed at least once or ran too long, this check will be red. We don't allow flaky tests; read the doc | ✅ success |
| Install packages | Checks that the built packages are installable in a clean environment | ✅ success |
| Mergeable Check | Checks if all other necessary checks are successful | ✅ success |
| SQLTest | There's no description for the check yet; please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success |
| SQLancer | Fuzzing tests that detect logical bugs with the SQLancer tool | ✅ success |
| Sqllogic | Runs clickhouse on the sqllogic test set against sqlite and checks that all statements pass | ✅ success |
| Style Check | Runs a set of checks to keep the code style clean. If some of the tests fail, see the related log from the report | ✅ success |
| Unit tests | Runs the unit tests for different release types | ✅ success |
| Upgrade check | Runs stress tests on the server version from the last release, then tries to upgrade it to the version from the PR. It checks that the new server can start up successfully without errors, crashes, or sanitizer asserts | ✅ success |

Pending and failed checks

| Check name | Description | Status |
|---|---|---|
| CI running | A meta-check that indicates the running CI. Normally it's in a success or pending state. A failed status indicates some problems with the PR | ⏳ pending |
| Integration tests | The integration tests report. In parentheses the package type is given, and in square brackets the optional part/total tests | ❌ failure |
| Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. In square brackets are the optional part/total tests | ❌ failure |
| Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations: release, debug, with sanitizers, etc. | ❌ failure |
| Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations: release, debug, with sanitizers, etc. | ❌ failure |
| Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | ❌ failure |

@tavplubix (Member Author) commented:

Not sure if we should add another setting (I don't like using skip_unavailable_shards for skipping unavailable replicas) or even do it by default.
Also, it would be great to add a test.

@tavplubix (Member Author) commented:

> Not sure if we should add another setting (I don't like using skip_unavailable_shards for skipping unavailable replicas) or even do it by default.

Although, it looks like we already use skip_unavailable_shards for skipping unavailable replicas in s3Cluster.

@Avogar (Member) commented Jan 17, 2024

> Although, it looks like we already use skip_unavailable_shards for skipping unavailable replicas in s3Cluster.

Well, we always explicitly set skip_unavailable_shards to true in *Cluster functions:

```cpp
/// Cluster table functions should always skip unavailable shards.
new_settings.skip_unavailable_shards = true;
```

So if a user changes skip_unavailable_shards, it doesn't affect *Cluster functions. Also, I think we could change the code in IStorageCluster so that it doesn't copy settings with skip_unavailable_shards=true, but instead uses the optional skip_unavailable_endpoints argument of ConnectionPoolWithFailover::getMany.
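A rough sketch of that alternative (hypothetical refactor; the getMany signature and PoolMode mirror the call in the diff reviewed further down, and shard_info is an illustrative name, not taken from the codebase):

```cpp
/// Hypothetical sketch: instead of copying Settings with skip_unavailable_shards
/// forced to true, ask the connection pool directly to skip dead endpoints.
/// The getMany() call matches the one in the diff under review below.
auto try_results = shard_info.pool->getMany(
    timeouts,
    current_settings,
    PoolMode::GET_MANY,
    /*async_callback*/ {},
    /*skip_unavailable_endpoints*/ true);
```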

But I guess we still use skip_unavailable_shards to mean "skip unavailable replicas" in clusterAllReplicas.

@tavplubix (Member Author) commented:

> But I guess we still use skip_unavailable_shards to mean "skip unavailable replicas" in clusterAllReplicas.

It's fine because clusterAllReplicas means "treat each replica as a separate shard"

@tavplubix tavplubix marked this pull request as ready for review January 18, 2024 17:43
@nikitamikhaylov nikitamikhaylov self-assigned this Jan 18, 2024
@tavplubix (Member Author) commented:

- Integration tests (tsan) [3/6]: test_parallel_replicas_custom_key_failover was broken in master, reverted
- Performance Comparison [1/4]: #59070
- Stateful tests (tsan, ParallelReplicas): 00165_jit_aggregate_functions is flaky (probably too long for tsan)
- Stateless tests (release, analyzer): 02901_parallel_replicas_rollup was broken in master, reverted
- Stress test (msan): #58509

@tavplubix tavplubix merged commit c2202ff into master Jan 22, 2024
257 of 264 checks passed
@tavplubix tavplubix deleted the tavplubix-patch-10 branch January 22, 2024 14:34
```cpp
{
    /// Skip unavailable hosts if necessary
    auto try_results = replicas.pool->getMany(timeouts, current_settings, PoolMode::GET_MANY, /*async_callback*/ {}, /*skip_unavailable_endpoints*/ true);
```
A Member commented on this diff:

If the comment at line 1152 is correct, then PoolMode::GET_ONE is sufficient here.
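A minimal sketch of that suggestion (hypothetical; it only swaps the pool mode in the call from the diff above):

```cpp
/// If each "shard" of this cluster is really a single replica, one healthy
/// connection per shard is enough, so GET_ONE would avoid opening extra
/// connections to the remaining replicas of the shard.
auto try_results = replicas.pool->getMany(
    timeouts,
    current_settings,
    PoolMode::GET_ONE,
    /*async_callback*/ {},
    /*skip_unavailable_endpoints*/ true);
```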

@tavplubix (Member Author) replied:

I guess the comment is either incorrect or misleading (because we get the cluster from StorageDistributed::getCluster()); we can ask @nikitamikhaylov.

@tavplubix (Member Author) replied:

Yes, it's a copy-paste from:

```cpp
for (const auto & replicas : src_cluster->getShardsAddresses())
{
    /// There will be only one replica, because we consider each replica as a shard
    for (const auto & node : replicas)
```

And it looks like we have exactly the same issue in StorageReplicatedMergeTree::distributedWriteFromClusterStorage (and that's why we should avoid copy-paste when possible).
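A hedged sketch of how the duplicated logic might be factored out (the helper name, its placement, and the include paths are hypothetical and only illustrative; the getMany call itself mirrors the diff above):

```cpp
// Include paths are approximate, following ClickHouse's src tree layout.
#include <Client/ConnectionPoolWithFailover.h>
#include <IO/ConnectionTimeouts.h>
#include <Core/Settings.h>

namespace DB
{

/// Hypothetical shared helper that both StorageDistributed and
/// StorageReplicatedMergeTree::distributedWriteFromClusterStorage could call,
/// so the "skip unavailable replicas" behaviour lives in a single place
/// instead of being copy-pasted into each distributed INSERT SELECT path.
auto getConnectionsSkippingUnavailable(
    const ConnectionPoolWithFailoverPtr & pool,
    const ConnectionTimeouts & timeouts,
    const Settings & current_settings)
{
    /// Skip unavailable hosts if necessary (same semantics as the diff above).
    return pool->getMany(
        timeouts,
        current_settings,
        PoolMode::GET_MANY,
        /*async_callback*/ {},
        /*skip_unavailable_endpoints*/ true);
}

}
```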
