
Don't fall back to in-order pool when max_streams = 1 for remote fs #57334

Conversation

@nickitat (Member)

Changelog category (leave one):

  • Performance Improvement

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Now we use the default read pool for reading from external storage when max_streams = 1. This is beneficial when read prefetches are enabled.
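
For illustration, a minimal sketch of the situation this change targets (the table name is hypothetical; `allow_prefetched_read_pool_for_remote_filesystem` is an existing ClickHouse setting, but the values here are only illustrative):

```sql
-- Hypothetical table backed by external storage (e.g. S3).
-- With max_threads = 1 the query reads through a single stream (max_streams = 1);
-- before this change such reads fell back to the in-order pool,
-- which does not benefit from read prefetches.
SELECT count()
FROM s3_backed_table
SETTINGS
    max_threads = 1,                                       -- force a single read stream
    allow_prefetched_read_pool_for_remote_filesystem = 1;  -- keep prefetches enabled
```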

@robot-clickhouse-ci-2 robot-clickhouse-ci-2 added the pr-performance Pull request with some performance improvements label Nov 28, 2023
@robot-clickhouse-ci-2 (Contributor) commented Nov 28, 2023

This is an automated comment for commit 626668c with a description of existing statuses. It is updated for the latest CI run.


Successful checks
| Check name | Description | Status |
|------------|-------------|--------|
| AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parentheses. If it fails, ask a maintainer for help. | ✅ success |
| CI running | A meta-check that indicates the running CI. Normally it's in a success or pending state. A failed status indicates some problem with the PR. | ✅ success |
| ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log by grepping for cmake. Use these options and follow the general build process. | ✅ success |
| Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help. | ✅ success |
| Docker image for servers | The check to build and optionally push the mentioned image to Docker Hub. | ✅ success |
| Fast test | Normally this is the first check run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here. | ✅ success |
| Flaky tests | Checks whether newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test fails at least once, or runs for too long, this check will be red. We don't allow flaky tests; read the doc. | ✅ success |
| Install packages | Checks that the built packages are installable in a clean environment. | ✅ success |
| Integration tests | The integration tests report. The package type is given in parentheses, and the optional part/total tests are given in square brackets. | ✅ success |
| Mergeable Check | Checks whether all other necessary checks are successful. | ✅ success |
| Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. The optional part/total tests are given in square brackets. | ✅ success |
| Push to Dockerhub | The check for building and pushing the CI-related Docker images to Docker Hub. | ✅ success |
| SQLTest | There's no description for the check yet; please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS. | ✅ success |
| SQLancer | Fuzzing tests that detect logical bugs with the SQLancer tool. | ✅ success |
| Sqllogic | Runs clickhouse on the sqllogic test set against SQLite and checks that all statements pass. | ✅ success |
| Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations: release, debug, with sanitizers, etc. | ✅ success |
| Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations: release, debug, with sanitizers, etc. | ✅ success |
| Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors. | ✅ success |
| Style Check | Runs a set of checks to keep the code style clean. If some tests fail, see the related log from the report. | ✅ success |
| Unit tests | Runs the unit tests for different release types. | ✅ success |
| Upgrade check | Runs stress tests on the server version from the last release, then tries to upgrade it to the version from the PR. It checks whether the new server can start up successfully without errors, crashes, or sanitizer asserts. | ✅ success |

@nickitat (Member, Author)

```shell
clickhouse-benchmark -q "WITH RANDOM_SET AS (SELECT rand32() FROM numbers(20)) SELECT distinct _part FROM test_reading WHERE k IN RANDOM_SET and _part = '1_62735_87597_5' SETTINGS allow_experimental_parallel_reading_from_replicas=0, enable_filesystem_cache=0" --cumulative -i 1000
```

before:

```
Queries executed: 1000.

localhost:9000, queries: 1000, QPS: 5.409, RPS: 752087.598, MiB/s: 13.627, result RPS: 0.000, result MiB/s: 0.000.

0.000%		0.090 sec.
10.000%		0.142 sec.
20.000%		0.155 sec.
30.000%		0.165 sec.
40.000%		0.175 sec.
50.000%		0.182 sec.
60.000%		0.190 sec.
70.000%		0.200 sec.
80.000%		0.213 sec.
90.000%		0.228 sec.
95.000%		0.240 sec.
99.000%		0.295 sec.
99.900%		0.417 sec.
99.990%		0.443 sec.
```

after:

```
Queries executed: 1000.

localhost:9000, queries: 1000, QPS: 6.436, RPS: 891218.317, MiB/s: 16.147, result RPS: 0.000, result MiB/s: 0.000.

0.000%		0.064 sec.
10.000%		0.114 sec.
20.000%		0.127 sec.
30.000%		0.136 sec.
40.000%		0.143 sec.
50.000%		0.151 sec.
60.000%		0.160 sec.
70.000%		0.169 sec.
80.000%		0.181 sec.
90.000%		0.199 sec.
95.000%		0.216 sec.
99.000%		0.259 sec.
99.900%		0.284 sec.
99.990%		0.330 sec.
```

```shell
clickhouse-benchmark -q "select * from hits where CounterID < 30 SETTINGS allow_experimental_parallel_reading_from_replicas=0, enable_filesystem_cache=0" --cumulative -i 1000
```

before:

```
Queries executed: 1000.

localhost:9000, queries: 1000, QPS: 7.325, RPS: 60009.000, MiB/s: 0.268, result RPS: 109.880, result MiB/s: 0.040.

0.000%		0.061 sec.
10.000%		0.073 sec.
20.000%		0.079 sec.
30.000%		0.083 sec.
40.000%		0.086 sec.
50.000%		0.090 sec.
60.000%		0.094 sec.
70.000%		0.099 sec.
80.000%		0.107 sec.
90.000%		0.124 sec.
95.000%		0.280 sec.
99.000%		1.071 sec.
99.900%		1.082 sec.
99.990%		1.082 sec.
```

after:

```
Queries executed: 1000.

localhost:9000, queries: 1000, QPS: 12.266, RPS: 100483.770, MiB/s: 0.449, result RPS: 183.991, result MiB/s: 0.067.

0.000%		0.037 sec.
10.000%		0.045 sec.
20.000%		0.051 sec.
30.000%		0.055 sec.
40.000%		0.058 sec.
50.000%		0.061 sec.
60.000%		0.065 sec.
70.000%		0.070 sec.
80.000%		0.074 sec.
90.000%		0.084 sec.
95.000%		0.091 sec.
99.000%		1.031 sec.
99.900%		1.061 sec.
99.990%		1.068 sec.
```

@qoega (Member) commented Nov 28, 2023

The benchmark results do not look as good as expected. In the first case, as I understand it, you made 20 sequential reads before the changes, and the diff is only 30-100 ms.

@nickitat (Member, Author) commented Nov 28, 2023

It represents a complex case: the total number of marks is low (under 20), so there are only 2-3 tasks anyway, and the new prefetches don't add much. Also, mark ranges are sparse within each task, so to get maximum parallelism you need to split the whole set of marks to read into very small tasks; then each one can be prefetched. I did a quick experiment, and the execution time indeed dropped below 50 ms.
added this to #54131
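
As a rough illustration of the experiment described above (the prefetch-tuning settings below exist in ClickHouse, but the specific values, and whether these were the knobs actually used, are assumptions):

```sql
-- Sketch: shrink read tasks so that each sparse mark range becomes its own
-- prefetchable task; the setting values here are illustrative guesses.
WITH RANDOM_SET AS (SELECT rand32() FROM numbers(20))
SELECT DISTINCT _part
FROM test_reading
WHERE k IN RANDOM_SET AND _part = '1_62735_87597_5'
SETTINGS
    allow_experimental_parallel_reading_from_replicas = 0,
    enable_filesystem_cache = 0,
    filesystem_prefetch_step_marks = 1,  -- prefetch in single-mark steps
    filesystem_prefetches_limit = 200;   -- allow many prefetches in flight
```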

@yakov-olkhovskiy self-assigned this Nov 29, 2023
@nickitat force-pushed the read_from_prefetched_pool_even_for_single_stream branch from 86f7af0 to ffb9fc2 on November 29, 2023 16:09
@nickitat merged commit 2362bb2 into ClickHouse:master Dec 1, 2023
336 checks passed
@nickitat deleted the read_from_prefetched_pool_even_for_single_stream branch December 1, 2023 20:39