
Speedup MIN and MAX for native types #58231

Merged: 2 commits into ClickHouse:master on Dec 28, 2023

Conversation

@Algunenano (Member) commented Dec 26, 2023

Changelog category (leave one):

  • Performance Improvement

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

  • Speedup MIN and MAX for native types

Continuation of #40633, but instead of implementing new classes, which was deemed too dangerous because it duplicated storage code, this PR implements the performance improvements by declaring subclasses that override addBatchSinglePlace and addBatchSinglePlaceNotNull for min and max.

Note that I've only enabled SSE and AVX2, because I didn't see any improvement from AVX512 instructions, so there is less code to compile.
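
To illustrate the idea, here is a minimal sketch of what the batched paths boil down to; findNumericExtremeMin and findNumericExtremeMinNotNull are hypothetical names standing in for the real helpers, not the actual ClickHouse code. The point is that addBatchSinglePlace / addBatchSinglePlaceNotNull can scan a whole chunk with a tight loop the compiler auto-vectorizes for SSE/AVX2, instead of going through the per-row add() path:

```cpp
#include <cstddef>
#include <optional>

/// Hypothetical helper: MIN over a dense numeric chunk. The branchless loop is trivially
/// auto-vectorizable (SSE/AVX2), which is what the batched override exploits.
template <typename T>
std::optional<T> findNumericExtremeMin(const T * data, size_t size)
{
    if (size == 0)
        return std::nullopt;

    T best = data[0];
    for (size_t i = 1; i < size; ++i)
        best = data[i] < best ? data[i] : best;
    return best;
}

/// Hypothetical null-aware variant, mirroring addBatchSinglePlaceNotNull:
/// rows flagged in the null map are skipped.
template <typename T>
std::optional<T> findNumericExtremeMinNotNull(const T * data, const unsigned char * null_map, size_t size)
{
    std::optional<T> best;
    for (size_t i = 0; i < size; ++i)
        if (!null_map[i] && (!best || data[i] < *best))
            best = data[i];
    return best;
}
```

In the real code the comparison is parametrized (a MinComparator-style template parameter, as seen in the perf output further down), so one implementation serves both MIN and MAX.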

What's missing from the original PR but I intend to push in separate PRs:

  • Do a similar improvement to ANY to remove the AggregateFunctionsSingleValue::is_any hack.
  • Similar improvements for ARGMIN / ARGMAX and their combinator (by bringing in findNumericExtremeIndex).
  • Improvements in MIN/MAX for generic types (e.g. tuple) by using column.compareAt (see the sketch after this list).
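
For the last point, a rough sketch of the compareAt-based idea, with a hypothetical IColumnLike interface (loosely modeled on IColumn::compareAt) and a hypothetical findExtremeRowMin helper; this is not the actual implementation:

```cpp
#include <cstddef>
#include <optional>

/// Hypothetical column interface: compareAt returns <0, 0 or >0 when row n of *this
/// compares less than, equal to or greater than row m of rhs.
struct IColumnLike
{
    virtual ~IColumnLike() = default;
    virtual int compareAt(size_t n, size_t m, const IColumnLike & rhs, int nan_direction_hint) const = 0;
    virtual size_t size() const = 0;
};

/// Generic MIN over a whole chunk: track only the index of the smallest row and
/// materialize the value once at the end, instead of copying a value on every row.
std::optional<size_t> findExtremeRowMin(const IColumnLike & column)
{
    const size_t rows = column.size();
    if (rows == 0)
        return std::nullopt;

    size_t best = 0;
    for (size_t i = 1; i < rows; ++i)
        if (column.compareAt(i, best, column, /*nan_direction_hint=*/ 1) < 0)
            best = i;
    return best;
}
```

Tracking only the winning row index keeps the inner loop free of generic value copies; the value is materialized once at the end.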

@Algunenano added the 🎅 🎁 gift🎄 (To make people wonder) label on Dec 26, 2023
@robot-ch-test-poll1 added the pr-performance (Pull request with some performance improvements) label on Dec 26, 2023
@robot-ch-test-poll1 (Contributor) commented Dec 26, 2023

This is an automated comment for commit 68787ce with a description of the existing statuses. It is updated for the latest CI run.


Successful checks
| Check name | Description | Status |
| --- | --- | --- |
| AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parenthesis. If it fails, ask a maintainer for help | ✅ success |
| CI running | A meta-check that indicates the running CI. Normally, it's in success or pending state. The failed status indicates some problems with the PR | ✅ success |
| ClickBench | Runs [ClickBench](https://github.com/ClickHouse/ClickBench/) with instant-attach table | ✅ success |
| ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log, grepping for cmake. Use these options and follow the general build process | ✅ success |
| Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | ✅ success |
| Docker image for servers | The check to build and optionally push the mentioned image to docker hub | ✅ success |
| Docs check | There's no description for the check yet, please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success |
| Fast test | Normally this is the first check that is run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | ✅ success |
| Flaky tests | Checks if newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer, and additional randomization of thread scheduling. Integration tests are run up to 10 times. If at least one run of a new test failed or took too long, this check will be red. We don't allow flaky tests; read the doc | ✅ success |
| Install packages | Checks that the built packages are installable in a clear environment | ✅ success |
| Integration tests | The integration tests report. In parenthesis the package type is given, and in square brackets are the optional part/total tests | ✅ success |
| Mergeable Check | Checks if all other necessary checks are successful | ✅ success |
| Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. In square brackets are the optional part/total tests | ✅ success |
| SQLTest | There's no description for the check yet, please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success |
| SQLancer | Fuzzing tests that detect logical bugs with the SQLancer tool | ✅ success |
| Sqllogic | Runs clickhouse on the sqllogic test set against sqlite and checks that all statements pass | ✅ success |
| Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ✅ success |
| Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ✅ success |
| Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | ✅ success |
| Style Check | Runs a set of checks to keep the code style clean. If some of the checks fail, see the related log from the report | ✅ success |
| Unit tests | Runs the unit tests for different release types | ✅ success |
| Upgrade check | Runs stress tests on the server version from the last release and then tries to upgrade it to the version from the PR. It checks if the new server can start up successfully without any errors, crashes or sanitizer asserts | ✅ success |

@Algunenano (Member, Author) commented:

I've been doing some local tests and it does show some improvement, but way less than expected. The problem with my assumption is that if aggregate function execution was only 20% of the query, you can't speed the query up by more than 20% 😄
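
For reference, this is just Amdahl's law, taking the 20% figure above purely as an illustrative assumption:

$$
S_{\text{query}} = \frac{1}{(1 - p) + p/s}, \qquad p = 0.2 \;\Rightarrow\; S_{\text{query}} \le \frac{1}{1 - 0.2} = 1.25
$$

Even if the aggregation itself became infinitely fast, the query can shed at most 20% of its runtime, i.e. at most a 1.25x speedup overall.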

An extract from perf:

+    1.40%     1.29%  QueryPipelineEx  clickhouse                                      [.] std::__1::optional<unsigned short> DB::findNumericExtremeImplAVX2<unsigned short, DB::MinComparator<unsigned short>, true, false>(unsigned short const*, char8_t ▒
+    1.22%     0.64%  QueryPipelineEx  clickhouse                                      [.] MemoryTracker::getSampleProbability(unsigned long)                                                                                                               ▒
     1.19%     0.86%  QueryPipelineEx  clickhouse                                      [.] DB::injection(double, double, double, double)                                                                                                                    ▒
+    1.18%     0.06%  QueryPipelineEx  [kernel.vmlinux]                                [k] wake_up_q                                                                                                                                                        ▒
+    1.11%     0.51%  QueryPipelineEx  [kernel.vmlinux]                                [k] try_to_wake_up                                                                                                                                                   ▒
+    1.11%     0.35%  QueryPipelineEx  clickhouse                                      [.] ProfileEvents::increment(StrongTypedef<unsigned long, ProfileEvents::EventTag>, unsigned long)                                                                   ▒
     1.09%     0.71%  QueryPipelineEx  clickhouse                                      [.] operator delete(void*, unsigned long)                                                                                                                            ▒
+    1.03%     0.35%  QueryPipelineEx  clickhouse                                      [.] DB::ISimpleTransform::prepare()

Now it's spending pretty much the same amount of time calculating the min as in MemoryTracker::getSampleProbability (which is disabled) or ProfileEvents::increment.

@Algunenano marked this pull request as ready for review on December 27, 2023 12:37
@Algunenano (Member, Author) commented:

Some perf tests:

Setup:

create table memory_numbers (u8 UInt8, i64 Int64, f32 Float32, f64 Float64, nu8 Nullable(UInt8), ni64 Nullable(Int64), nf32 Nullable(Float32), nf64 Nullable(Float64)) ENGINE=Memory;
INSERT INTO memory_numbers SELECT
    number,
    number,
    number,
    number,
    if(((number % 3) = 0) OR ((number % 5) = 1), NULL, number),
    if(((number % 3) = 0) OR ((number % 5) = 1), NULL, number),
    if(((number % 3) = 0) OR ((number % 5) = 1), NULL, number),
    if(((number % 3) = 0) OR ((number % 5) = 1), NULL, number)
FROM numbers_mt(1000000000);

MIN

for i in $(printf "u8\ni64\nf32\nf64\nnu8\nni64\nnf32\nnf64"); do echo $i; clickhouse benchmark --port 49000 --timelimit 10 -q "Select min($i) from memory_numbers; "; echo "^^ $i ^^"; done
  • u8
    • Before: localhost:49000, queries: 363, QPS: 36.114, RPS: 36114356350.544, MiB/s: 34441.334, result RPS: 36.114, result MiB/s: 0.000.
    • After: localhost:49000, queries: 395, QPS: 39.409, RPS: 39408625340.607, MiB/s: 37582.994, result RPS: 39.409, result MiB/s: 0.000.
    • JIT: localhost:49000, queries: 405, QPS: 40.311, RPS: 40311354466.735, MiB/s: 38443.903, result RPS: 40.311, result MiB/s: 0.000.
  • i64
    • Before: localhost:49000, queries: 66, QPS: 6.532, RPS: 6532466086.540, MiB/s: 49838.761, result RPS: 6.532, result MiB/s: 0.000.
    • After: localhost:49000, queries: 67, QPS: 6.648, RPS: 6647547599.454, MiB/s: 50716.763, result RPS: 6.648, result MiB/s: 0.000.
    • JIT: localhost:49000, queries: 66, QPS: 6.555, RPS: 6554882065.039, MiB/s: 50009.781, result RPS: 6.555, result MiB/s: 0.000.
  • f32
    • Before: localhost:49000, queries: 130, QPS: 12.831, RPS: 12831239058.107, MiB/s: 48947.293, result RPS: 12.831, result MiB/s: 0.000.
    • After: localhost:49000, queries: 132, QPS: 13.084, RPS: 13083748605.294, MiB/s: 49910.540, result RPS: 13.084, result MiB/s: 0.000.
    • JIT: localhost:49000, queries: 130, QPS: 12.846, RPS: 12846424404.402, MiB/s: 49005.220, result RPS: 12.846, result MiB/s: 0.000.
  • f64
    • Before: localhost:49000, queries: 66, QPS: 6.520, RPS: 6519767205.583, MiB/s: 49741.876, result RPS: 6.520, result MiB/s: 0.000.
    • After: localhost:49000, queries: 68, QPS: 6.711, RPS: 6710618523.220, MiB/s: 51197.956, result RPS: 6.711, result MiB/s: 0.000.
    • JIT: localhost:49000, queries: 66, QPS: 6.546, RPS: 6546223501.223, MiB/s: 49943.722, result RPS: 6.546, result MiB/s: 0.000.
  • nullable u8
    • Before: localhost:49000, queries: 239, QPS: 23.720, RPS: 23719665481.669, MiB/s: 45241.672, result RPS: 23.720, result MiB/s: 0.000.
    • After: localhost:49000, queries: 240, QPS: 23.886, RPS: 23886213623.950, MiB/s: 45559.337, result RPS: 23.886, result MiB/s: 0.000.
    • JIT: localhost:49000, queries: 237, QPS: 23.563, RPS: 23562716521.170, MiB/s: 44942.315, result RPS: 23.563, result MiB/s: 0.000.
  • nullable i64
    • Before: localhost:49000, queries: 60, QPS: 5.863, RPS: 5863340817.145, MiB/s: 50325.458, result RPS: 5.863, result MiB/s: 0.000.
    • After: localhost:49000, queries: 62, QPS: 6.095, RPS: 6094690408.795, MiB/s: 52311.147, result RPS: 6.095, result MiB/s: 0.000.
    • JIT: localhost:49000, queries: 59, QPS: 5.798, RPS: 5797674669.685, MiB/s: 49761.841, result RPS: 5.798, result MiB/s: 0.000.
  • nullable f32
    • Before: localhost:49000, queries: 103, QPS: 10.115, RPS: 10114767729.566, MiB/s: 48230.971, result RPS: 10.115, result MiB/s: 0.000.
    • After: localhost:49000, queries: 107, QPS: 10.592, RPS: 10592291145.637, MiB/s: 50507.980, result RPS: 10.592, result MiB/s: 0.000.
    • JIT: localhost:49000, queries: 104, QPS: 10.237, RPS: 10237331973.631, MiB/s: 48815.403, result RPS: 10.237, result MiB/s: 0.000.
  • nullable f64
    • Before: localhost:49000, queries: 58, QPS: 5.734, RPS: 5733669427.709, MiB/s: 49212.479, result RPS: 5.734, result MiB/s: 0.000.
    • After: localhost:49000, queries: 62, QPS: 6.088, RPS: 6088118360.913, MiB/s: 52254.739, result RPS: 6.088, result MiB/s: 0.000.
    • JIT: localhost:49000, queries: 59, QPS: 5.830, RPS: 5829798633.458, MiB/s: 50037.563, result RPS: 5.830, result MiB/s: 0.000.

Some thoughts:

  • The impact of the aggregate function in these queries is minimal, so a great improvement in the execution of the function only improves the query marginally.
  • I ran this on a CPU with heterogeneous cache sizes and turbo boost enabled, which is not a great idea.
  • The new functions are always faster than the old ones, and almost always faster than JIT. Should we just remove JIT for those functions?

@Algunenano (Member, Author) commented:

x86: [image]

Arm: [image]

Funnily enough, even though ARM does not benefit from the multitarget code, introducing findNumericExtreme helps the compiler optimize the process anyway.

@robot-ch-test-poll2 merged commit 04178a9 into ClickHouse:master on Dec 28, 2023
273 checks passed