Randomize disabled optimizations in CI #57315

Merged
merged 8 commits into ClickHouse:master on Dec 13, 2023

Conversation

Algunenano
Member

Changelog category (leave one):

  • Not for changelog (changelog entry is not required)

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Randomize these optimizations that are disabled by default. Do they work correctly? Should they be on instead? Let's see
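
For context, a minimal sketch of how this kind of per-test setting randomization can be wired into a test runner such as tests/clickhouse-test, assuming a map from setting names to random-value generators. The structure and helper names below are illustrative, not the PR's actual diff:

```python
import random

# Illustrative sketch only: a map of optimizations that are disabled by default,
# each paired with a generator producing a random value per test run.
# The real list and plumbing live in tests/clickhouse-test; names here are examples.
RANDOMIZED_SETTINGS = {
    "optimize_functions_to_subcolumns": lambda: random.randint(0, 1),
    "optimize_using_constraints": lambda: random.randint(0, 1),
}


def pick_random_settings() -> dict:
    """Choose a value for every randomized setting for one test invocation."""
    return {name: gen() for name, gen in RANDOMIZED_SETTINGS.items()}


def as_client_options(settings: dict) -> list:
    """Render the chosen settings as clickhouse-client command-line options."""
    return [f"--{name}={value}" for name, value in settings.items()]


if __name__ == "__main__":
    chosen = pick_random_settings()
    print("running test with:", " ".join(as_client_options(chosen)))
```

Passing settings as clickhouse-client command-line options is one way to apply a random combination per test without editing the test files themselves.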

@robot-ch-test-poll3 added the pr-not-for-changelog label (This PR should not be mentioned in the changelog) on Nov 28, 2023
@robot-ch-test-poll3
Contributor

robot-ch-test-poll3 commented Nov 28, 2023

This is an automated comment for commit 89b9373 with a description of the existing statuses. It's updated for the latest CI run.


Successful checks

| Check name | Description | Status |
|---|---|---|
| AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parentheses. If it fails, ask a maintainer for help | ✅ success |
| CI running | A meta-check that indicates the running CI. Normally it's in a success or pending state. A failed status indicates problems with the PR | ✅ success |
| ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log by grepping for cmake. Use these options and follow the general build process | ✅ success |
| Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | ✅ success |
| Docker image for servers | The check to build and optionally push the mentioned image to Docker Hub | ✅ success |
| Fast test | Normally this is the first check that is run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | ✅ success |
| Flaky tests | Checks whether newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test fails or runs too long at least once, this check will be red. We don't allow flaky tests; read the doc | ✅ success |
| Install packages | Checks that the built packages are installable in a clean environment | ✅ success |
| Mergeable Check | Checks if all other necessary checks are successful | ✅ success |
| Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. In square brackets are the optional part/total tests | ✅ success |
| Push to Dockerhub | The check for building and pushing the CI-related Docker images to Docker Hub | ✅ success |
| SQLTest | There's no description for the check yet; please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success |
| SQLancer | Fuzzing tests that detect logical bugs with the SQLancer tool | ✅ success |
| Sqllogic | Runs ClickHouse on the sqllogic test set against SQLite and checks that all statements pass | ✅ success |
| Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | ✅ success |
| Style Check | Runs a set of checks to keep the code style clean. If some of the tests fail, see the related log from the report | ✅ success |
| Unit tests | Runs the unit tests for different release types | ✅ success |
| Upgrade check | Runs stress tests on the server version from the last release and then tries to upgrade it to the version from the PR. It checks whether the new server can start up successfully without errors, crashes, or sanitizer asserts | ✅ success |

Failing checks

| Check name | Description | Status |
|---|---|---|
| Integration tests | The integration tests report. In parentheses the package type is given, and in square brackets are the optional part/total tests | ❌ failure |
| Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc. | ❌ failure |
| Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc. | ❌ failure |

@Algunenano
Member Author

So many red tests, which is both nice (detecting issues) and bad (detecting issues).

00083_array_filter failed because of optimize_functions_to_subcolumns -> #57326

Need time to review the rest of the failures
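
To see what optimize_functions_to_subcolumns actually changes, here is a hedged sketch that compares EXPLAIN SYNTAX output with the setting off and on through clickhouse-client. The table and column names are made up; the rewrite only applies where a real subcolumn such as arr.size0 exists, and the output can vary by version:

```python
import subprocess

# Hedged sketch: compare how ClickHouse rewrites a query with and without
# optimize_functions_to_subcolumns, using EXPLAIN SYNTAX through clickhouse-client.
# "test_table" and "arr" are placeholders for a local table with an Array column.
QUERY = "EXPLAIN SYNTAX SELECT length(arr) FROM test_table"


def explain(setting_value: int) -> str:
    """Run the EXPLAIN with the setting passed as a client command-line option."""
    result = subprocess.run(
        [
            "clickhouse-client",
            f"--optimize_functions_to_subcolumns={setting_value}",
            "--query",
            QUERY,
        ],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout


if __name__ == "__main__":
    print("disabled:\n", explain(0))
    # With the setting enabled, length(arr) may be rewritten to read the
    # arr.size0 subcolumn instead of the whole array.
    print("enabled:\n", explain(1))
```

With the setting enabled, a call like length(arr) may be rewritten into a read of the arr.size0 subcolumn, which is exactly the kind of plan change that can trip tests written against the default behavior.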

@novikd
Member

novikd commented Nov 28, 2023

Do they work correctly? Should they be on instead? Let's see

How would random settings help to understand that?

@novikd self-assigned this on Nov 28, 2023
@Algunenano
Member Author

How would random settings help to understand that?

Yeah, I should probably have started by enabling them by default first; then we could leave them random. I'll update the PR and keep analyzing the errors to report the bugs found.

@Algunenano
Member Author

Much better results now, and by better I mean horrible 😄. Several crashes and sanitizer alerts. I'll start reporting them.

@Algunenano
Member Author

Failures:

  • Performance Comparison [1/4]: Ignoring it for now, although the slowdown in map_update might be related (not sure whether performance tests pick up the randomness, though).
  • Stateful tests (aarch64): Failing in 00083_array_filter, 00078_group_by_arrays and 00013_sorting_of_nested with optimize_functions_to_subcolumns, both with and without the analyzer. -> optimize_functions_to_subcolumns: Can't adjust last granule because it has 113 rows, but try to subtract 41073 rows #57326
  • Stateful tests SANITIZERS/DEBUG. Triggering the logical error in 00083_array_filter, which then stops the server and kills the tests.
  • Stateless tests (aarch64): Hard crash (signal 11) in 01825_type_json_in_other_types.sh. JSON + optimize_functions_to_subcolumns -> ColumnObject (experimental feature that is going to be removed): Crash with optimize_functions_to_subcolumns #57384
  • ClickHouse Stateless Tests (asan) [1/4]: Same problem.
  • ClickHouse Stateless Tests (asan) [2/4]. Minor tweaks needed for 02498_analyzer_settings_push_down.
  • ClickHouse Stateless Tests (asan) [3/4]. Logical error when running 02565_update_empty_nested.sql and optimize_functions_to_subcolumns.
  • ClickHouse Stateless Tests (asan) [4/4]: 02911_join_on_nullsafe_optimization.sql, 01030_incorrect_count_summing_merge_tree and 02892_orc_filter_pushdown are all failing with optimize_functions_to_subcolumns too.

Before continuing, it's pretty obvious that optimize_functions_to_subcolumns is not ready for use, so I'll disable it and retry everything to analyze only the other settings (which have shown no issues so far).
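
Continuing the illustrative randomizer sketch from above, "disable it and retry" could look like pulling the broken setting out of the randomized pool and pinning it to its default, while the rest keep being randomized. Again, the names are hypothetical, not the actual diff:

```python
import random

# Hypothetical continuation of the earlier sketch: exclude the optimization that
# is not ready from randomization and pin it off, so failures stay reproducible
# while the remaining settings keep getting random values per test.
RANDOMIZED_SETTINGS = {
    "optimize_functions_to_subcolumns": lambda: random.randint(0, 1),
    "optimize_using_constraints": lambda: random.randint(0, 1),
}

DISABLED_FOR_NOW = {"optimize_functions_to_subcolumns"}


def pick_random_settings() -> dict:
    settings = {
        name: gen()
        for name, gen in RANDOMIZED_SETTINGS.items()
        if name not in DISABLED_FOR_NOW
    }
    # Keep the problematic optimization explicitly at its default (off).
    settings.update({name: 0 for name in DISABLED_FOR_NOW})
    return settings
```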

@Algunenano
Member Author

New batch of failures:

@Algunenano
Member Author

I'm removing the randomization of optimize_using_constraints and I expect the rest to be clean. I'm tempted to remove broken optimizations, at least for the old infra (not analyzer), but we can do that in other PRs if and when the proper fix is done.

@Algunenano
Member Author

Only unrelated (flaky) failures remain. I'll set the PR to only enable randomization of these settings for now, so at least we know they get exercised continuously and it's easier to decide whether to enable them by default or not.

@Algunenano marked this pull request as ready for review on December 11, 2023 at 12:18
@Algunenano
Member Author

Changes look harmless, so I'm merging.

@Algunenano merged commit a45c8bd into ClickHouse:master on Dec 13, 2023
351 of 355 checks passed