
Fix flaky tests caused by OPTIMIZE FINAL failing memory budget check #49764

Merged
merged 2 commits into master from bgmem on Jul 5, 2023

Conversation

al13n321 (Member)

Changelog category (leave one):

  • Not for changelog (changelog entry is not required)

02461_prewhere_row_level_policy_lightweight_delete and 02458_relax_too_many_parts failed in the same way: their OPTIMIZE ... FINAL query didn't do the merge because "Current background tasks memory usage (31.77 GiB) is more than the limit (31.01 GiB)".

This PR does two things:

1. A lot of the 31+ GiB of background memory usage seems to come from the 02581_share_big_sets_* tests. This PR marks two of the four as no-parallel (the ones that are also marked long; I'm guessing the other two use much less memory, but that is only a guess).

2. OPTIMIZE FINAL already has a retry loop for a similar situation: waiting for other merges to complete. This PR moves the memory budget check into a similar retry loop, which re-checks the limit every second for up to 2 minutes (lock_acquire_timeout_for_background_operations); a simplified sketch of the idea follows right after this list.
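For readers skimming the diff, here is a minimal sketch of that retry idea, assuming a poll-every-second shape. waitForMemoryBudget and check_budget are illustrative names made up for this sketch; the real check in the diff is is_background_memory_usage_ok(out_disable_reason), shown in the excerpt further down.

```cpp
#include <chrono>
#include <functional>
#include <string>
#include <thread>

/// Minimal sketch of the retry idea (illustrative, not the PR's actual code).
/// Instead of refusing the merge the moment the background memory budget is
/// exceeded, re-check the budget every second until it is satisfied or the
/// timeout (lock_acquire_timeout_for_background_operations, 120 seconds per
/// the description above) runs out. `check_budget` stands in for the real
/// is_background_memory_usage_ok(out_disable_reason).
bool waitForMemoryBudget(
    const std::function<bool(std::string &)> & check_budget,
    std::chrono::seconds timeout,
    std::string & out_disable_reason)
{
    constexpr auto poll_interval = std::chrono::seconds(1);
    auto attempts = timeout / poll_interval;

    for (decltype(attempts) i = 0; i < attempts; ++i)
    {
        if (check_budget(out_disable_reason))
            return true;                          // budget OK, the merge can be scheduled
        std::this_thread::sleep_for(poll_interval);
    }
    return check_budget(out_disable_reason);      // one last check after the final sleep
}
```

This only illustrates the poll-until-timeout shape; the actual change lives inside ClickHouse's merge-selection code and reuses the existing retry machinery mentioned above.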

@robot-ch-test-poll3 added the pr-not-for-changelog (This PR should not be mentioned in the changelog) label on May 11, 2023
@robot-ch-test-poll3 (Contributor) commented May 11, 2023

This is an automated comment for commit a4abb81 with a description of the existing statuses. It is updated for the latest CI run.
The full report is available here
The overall status of the commit is 🟢 success

| Check name | Description | Status |
| --- | --- | --- |
| AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parentheses. If it fails, ask a maintainer for help. | 🟢 success |
| CI running | A meta-check that indicates the running CI. Normally it is in success or pending state. A failed status indicates some problems with the PR. | 🟢 success |
| ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log by grepping for cmake. Use these options and follow the general build process. | 🟢 success |
| Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help. | 🟢 success |
| Docker image for servers | The check to build and optionally push the mentioned image to Docker Hub. | 🟢 success |
| Fast test | Normally this is the first check that is run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here. | 🟢 success |
| Flaky tests | Checks whether newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test fails at least once, or runs for too long, this check will be red. We don't allow flaky tests; read the doc. | 🟢 success |
| Install packages | Checks that the built packages are installable in a clean environment. | 🟢 success |
| Integration tests | The integration tests report. The package type is given in parentheses, and the optional part/total tests in square brackets. | 🟢 success |
| Mergeable Check | Checks if all other necessary checks are successful. | 🟢 success |
| Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. The optional part/total tests are given in square brackets. | 🟢 success |
| Push to Dockerhub | The check for building and pushing the CI-related docker images to Docker Hub. | 🟢 success |
| SQLancer | Fuzzing tests that detect logical bugs with the SQLancer tool. | 🟢 success |
| Sqllogic | Runs clickhouse on the sqllogic test set against sqlite and checks that all statements pass. | 🟢 success |
| Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc. | 🟢 success |
| Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc. | 🟢 success |
| Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors. | 🟢 success |
| Style Check | Runs a set of checks to keep the code style clean. If some of the checks fail, see the related log in the report. | 🟢 success |
| Unit tests | Runs the unit tests for different release types. | 🟢 success |
| Upgrade check | Runs stress tests on the server version from the last release and then tries to upgrade it to the version from this PR. It checks whether the new server can start up successfully without any errors, crashes, or sanitizer asserts. | 🟢 success |

```cpp
if (!is_background_memory_usage_ok(out_disable_reason))
{
    constexpr auto poll_interval = std::chrono::seconds(1);
    Int64 attempts = timeout / poll_interval;
```

A Member commented on this diff:
This timeout is quite big (120 seconds), so we can wait here for a long time, and then wait up to another 120 seconds for currently_merging_mutating_parts (in the lines right after this loop). Maybe update the timeout value after waiting here?
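To make the suggestion concrete, here is a hedged sketch of one way the remaining budget could be tracked, assuming the second wait simply gets whatever time is left over. remainingTimeout is a hypothetical helper, not code from this PR.

```cpp
#include <algorithm>
#include <chrono>

/// Illustrative helper (not from the PR): after waiting for the memory budget,
/// compute how much of the overall timeout is left, so the subsequent wait for
/// currently_merging_mutating_parts cannot add another full 120 seconds on top.
std::chrono::milliseconds remainingTimeout(
    std::chrono::steady_clock::time_point wait_started,
    std::chrono::milliseconds total_timeout)
{
    auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - wait_started);
    return std::max(std::chrono::milliseconds(0), total_timeout - elapsed);
}
```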

@azat (Collaborator) commented Jun 3, 2023

@al13n321 Maybe it's worth adding optimize_throw_if_noop to the failed tests as well?

@alexey-milovidov merged commit 4527ffb into master on Jul 5, 2023
258 checks passed
@alexey-milovidov deleted the bgmem branch on July 5, 2023 at 21:15