Fix RWLock inconsistency after write lock timeout #57454

Merged
merged 6 commits into ClickHouse:master from fix-rwlock on Dec 10, 2023

Conversation

vitlibar (Member) commented Dec 3, 2023

Changelog category:

  • Bug Fix

Changelog entry:

Fix RWLock inconsistency after write lock timeout.

This PR provides a correct fix for the issue described in #38864. The fix in #38864 was incomplete: after it, a failed exclusive lock attempt could leave the RWLock in an inconsistent state, so the lock remained effectively held even after all acquired locks were released. See #42719 (comment)
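
For context, here is a minimal sketch of the failure pattern (an illustration only, not the PR's actual unit test). It assumes the RWLockImpl interface visible in the hunks below (create() and getLock(type, query_id, timeout)); the query ids and timeouts are arbitrary.

```cpp
#include <cassert>
#include <chrono>
#include <Common/RWLock.h>

int main()
{
    using namespace std::chrono_literals;

    auto lock = DB::RWLockImpl::create();

    /// A reader acquires the lock.
    auto read_holder = lock->getLock(DB::RWLockImpl::Read, "query_1", 1000ms);
    assert(read_holder);

    /// A writer times out while the reader is still active: getLock() returns an empty holder,
    /// which IStorage::tryLockTimed() turns into a DEADLOCK_AVOIDED exception.
    auto failed_write = lock->getLock(DB::RWLockImpl::Write, "query_2", 50ms);
    assert(!failed_write);

    /// Release the reader. Before this fix, the failed write attempt could leave the internal
    /// queues inconsistent, so the lock stayed effectively held; with the fix, a new request
    /// succeeds once all previously granted locks are released.
    read_holder.reset();
    auto write_holder = lock->getLock(DB::RWLockImpl::Write, "query_3", 1000ms);
    assert(write_holder);
}
```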

@robot-ch-test-poll added the pr-bugfix label (Pull request with bugfix, not backported by default) on Dec 3, 2023
robot-ch-test-poll (Contributor) commented Dec 3, 2023

This is an automated comment for commit 179a0a2 with a description of existing statuses. It's updated for the latest CI run.


Successful checks

Check name | Description | Status
ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log, grepping for cmake. Use these options and follow the general build process | ✅ success
Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | ✅ success
Docker image for servers | The check to build and optionally push the mentioned image to docker hub | ✅ success
Fast test | Normally this is the first check that is run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | ✅ success
Flaky tests | Checks whether newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test fails at least once, or runs too long, this check will be red. We don't allow flaky tests; read the doc | ✅ success
Install packages | Checks that the built packages are installable in a clean environment | ✅ success
Mergeable Check | Checks if all other necessary checks are successful | ✅ success
Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. In square brackets are the optional part/total tests | ✅ success
Push to Dockerhub | The check for building and pushing the CI-related docker images to docker hub | ✅ success
SQLTest | There's no description for the check yet; please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success
SQLancer | Fuzzing tests that detect logical bugs with the SQLancer tool | ✅ success
Sqllogic | Runs clickhouse on the sqllogic test set against sqlite and checks that all statements pass | ✅ success
Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ✅ success
Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | ✅ success
Style Check | Runs a set of checks to keep the code style clean. If some of the tests fail, see the related log from the report | ✅ success
Unit tests | Runs the unit tests for different release types | ✅ success

Failed and pending checks

Check name | Description | Status
AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parentheses. If it fails, ask a maintainer for help | ❌ failure
Bugfix validate check | Checks that either a new test (functional or integration) is added, or there are changed tests that fail with the binary built on the master branch | ❌ error
CI running | A meta-check that indicates the running CI. Normally it's in success or pending state. The failed status indicates some problems with the PR | ⏳ pending
Integration tests | The integration tests report. In parentheses the package type is given, and in square brackets are the optional part/total tests | ❌ failure
Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ❌ failure
Upgrade check | Runs stress tests on the server version from the last release and then tries to upgrade it to the version from the PR. It checks if the new server can successfully start up without any errors, crashes or sanitizer asserts | ❌ failure

vitlibar (Member, Author) commented Dec 4, 2023

@alesapin The "Bugfix validate check" says "Changed tests don't reproduce the bug", but that's not true: the bug is reproduced well in the unit tests (src/unit_tests_dbms). So I suppose we should fix the CI to make it consider those unit test results too. Do you know how to do that?

@Algunenano Algunenano self-assigned this Dec 5, 2023
Algunenano (Member) left a comment


In general it looks great. I've left some comments about minor things

}


String RWLockImpl::getOwnerQueryIdsDescription() const
Algunenano (Member):

I'd recommend fmt::formatter to do things like this. It makes formatting containers much simpler.

Not necessary for the PR, just something to keep in mind
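
For reference, a minimal illustration of the suggestion (not code from this PR): with <fmt/ranges.h>, a container can be passed to fmt::join instead of assembling the string in a loop; the container name and values below are made up. For containers of custom types, a fmt::formatter specialization would play the same role.

```cpp
#include <string>
#include <vector>
#include <fmt/ranges.h>

int main()
{
    std::vector<std::string> owner_query_ids{"query_1", "query_2", "query_3"};
    /// fmt::join renders the elements with the given separator: "query_1, query_2, query_3"
    fmt::print("Owner query ids: {}\n", fmt::join(owner_query_ids, ", "));
}
```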

vitlibar (Member, Author), Dec 7, 2023:

Here we build the output in a loop; fmt::formatter() doesn't seem to be much better in this case.

)

def truncate_tables():
    while time.time() < end_time:
        table_name = f"mydb.tbl{randint(1, num_nodes)}"
        node = nodes[randint(0, num_nodes - 1)]
        node.query(f"TRUNCATE TABLE IF EXISTS {table_name} SYNC")
        # "TRUNCATE TABLE IF EXISTS" still can throw some errors (e.g. "WRITE locking attempt on node0 has timed out!")
Algunenano (Member):

I guess this depends on the table type, right? MergeTree should not require an exclusive lock for truncate, as it's going to replace the old parts with an empty range.

Algunenano (Member):

OTOH, in the case of DROP it depends on the database type (Atomic is fine).

vitlibar (Member, Author), Dec 7, 2023:

For MergeTree-based tables TRUNCATE doesn't require an exclusive lock (see); however, this test can create "Log" tables too. I added a comment about that to the test.

@@ -41,8 +41,8 @@ RWLockImpl::LockHolder IStorage::tryLockTimed(
{
const String type_str = type == RWLockImpl::Type::Read ? "READ" : "WRITE";
throw Exception(ErrorCodes::DEADLOCK_AVOIDED,
"{} locking attempt on \"{}\" has timed out! ({}ms) Possible deadlock avoided. Client should retry",
type_str, getStorageID(), acquire_timeout.count());
"{} locking attempt on \"{}\" has timed out! ({}ms) Possible deadlock avoided. Client should retry. Owner query ids: {}",
Algunenano (Member):

Although this might be inconsistent (time to check vs time to read), it's an amazing QOL improvement. It'd have been extremely useful to have in the past.

@@ -169,11 +178,12 @@ RWLockImpl::getLock(RWLockImpl::Type type, const String & query_id, const std::c
if (rdlock_owner == readers_queue.end() && wrlock_owner == writers_queue.end())
{
(type == Read ? rdlock_owner : wrlock_owner) = it_group; /// SM2: nothrow
Algunenano (Member):

This ternary conditional seems wrong, as this can only be reached if type == Read. Otherwise writers_queue won't be empty

vitlibar (Member, Author):

readers_queue and writers_queue can't both be empty here: at least one of them must contain it_group.
But we don't check those queues for emptiness here; we assign rdlock_owner or wrlock_owner to it_group.
This condition is not wrong.

{
if (rdlock_owner != readers_queue.end())
{
for (;;)
Algunenano (Member):

There should be, at most, one group of readers pending, right?

You can have N readers that are already owners, a writer, and then only one block of readers all together. AFAICS it's not possible to have more than one group of readers that aren't owners, but I might be missing something.

vitlibar (Member, Author), Dec 7, 2023:

Yes, we append readers to the same reader group if this group doesn't have ownership yet.
We create another reader group only if the last reader group is already an owner.
But there can be multiple reader groups with ownership.

vitlibar (Member, Author):

I've added chassert() to check that.

wrlock_owner = writers_queue.begin();
}
else
Algunenano (Member):

Is this even possible anymore? If we removed all readers there must be a writer or nothing in the queue, but never another reader group.

vitlibar (Member, Author):

There can be an active reader group, then a writer, then another (inactive) reader group.
We remove the active reader group, then we activate the writer.

if (timepoint.length() < 5)
    timepoint.insert(0, 5 - timepoint.length(), ' ');
std::lock_guard lock{mutex};
std::cout << timepoint << " : " << event << std::endl;
Algunenano (Member):

We can remove the prints now and leave the asserts

vitlibar (Member, Author):

done

@vitlibar force-pushed the fix-rwlock branch 3 times, most recently from 49415fe to ec5348a, on December 7, 2023 21:07
@vitlibar vitlibar merged commit a058a26 into ClickHouse:master Dec 10, 2023
326 of 337 checks passed
@vitlibar vitlibar deleted the fix-rwlock branch December 10, 2023 13:09
Labels: pr-bugfix (Pull request with bugfix, not backported by default)
Projects: none yet
Development: successfully merging this pull request may close these issues: none yet
4 participants