[RFC] Fix possible parts duplicates after ALTER TABLE MOVE PART TO SHARD #50777

Closed
wants to merge 4 commits

Conversation

azat (Collaborator) commented Jun 9, 2023

The patch is pretty icky, hence - RFC

Changelog category (leave one):

  • Improvement

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Fix possible parts duplicates after ALTER TABLE MOVE PART TO SHARD (not a production feature)

Before this patch, ALTER TABLE MOVE PART TO SHARD had a race, which was the reason for the flaky test.

Part moves between shards work as follows (some steps are omitted; only the
relevant ones are shown):

  1. I: writes the parts selected for migration to ZooKeeper (/pinned_parts_uuids)
  2. I: queues a request to read these parts from ZooKeeper (SYNC_SOURCE)
  3. R1/R2: process SYNC_SOURCE
  4. R1/R2: process SYNC_PINNED_PARTS_UUIDS (read the parts from /pinned_parts_uuids)
  5. R1/R2: trigger DESTINATION_FETCH (after this the destination shard can clone the part)

Where:

  • I - initiator
  • R1/R2 - replicas 1 and 2

Now imagine the following: a SELECT query starts before step (4) has finished.
Once (4) finishes, remote nodes can already fetch data from the replicas, but
that SELECT query may know nothing about pinned_parts_uuids. In that case the
query will not filter out duplicates (filtering is done only when there are
pinned_parts_uuids, to avoid overhead for tables without active part
movements), and the query result will contain duplicates.
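
Below is a minimal sketch of the read-side precondition that creates this race. The names (`PinnedPartUUIDs`, `needsCrossShardDeduplication`, the snapshot argument) are assumptions for illustration only and do not match the real ClickHouse code; the point is simply that deduplication is skipped when the snapshot taken at query start is empty.

```cpp
#include <memory>
#include <set>
#include <string>

// Hypothetical, simplified model of the per-table state.
struct PinnedPartUUIDs
{
    std::set<std::string> part_uuids;   // contents of /pinned_parts_uuids
};

// Cross-shard deduplication is attempted only when the snapshot of pinned
// part UUIDs taken at query start is non-empty (to avoid overhead for tables
// without active part movements).
bool needsCrossShardDeduplication(const std::shared_ptr<const PinnedPartUUIDs> & snapshot)
{
    return snapshot && !snapshot->part_uuids.empty();
}

// The race: a SELECT that took its snapshot before SYNC_PINNED_PARTS_UUIDS
// finished sees an empty set and skips deduplication, while the destination
// shard may already serve the same part once DESTINATION_FETCH completes.
```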

So to fix this I see only one (icky) option: do not allow any queries to run
in parallel with updating pinned_parts_uuids. Note that this should be enough,
because SYNC_PINNED_PARTS_UUIDS is processed synchronously on all replicas, so
parts cannot appear on the destination shard before it has finished. And
AFAICS a deadlock should not be possible, since acquiring the two locks
(exclusive_lock and pinned_part_uuids_mutex) in the reverse order never
happens.
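
For illustration, here is a deliberately simplified C++ sketch of that idea (not the actual patch; the class layout, member names, lock types, and the assumed acquisition order are all assumptions): the replica holds the table-wide exclusive lock while it updates pinned_part_uuids, so no query can run concurrently with the update, and the mutex is never held while acquiring the table lock.

```cpp
#include <mutex>
#include <set>
#include <shared_mutex>
#include <string>

struct TableSketch
{
    // Queries take this lock in shared mode; the fix takes it exclusively.
    // It is always acquired BEFORE pinned_part_uuids_mutex, never while
    // already holding that mutex, so the lock order stays consistent.
    std::shared_mutex exclusive_lock;

    std::mutex pinned_part_uuids_mutex;
    std::set<std::string> pinned_part_uuids;

    // Runs while processing SYNC_PINNED_PARTS_UUIDS on each replica:
    // block new queries and wait for in-flight ones before updating the set.
    void syncPinnedPartUuids(const std::set<std::string> & uuids_from_zookeeper)
    {
        std::unique_lock exclusive(exclusive_lock);
        std::lock_guard guard(pinned_part_uuids_mutex);
        pinned_part_uuids = uuids_from_zookeeper;
    }

    // Query path (simplified): shared table lock first, then the mutex,
    // so a SELECT runs entirely before or entirely after the update.
    std::set<std::string> snapshotPinnedPartUuidsForQuery()
    {
        std::shared_lock shared(exclusive_lock);
        std::lock_guard guard(pinned_part_uuids_mutex);
        return pinned_part_uuids;
    }
};
```

Under these assumptions, a query that started with the old (possibly empty) snapshot finishes before the update completes, and DESTINATION_FETCH can only be triggered after that.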

Refs: #49579 (cc @alexey-milovidov )
Fixes: #49574
Follow-up for: #17348 (cc @xjewer)
Follow-up for: #17871 (cc @nvartolomei)

Another option is to remove this feature altogether, though it could be useful for automatic re-sharding.

azat added 3 commits June 9, 2023 14:03
…solete-test"

This reverts commit fe00e66, reversing
changes made to 6b6504e.
This is possible for part moves between shards:

    2023.06.08 12:22:47.764173 [ 104 ] {} <Error> default.test_move (PartMovesBetweenShardsOrchestrator): bool DB::PartMovesBetweenShardsOrchestrator::step(): Code: 27. DB::ParsingException: Cannot parse input: expected 'format version: ' at end of stream.: while reading entry: : while reading entry: . (CANNOT_PARSE_INPUT_ASSERTION_FAILED), Stack trace (when copying this message, always include the lines below):

    0. ./.cmake-llvm16/./contrib/llvm-project/libcxx/include/exception:134: Poco::Exception::Exception(String const&, int) @ 0x000000001ae703f2 in /usr/bin/clickhouse
    1. ./.cmake-llvm16/./src/Common/Exception.cpp:92: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x0000000011ec4355 in /usr/bin/clickhouse
    2. ./.cmake-llvm16/./contrib/llvm-project/libcxx/include/string:1499: DB::ParsingException::ParsingException<String&>(int, FormatStringHelperImpl<std::type_identity<String&>::type>, String&) @ 0x0000000011f213a3 in /usr/bin/clickhouse
    3. ./.cmake-llvm16/./src/IO/ReadHelpers.cpp:101: DB::throwAtAssertionFailed(char const*, DB::ReadBuffer&) @ 0x0000000011f12d12 in /usr/bin/clickhouse
    4. ./.cmake-llvm16/./src/IO/ReadHelpers.cpp:137: ? @ 0x0000000011f12faf in /usr/bin/clickhouse
    5. ./.cmake-llvm16/./src/IO/Operators.h:0: DB::ReplicatedMergeTreeLogEntryData::readText(DB::ReadBuffer&, StrongTypedef<unsigned int, DB::MergeTreeDataFormatVersionTag>) @ 0x0000000018391af9 in /usr/bin/clickhouse
    6. ./.cmake-llvm16/./src/Storages/MergeTree/ReplicatedMergeTreeLogEntry.cpp:0: DB::ReplicatedMergeTreeLogEntry::parse(String const&, Coordination::Stat const&, StrongTypedef<unsigned int, DB::MergeTreeDataFormatVersionTag>) @ 0x0000000018392fe8 in /usr/bin/clickhouse
    7. ./.cmake-llvm16/./src/Storages/StorageReplicatedMergeTree.cpp:6070: DB::StorageReplicatedMergeTree::tryWaitForReplicaToProcessLogEntry(String const&, String const&, DB::ReplicatedMergeTreeLogEntryData const&, long) @ 0x0000000017de2627 in /usr/bin/clickhouse
    8. ./.cmake-llvm16/./src/Storages/StorageReplicatedMergeTree.cpp:5864: DB::StorageReplicatedMergeTree::tryWaitForAllReplicasToProcessLogEntry(String const&, DB::ReplicatedMergeTreeLogEntryData const&, long) @ 0x0000000017dddc0c in /usr/bin/clickhouse
    9. ./.cmake-llvm16/./contrib/llvm-project/libcxx/include/vector:543: DB::PartMovesBetweenShardsOrchestrator::stepEntry(DB::PartMovesBetweenShardsOrchestrator::Entry, std::shared_ptr<zkutil::ZooKeeper>) @ 0x000000001836eb2b in /usr/bin/clickhouse
    10. ./.cmake-llvm16/./contrib/llvm-project/libcxx/include/string:1499: DB::PartMovesBetweenShardsOrchestrator::step() @ 0x0000000018365f6e in /usr/bin/clickhouse
    11. ./.cmake-llvm16/./src/Storages/MergeTree/PartMovesBetweenShardsOrchestrator.cpp:47: DB::PartMovesBetweenShardsOrchestrator::run() @ 0x0000000018365094 in /usr/bin/clickhouse
    12. ./.cmake-llvm16/./contrib/llvm-project/libcxx/include/__functional/function.h:0: DB::BackgroundSchedulePoolTaskInfo::execute() @ 0x00000000166d3394 in /usr/bin/clickhouse
    13. ./.cmake-llvm16/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:701: DB::BackgroundSchedulePool::threadFunction() @ 0x00000000166d5a96 in /usr/bin/clickhouse
    14. ./.cmake-llvm16/./src/Core/BackgroundSchedulePool.cpp:0: void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, StrongTypedef<unsigned long, CurrentMetrics::MetricTag>, StrongTypedef<unsigned long, CurrentMetrics::MetricTag>, char const*)::$_0>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, StrongTypedef<unsigned long, CurrentMetrics::MetricTag>, StrongTypedef<unsigned long, CurrentMetrics::MetricTag>, char const*)::$_0&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x00000000166d60ae in /usr/bin/clickhouse
    15. ./.cmake-llvm16/./base/base/../base/wide_integer_impl.h:796: ThreadPoolImpl<std::thread>::worker(std::__list_iterator<std::thread, void*>) @ 0x0000000011f7f45e in /usr/bin/clickhouse
    16. ./.cmake-llvm16/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x0000000011f82eae in /usr/bin/clickhouse
    17. ? @ 0x00007ffff7f95609 in ?
    18. __clone @ 0x00007ffff7eba133 in ?
     (version 23.5.1.1)

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
robot-ch-test-poll4 added the pr-bugfix label (Pull request with bugfix, not backported by default) on Jun 9, 2023
robot-ch-test-poll4 (Contributor) commented Jun 9, 2023

This is an automated comment for commit d89561c with description of existing statuses. It's updated for the latest CI running
The full report is available here
The overall status of the commit is 🔴 failure

Check name | Description | Status
AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parenthesis. If it fails, ask a maintainer for help | 🟢 success
CI running | A meta-check that indicates the running CI. Normally, it's in success or pending state. The failed status indicates some problems with the PR | 🟢 success
ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log, grepping for cmake. Use these options and follow the general build process | 🟢 success
Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | 🟢 success
Docker image for servers | The check to build and optionally push the mentioned image to docker hub | 🟢 success
Fast test | Normally this is the first check that is run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | 🟢 success
Flaky tests | Checks if newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer, and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test fails at least once, or runs too long, this check will be red. We don't allow flaky tests, read the doc | 🟢 success
Install packages | Checks that the built packages are installable in a clean environment | 🟢 success
Integration tests | The integration tests report. In parenthesis the package type is given, and in square brackets are the optional part/total tests | 🟢 success
Mergeable Check | Checks if all other necessary checks are successful | 🟢 success
Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. In square brackets are the optional part/total tests | 🟢 success
Push to Dockerhub | The check for building and pushing the CI-related docker images to docker hub | 🟢 success
SQLancer | Fuzzing tests that detect logical bugs with the SQLancer tool | 🟢 success
Sqllogic | Runs clickhouse on the sqllogic test set against sqlite and checks that all statements pass | 🟢 success
Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | 🟢 success
Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | 🔴 failure
Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | 🟢 success
Style Check | Runs a set of checks to keep the code style clean. If some of the checks fail, see the related log from the report | 🟢 success
Unit tests | Runs the unit tests for different release types | 🟢 success
Upgrade check | Runs stress tests on the server version from the last release and then tries to upgrade it to the version from the PR. It checks whether the new server can start up without errors, crashes, or sanitizer asserts | 🟢 success

alexey-milovidov removed the pr-bugfix label (Pull request with bugfix, not backported by default) on Jun 9, 2023
azat marked this pull request as draft on June 10, 2023 06:41
alesapin (Member) commented:

I would prefer to remove this experimental feature.

azat (Collaborator, Author) commented Jun 12, 2023

> I would prefer to remove this experimental feature.

@alesapin This could be one of the possible building blocks for automatic data re-sharding; can you explain your concerns about this feature?

robot-ch-test-poll2 added the pr-improvement label (Pull request with some product improvements) on Jun 12, 2023
azat force-pushed the part_moves_between_shards-fix branch from e06b4b3 to d89561c on June 12, 2023 19:14
azat marked this pull request as ready for review on June 13, 2023 09:30
alexey-milovidov (Member) commented:

This building block is too complex and fragile; it is better to remove it entirely and write a new one.

azat (Collaborator, Author) commented Jul 15, 2023

Actually, I don't like this patch myself, since it uses a lock that, I would say, is not intended to leak into such places, and I think there will be a better way to distribute parts between replicas.

However, I was thinking about keeping this feature until #45766 is resolved.

So if you don't like this patch as well, then let's close it, and I guess this feature should be removed completely, since it has this problem (even though it is pretty rare) and it looks like the original authors are not interested in it anymore.

alexey-milovidov (Member) commented:

Ok. I also thought that we don't need part UUIDs. Instead, we should identify parts by their content hashes, as they are perfectly immutable.

Successfully merging this pull request may close these issues.

test_part_moves_between_shards/test.py::test_deduplication_while_move flaky