
Restore database replica#73100

Merged
tuanpach merged 55 commits into ClickHouse:master from
k-morozov:feature/restore_database_replica
Jul 22, 2025

Conversation

@k-morozov
Contributor

@k-morozov k-morozov commented Dec 11, 2024

Changelog category (leave one):

  • New Feature

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

This PR introduces restore database replica functionality for Replicated databases, similar to the existing SYSTEM RESTORE REPLICA functionality for ReplicatedMergeTree tables.

  • Implemented the ability to restore database replicas in Replicated databases.
  • Useful for recovering a database when its ZooKeeper data is corrupted.
  • Lays the groundwork for future migration tools from the Atomic database engine to the Replicated one.
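As a quick illustration of the user-facing syntax discussed in this PR (the Python helper and the backtick quoting are assumptions made for this example; only the SYSTEM RESTORE DATABASE REPLICA statement itself comes from the PR discussion):

```python
def build_restore_query(database, on_cluster=None):
    """Build a SYSTEM RESTORE DATABASE REPLICA statement (illustrative sketch).

    Backtick quoting of the identifier and the optional ON CLUSTER clause
    are assumptions made for this example.
    """
    query = f"SYSTEM RESTORE DATABASE REPLICA `{database}`"
    if on_cluster:
        query += f" ON CLUSTER {on_cluster}"
    return query

print(build_restore_query("db_repl"))
print(build_restore_query("db_repl", on_cluster="test_cluster"))
```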

Documentation entry for user-facing changes

  • Documentation is written (mandatory for new features)

Information about CI checks: https://clickhouse.com/docs/en/development/continuous-integration/

CI Settings (Only check the boxes if you know what you are doing):

  • Allow: All Required Checks
  • Allow: Stateless tests
  • Allow: Stateful tests
  • Allow: Integration Tests
  • Allow: Performance tests
  • Allow: All Builds
  • Allow: batch 1, 2 for multi-batch jobs
  • Allow: batch 3, 4, 5, 6 for multi-batch jobs

  • Exclude: Style check
  • Exclude: Fast test
  • Exclude: All with ASAN
  • Exclude: All with TSAN, MSAN, UBSAN, Coverage
  • Exclude: All with aarch64, release, debug

  • Run only fuzzers related jobs (libFuzzer fuzzers, AST fuzzers, etc.)
  • Exclude: AST fuzzers

  • Do not test
  • Woolen Wolfdog
  • Upload binaries for special builds
  • Disable merge-commit
  • Disable CI cache

@k-morozov k-morozov changed the title [WIP] Restore database replica Restore database replica Dec 26, 2024
@k-morozov k-morozov marked this pull request as ready for review December 26, 2024 07:29
@alexey-milovidov alexey-milovidov added the can be tested Allows running workflows for external contributors label Dec 26, 2024
@robot-clickhouse-ci-2 robot-clickhouse-ci-2 added the pr-feature Pull request with new product feature label Dec 26, 2024
@robot-ch-test-poll4
Contributor

robot-ch-test-poll4 commented Dec 26, 2024

This is an automated comment for commit de99f65 with a description of existing statuses. It is updated for the latest CI run.

❌ Click here to open a full report in a separate page

Check name | Description | Status
Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations (release, debug, with sanitizers, etc.) | ❌ failure

Successful checks

Check name | Description | Status
AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parentheses. If it fails, ask a maintainer for help | ✅ success
Builds | There's no description for the check yet, please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success
BuzzHouse (asan) | There's no description for the check yet, please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success
BuzzHouse (debug) | There's no description for the check yet, please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success
BuzzHouse (msan) | There's no description for the check yet, please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success
BuzzHouse (tsan) | There's no description for the check yet, please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success
BuzzHouse (ubsan) | There's no description for the check yet, please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success
ClickBench | Runs ClickBench with instant-attach table | ✅ success
Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | ✅ success
Docker keeper image | The check to build and optionally push the mentioned image to Docker Hub | ✅ success
Docker server image | The check to build and optionally push the mentioned image to Docker Hub | ✅ success
Docs check | Builds and tests the documentation | ✅ success
Fast test | Normally this is the first check run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | ✅ success
Flaky tests | Checks whether newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test failed at least once, or ran too long, this check will be red. We don't allow flaky tests; read the doc | ✅ success
Install packages | Checks that the built packages are installable in a clean environment | ✅ success
Integration tests | The integration tests report. In parentheses the package type is given, and in square brackets the optional part/total tests | ✅ success
Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. In square brackets are the optional part/total tests | ✅ success
Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations (release, debug, with sanitizers, etc.) | ✅ success
Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | ✅ success
Style check | Runs a set of checks to keep the code style clean. If some tests fail, see the related log in the report | ✅ success
Unit tests | Runs the unit tests for different release types | ✅ success
Upgrade check | Runs stress tests on the server version from the last release and then tries to upgrade it to the version from the PR. It checks whether the new server can start up successfully without errors, crashes, or sanitizer asserts | ✅ success

@k-morozov
Contributor Author

we check the metadata on node1. If everything is good, we want to restore the database from node1

Yes, we should determine the first replica for restoring table metadata. All other replicas would check their local table metadata against the ZooKeeper metadata that was restored from node1.

However, this works well only for the RESTORE DATABASE REPLICA query without ON CLUSTER. When executing the query with ON CLUSTER, we cannot determine which replica will restore table metadata in ZooKeeper, because the execution order is not deterministic (for example, node1 -> node2 or node2 -> node1).

So if we use is_initial_host, we could have this problem with ON CLUSTER:

  • The query is run on node1.
  • node2 finishes the query faster than the other nodes.
  • This results in metadata being restored from node2.
  • But node2 is not the is_initial_host.
  • Consequently, no table metadata gets restored.
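The ordering problem in the bullets above can be sketched as a tiny simulation (the host names and helper function are hypothetical; this only models the ordering argument, not the actual DDL worker):

```python
def run_on_cluster(order, initial_host):
    """Simulate ON CLUSTER execution in the given host order.

    Each host restores table metadata in Keeper only if it is the initial
    host (the gating being discussed). Returns the set of hosts whose local
    tables end up consistent with the restored Keeper metadata.
    """
    keeper_restored = False
    healthy = set()
    for host in order:
        if host == initial_host:
            keeper_restored = True  # restoreTablesMetadataInKeeper(ctx)
        if keeper_restored:
            healthy.add(host)  # local metadata matches Keeper
    return healthy

# Order node2 -> node1 with node1 as initial host: node2 runs before any
# metadata exists in Keeper, so its tables end up broken.
print(run_on_cluster(["node2", "node1"], initial_host="node1"))
```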

@k-morozov k-morozov requested a review from tuanpach July 2, 2025 14:27
@tuanpach
Member

tuanpach commented Jul 2, 2025

Sorry for the confusion. I mean the replica that receives the query is the is_initial_host.

So that if node1 receives the query, only node1 will initialize the DB and restore metadata in Keeper.

@k-morozov
Contributor Author

So that if node1 receives the query, only node1 will initialize the DB and restore metadata in Keeper.

But we don’t have any guarantees about the execution order of ON CLUSTER queries across the hosts. We might send the query to node1, but it could actually run on node2 first. In that case, the table metadata in ZooKeeper on node2 won’t get restored, and when DatabaseReplicatedDDLWorker initializes, those tables will be detached (recoverLostReplica).

@tuanpach
Copy link
Copy Markdown
Member

tuanpach commented Jul 8, 2025

@k-morozov I think we don't need to support restoring metadata from a specific node with ON CLUSTER. If users want to restore from a node, users can run it without ON CLUSTER.


assert ["1"] == node_1.query(
    f"SELECT count(*) FROM system.tables WHERE database='{exclusive_database_name}'"
).split()
Member

We can assert with TSV:

To assert that two TSV files must be equal, wrap them in the TSV class and use the regular assert statement. Example: assert TSV(result) == TSV(reference). In case the assertion fails, pytest will automagically detect the types of variables and only the small diff of two files is printed.

https://github.com/ClickHouse/ClickHouse/blob/master/tests/integration/README.md

Or use strip() to remove the trailing `'\n'`.
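The TSV helper referenced above lives in the integration-test utilities; here is a minimal re-implementation sketch of the comparison idea (illustrative only, not the actual helper):

```python
class TSV:
    """Minimal sketch of a TSV comparison wrapper (illustrative only)."""

    def __init__(self, contents):
        if isinstance(contents, str):
            # Normalize a raw query result: drop the trailing newline, split rows.
            self.lines = contents.rstrip("\n").split("\n") if contents else []
        else:
            # A list of values becomes one value per row.
            self.lines = [str(v) for v in contents]

    def __eq__(self, other):
        return self.lines == TSV._coerce(other).lines

    @staticmethod
    def _coerce(value):
        return value if isinstance(value, TSV) else TSV(value)

# A query result with a trailing newline compares equal to the expected rows,
# so no manual strip() or split() is needed in the assert:
assert TSV("1\n") == TSV([1])
assert TSV("a\nbb\n") == TSV(["a", "bb"])
```

On an assertion failure, pytest can then show a small row-level diff instead of a raw-string mismatch.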

Contributor Author

Do you mean using a TSV file for the expected value?

Member

Please refer to this test:

assert node1.query("SELECT * FROM mydb.tbl ORDER BY x") == TSV([1, 22])
assert node2.query("SELECT * FROM mydb.tbl2 ORDER BY y") == TSV(["a", "bb"])
assert node2.query("SELECT * FROM mydb.tbl ORDER BY x") == TSV([1, 22])
assert node1.query("SELECT * FROM mydb.tbl2 ORDER BY y") == TSV(["a", "bb"])

Contributor Author

Updated

)
def test_query_after_restore_db_replica(
start_cluster,
exclusive_database_name,
Member

We can use the same DB name. At the beginning of the test, we can drop the DB if it exists with SYNC to avoid flakiness.
It may help identify some potential issues.

Contributor Author

I suggest using a unique database name for each test: every test then has a unique database name in the logs, and we can grep all useful information about that database by its unique name.
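A sketch of how such a unique, greppable name could be generated (the helper name and naming scheme are assumptions); in a test suite this would typically be wrapped in a pytest fixture:

```python
import uuid


def make_database_name(test_name: str) -> str:
    """Unique, greppable database name: test name plus a short random suffix.

    The suffix makes names collision-free across runs, while the test name
    prefix keeps log lines easy to attribute to a specific test.
    """
    return f"db_{test_name}_{uuid.uuid4().hex[:8]}"


name = make_database_name("test_query_after_restore_db_replica")
print(name)
```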

look_behind_lines=1000,
)

assert [f"{count_test_table_1}"] == node_1.query(
Member

It may take some time for the tables to be restored. We can use query_with_retry to make the test pass consistently.
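query_with_retry is one of the integration-test helpers; here is a minimal sketch of the retry pattern it implements (the signature and parameters here are assumptions, not the actual helper's):

```python
import time


def query_with_retry(query_fn, expected, retries=20, sleep_seconds=0.5):
    """Re-run a query until it returns the expected result or retries run out.

    query_fn is a zero-argument callable that issues the query. The last
    result is returned either way, so the caller's assert still shows a
    useful diff when the condition is never reached.
    """
    result = None
    for _ in range(retries):
        result = query_fn()
        if result == expected:
            return result
        time.sleep(sleep_seconds)
    return result
```

This smooths over the window between the restore query returning and the tables actually reappearing on the replica.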

Contributor Author

Updated

@k-morozov
Contributor Author

k-morozov commented Jul 11, 2025

@k-morozov I think we don't need to support restoring metadata from a specific node with ON CLUSTER. If users want to restore from a node, users can run it without ON CLUSTER.

@tuanpach hi.

I am confused.

You suggest using is_initial_host as a flag to determine the replica where the query was started.

if (is_initial_host)
    restoreTablesMetadataInKeeper(ctx);

There are 2 nodes

  • node1
  • node2

How it works in my mind:

  1. We run SYSTEM RESTORE DATABASE REPLICA .. ON CLUSTER ... on node1.
  2. ClickHouse chooses which replica will execute this query first. There are two options: node1 -> node2 or node2 -> node1. I assume that both orders are possible. Please correct me if I'm wrong.
  3. For example, we get the order node2 -> node1.
  4. node2 is not the is_initial_host, so we don't restore the metadata, and all tables of the Replicated database on node2 become broken.
  5. Only then, when the query is executed on node1, is the metadata restored.

As a result we have broken node2.

@k-morozov k-morozov requested a review from tuanpach July 11, 2025 08:44
@tuanpach
Member


Sorry for the confusion. My idea was that we use the host that receives the query as the initial host. And we restore tables' metadata from that host.

However, it might not be necessary. If users want to restore tables' metadata from a specific host, they can run the restore query without ON CLUSTER.

In the current implementation with ON CLUSTER, metadata can be restored from any host, whichever processes the query first. I think that is good enough.

@k-morozov
Contributor Author

@tuanpach


So we'll use the current implementation, won't we? The other comments have been resolved.

@tuanpach
Member

Yes, we can use the current implementation.

@tuanpach tuanpach added this pull request to the merge queue Jul 22, 2025
Merged via the queue into ClickHouse:master with commit cbab61c Jul 22, 2025
120 of 123 checks passed
@robot-clickhouse-ci-1 robot-clickhouse-ci-1 added the pr-synced-to-cloud The PR is synced to the cloud repo label Jul 22, 2025