
Implement SYSTEM DROP REPLICA from auxiliary ZooKeeper clusters #48932

Merged

Conversation

@wzb5212 (Contributor) commented Apr 19, 2023

Changelog category (leave one):

  • Improvement

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Implement SYSTEM DROP REPLICA from auxiliary ZooKeeper clusters; may close #48931.
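
For context, a minimal usage sketch of the feature (assuming a table whose replication metadata lives in an auxiliary ZooKeeper cluster named zookeeper2; table, path, and replica names are illustrative):

-- Table coordinated through the auxiliary ZooKeeper cluster "zookeeper2"
CREATE TABLE t (a Int32)
ENGINE = ReplicatedMergeTree('zookeeper2:/clickhouse/tables/test/t', 'replica1')
ORDER BY a;

-- With this change, the statement also searches auxiliary ZooKeeper
-- clusters and removes the dead replica's metadata found there:
SYSTEM DROP REPLICA 'replica2';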

@CLAassistant commented Apr 19, 2023

CLA assistant check
All committers have signed the CLA.

@robot-ch-test-poll robot-ch-test-poll added the pr-bugfix label (Pull request with bugfix, not backported by default) Apr 19, 2023
@nickitat nickitat added the can be tested label (Allows running workflows for external contributors) Apr 19, 2023
@alesapin (Member):

Can you please add an integration test?

@wzb5212 (Contributor, Author) commented Apr 20, 2023

> Can you please add an integration test?

OK.

@wzb5212 (Contributor, Author) commented Apr 20, 2023

> Can you please add an integration test?

I want to add an integration test, but I find that tests/integration/helpers/cluster.py doesn't support creating two sets of ZooKeeper clusters. Am I right?

for instance in ["zoo1", "zoo2", "zoo3"]:

I want to create two separate ZooKeeper clusters, as shown below; is it impossible to do that?

# zookeeper config
<clickhouse>
    <zookeeper>
        <node index="1">
            <host>zoo1</host>
            <port>2181</port>
        </node>
        <node index="2">
            <host>zoo2</host>
            <port>2181</port>
        </node>
        <node index="3">
            <host>zoo3</host>
            <port>2181</port>
        </node>
    </zookeeper>
    <auxiliary_zookeepers>
        <zookeeper2>
            <node index="1">
                <host>zoo4</host>
                <port>2181</port>
            </node>
            <node index="2">
                <host>zoo5</host>
                <port>2181</port>
            </node>
            <node index="3">
                <host>zoo6</host>
                <port>2181</port>
            </node>
        </zookeeper2>
    </auxiliary_zookeepers>
</clickhouse>

# test.py
import time

import pytest
from helpers.cluster import ClickHouseCluster

cluster = ClickHouseCluster(__file__)
node1 = cluster.add_instance(
    "node1",
    main_configs=["configs/zookeeper_config.xml", "configs/remote_servers.xml"],
    with_zookeeper=True,
    use_keeper=False,
)
node2 = cluster.add_instance(
    "node2",
    main_configs=["configs/zookeeper_config.xml", "configs/remote_servers.xml"],
    with_zookeeper=True,
    use_keeper=False,
)


@pytest.fixture(scope="module")
def started_cluster():
    try:
        cluster.start()
        yield cluster
    finally:
        # shut the cluster down even if startup or a test failed
        cluster.shutdown()


def drop_table(nodes, table_name):
    for node in nodes:
        node.query("DROP TABLE IF EXISTS {} NO DELAY".format(table_name))


def test_drop_replica_in_auxiliary_zookeeper(started_cluster):
    drop_table([node1, node2], "test_auxiliary_zookeeper")
    for node in [node1, node2]:
        node.query(
            """
                CREATE TABLE test_auxiliary_zookeeper(a Int32)
                ENGINE = ReplicatedMergeTree('zookeeper2:/clickhouse/tables/test/test_auxiliary_zookeeper', '{replica}')
                ORDER BY a;
            """.format(
                replica=node.name
            )
        )

    # stop the node2 server so its replica becomes inactive
    node2.stop_clickhouse()
    # give node2's ZooKeeper session time to expire
    time.sleep(5)

    # drop replica node2
    node1.query("SYSTEM DROP REPLICA 'node2'")

    zk = cluster.get_kazoo_client("zoo4")
    # the table znode should survive in the auxiliary cluster...
    assert zk.exists("/clickhouse/tables/test/test_auxiliary_zookeeper")
    # ...but the dropped replica's znode should be gone
    assert (
        zk.exists("/clickhouse/tables/test/test_auxiliary_zookeeper/replicas/node2")
        is None
    )


@alexey-milovidov alexey-milovidov marked this pull request as draft April 21, 2023 10:20
@robot-ch-test-poll2 robot-ch-test-poll2 added pr-improvement Pull request with some product improvements and removed pr-bugfix Pull request with bugfix, not backported by default labels Apr 21, 2023
@wzb5212 wzb5212 marked this pull request as ready for review April 21, 2023 11:27
@wzb5212 wzb5212 marked this pull request as draft April 21, 2023 11:35
@wzb5212 (Contributor, Author) commented Apr 21, 2023

@alexey-milovidov, how can I solve this problem?

@wzb5212 wzb5212 changed the title from "multiple zookeeper drop replica bug fix" to "Implement SYSTEM DROP REPLICA from auxiliary ZooKeeper clusters" Apr 21, 2023
@alexey-milovidov (Member):

I see that at least some of the integration tests have auxiliary ZooKeeper clusters; let's do something similar to make a test for this PR.

@wzb5212 (Contributor, Author) commented Apr 21, 2023

> I see that at least some of the integration tests have auxiliary ZooKeeper clusters; let's do something similar to make a test for this PR.

I looked at some integration tests that use auxiliary ZooKeeper clusters, such as test_reload_auxiliary_zookeepers, test_fetch_partition_from_auxiliary_zookeeper, and test_replicated_merge_tree_with_auxiliary_zookeepers. Although they use an auxiliary ZooKeeper section, it actually points at the same ZooKeeper cluster, and tests/integration/helpers/cluster.py only creates one set of ZooKeeper instances:

for instance in ["zoo1", "zoo2", "zoo3"]:

This PR requires two separate ZooKeeper clusters for testing.

@wzb5212 (Contributor, Author) commented Apr 21, 2023

(Same question, config, and test script as the Apr 20 comment above.)

@alexey-milovidov
@tavplubix tavplubix self-assigned this Apr 24, 2023
@tavplubix (Member):

> This PR requires two separate ZooKeeper clusters for testing.

It does not; it's okay to do it the same way as in the existing tests. See also the root config parameter for the ZooKeeper client (it allows specifying an arbitrary root path).
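
A minimal sketch of that approach (an assumption based on the existing tests named above, not part of this PR): the "auxiliary" cluster reuses the same zoo1-zoo3 ensemble that cluster.py already starts, optionally chrooted via root so its data stays under a separate path:

<clickhouse>
    <auxiliary_zookeepers>
        <zookeeper2>
            <node index="1">
                <host>zoo1</host>
                <port>2181</port>
            </node>
            <node index="2">
                <host>zoo2</host>
                <port>2181</port>
            </node>
            <node index="3">
                <host>zoo3</host>
                <port>2181</port>
            </node>
            <!-- illustrative: confine this logical cluster to its own root -->
            <root>/auxiliary_zookeeper2</root>
        </zookeeper2>
    </auxiliary_zookeepers>
</clickhouse>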

@wzb5212 (Contributor, Author) commented Apr 26, 2023

> > This PR requires two separate ZooKeeper clusters for testing.
>
> It does not; it's okay to do it the same way as in the existing tests. See also the root config parameter for the ZooKeeper client (it allows specifying an arbitrary root path).

OK, I've added an integration test; please take a look.

@wzb5212 wzb5212 marked this pull request as ready for review April 27, 2023 09:23
@robot-ch-test-poll (Contributor) commented May 4, 2023

This is an automated comment for commit 7654771 with a description of existing statuses. It is updated for the latest CI run.
The full report is available here.
The overall status of the commit is 🔴 failure.

| Check name | Description | Status |
| --- | --- | --- |
| AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parentheses. If it fails, ask a maintainer for help. | 🟢 success |
| CI running | A meta-check that indicates the running CI. Normally it's in success or pending state. The failed status indicates some problems with the PR. | 🟢 success |
| ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log by grepping for cmake. Use these options and follow the general build process. | 🟢 success |
| Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help. | 🟢 success |
| Docker image for servers | The check to build and optionally push the mentioned image to Docker Hub. | 🟢 success |
| Fast test | Normally the first check run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here. | 🟢 success |
| Flaky tests | Checks if newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test fails at least once, or runs too long, this check will be red. We don't allow flaky tests; read the doc. | 🟢 success |
| Install packages | Checks that the built packages are installable in a clean environment. | 🟢 success |
| Integration tests | The integration tests report. The package type is given in parentheses, and the optional part/total tests in square brackets. | 🟢 success |
| Mergeable Check | Checks if all other necessary checks are successful. | 🟢 success |
| Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. The optional part/total tests are in square brackets. | 🟢 success |
| Push to Dockerhub | The check for building and pushing the CI-related Docker images to Docker Hub. | 🟢 success |
| SQLancer | Fuzzing tests that detect logical bugs with the SQLancer tool. | 🟢 success |
| Sqllogic | Runs clickhouse on the sqllogic test set against sqlite and checks that all statements pass. | 🟢 success |
| Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations: release, debug, with sanitizers, etc. | 🟢 success |
| Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations: release, debug, with sanitizers, etc. | 🔴 failure |
| Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors. | 🟢 success |
| Style Check | Runs a set of checks to keep the code style clean. If some tests failed, see the related log from the report. | 🟢 success |
| Unit tests | Runs the unit tests for different release types. | 🟢 success |
| Upgrade check | Runs stress tests on the server version from the last release, then tries to upgrade it to the version from the PR. It checks whether the new server can start up successfully without errors, crashes, or sanitizer asserts. | 🟢 success |

@tavplubix tavplubix merged commit f704c0d into ClickHouse:master May 5, 2023
256 of 257 checks passed
Development

Successfully merging this pull request may close these issues:

Feature request: implement SYSTEM DROP REPLICA from auxiliary ZooKeeper clusters

8 participants