The source cluster table details are as follows: DDL queries
Create database: CREATE DATABASE database-name ON CLUSTER 'src-cluster'
Create local replicated table on src cluster: CREATE TABLE database-name.local-table-name ON CLUSTER 'src-cluster' (id UInt32, flightName String, source String, destination String) ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}//', '{replica}') ORDER BY id;
Create Distributed table: CREATE TABLE database-name.dist-table-name ON CLUSTER 'src-cluster' AS database-name.local-table-name ENGINE = Distributed('src-cluster', 'database-name', 'local-table-name', id);
Data generation Insert query:
INSERT INTO database-name.dist-table-name SELECT number, randomPrintableASCII(randUniform(5, 25)), randomPrintableASCII(randUniform(5, 25)), randomPrintableASCII(randUniform(5, 25)) FROM numbers(2000000000)
row count on src cluster: 2005242245 (query: SELECT count(*) FROM database-name.dist-table-name)
(Note: We ran the insert command once with 2B rows and a second run with ~5M rows)
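One way to establish a baseline before copying is to compare the per-replica local counts against the distributed count. This is a diagnostic sketch only (it needs a live cluster, and `database-name`/`local-table-name` are the placeholders used above): on a healthy 2-shard/2-replica cluster, both replicas of a shard should report the same count, and the shard counts should sum to the distributed total.

```sql
-- Count rows in each node's local table across all replicas of the
-- source cluster. Replicas of the same shard should agree, and the
-- per-shard counts should add up to the distributed table's count().
SELECT hostName() AS host, count() AS rows
FROM clusterAllReplicas('src-cluster', 'database-name', 'local-table-name')
GROUP BY host
ORDER BY host;
```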
The destination cluster table details are as follows: DDL queries
Create database: CREATE DATABASE database-name ON CLUSTER 'dest-cluster'
Create local replicated table on dest cluster: These were created by copier
Create Distributed table: CREATE TABLE database-name.dist-table-name ON CLUSTER 'dest-cluster' AS database-name.local-table-name ENGINE = Distributed('dest-cluster', 'database-name', 'local-table-name', id);
row count on dest cluster: 2319920618 (query: SELECT count(*) FROM database-name.dist-table-name)
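If the extra rows on the destination are duplicates introduced during the copy, they should show up as over-represented `id` values. A diagnostic sketch (again assuming the placeholder names above, and a live destination cluster): since the source was loaded with one 2B-row run plus a ~5M-row run of `numbers(...)`, ids below ~5M legitimately occur twice, so anything with a count above 2 points at rows duplicated by the copy.

```sql
-- Find ids that occur more often than the two source insert runs allow.
-- Counts of 2 are expected for the low ids covered by the second ~5M
-- insert; counts above 2 indicate duplication on the destination.
SELECT id, count() AS c
FROM database-name.dist-table-name
GROUP BY id
HAVING c > 2
LIMIT 10;
```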
After running copier to move and re-balance the data onto the destination cluster, we see that the row count in the destination cluster's distributed table is ~300M higher than in the source cluster's table.
We are not sure why there is a row count mismatch between the source and destination clusters. Can somebody please explain the cause of this mismatch?
We are trying to re-shard and re-balance an existing 2-shard/2-replica ClickHouse cluster by moving data with clickhouse-copier to a 3-shard/2-replica cluster.
We used the following links as references for clickhouse-copier:
https://github.com/ClickHouse/ClickHouse/blob/master/docs/en/operations/utilities/clickhouse-copier.md
https://altinity.com/blog/2018-8-22-clickhouse-copier-in-practice
In our tests, we are observing that after a successful copier run, the row count on the destination cluster does not match the source cluster.
How to reproduce
CH copier.zip
Expected behavior
The row count from querying the distributed table on the destination cluster should match the row count from the distributed table on the source cluster.