fix flaky rocksdb.rocksdb_table_stats_sampling_pct_change
Upstream commit ID : fb-mysql-5.6.35/7d1be2321e2
PS-7921 : Merge percona-202105

Summary:
The testcase sometimes fails because table t2's data length value ends up much smaller than table t1's: t2's data length is calculated from an L2 SST file, while t1's data length is calculated from an L1 SST file.
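
In sketch form (all statements taken from the changed test below), the fix forces a full compaction of the default column family right after the memtable flush, so that the statistics for both tables should be computed from data sitting in a single compacted level before information_schema is consulted:

set global rocksdb_force_flush_memtable_now = true;
set global rocksdb_compact_cf='default';
# Only read the estimated row count once the data is in one compacted level.
select table_rows from information_schema.tables
where table_schema = database() and table_name = 't1';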

BTW, I think the testcase doesn't make sense: the table's data_length (and row count) is not sampled; only the distinct-key count (cardinality) is sampled. We should therefore test the cardinality value instead of table.data_length.
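
As a sketch (not part of this commit, and assuming t2's primary key index), such a check could read the sampled cardinality from information_schema.statistics instead:

# Hypothetical alternative: assert on cardinality, which is the value that
# is actually sampled, rather than on data_length.
select cardinality from information_schema.statistics
where table_schema = database() and table_name = 't2' and index_name = 'PRIMARY';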

Reviewed By: lth

Differential Revision: D30506323

fbshipit-source-id: 3016234435f
Luis Donoso committed Nov 1, 2021
1 parent bd40a94 commit 4e4773f
Showing 2 changed files with 4 additions and 0 deletions.
@@ -2,6 +2,7 @@ SET @ORIG_PCT = @@ROCKSDB_TABLE_STATS_SAMPLING_PCT;
 SET @@global.ROCKSDB_TABLE_STATS_SAMPLING_PCT = 100;
 create table t1 (pk int primary key) engine=rocksdb;
 set global rocksdb_force_flush_memtable_now = true;
+set global rocksdb_compact_cf='default';
 select table_rows from information_schema.tables
 where table_schema = database() and table_name = 't1';
 TABLE_ROWS
@@ -10,6 +11,7 @@ drop table t1;
 SET @@global.ROCKSDB_TABLE_STATS_SAMPLING_PCT = 10;
 create table t2 (pk int primary key) engine=rocksdb;
 set global rocksdb_force_flush_memtable_now = true;
+set global rocksdb_compact_cf='default';
 select table_rows from information_schema.tables
 where table_schema = database() and table_name = 't2';
 TABLE_ROWS
@@ -21,6 +21,7 @@ while ($i < $n)
 --enable_query_log

 set global rocksdb_force_flush_memtable_now = true;
+set global rocksdb_compact_cf='default';

 # This should return 10K rows.
 select table_rows from information_schema.tables
@@ -50,6 +51,7 @@ while ($i < $n)
 --enable_query_log

 set global rocksdb_force_flush_memtable_now = true;
+set global rocksdb_compact_cf='default';

 # This should return 10K rows as well.
 select table_rows from information_schema.tables