MDEV-30786 SIGFPE in cost_group_min_max on EXP
The problem was that for merge tables without any underlying tables the
block size was 0, which the code could not handle.

Fixed by ensuring that MERGE tables never have a block size of 0.
Also fixed a wrong assumption about the number of seeks needed to scan
merge tables.
montywi committed May 3, 2023
1 parent 3bdc554 commit 0099c2f
Showing 3 changed files with 32 additions and 17 deletions.
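
For context, a minimal sketch (assumed shape, not the actual cost_group_min_max() code) of why a reported block size of 0 is fatal: the estimated number of keys per index block becomes 0, and the subsequent integer division by it raises SIGFPE. The identifiers below (keys_per_block, key_length, records) are illustrative only.

#include <cstdio>

int main()
{
  unsigned int block_size= 0;    /* what an empty MERGE table reported before this fix */
  unsigned int key_length= 5;    /* INT key (4 bytes) + NULL flag, as in the test below */
  unsigned int records= 1000;    /* arbitrary row estimate */

  /* 0 keys per block whenever block_size is 0 */
  unsigned int keys_per_block= block_size / key_length;

  if (keys_per_block == 0)
    printf("records / keys_per_block would divide by zero and raise SIGFPE\n");
  else
    printf("blocks to read: %u\n", records / keys_per_block);
  return 0;
}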
14 changes: 14 additions & 0 deletions mysql-test/main/merge.result
@@ -3965,5 +3965,19 @@ SELECT DISTINCT f FROM tm WHERE f IN (47, 126, 97, 48, 73, 0);
 f
 DROP TABLE tm, t1, t2;
 #
+# MDEV-30786 SIGFPE in cost_group_min_max on EXP
+#
+SET use_stat_tables='preferably';
+CREATE TABLE t1 (a INT,b INT,KEY i1 (a),KEY i2 (b)) ENGINE=MRG_MyISAM;
+ANALYZE LOCAL TABLE t1;
+Table Op Msg_type Msg_text
+test.t1 analyze status Engine-independent statistics collected
+test.t1 analyze note The storage engine for the table doesn't support analyze
+EXPLAIN SELECT DISTINCT a FROM t1;
+id select_type table type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 range NULL i1 5 NULL 1 Using index for group-by
+drop table t1;
+set use_stat_tables=default;
+#
 # End of 11.0 tests
 #
10 changes: 10 additions & 0 deletions mysql-test/main/merge.test
@@ -2922,6 +2922,16 @@ CREATE TABLE tm (f INT, KEY(f)) ENGINE=MERGE UNION = (t1, t2);
 SELECT DISTINCT f FROM tm WHERE f IN (47, 126, 97, 48, 73, 0);
 DROP TABLE tm, t1, t2;
 
+--echo #
+--echo # MDEV-30786 SIGFPE in cost_group_min_max on EXP
+--echo #
+SET use_stat_tables='preferably';
+CREATE TABLE t1 (a INT,b INT,KEY i1 (a),KEY i2 (b)) ENGINE=MRG_MyISAM;
+ANALYZE LOCAL TABLE t1;
+EXPLAIN SELECT DISTINCT a FROM t1;
+drop table t1;
+set use_stat_tables=default;
+
 --echo #
 --echo # End of 11.0 tests
 --echo #
25 changes: 8 additions & 17 deletions storage/myisammrg/ha_myisammrg.cc
@@ -343,7 +343,7 @@ IO_AND_CPU_COST ha_myisammrg::rnd_pos_time(ha_rows rows)
 {
   IO_AND_CPU_COST cost= handler::rnd_pos_time(rows);
   /*
-    Row data is notcached. costs.row_lookup_cost includes the cost of
+    Row data is not cached. costs.row_lookup_cost includes the cost of
     the reading the row from system (probably cached by the OS).
   */
   cost.io= 0;
@@ -1300,26 +1300,17 @@ int ha_myisammrg::info(uint flag)
   table->s->keys_in_use.set_prefix(table->s->keys);
   stats.mean_rec_length= mrg_info.reclength;
 
-  /*
+  /*
     The handler::block_size is used all over the code in index scan cost
     calculations. It is used to get number of disk seeks required to
     retrieve a number of index tuples.
-    If the merge table has N underlying tables, then (assuming underlying
-    tables have equal size, the only "simple" approach we can use)
-    retrieving X index records from a merge table will require N times more
-    disk seeks compared to doing the same on a MyISAM table with equal
-    number of records.
-    In the edge case (file_tables > myisam_block_size) we'll get
-    block_size==0, and index calculation code will act as if we need one
-    disk seek to retrieve one index tuple.
-    TODO: In 5.2 index scan cost calculation will be factored out into a
-    virtual function in class handler and we'll be able to remove this hack.
+    If the merge table has N underlying tables, there will be
+    N more disk seeks compared to scanning a normal MyISAM table.
+    The number of bytes read is roughly the same for a normal MyISAM
+    table and a MyISAM merge table.
   */
-  stats.block_size= 0;
-  if (file->tables)
-    stats.block_size= myisam_block_size / file->tables;
-
+  stats.block_size= myisam_block_size;
+
   stats.update_time= 0;
 #if SIZEOF_OFF_T > 4
   ref_length=6;                                 // Should be big enough
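To make the behavioural change above concrete, here is a small self-contained sketch (illustrative only, not a drop-in for ha_myisammrg::info()) contrasting the old and the new block size computation. With 0 underlying tables, or with more underlying tables than bytes in a block, the old formula reported 0; the new code always reports the plain MyISAM block size. The 1024-byte block size used below is an assumption for the example.

#include <cstdio>

/* Old behaviour: divide the MyISAM block size by the number of underlying
   tables; 0 tables (or more tables than bytes per block) yields 0. */
static unsigned int old_block_size(unsigned int myisam_block_size,
                                   unsigned int tables)
{
  unsigned int block_size= 0;
  if (tables)
    block_size= myisam_block_size / tables;
  return block_size;
}

/* New behaviour: always report the plain MyISAM block size, never 0. */
static unsigned int new_block_size(unsigned int myisam_block_size,
                                   unsigned int tables)
{
  (void) tables;                                /* no longer used */
  return myisam_block_size;
}

int main()
{
  const unsigned int myisam_block_size= 1024;   /* assumed block size */
  const unsigned int table_counts[]= { 0, 4, 2048 };
  for (unsigned int n : table_counts)
    printf("tables=%u  old=%u  new=%u\n", n,
           old_block_size(myisam_block_size, n),
           new_block_size(myisam_block_size, n));
  return 0;
}

The 0 reported in the first and last cases is exactly the value that reached the group-min-max cost code and triggered the SIGFPE described in the commit message.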
