Fix integer overflow in batch sorted merge costs
Also remove a leftover debug print.
akuzm committed Dec 14, 2023
1 parent 06867af commit e0a3e30
Showing 3 changed files with 27 additions and 6 deletions.
7 changes: 1 addition & 6 deletions tsl/src/nodes/decompress_chunk/decompress_chunk.c
@@ -411,15 +411,10 @@ cost_batch_sorted_merge(PlannerInfo *root, CompressionInfo *compression_info,
* we often read a small subset of columns in analytical queries. The
* compressed chunk is never projected so we can't use it for that.
*/
-	const double work_mem_bytes = work_mem * 1024;
+	const double work_mem_bytes = work_mem * (double) 1024.0;
const double needed_memory_bytes = open_batches_clamped * DECOMPRESS_CHUNK_BATCH_SIZE *
dcpath->custom_path.path.pathtarget->width;

-	fprintf(stderr,
-			"open batches %lf, needed_bytes %lf\n",
-			open_batches_clamped,
-			needed_memory_bytes);
-
/*
* Next, calculate the cost penalty. It is a smooth step, starting at 75% of
* work_mem, and ending at 125%. We want to effectively disable this plan
17 changes: 17 additions & 0 deletions tsl/test/expected/compression_sorted_merge_distinct.out
@@ -116,3 +116,20 @@ explain (costs off) select * from t where high_card < 10 order by ts;
(5 rows)

reset work_mem;
+-- Test for large values of memory limit bytes that don't fit into an int.
+-- Note that on i386 the max value is 2GB which is not enough to trigger the
+-- overflow we had on 64-bit systems, so we have to use different values based
+-- on the architecture.
+select least(4194304, max_val::bigint) large_work_mem from pg_settings where name = 'work_mem' \gset
+set work_mem to :large_work_mem;
+explain (costs off) select * from t where high_card < 10 order by ts;
+                                                        QUERY PLAN
+--------------------------------------------------------------------------------------------------------------------------
+ Custom Scan (DecompressChunk) on _hyper_1_1_chunk
+   ->  Sort
+         Sort Key: compress_hyper_2_2_chunk._ts_meta_min_1
+         ->  Index Scan using compress_hyper_2_2_chunk__compressed_hypertable_2_low_card_high on compress_hyper_2_2_chunk
+               Index Cond: (high_card < 10)
+(5 rows)
+
+reset work_mem;
9 changes: 9 additions & 0 deletions tsl/test/sql/compression_sorted_merge_distinct.sql
@@ -43,3 +43,12 @@ explain (costs off) select * from t where high_card < 500 order by ts;
set work_mem to 64;
explain (costs off) select * from t where high_card < 10 order by ts;
reset work_mem;
+
+-- Test for large values of memory limit bytes that don't fit into an int.
+-- Note that on i386 the max value is 2GB which is not enough to trigger the
+-- overflow we had on 64-bit systems, so we have to use different values based
+-- on the architecture.
+select least(4194304, max_val::bigint) large_work_mem from pg_settings where name = 'work_mem' \gset
+set work_mem to :large_work_mem;
+explain (costs off) select * from t where high_card < 10 order by ts;
+reset work_mem;
