[Bug]: Inconsistent and Incorrect Aggregations #4922

Closed

aboucher-r7 opened this issue Nov 4, 2022 · 13 comments · Fixed by #5172

@aboucher-r7

aboucher-r7 commented Nov 4, 2022

What type of bug is this?

Incorrect result

What subsystems and features are affected?

Multi-node

What happened?

Queries utilising partial aggregation return incorrect and inconsistent results.

Example:
Query 1:
SELECT sum(value) FROM example_table_name;

Query 2:
SELECT SUM(value) FROM (SELECT value FROM example_table_name ORDER BY 1) a;

Query 1 produces incorrect results, whereas Query 2 bypasses partial aggregation on the data nodes and returns the correct result.

TimescaleDB version affected

2.8.1

PostgreSQL version used

14.5

What operating system did you use?

Ubuntu 22.04.1 x64

What installation method did you use?

Docker

What platform did you run on?

Amazon Web Services (AWS)

Relevant log output and stack trace

No response

How can we reproduce the bug?

Running timescaledb-ha pg14.5-ts2.8.1-p0 on 1 access node with 4 data nodes using EKS with all nodes in the same availability zone.

create table example_table_name
(
    time  timestamp not null,
    key_1 text,
    key_2 text,
    key_3 text,
    value double precision
);

SELECT create_distributed_hypertable('example_table_name', 'time');

SELECT set_chunk_time_interval('example_table_name', INTERVAL '1h');

SELECT add_retention_policy('example_table_name', INTERVAL '30d');

Data of the form (1664496000, 'a', '1efa7394-5540-410d-a468-b5f204d66ff9', 'b', 1)

I had to insert 2M rows across 400 chunks before I saw this bug appear. Note that I have seen this issue occur on multiple tables now, all of which have the same partitioning and retention policies but different schemas. AVG is also affected; the issue is not limited to SUM.

Editing to add: this also happens when the value is an int, and the string columns are unnecessary.

@konskov
Contributor

konskov commented Nov 8, 2022

Hi @aboucher-r7, thank you for reporting the issue and taking the time to provide reproduction steps.
We will try to reproduce and get back to you.

@konskov
Contributor

konskov commented Nov 8, 2022

Updating the reproduction steps with what I did:

-- create a MN setup with 4 DNs
SELECT add_data_node('dn1', host => 'localhost', database => 'db1');
SELECT add_data_node('dn2', host => 'localhost', database => 'db2');
SELECT add_data_node('dn3', host => 'localhost', database => 'db3');
SELECT add_data_node('dn4', host => 'localhost', database => 'db4');

create table mytab
(
    time  timestamp not null,
    key_1 text,
    value double precision
);

SELECT create_distributed_hypertable('mytab', 'time');

SELECT set_chunk_time_interval('mytab', INTERVAL '1h');

SELECT add_retention_policy('mytab', INTERVAL '30d');

INSERT INTO mytab (time, key_1, value)
SELECT time, md5(random()::text) as key_1, (random()*30) as value
FROM generate_series(now() - interval '401 hour', now(), '0.5 sec') AS time;
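
The two queries from the report, adapted to this table, can then be compared as a verification step (they should agree; in the reported bug the plain aggregate is the one that goes wrong):

SELECT sum(value) FROM mytab;

SELECT SUM(value) FROM (SELECT value FROM mytab ORDER BY 1) a;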

@konskov
Contributor

konskov commented Nov 8, 2022

@aboucher-r7 we have tried to reproduce, but have not been able to.
In the case of double precision values, there is a slight difference (slight meaning 43327780.36092031 vs 43327780.36091902 for the SUM aggregate), but that is not unexpected since double precision is inexact. That even happens for regular Postgres tables, not only hypertables:

So for double precision type:

-- with a regular Postgres table
create table regular_pg
(
    time  timestamptz not null,
    key_1 text,
    value double precision
);

INSERT INTO regular_pg (time, key_1, value)
SELECT time, md5(random()::text) as key_1, (random()*30) as value
FROM generate_series(now() - interval '401 hour', now(), '0.5 sec') AS time;

SELECT sum(value) FROM regular_pg;
        sum        
-------------------
 43283113.97278631
 
 SELECT SUM(value) FROM ( SELECT value FROM regular_pg ORDER BY 1) a;
        sum        
-------------------
 43283113.97278841

For integer values, though, the results I got with the SUM aggregate were the same for both queries, using both regular Postgres tables and hypertables.

Might it be possible for you to provide some additional information to help determine whether this is expected behavior or an actual bug?

  • How different are the results you get? Do you also see this with regular PG tables?
  • Would it be possible to paste your queries and results for the integer case in particular?

Thank you!

@aboucher-r7
Author

aboucher-r7 commented Nov 8, 2022

Using the two example queries above with integers, the correct value is 1000243, obtained by running Query 2; example results from Query 1 were 402612, 442536, 387521, 454780, or 443951.

How would I go about testing regular PG tables in a distributed fashion?

Editing to add in case it's relevant: I've been using a Python script to generate random data, and all of the data points within each chunk will have the same timestamp. Also, I haven't yet tried this on a clean database. I can do so tomorrow, which might prove that it's our database that is in some kind of error state.

@konskov
Contributor

konskov commented Nov 9, 2022

@aboucher-r7 please do get back to us after retrying on a clean database, thank you! If you are able to reproduce it there, could you also share the python script you are using? It would be very helpful.

How would I go about testing regular PG tables in a distributed fashion?

Please disregard that part of my previous comment. I did not mean for you to try creating a distributed hypertable; I was rather asking whether you would also notice the difference between the two queries after inserting the same data into a regular PG table and querying it.

It would also be useful if you could share the EXPLAIN VERBOSE output for both of the queries. Thank you!

@aboucher-r7
Author

@konskov Was able to reproduce on a clean install this morning. I also tested using different timestamps for each data point and still had the issue. Interestingly, I cannot reproduce the issue with the steps you provided in your earlier comment. The Python script is as follows:

import datetime
import io
import random
import time

import psycopg2

conn_details = {
    # Insert connection details here
}

days_ago = 25
num_days = 20
num_hours = 20
num_per_hour = 5000
max_int = 1
num_queries = 10

minute = 60
hour = minute * 60
day = hour * 24
start_time = int(time.time()) - day*days_ago

data = ''
for i in range(num_days):
    for j in range(num_hours):
        date = start_time + day*i + hour*j
        
        formatted_date = datetime.datetime.fromtimestamp(date).strftime('%Y-%m-%dT%H:%M:%S.%fZ')

        for _ in range(num_per_hour):
            data += f'{formatted_date}\t{random.randint(0, max_int)}\n'

print(f'{datetime.datetime.now()} - Inserting')

conn = None
cur = None
while True:
    try:
        conn = psycopg2.connect(**conn_details)
        cur = conn.cursor()

        cur.execute('''DROP TABLE IF EXISTS example_table_name;''')

        cur.execute('''create table example_table_name
        (
            time  timestamp not null,
            value int
        );''')

        cur.execute('''SELECT create_distributed_hypertable('example_table_name'::regclass, 'time');''')

        cur.execute('''SELECT set_chunk_time_interval('example_table_name'::regclass, INTERVAL '1h');''')

        cur.execute('''SELECT add_retention_policy('example_table_name', INTERVAL '30d');''')

        cur.copy_from(io.StringIO(data), 'example_table_name', columns=['time', 'value'])

        conn.commit()

        cur.close()
        conn.close()

        break
    except Exception as e:
        print(repr(e))

        if cur is not None:
            cur.close()

        if conn is not None:
            conn.close()

conn = psycopg2.connect(**conn_details)
cur = conn.cursor()

print(f'{datetime.datetime.now()} - Insert complete')

for _ in range(num_queries):
    cur.execute('''SELECT SUM(value) FROM example_table_name;''')
    print(cur.fetchone())

conn.commit()

cur.close()
conn.close()

Query plan for Query 1:

Finalize Aggregate  (cost=1534415498.55..1534415498.56 rows=1 width=8)
Output: sum(value)
->  Custom Scan (AsyncAppend)  (cost=60398077.50..1534415498.54 rows=4 width=8)
Output: (PARTIAL sum(value))
->  Append  (cost=60398077.50..1534415498.54 rows=4 width=8)
->  Custom Scan (DataNodeScan)  (cost=60398077.50..384565287.60 rows=1 width=8)
Output: (PARTIAL sum(example_table_name.value))
Relations: Aggregate on (public.example_table_name)
Data node: timescaledb-distributed-bug-test-data-0
"Chunks: _dist_hyper_6_2427_chunk, _dist_hyper_6_2431_chunk, _dist_hyper_6_2435_chunk, _dist_hyper_6_2439_chunk, _dist_hyper_6_2443_chunk, _dist_hyper_6_2447_chunk, _dist_hyper_6_2451_chunk, _dist_hyper_6_2455_chunk, _dist_hyper_6_2459_chunk, _dist_hyper_6_2463_chunk, _dist_hyper_6_2467_chunk, _dist_hyper_6_2471_chunk, _dist_hyper_6_2475_chunk, _dist_hyper_6_2479_chunk, _dist_hyper_6_2483_chunk, _dist_hyper_6_2487_chunk, _dist_hyper_6_2491_chunk, _dist_hyper_6_2495_chunk, _dist_hyper_6_2499_chunk, _dist_hyper_6_2503_chunk, _dist_hyper_6_2507_chunk, _dist_hyper_6_2511_chunk, _dist_hyper_6_2515_chunk, _dist_hyper_6_2519_chunk, _dist_hyper_6_2523_chunk, _dist_hyper_6_2527_chunk, _dist_hyper_6_2531_chunk, _dist_hyper_6_2535_chunk, _dist_hyper_6_2539_chunk, _dist_hyper_6_2543_chunk, _dist_hyper_6_2547_chunk, _dist_hyper_6_2551_chunk, _dist_hyper_6_2555_chunk, _dist_hyper_6_2559_chunk, _dist_hyper_6_2563_chunk, _dist_hyper_6_2567_chunk, _dist_hyper_6_2571_chunk, _dist_hyper_6_2575_chunk, _dist_hyper_6_2579_chunk, _dist_hyper_6_2583_chunk, _dist_hyper_6_2587_chunk, _dist_hyper_6_2591_chunk, _dist_hyper_6_2595_chunk, _dist_hyper_6_2599_chunk, _dist_hyper_6_2603_chunk, _dist_hyper_6_2607_chunk, _dist_hyper_6_2611_chunk, _dist_hyper_6_2615_chunk, _dist_hyper_6_2619_chunk, _dist_hyper_6_2623_chunk, _dist_hyper_6_2627_chunk, _dist_hyper_6_2631_chunk, _dist_hyper_6_2635_chunk, _dist_hyper_6_2639_chunk, _dist_hyper_6_2643_chunk, _dist_hyper_6_2647_chunk, _dist_hyper_6_2651_chunk, _dist_hyper_6_2655_chunk, _dist_hyper_6_2659_chunk, _dist_hyper_6_2663_chunk, _dist_hyper_6_2667_chunk, _dist_hyper_6_2671_chunk, _dist_hyper_6_2675_chunk, _dist_hyper_6_2679_chunk, _dist_hyper_6_2683_chunk, _dist_hyper_6_2687_chunk, _dist_hyper_6_2691_chunk, _dist_hyper_6_2695_chunk, _dist_hyper_6_2699_chunk, _dist_hyper_6_2703_chunk, _dist_hyper_6_2707_chunk, _dist_hyper_6_2711_chunk, _dist_hyper_6_2715_chunk, _dist_hyper_6_2719_chunk, _dist_hyper_6_2723_chunk, _dist_hyper_6_2727_chunk, _dist_hyper_6_2731_chunk, _dist_hyper_6_2735_chunk, _dist_hyper_6_2739_chunk, _dist_hyper_6_2743_chunk, _dist_hyper_6_2747_chunk, _dist_hyper_6_2751_chunk, _dist_hyper_6_2755_chunk, _dist_hyper_6_2759_chunk, _dist_hyper_6_2763_chunk, _dist_hyper_6_2767_chunk, _dist_hyper_6_2771_chunk, _dist_hyper_6_2775_chunk, _dist_hyper_6_2779_chunk, _dist_hyper_6_2783_chunk, _dist_hyper_6_2787_chunk, _dist_hyper_6_2791_chunk, _dist_hyper_6_2795_chunk, _dist_hyper_6_2799_chunk, _dist_hyper_6_2803_chunk, _dist_hyper_6_2807_chunk, _dist_hyper_6_2811_chunk, _dist_hyper_6_2815_chunk, _dist_hyper_6_2819_chunk, _dist_hyper_6_2823_chunk"
"Remote SQL: SELECT _timescaledb_internal.partialize_agg(sum(value)) FROM public.example_table_name WHERE _timescaledb_internal.chunks_in(public.example_table_name.*, ARRAY[607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699, 700, 701, 702, 703, 704, 705, 706])"
->  Custom Scan (DataNodeScan)  (cost=59794097.73..380719635.72 rows=1 width=8)
Output: (PARTIAL sum(example_table_name_1.value))
Relations: Aggregate on (public.example_table_name)
Data node: timescaledb-distributed-bug-test-data-1
"Chunks: _dist_hyper_6_2428_chunk, _dist_hyper_6_2432_chunk, _dist_hyper_6_2436_chunk, _dist_hyper_6_2440_chunk, _dist_hyper_6_2444_chunk, _dist_hyper_6_2448_chunk, _dist_hyper_6_2452_chunk, _dist_hyper_6_2456_chunk, _dist_hyper_6_2460_chunk, _dist_hyper_6_2464_chunk, _dist_hyper_6_2468_chunk, _dist_hyper_6_2472_chunk, _dist_hyper_6_2476_chunk, _dist_hyper_6_2480_chunk, _dist_hyper_6_2484_chunk, _dist_hyper_6_2488_chunk, _dist_hyper_6_2492_chunk, _dist_hyper_6_2496_chunk, _dist_hyper_6_2500_chunk, _dist_hyper_6_2504_chunk, _dist_hyper_6_2508_chunk, _dist_hyper_6_2512_chunk, _dist_hyper_6_2516_chunk, _dist_hyper_6_2520_chunk, _dist_hyper_6_2524_chunk, _dist_hyper_6_2528_chunk, _dist_hyper_6_2532_chunk, _dist_hyper_6_2536_chunk, _dist_hyper_6_2540_chunk, _dist_hyper_6_2544_chunk, _dist_hyper_6_2548_chunk, _dist_hyper_6_2552_chunk, _dist_hyper_6_2556_chunk, _dist_hyper_6_2560_chunk, _dist_hyper_6_2564_chunk, _dist_hyper_6_2568_chunk, _dist_hyper_6_2572_chunk, _dist_hyper_6_2576_chunk, _dist_hyper_6_2580_chunk, _dist_hyper_6_2584_chunk, _dist_hyper_6_2588_chunk, _dist_hyper_6_2592_chunk, _dist_hyper_6_2596_chunk, _dist_hyper_6_2600_chunk, _dist_hyper_6_2604_chunk, _dist_hyper_6_2608_chunk, _dist_hyper_6_2612_chunk, _dist_hyper_6_2616_chunk, _dist_hyper_6_2620_chunk, _dist_hyper_6_2624_chunk, _dist_hyper_6_2628_chunk, _dist_hyper_6_2632_chunk, _dist_hyper_6_2636_chunk, _dist_hyper_6_2640_chunk, _dist_hyper_6_2644_chunk, _dist_hyper_6_2648_chunk, _dist_hyper_6_2652_chunk, _dist_hyper_6_2656_chunk, _dist_hyper_6_2660_chunk, _dist_hyper_6_2664_chunk, _dist_hyper_6_2668_chunk, _dist_hyper_6_2672_chunk, _dist_hyper_6_2676_chunk, _dist_hyper_6_2680_chunk, _dist_hyper_6_2684_chunk, _dist_hyper_6_2688_chunk, _dist_hyper_6_2692_chunk, _dist_hyper_6_2696_chunk, _dist_hyper_6_2700_chunk, _dist_hyper_6_2704_chunk, _dist_hyper_6_2708_chunk, _dist_hyper_6_2712_chunk, _dist_hyper_6_2716_chunk, _dist_hyper_6_2720_chunk, _dist_hyper_6_2724_chunk, _dist_hyper_6_2728_chunk, _dist_hyper_6_2732_chunk, _dist_hyper_6_2736_chunk, _dist_hyper_6_2740_chunk, _dist_hyper_6_2744_chunk, _dist_hyper_6_2748_chunk, _dist_hyper_6_2752_chunk, _dist_hyper_6_2756_chunk, _dist_hyper_6_2760_chunk, _dist_hyper_6_2764_chunk, _dist_hyper_6_2768_chunk, _dist_hyper_6_2772_chunk, _dist_hyper_6_2776_chunk, _dist_hyper_6_2780_chunk, _dist_hyper_6_2784_chunk, _dist_hyper_6_2788_chunk, _dist_hyper_6_2792_chunk, _dist_hyper_6_2796_chunk, _dist_hyper_6_2800_chunk, _dist_hyper_6_2804_chunk, _dist_hyper_6_2808_chunk, _dist_hyper_6_2812_chunk, _dist_hyper_6_2816_chunk, _dist_hyper_6_2820_chunk"
"Remote SQL: SELECT _timescaledb_internal.partialize_agg(sum(value)) FROM public.example_table_name WHERE _timescaledb_internal.chunks_in(public.example_table_name.*, ARRAY[607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699, 700, 701, 702, 703, 704, 705])"
->  Custom Scan (DataNodeScan)  (cost=60398077.50..384565287.60 rows=1 width=8)
Output: (PARTIAL sum(example_table_name_2.value))
Relations: Aggregate on (public.example_table_name)
Data node: timescaledb-distributed-bug-test-data-2
"Chunks: _dist_hyper_6_2425_chunk, _dist_hyper_6_2429_chunk, _dist_hyper_6_2433_chunk, _dist_hyper_6_2437_chunk, _dist_hyper_6_2441_chunk, _dist_hyper_6_2445_chunk, _dist_hyper_6_2449_chunk, _dist_hyper_6_2453_chunk, _dist_hyper_6_2457_chunk, _dist_hyper_6_2461_chunk, _dist_hyper_6_2465_chunk, _dist_hyper_6_2469_chunk, _dist_hyper_6_2473_chunk, _dist_hyper_6_2477_chunk, _dist_hyper_6_2481_chunk, _dist_hyper_6_2485_chunk, _dist_hyper_6_2489_chunk, _dist_hyper_6_2493_chunk, _dist_hyper_6_2497_chunk, _dist_hyper_6_2501_chunk, _dist_hyper_6_2505_chunk, _dist_hyper_6_2509_chunk, _dist_hyper_6_2513_chunk, _dist_hyper_6_2517_chunk, _dist_hyper_6_2521_chunk, _dist_hyper_6_2525_chunk, _dist_hyper_6_2529_chunk, _dist_hyper_6_2533_chunk, _dist_hyper_6_2537_chunk, _dist_hyper_6_2541_chunk, _dist_hyper_6_2545_chunk, _dist_hyper_6_2549_chunk, _dist_hyper_6_2553_chunk, _dist_hyper_6_2557_chunk, _dist_hyper_6_2561_chunk, _dist_hyper_6_2565_chunk, _dist_hyper_6_2569_chunk, _dist_hyper_6_2573_chunk, _dist_hyper_6_2577_chunk, _dist_hyper_6_2581_chunk, _dist_hyper_6_2585_chunk, _dist_hyper_6_2589_chunk, _dist_hyper_6_2593_chunk, _dist_hyper_6_2597_chunk, _dist_hyper_6_2601_chunk, _dist_hyper_6_2605_chunk, _dist_hyper_6_2609_chunk, _dist_hyper_6_2613_chunk, _dist_hyper_6_2617_chunk, _dist_hyper_6_2621_chunk, _dist_hyper_6_2625_chunk, _dist_hyper_6_2629_chunk, _dist_hyper_6_2633_chunk, _dist_hyper_6_2637_chunk, _dist_hyper_6_2641_chunk, _dist_hyper_6_2645_chunk, _dist_hyper_6_2649_chunk, _dist_hyper_6_2653_chunk, _dist_hyper_6_2657_chunk, _dist_hyper_6_2661_chunk, _dist_hyper_6_2665_chunk, _dist_hyper_6_2669_chunk, _dist_hyper_6_2673_chunk, _dist_hyper_6_2677_chunk, _dist_hyper_6_2681_chunk, _dist_hyper_6_2685_chunk, _dist_hyper_6_2689_chunk, _dist_hyper_6_2693_chunk, _dist_hyper_6_2697_chunk, _dist_hyper_6_2701_chunk, _dist_hyper_6_2705_chunk, _dist_hyper_6_2709_chunk, _dist_hyper_6_2713_chunk, _dist_hyper_6_2717_chunk, _dist_hyper_6_2721_chunk, _dist_hyper_6_2725_chunk, _dist_hyper_6_2729_chunk, _dist_hyper_6_2733_chunk, _dist_hyper_6_2737_chunk, _dist_hyper_6_2741_chunk, _dist_hyper_6_2745_chunk, _dist_hyper_6_2749_chunk, _dist_hyper_6_2753_chunk, _dist_hyper_6_2757_chunk, _dist_hyper_6_2761_chunk, _dist_hyper_6_2765_chunk, _dist_hyper_6_2769_chunk, _dist_hyper_6_2773_chunk, _dist_hyper_6_2777_chunk, _dist_hyper_6_2781_chunk, _dist_hyper_6_2785_chunk, _dist_hyper_6_2789_chunk, _dist_hyper_6_2793_chunk, _dist_hyper_6_2797_chunk, _dist_hyper_6_2801_chunk, _dist_hyper_6_2805_chunk, _dist_hyper_6_2809_chunk, _dist_hyper_6_2813_chunk, _dist_hyper_6_2817_chunk, _dist_hyper_6_2821_chunk"
"Remote SQL: SELECT _timescaledb_internal.partialize_agg(sum(value)) FROM public.example_table_name WHERE _timescaledb_internal.chunks_in(public.example_table_name.*, ARRAY[607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699, 700, 701, 702, 703, 704, 705, 706])"
->  Custom Scan (DataNodeScan)  (cost=60398077.50..384565287.60 rows=1 width=8)
Output: (PARTIAL sum(example_table_name_3.value))
Relations: Aggregate on (public.example_table_name)
Data node: timescaledb-distributed-bug-test-data-3
"Chunks: _dist_hyper_6_2426_chunk, _dist_hyper_6_2430_chunk, _dist_hyper_6_2434_chunk, _dist_hyper_6_2438_chunk, _dist_hyper_6_2442_chunk, _dist_hyper_6_2446_chunk, _dist_hyper_6_2450_chunk, _dist_hyper_6_2454_chunk, _dist_hyper_6_2458_chunk, _dist_hyper_6_2462_chunk, _dist_hyper_6_2466_chunk, _dist_hyper_6_2470_chunk, _dist_hyper_6_2474_chunk, _dist_hyper_6_2478_chunk, _dist_hyper_6_2482_chunk, _dist_hyper_6_2486_chunk, _dist_hyper_6_2490_chunk, _dist_hyper_6_2494_chunk, _dist_hyper_6_2498_chunk, _dist_hyper_6_2502_chunk, _dist_hyper_6_2506_chunk, _dist_hyper_6_2510_chunk, _dist_hyper_6_2514_chunk, _dist_hyper_6_2518_chunk, _dist_hyper_6_2522_chunk, _dist_hyper_6_2526_chunk, _dist_hyper_6_2530_chunk, _dist_hyper_6_2534_chunk, _dist_hyper_6_2538_chunk, _dist_hyper_6_2542_chunk, _dist_hyper_6_2546_chunk, _dist_hyper_6_2550_chunk, _dist_hyper_6_2554_chunk, _dist_hyper_6_2558_chunk, _dist_hyper_6_2562_chunk, _dist_hyper_6_2566_chunk, _dist_hyper_6_2570_chunk, _dist_hyper_6_2574_chunk, _dist_hyper_6_2578_chunk, _dist_hyper_6_2582_chunk, _dist_hyper_6_2586_chunk, _dist_hyper_6_2590_chunk, _dist_hyper_6_2594_chunk, _dist_hyper_6_2598_chunk, _dist_hyper_6_2602_chunk, _dist_hyper_6_2606_chunk, _dist_hyper_6_2610_chunk, _dist_hyper_6_2614_chunk, _dist_hyper_6_2618_chunk, _dist_hyper_6_2622_chunk, _dist_hyper_6_2626_chunk, _dist_hyper_6_2630_chunk, _dist_hyper_6_2634_chunk, _dist_hyper_6_2638_chunk, _dist_hyper_6_2642_chunk, _dist_hyper_6_2646_chunk, _dist_hyper_6_2650_chunk, _dist_hyper_6_2654_chunk, _dist_hyper_6_2658_chunk, _dist_hyper_6_2662_chunk, _dist_hyper_6_2666_chunk, _dist_hyper_6_2670_chunk, _dist_hyper_6_2674_chunk, _dist_hyper_6_2678_chunk, _dist_hyper_6_2682_chunk, _dist_hyper_6_2686_chunk, _dist_hyper_6_2690_chunk, _dist_hyper_6_2694_chunk, _dist_hyper_6_2698_chunk, _dist_hyper_6_2702_chunk, _dist_hyper_6_2706_chunk, _dist_hyper_6_2710_chunk, _dist_hyper_6_2714_chunk, _dist_hyper_6_2718_chunk, _dist_hyper_6_2722_chunk, _dist_hyper_6_2726_chunk, _dist_hyper_6_2730_chunk, _dist_hyper_6_2734_chunk, _dist_hyper_6_2738_chunk, _dist_hyper_6_2742_chunk, _dist_hyper_6_2746_chunk, _dist_hyper_6_2750_chunk, _dist_hyper_6_2754_chunk, _dist_hyper_6_2758_chunk, _dist_hyper_6_2762_chunk, _dist_hyper_6_2766_chunk, _dist_hyper_6_2770_chunk, _dist_hyper_6_2774_chunk, _dist_hyper_6_2778_chunk, _dist_hyper_6_2782_chunk, _dist_hyper_6_2786_chunk, _dist_hyper_6_2790_chunk, _dist_hyper_6_2794_chunk, _dist_hyper_6_2798_chunk, _dist_hyper_6_2802_chunk, _dist_hyper_6_2806_chunk, _dist_hyper_6_2810_chunk, _dist_hyper_6_2814_chunk, _dist_hyper_6_2818_chunk, _dist_hyper_6_2822_chunk"
"Remote SQL: SELECT _timescaledb_internal.partialize_agg(sum(value)) FROM public.example_table_name WHERE _timescaledb_internal.chunks_in(public.example_table_name.*, ARRAY[607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699, 700, 701, 702, 703, 704, 705, 706])"

And for Query 2:

Aggregate  (cost=9942624535.36..9942624535.37 rows=1 width=8)
Output: sum(example_table_name.value)
->  Custom Scan (AsyncAppend)  (cost=400.04..9005449250.60 rows=74974022781 width=12)
"Output: example_table_name.""time"", example_table_name.value"
->  Merge Append  (cost=400.04..9005449250.60 rows=74974022781 width=12)
"Sort Key: example_table_name_1.""time"""
->  Custom Scan (DataNodeScan) on public.example_table_name example_table_name_1  (cost=100.00..1975147595.95 rows=18790481900 width=12)
"Output: example_table_name_1.""time"", example_table_name_1.value"
Data node: timescaledb-distributed-bug-test-data-0
"Chunks: _dist_hyper_6_2427_chunk, _dist_hyper_6_2431_chunk, _dist_hyper_6_2435_chunk, _dist_hyper_6_2439_chunk, _dist_hyper_6_2443_chunk, _dist_hyper_6_2447_chunk, _dist_hyper_6_2451_chunk, _dist_hyper_6_2455_chunk, _dist_hyper_6_2459_chunk, _dist_hyper_6_2463_chunk, _dist_hyper_6_2467_chunk, _dist_hyper_6_2471_chunk, _dist_hyper_6_2475_chunk, _dist_hyper_6_2479_chunk, _dist_hyper_6_2483_chunk, _dist_hyper_6_2487_chunk, _dist_hyper_6_2491_chunk, _dist_hyper_6_2495_chunk, _dist_hyper_6_2499_chunk, _dist_hyper_6_2503_chunk, _dist_hyper_6_2507_chunk, _dist_hyper_6_2511_chunk, _dist_hyper_6_2515_chunk, _dist_hyper_6_2519_chunk, _dist_hyper_6_2523_chunk, _dist_hyper_6_2527_chunk, _dist_hyper_6_2531_chunk, _dist_hyper_6_2535_chunk, _dist_hyper_6_2539_chunk, _dist_hyper_6_2543_chunk, _dist_hyper_6_2547_chunk, _dist_hyper_6_2551_chunk, _dist_hyper_6_2555_chunk, _dist_hyper_6_2559_chunk, _dist_hyper_6_2563_chunk, _dist_hyper_6_2567_chunk, _dist_hyper_6_2571_chunk, _dist_hyper_6_2575_chunk, _dist_hyper_6_2579_chunk, _dist_hyper_6_2583_chunk, _dist_hyper_6_2587_chunk, _dist_hyper_6_2591_chunk, _dist_hyper_6_2595_chunk, _dist_hyper_6_2599_chunk, _dist_hyper_6_2603_chunk, _dist_hyper_6_2607_chunk, _dist_hyper_6_2611_chunk, _dist_hyper_6_2615_chunk, _dist_hyper_6_2619_chunk, _dist_hyper_6_2623_chunk, _dist_hyper_6_2627_chunk, _dist_hyper_6_2631_chunk, _dist_hyper_6_2635_chunk, _dist_hyper_6_2639_chunk, _dist_hyper_6_2643_chunk, _dist_hyper_6_2647_chunk, _dist_hyper_6_2651_chunk, _dist_hyper_6_2655_chunk, _dist_hyper_6_2659_chunk, _dist_hyper_6_2663_chunk, _dist_hyper_6_2667_chunk, _dist_hyper_6_2671_chunk, _dist_hyper_6_2675_chunk, _dist_hyper_6_2679_chunk, _dist_hyper_6_2683_chunk, _dist_hyper_6_2687_chunk, _dist_hyper_6_2691_chunk, _dist_hyper_6_2695_chunk, _dist_hyper_6_2699_chunk, _dist_hyper_6_2703_chunk, _dist_hyper_6_2707_chunk, _dist_hyper_6_2711_chunk, _dist_hyper_6_2715_chunk, _dist_hyper_6_2719_chunk, _dist_hyper_6_2723_chunk, _dist_hyper_6_2727_chunk, _dist_hyper_6_2731_chunk, _dist_hyper_6_2735_chunk, _dist_hyper_6_2739_chunk, _dist_hyper_6_2743_chunk, _dist_hyper_6_2747_chunk, _dist_hyper_6_2751_chunk, _dist_hyper_6_2755_chunk, _dist_hyper_6_2759_chunk, _dist_hyper_6_2763_chunk, _dist_hyper_6_2767_chunk, _dist_hyper_6_2771_chunk, _dist_hyper_6_2775_chunk, _dist_hyper_6_2779_chunk, _dist_hyper_6_2783_chunk, _dist_hyper_6_2787_chunk, _dist_hyper_6_2791_chunk, _dist_hyper_6_2795_chunk, _dist_hyper_6_2799_chunk, _dist_hyper_6_2803_chunk, _dist_hyper_6_2807_chunk, _dist_hyper_6_2811_chunk, _dist_hyper_6_2815_chunk, _dist_hyper_6_2819_chunk, _dist_hyper_6_2823_chunk"
"Remote SQL: SELECT ""time"", value FROM public.example_table_name WHERE _timescaledb_internal.chunks_in(public.example_table_name.*, ARRAY[607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699, 700, 701, 702, 703, 704, 705, 706]) ORDER BY ""time"" ASC NULLS LAST"
->  Custom Scan (DataNodeScan) on public.example_table_name example_table_name_2  (cost=100.00..1955396120.99 rows=18602577081 width=12)
"Output: example_table_name_2.""time"", example_table_name_2.value"
Data node: timescaledb-distributed-bug-test-data-1
"Chunks: _dist_hyper_6_2428_chunk, _dist_hyper_6_2432_chunk, _dist_hyper_6_2436_chunk, _dist_hyper_6_2440_chunk, _dist_hyper_6_2444_chunk, _dist_hyper_6_2448_chunk, _dist_hyper_6_2452_chunk, _dist_hyper_6_2456_chunk, _dist_hyper_6_2460_chunk, _dist_hyper_6_2464_chunk, _dist_hyper_6_2468_chunk, _dist_hyper_6_2472_chunk, _dist_hyper_6_2476_chunk, _dist_hyper_6_2480_chunk, _dist_hyper_6_2484_chunk, _dist_hyper_6_2488_chunk, _dist_hyper_6_2492_chunk, _dist_hyper_6_2496_chunk, _dist_hyper_6_2500_chunk, _dist_hyper_6_2504_chunk, _dist_hyper_6_2508_chunk, _dist_hyper_6_2512_chunk, _dist_hyper_6_2516_chunk, _dist_hyper_6_2520_chunk, _dist_hyper_6_2524_chunk, _dist_hyper_6_2528_chunk, _dist_hyper_6_2532_chunk, _dist_hyper_6_2536_chunk, _dist_hyper_6_2540_chunk, _dist_hyper_6_2544_chunk, _dist_hyper_6_2548_chunk, _dist_hyper_6_2552_chunk, _dist_hyper_6_2556_chunk, _dist_hyper_6_2560_chunk, _dist_hyper_6_2564_chunk, _dist_hyper_6_2568_chunk, _dist_hyper_6_2572_chunk, _dist_hyper_6_2576_chunk, _dist_hyper_6_2580_chunk, _dist_hyper_6_2584_chunk, _dist_hyper_6_2588_chunk, _dist_hyper_6_2592_chunk, _dist_hyper_6_2596_chunk, _dist_hyper_6_2600_chunk, _dist_hyper_6_2604_chunk, _dist_hyper_6_2608_chunk, _dist_hyper_6_2612_chunk, _dist_hyper_6_2616_chunk, _dist_hyper_6_2620_chunk, _dist_hyper_6_2624_chunk, _dist_hyper_6_2628_chunk, _dist_hyper_6_2632_chunk, _dist_hyper_6_2636_chunk, _dist_hyper_6_2640_chunk, _dist_hyper_6_2644_chunk, _dist_hyper_6_2648_chunk, _dist_hyper_6_2652_chunk, _dist_hyper_6_2656_chunk, _dist_hyper_6_2660_chunk, _dist_hyper_6_2664_chunk, _dist_hyper_6_2668_chunk, _dist_hyper_6_2672_chunk, _dist_hyper_6_2676_chunk, _dist_hyper_6_2680_chunk, _dist_hyper_6_2684_chunk, _dist_hyper_6_2688_chunk, _dist_hyper_6_2692_chunk, _dist_hyper_6_2696_chunk, _dist_hyper_6_2700_chunk, _dist_hyper_6_2704_chunk, _dist_hyper_6_2708_chunk, _dist_hyper_6_2712_chunk, _dist_hyper_6_2716_chunk, _dist_hyper_6_2720_chunk, _dist_hyper_6_2724_chunk, _dist_hyper_6_2728_chunk, _dist_hyper_6_2732_chunk, _dist_hyper_6_2736_chunk, _dist_hyper_6_2740_chunk, _dist_hyper_6_2744_chunk, _dist_hyper_6_2748_chunk, _dist_hyper_6_2752_chunk, _dist_hyper_6_2756_chunk, _dist_hyper_6_2760_chunk, _dist_hyper_6_2764_chunk, _dist_hyper_6_2768_chunk, _dist_hyper_6_2772_chunk, _dist_hyper_6_2776_chunk, _dist_hyper_6_2780_chunk, _dist_hyper_6_2784_chunk, _dist_hyper_6_2788_chunk, _dist_hyper_6_2792_chunk, _dist_hyper_6_2796_chunk, _dist_hyper_6_2800_chunk, _dist_hyper_6_2804_chunk, _dist_hyper_6_2808_chunk, _dist_hyper_6_2812_chunk, _dist_hyper_6_2816_chunk, _dist_hyper_6_2820_chunk"
"Remote SQL: SELECT ""time"", value FROM public.example_table_name WHERE _timescaledb_internal.chunks_in(public.example_table_name.*, ARRAY[607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699, 700, 701, 702, 703, 704, 705]) ORDER BY ""time"" ASC NULLS LAST"
->  Custom Scan (DataNodeScan) on public.example_table_name example_table_name_3  (cost=100.00..1975147595.95 rows=18790481900 width=12)
"Output: example_table_name_3.""time"", example_table_name_3.value"
Data node: timescaledb-distributed-bug-test-data-2
"Chunks: _dist_hyper_6_2425_chunk, _dist_hyper_6_2429_chunk, _dist_hyper_6_2433_chunk, _dist_hyper_6_2437_chunk, _dist_hyper_6_2441_chunk, _dist_hyper_6_2445_chunk, _dist_hyper_6_2449_chunk, _dist_hyper_6_2453_chunk, _dist_hyper_6_2457_chunk, _dist_hyper_6_2461_chunk, _dist_hyper_6_2465_chunk, _dist_hyper_6_2469_chunk, _dist_hyper_6_2473_chunk, _dist_hyper_6_2477_chunk, _dist_hyper_6_2481_chunk, _dist_hyper_6_2485_chunk, _dist_hyper_6_2489_chunk, _dist_hyper_6_2493_chunk, _dist_hyper_6_2497_chunk, _dist_hyper_6_2501_chunk, _dist_hyper_6_2505_chunk, _dist_hyper_6_2509_chunk, _dist_hyper_6_2513_chunk, _dist_hyper_6_2517_chunk, _dist_hyper_6_2521_chunk, _dist_hyper_6_2525_chunk, _dist_hyper_6_2529_chunk, _dist_hyper_6_2533_chunk, _dist_hyper_6_2537_chunk, _dist_hyper_6_2541_chunk, _dist_hyper_6_2545_chunk, _dist_hyper_6_2549_chunk, _dist_hyper_6_2553_chunk, _dist_hyper_6_2557_chunk, _dist_hyper_6_2561_chunk, _dist_hyper_6_2565_chunk, _dist_hyper_6_2569_chunk, _dist_hyper_6_2573_chunk, _dist_hyper_6_2577_chunk, _dist_hyper_6_2581_chunk, _dist_hyper_6_2585_chunk, _dist_hyper_6_2589_chunk, _dist_hyper_6_2593_chunk, _dist_hyper_6_2597_chunk, _dist_hyper_6_2601_chunk, _dist_hyper_6_2605_chunk, _dist_hyper_6_2609_chunk, _dist_hyper_6_2613_chunk, _dist_hyper_6_2617_chunk, _dist_hyper_6_2621_chunk, _dist_hyper_6_2625_chunk, _dist_hyper_6_2629_chunk, _dist_hyper_6_2633_chunk, _dist_hyper_6_2637_chunk, _dist_hyper_6_2641_chunk, _dist_hyper_6_2645_chunk, _dist_hyper_6_2649_chunk, _dist_hyper_6_2653_chunk, _dist_hyper_6_2657_chunk, _dist_hyper_6_2661_chunk, _dist_hyper_6_2665_chunk, _dist_hyper_6_2669_chunk, _dist_hyper_6_2673_chunk, _dist_hyper_6_2677_chunk, _dist_hyper_6_2681_chunk, _dist_hyper_6_2685_chunk, _dist_hyper_6_2689_chunk, _dist_hyper_6_2693_chunk, _dist_hyper_6_2697_chunk, _dist_hyper_6_2701_chunk, _dist_hyper_6_2705_chunk, _dist_hyper_6_2709_chunk, _dist_hyper_6_2713_chunk, _dist_hyper_6_2717_chunk, _dist_hyper_6_2721_chunk, _dist_hyper_6_2725_chunk, _dist_hyper_6_2729_chunk, _dist_hyper_6_2733_chunk, _dist_hyper_6_2737_chunk, _dist_hyper_6_2741_chunk, _dist_hyper_6_2745_chunk, _dist_hyper_6_2749_chunk, _dist_hyper_6_2753_chunk, _dist_hyper_6_2757_chunk, _dist_hyper_6_2761_chunk, _dist_hyper_6_2765_chunk, _dist_hyper_6_2769_chunk, _dist_hyper_6_2773_chunk, _dist_hyper_6_2777_chunk, _dist_hyper_6_2781_chunk, _dist_hyper_6_2785_chunk, _dist_hyper_6_2789_chunk, _dist_hyper_6_2793_chunk, _dist_hyper_6_2797_chunk, _dist_hyper_6_2801_chunk, _dist_hyper_6_2805_chunk, _dist_hyper_6_2809_chunk, _dist_hyper_6_2813_chunk, _dist_hyper_6_2817_chunk, _dist_hyper_6_2821_chunk"
"Remote SQL: SELECT ""time"", value FROM public.example_table_name WHERE _timescaledb_internal.chunks_in(public.example_table_name.*, ARRAY[607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699, 700, 701, 702, 703, 704, 705, 706]) ORDER BY ""time"" ASC NULLS LAST"
->  Custom Scan (DataNodeScan) on public.example_table_name example_table_name_4  (cost=100.00..1975147595.95 rows=18790481900 width=12)
"Output: example_table_name_4.""time"", example_table_name_4.value"
Data node: timescaledb-distributed-bug-test-data-3
"Chunks: _dist_hyper_6_2426_chunk, _dist_hyper_6_2430_chunk, _dist_hyper_6_2434_chunk, _dist_hyper_6_2438_chunk, _dist_hyper_6_2442_chunk, _dist_hyper_6_2446_chunk, _dist_hyper_6_2450_chunk, _dist_hyper_6_2454_chunk, _dist_hyper_6_2458_chunk, _dist_hyper_6_2462_chunk, _dist_hyper_6_2466_chunk, _dist_hyper_6_2470_chunk, _dist_hyper_6_2474_chunk, _dist_hyper_6_2478_chunk, _dist_hyper_6_2482_chunk, _dist_hyper_6_2486_chunk, _dist_hyper_6_2490_chunk, _dist_hyper_6_2494_chunk, _dist_hyper_6_2498_chunk, _dist_hyper_6_2502_chunk, _dist_hyper_6_2506_chunk, _dist_hyper_6_2510_chunk, _dist_hyper_6_2514_chunk, _dist_hyper_6_2518_chunk, _dist_hyper_6_2522_chunk, _dist_hyper_6_2526_chunk, _dist_hyper_6_2530_chunk, _dist_hyper_6_2534_chunk, _dist_hyper_6_2538_chunk, _dist_hyper_6_2542_chunk, _dist_hyper_6_2546_chunk, _dist_hyper_6_2550_chunk, _dist_hyper_6_2554_chunk, _dist_hyper_6_2558_chunk, _dist_hyper_6_2562_chunk, _dist_hyper_6_2566_chunk, _dist_hyper_6_2570_chunk, _dist_hyper_6_2574_chunk, _dist_hyper_6_2578_chunk, _dist_hyper_6_2582_chunk, _dist_hyper_6_2586_chunk, _dist_hyper_6_2590_chunk, _dist_hyper_6_2594_chunk, _dist_hyper_6_2598_chunk, _dist_hyper_6_2602_chunk, _dist_hyper_6_2606_chunk, _dist_hyper_6_2610_chunk, _dist_hyper_6_2614_chunk, _dist_hyper_6_2618_chunk, _dist_hyper_6_2622_chunk, _dist_hyper_6_2626_chunk, _dist_hyper_6_2630_chunk, _dist_hyper_6_2634_chunk, _dist_hyper_6_2638_chunk, _dist_hyper_6_2642_chunk, _dist_hyper_6_2646_chunk, _dist_hyper_6_2650_chunk, _dist_hyper_6_2654_chunk, _dist_hyper_6_2658_chunk, _dist_hyper_6_2662_chunk, _dist_hyper_6_2666_chunk, _dist_hyper_6_2670_chunk, _dist_hyper_6_2674_chunk, _dist_hyper_6_2678_chunk, _dist_hyper_6_2682_chunk, _dist_hyper_6_2686_chunk, _dist_hyper_6_2690_chunk, _dist_hyper_6_2694_chunk, _dist_hyper_6_2698_chunk, _dist_hyper_6_2702_chunk, _dist_hyper_6_2706_chunk, _dist_hyper_6_2710_chunk, _dist_hyper_6_2714_chunk, _dist_hyper_6_2718_chunk, _dist_hyper_6_2722_chunk, _dist_hyper_6_2726_chunk, _dist_hyper_6_2730_chunk, _dist_hyper_6_2734_chunk, _dist_hyper_6_2738_chunk, _dist_hyper_6_2742_chunk, _dist_hyper_6_2746_chunk, _dist_hyper_6_2750_chunk, _dist_hyper_6_2754_chunk, _dist_hyper_6_2758_chunk, _dist_hyper_6_2762_chunk, _dist_hyper_6_2766_chunk, _dist_hyper_6_2770_chunk, _dist_hyper_6_2774_chunk, _dist_hyper_6_2778_chunk, _dist_hyper_6_2782_chunk, _dist_hyper_6_2786_chunk, _dist_hyper_6_2790_chunk, _dist_hyper_6_2794_chunk, _dist_hyper_6_2798_chunk, _dist_hyper_6_2802_chunk, _dist_hyper_6_2806_chunk, _dist_hyper_6_2810_chunk, _dist_hyper_6_2814_chunk, _dist_hyper_6_2818_chunk, _dist_hyper_6_2822_chunk"
"Remote SQL: SELECT ""time"", value FROM public.example_table_name WHERE _timescaledb_internal.chunks_in(public.example_table_name.*, ARRAY[607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699, 700, 701, 702, 703, 704, 705, 706]) ORDER BY ""time"" ASC NULLS LAST"

@konskov
Contributor

konskov commented Nov 14, 2022

@aboucher-r7 thank you for providing the script; I was able to reproduce the issue on cloud. It seems the issue is the row-by-row fetcher. (If you run EXPLAIN (ANALYZE, VERBOSE) SELECT SUM(value) FROM example_table_name you will see the fetcher type.) Can you try setting timescaledb.remote_data_fetcher=cursor and let us know if that fixes the issue?
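
For reference, a minimal sketch of the suggested check and session-level workaround (the DataNodeScan nodes in the plan report which fetcher is used; exact labels may vary by version):

-- Inspect the plan to see the fetcher type in use
EXPLAIN (ANALYZE, VERBOSE) SELECT SUM(value) FROM example_table_name;

-- Switch this session to the cursor-based fetcher, then re-run the aggregate
SET timescaledb.remote_data_fetcher = 'cursor';
SELECT SUM(value) FROM example_table_name;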

@aboucher-r7
Author

@konskov that does indeed fix our issue, thank you. Does that have any performance implications or should that be enabled across the board?

@konskov
Contributor

konskov commented Nov 15, 2022

The cursor fetcher could potentially be slower, because it does not allow the execution of remote parallel plans. So that is something to keep in mind.
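
If the workaround needs to outlive a single session, the same GUC can be applied with standard PostgreSQL commands at database or role level (a sketch; the database and role names below are placeholders):

-- Apply to every new connection to this database (placeholder name)
ALTER DATABASE mydb SET timescaledb.remote_data_fetcher = 'cursor';

-- Or only for a particular role (placeholder name)
ALTER ROLE reporting_user SET timescaledb.remote_data_fetcher = 'cursor';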

@nikkhils nikkhils removed their assignment Nov 25, 2022
@robfranolic

robfranolic commented Dec 15, 2022

We have seen a similar problem of inconsistent aggregations. It only occurs with SET enable_partitionwise_aggregate TO ON; and is remedied by SET timescaledb.remote_data_fetcher=cursor;. We see it with COUNT as well as SUM. We are concerned about relying on a non-default parameter setting; is a fix being worked on?
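
For completeness, the combination described above as a session-level sketch:

SET enable_partitionwise_aggregate TO on;        -- the setting under which the wrong results appear
SET timescaledb.remote_data_fetcher = 'cursor';  -- the setting that works around them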

@erimatnor
Contributor

@robfranolic yes, we are looking at fixing this issue. The row-by-row fetcher that was implicated as causing the issue is actually being replaced by another implementation that uses the COPY protocol. We are trying to determine if the new implementation solves the issue.

@erimatnor
Contributor

From some more investigation, it seems the fetcher is not the issue. The issue seems to be that the partialize_agg function does not work with parallel queries: it returns wrong results when parallel queries are enabled.

Here's a test with part of a query that gets executed on a data node.

First, execute the query without partials:

data_node_1=# select sum(value) FROM public.example_table_name WHERE _timescaledb_internal.chunks_in(public.example_table_name.*, ARRAY[2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50,51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101]);
  sum   
--------
 249765
(1 row)

Then execute the same query with partialize_agg:

data_node_1=# select _timescaledb_internal.finalize_agg('sum(integer)', null, null, null, partial, cast('1' as int8)) from (SELECT _timescaledb_internal.partialize_agg(sum(value)) partial FROM public.example_table_name WHERE _timescaledb_internal.chunks_in(public.example_table_name.*, ARRAY[2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50,51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101])) a;
 finalize_agg 
--------------
       109798
(1 row)

(The above query actually returns different results every time it is run.)

Now turn off parallel queries:

data_node_1=# set max_parallel_workers=0;
SET
data_node_1=# select _timescaledb_internal.finalize_agg('sum(integer)', null, null, null, partial, cast('1' as int8)) from (SELECT _timescaledb_internal.partialize_agg(sum(value)) partial FROM public.example_table_name WHERE _timescaledb_internal.chunks_in(public.example_table_name.*, ARRAY[2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50,51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101])) a;
 finalize_agg 
--------------
       249765
(1 row)
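
For anyone checking their own data nodes, the parallel-safety labels of these functions can be inspected directly in the standard PostgreSQL catalog (a diagnostic sketch; in proparallel, 's' means parallel safe, 'r' restricted, 'u' unsafe):

SELECT p.proname, p.proparallel
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname = '_timescaledb_internal'
  AND p.proname IN ('partialize_agg', 'finalize_agg');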

@robfranolic

Thanks. As I understand it, timescaledb.remote_data_fetcher=cursor prevents parallel queries on the data nodes, so that explains why that also works. Needless to say, this problem is very concerning because the error is silent, so we have no way of knowing for sure that it hasn't occurred elsewhere and incorrect results have been returned to clients.

@jfjoly jfjoly assigned pmwkaa and unassigned konskov Jan 5, 2023
fabriziomello added a commit to fabriziomello/timescaledb that referenced this issue Jan 11, 2023
Previous PR timescale#4307 marked `partialize_agg` and `finalize_agg` as parallel safe, but this change is leading to incorrect results in some cases.

Those functions are supposed to work in parallel, but it seems that is not the case, and the root cause and the proper way to use them in parallel queries are not yet evident, so we decided to revert this change and provide correct results to users.

Fixes timescale#4922
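
As a general illustration of the parallel-safety labels the commit message refers to (a hypothetical sketch, not the actual patch; the function name is made up):

-- A hypothetical function declared parallel safe, then downgraded to parallel unsafe
CREATE FUNCTION my_partial_sum(state bigint, val int) RETURNS bigint
    AS $$ SELECT state + val $$ LANGUAGE SQL IMMUTABLE PARALLEL SAFE;

ALTER FUNCTION my_partial_sum(bigint, int) PARALLEL UNSAFE;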
fabriziomello added a commit that referenced this issue Jan 13, 2023
Previous PR #4307 marked `partialize_agg` and `finalize_agg` as parallel safe, but this change is leading to incorrect results in some cases.

Those functions are supposed to work in parallel, but it seems that is not the case, and the root cause and the proper way to use them in parallel queries are not yet evident, so we decided to revert this change and provide correct results to users.

Fixes #4922
sb230132 pushed a commit that referenced this issue Jan 24, 2023
Previous PR #4307 marked `partialize_agg` and `finalize_agg` as parallel safe, but this change is leading to incorrect results in some cases.

Those functions are supposed to work in parallel, but it seems that is not the case, and the root cause and the proper way to use them in parallel queries are not yet evident, so we decided to revert this change and provide correct results to users.

Fixes #4922