
LIMIT 1 clause causes timescaledb to request a very reasonable 16ZB of ram, which I don't have #3498

Closed
benchub opened this issue Aug 18, 2021 · 8 comments


benchub commented Aug 18, 2021

Relevant system information:

  • Linux 5.4.0-1051-aws #53~18.04.1-Ubuntu SMP Fri Jun 18 14:54:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
  • PostgreSQL 12.6 (Ubuntu 12.6-1.pgdg18.04+1) on aarch64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit
  • TimescaleDB version 2.3.1
  • Installation method: apt install

Describe the bug
We've noticed that on one of our timescale dbs with ~30GB of data, adding a LIMIT clause to a query will (sometimes!) cause it to die:

=> explain SELECT DISTINCT time FROM cloudwatch_data WHERE resource_name = 'c320' ORDER BY time DESC LIMIT 1;
QUERY PLAN
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Limit (cost=0.56..13.40 rows=1 width=8)
-> Unique (cost=0.56..2569.07 rows=200 width=8)
-> Custom Scan (ChunkAppend) on cloudwatch_data (cost=0.56..2565.07 rows=1600 width=8)
Order: cloudwatch_data."time" DESC
-> Custom Scan (SkipScan) on _hyper_1_9_chunk (cost=0.56..320.29 rows=200 width=8)
-> Index Only Scan using _hyper_1_9_chunk_cloudwatch_data_resource_name_time_idx2_1 on _hyper_1_9_chunk (cost=0.56..26235.73 rows=25191 width=8)
Index Cond: ((resource_name = 'c320'::text) AND ("time" < NULL::timestamp with time zone))
-> Custom Scan (SkipScan) on _hyper_1_8_chunk (cost=0.56..320.32 rows=200 width=8)
-> Index Only Scan using _hyper_1_8_chunk_cloudwatch_data_resource_name_time_idx2_1 on _hyper_1_8_chunk (cost=0.56..29602.63 rows=28421 width=8)
Index Cond: ((resource_name = 'c320'::text) AND ("time" < NULL::timestamp with time zone))
-> Custom Scan (SkipScan) on _hyper_1_6_chunk (cost=0.56..320.33 rows=200 width=8)
-> Index Only Scan using _hyper_1_6_chunk_cloudwatch_data_resource_name_time_idx2_1 on _hyper_1_6_chunk (cost=0.56..28272.04 rows=27142 width=8)
Index Cond: ((resource_name = 'c320'::text) AND ("time" < NULL::timestamp with time zone))
-> Custom Scan (SkipScan) on _hyper_1_5_chunk (cost=0.56..320.41 rows=200 width=8)
-> Index Only Scan using _hyper_1_5_chunk_cloudwatch_data_resource_name_time_idx2_1 on _hyper_1_5_chunk (cost=0.56..28067.25 rows=26935 width=8)
Index Cond: ((resource_name = 'c320'::text) AND ("time" < NULL::timestamp with time zone))
-> Custom Scan (SkipScan) on _hyper_1_4_chunk (cost=0.56..319.30 rows=200 width=8)
-> Index Only Scan using _hyper_1_4_chunk_cloudwatch_data_resource_name_time_idx2_1 on _hyper_1_4_chunk (cost=0.56..13111.97 rows=12620 width=8)
Index Cond: ((resource_name = 'c320'::text) AND ("time" < NULL::timestamp with time zone))
-> Custom Scan (SkipScan) on _hyper_1_3_chunk (cost=0.56..320.25 rows=200 width=8)
-> Index Only Scan using _hyper_1_3_chunk_cloudwatch_data_resource_name_time_idx2_1 on _hyper_1_3_chunk (cost=0.56..10634.97 rows=10189 width=8)
Index Cond: ((resource_name = 'c320'::text) AND ("time" < NULL::timestamp with time zone))
-> Custom Scan (SkipScan) on _hyper_1_2_chunk (cost=0.56..322.95 rows=200 width=8)
-> Index Only Scan using _hyper_1_2_chunk_cloudwatch_data_resource_name_time_idx2_1 on _hyper_1_2_chunk (cost=0.56..33802.93 rows=32048 width=8)
Index Cond: ((resource_name = 'c320'::text) AND ("time" < NULL::timestamp with time zone))
-> Custom Scan (SkipScan) on _hyper_1_1_chunk (cost=0.56..321.23 rows=200 width=8)
-> Index Only Scan using _hyper_1_1_chunk_cloudwatch_data_resource_name_time_idx2_1 on _hyper_1_1_chunk (cost=0.56..6250.07 rows=5946 width=8)
Index Cond: ((resource_name = 'c320'::text) AND ("time" < NULL::timestamp with time zone))
(28 rows)

=> SELECT DISTINCT time FROM cloudwatch_data WHERE resource_name = 'c320' ORDER BY time DESC LIMIT 1;
ERROR: invalid memory alloc request size 18446744071273811808

=> explain SELECT DISTINCT time FROM cloudwatch_data WHERE resource_name = 'c320' ORDER BY time DESC LIMIT 1;
ERROR: invalid memory alloc request size 18446744071272821040

For reasons I don't understand, if I do a slightly different query, I can get a great result using a plan that includes a limit step:
=> explain select max(time) from cloudwatch_data where resource_name = 'c320';
QUERY PLAN
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Result (cost=0.74..0.75 rows=1 width=8)
InitPlan 1 (returns $0)
-> Limit (cost=0.56..0.74 rows=1 width=8)
-> Custom Scan (ChunkAppend) on cloudwatch_data (cost=0.56..30850.21 rows=29540 width=8)
Order: cloudwatch_data."time" DESC
-> Index Only Scan using _hyper_1_9_chunk_cloudwatch_data_resource_name_time_idx on _hyper_1_9_chunk (cost=0.56..30850.21 rows=29540 width=8)
Index Cond: ((resource_name = 'c320'::text) AND ("time" IS NOT NULL))
-> Index Only Scan using _hyper_1_8_chunk_cloudwatch_data_resource_name_time_idx on _hyper_1_8_chunk (cost=0.56..29654.20 rows=28406 width=8)
Index Cond: ((resource_name = 'c320'::text) AND ("time" IS NOT NULL))
-> Index Only Scan using _hyper_1_6_chunk_cloudwatch_data_resource_name_time_idx on _hyper_1_6_chunk (cost=0.56..28197.27 rows=26986 width=8)
Index Cond: ((resource_name = 'c320'::text) AND ("time" IS NOT NULL))
-> Index Only Scan using _hyper_1_5_chunk_cloudwatch_data_resource_name_time_idx on _hyper_1_5_chunk (cost=0.56..28289.11 rows=27077 width=8)
Index Cond: ((resource_name = 'c320'::text) AND ("time" IS NOT NULL))
-> Index Only Scan using _hyper_1_4_chunk_cloudwatch_data_resource_name_time_idx on _hyper_1_4_chunk (cost=0.56..13355.70 rows=12782 width=8)
Index Cond: ((resource_name = 'c320'::text) AND ("time" IS NOT NULL))
-> Index Only Scan using _hyper_1_3_chunk_cloudwatch_data_resource_name_time_idx on _hyper_1_3_chunk (cost=0.56..11179.41 rows=10625 width=8)
Index Cond: ((resource_name = 'c320'::text) AND ("time" IS NOT NULL))
-> Index Only Scan using _hyper_1_2_chunk_cloudwatch_data_resource_name_time_idx on _hyper_1_2_chunk (cost=0.56..33930.68 rows=32096 width=8)
Index Cond: ((resource_name = 'c320'::text) AND ("time" IS NOT NULL))
-> Index Only Scan using _hyper_1_1_chunk_cloudwatch_data_resource_name_time_idx on _hyper_1_1_chunk (cost=0.56..6258.18 rows=5940 width=8)
Index Cond: ((resource_name = 'c320'::text) AND ("time" IS NOT NULL))
(21 rows)

=> select max(time) from cloudwatch_data where resource_name = 'c320';
max
────────
[null]
(1 row)

To Reproduce
I'm not quite sure how to reproduce this in a test case. We have this same schema installed in several places, but only this instance, which has the most data, is failing. I'm happy to extract any debugging data you'd like.

Expected behavior
I would expect the ORDER BY time DESC LIMIT 1 to behave the same as max(time).

Actual behavior
TimescaleDB thinks it can make it happen, but when I try, it asks for a mere 16ZB of RAM. Asking it to explain the plan requests the same amount.

Screenshots
See above.

Additional context
n/a


zcattacz commented Aug 19, 2021

You could try removing the ORDER BY? Maybe do the ordering afterwards in a CTE.

In my ticket #3483 without order by the query returns quickly, but with order by it is basically unusable.

mkindahl (Contributor) commented

Thank you for the bug report, @benchub. In hex, the number is 0xFFFFFFFF6ED18F60, which looks like a negative int64 that is for some reason being passed down to the allocation. It is hard to pinpoint without a stack trace. Would it be possible for you to attach a debugger, put a breakpoint on the line that raises the error, and capture the stack trace?


benchub commented Aug 19, 2021

No @zcattacz, removing the ORDER BY does not help.

mkindahl added a commit to mkindahl/timescaledb that referenced this issue Aug 19, 2021
This release contains bug fixes since the 2.4.0 release. We deem it
high priority to upgrade since it is needed to support PostgreSQL 12.8
and 13.4.

**Bugfixes**
* timescale#3430 Fix havingqual processing for continuous aggregates
* timescale#3468 Disable tests by default if tools are not found
* timescale#3468 Fix crash while tracking alter table commands
* timescale#3494 Improve error message when adding data nodes
* timescale#3498 Fix continuous agg bgw job failure for PG 12.8 and 13.4

**Thanks**
* @brianbenns for reporting a segfault with continuous aggregates

benchub commented Aug 19, 2021

@mkindahl That sounds like something I could probably do, but I have no idea what line to break at, or how to find out.

gayyappan (Contributor) commented Sep 1, 2021

@benchub You can set a breakpoint on errmsg (a postgres function defined in elog.c); it would help to see a stack trace.
Please also post the table definition (\d+ cloudwatch_data).
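(For anyone following along, a typical session looks roughly like this. This is only a sketch: `<PID>` is a placeholder for the backend's process id, and postgres debug symbols need to be installed.)

```
# In psql, find the PID of the backend serving your session:
#   SELECT pg_backend_pid();
# Attach gdb to that backend (<PID> is a placeholder):
$ gdb -p <PID>
(gdb) break errmsg
(gdb) continue
# Re-run the failing query from the same psql session; when the
# breakpoint fires, capture the stack trace:
(gdb) bt full
```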


phemmer commented Sep 24, 2021

Received a similar error:

Sep 24 12:33:10 ded4077 postgres[8485]: [134-1] pid=8485 db=edgestats user=postgres rhost=1.2.3.4 app=timescaledb tid=21/59707 sid=614bf0c8.2125 ERROR:  invalid memory alloc request size 18446744072119764192
Sep 24 12:33:10 ded4077 postgres[8485]: [134-2] pid=8485 db=edgestats user=postgres rhost=1.2.3.4 app=timescaledb tid=21/59707 sid=614bf0c8.2125 STATEMENT:  DECLARE c1 CURSOR FOR
Sep 24 12:33:10 ded4077 postgres[8485]: [134-3]         SELECT DISTINCT host FROM public._haproxy_agg_instance_host WHERE _timescaledb_internal.chunks_in(public._haproxy_agg_instance_host.*, ARRAY[7978, 7982, 8006, 8014, 8040, 8048, 8074, 8082, 8107, 8116, 8142])

When attempting to run query:

select distinct host from _haproxy_agg_instance_host;

In my case the table _haproxy_agg_instance_host is a distributed hypertable, and the error is coming from one of the data nodes.

A gdb breakpoint on errmsg never triggered, so I attached to errstart instead and got:

#0  errstart (elevel=elevel@entry=13, domain=domain@entry=0x0) at ./build/../src/backend/utils/error/elog.c:245
        edata = <optimized out>
        output_to_server = <optimized out>
        output_to_client = false
        i = <optimized out>
        __func__ = "errstart"
#1  0x000055a7cd5b5fa6 in exec_parse_message (numParams=<optimized out>, paramTypes=<optimized out>, stmt_name=0x55a7cf270b68 "", 
    query_string=0x55a7cf270b69 "DECLARE c1 CURSOR FOR\nSELECT DISTINCT host FROM public._haproxy_agg_instance_host WHERE _timescaledb_internal.chunks_in(public._haproxy_agg_instance_host.*, ARRAY[7978, 7982, 8006, 8014, 8040, 8048, 8"...) at ./build/../src/backend/tcop/postgres.c:1376
        __errno_location = <optimized out>
        oldcontext = <optimized out>
        parsetree_list = <optimized out>
        psrc = <optimized out>
        is_named = <optimized out>
        save_log_statement_stats = false
        msec_str = "Pt2\370\377\177\000\000Pt2\370\377\177\000\000\060t2\370\377\177\000\000\006\220rͧU\000"
        unnamed_stmt_context = 0x0
        raw_parse_tree = <optimized out>
        querytree_list = <optimized out>
        unnamed_stmt_context = <optimized out>
        oldcontext = <optimized out>
        parsetree_list = <optimized out>
        raw_parse_tree = <optimized out>
        querytree_list = <optimized out>
        psrc = <optimized out>
        is_named = <optimized out>
        save_log_statement_stats = <optimized out>
        msec_str = <optimized out>
        __func__ = "exec_parse_message"
        __errno_location = <optimized out>
        __errno_location = <optimized out>
        query = <optimized out>
        snapshot_set = <optimized out>
        __errno_location = <optimized out>
        i = <optimized out>
        ptype = <optimized out>
        __errno_location = <optimized out>
        __errno_location = <optimized out>
        __errno_location = <optimized out>
#2  PostgresMain (argc=<optimized out>, argv=argv@entry=0x55a7cf2e8510, dbname=<optimized out>, username=<optimized out>) at ./build/../src/backend/tcop/postgres.c:4368
        stmt_name = 0x55a7cf270b68 ""
        query_string = 0x55a7cf270b69 "DECLARE c1 CURSOR FOR\nSELECT DISTINCT host FROM public._haproxy_agg_instance_host WHERE _timescaledb_internal.chunks_in(public._haproxy_agg_instance_host.*, ARRAY[7978, 7982, 8006, 8014, 8040, 8048, 8"...
        numParams = 0
        paramTypes = 0x0
        firstchar = <optimized out>
        input_message = {data = 0x55a7cf270b68 "", len = 233, maxlen = 1024, cursor = 233}
        local_sigjmp_buf = {{__jmpbuf = {140737357443760, -1200120875002018864, 1, 94179223330184, 3, 94179223739664, -1200120874855218224, -4906607204497931312}, __mask_was_saved = 1, __saved_mask = {__val = {0, 139637976727552, 140416345695672, 140737357444256, 16162316468834811904, 140737357444224, 94179194763567, 206158430240, 140737357444240, 140737357444048, 94179223341648, 1024, 140737357444336, 94179223831824, 94179223739664, 140737357444112}}}}
        send_ready_for_query = false
        disable_idle_in_transaction_timeout = false
        __func__ = "PostgresMain"
#3  0x000055a7cd536bcd in BackendRun (port=0x55a7cf2d1d10, port=0x55a7cf2d1d10) at ./build/../src/backend/postmaster/postmaster.c:4526
        av = 0x55a7cf2e8510
        maxac = <optimized out>
        ac = 1
        i = 1
        av = <optimized out>
        maxac = <optimized out>
        ac = <optimized out>
        i = <optimized out>
        __func__ = "BackendRun"
        __errno_location = <optimized out>
        __errno_location = <optimized out>
        __errno_location = <optimized out>
#4  BackendStartup (port=0x55a7cf2d1d10) at ./build/../src/backend/postmaster/postmaster.c:4210
        bn = <optimized out>
        pid = <optimized out>
        bn = <optimized out>
        pid = <optimized out>
        __func__ = "BackendStartup"
        __errno_location = <optimized out>
        __errno_location = <optimized out>
        save_errno = <optimized out>
        __errno_location = <optimized out>
        __errno_location = <optimized out>
#5  ServerLoop () at ./build/../src/backend/postmaster/postmaster.c:1739
        port = 0x55a7cf2d1d10
        i = <optimized out>
        rmask = {fds_bits = {64, 0 <repeats 15 times>}}
        selres = <optimized out>
        now = <optimized out>
        readmask = {fds_bits = {192, 0 <repeats 15 times>}}
        nSockets = 8
        last_lockfile_recheck_time = 1632366744
        last_touch_time = 1632364084
        __func__ = "ServerLoop"
#6  0x000055a7cd537b41 in PostmasterMain (argc=9, argv=<optimized out>) at ./build/../src/backend/postmaster/postmaster.c:1412
        opt = <optimized out>
        status = <optimized out>
        userDoption = <optimized out>
        listen_addr_saved = true
        i = <optimized out>
        output_config_variable = <optimized out>
        __func__ = "PostmasterMain"
#7  0x000055a7cd281f4f in main (argc=9, argv=0x55a7cf26bc20) at ./build/../src/backend/main/main.c:210
edgestats=# \d+ _haproxy_agg_instance_host
                                        Table "public._haproxy_agg_instance_host"
      Column       |            Type             | Collation | Nullable | Default | Storage  | Stats target | Description 
-------------------+-----------------------------+-----------+----------+---------+----------+--------------+-------------
 time              | timestamp without time zone |           | not null |         | plain    |              | 
 instance          | text                        |           |          |         | extended |              | 
 host              | text                        |           |          |         | extended |              | 
 http_response     | bigint                      |           |          |         | plain    |              | 
 http_response.5xx | bigint                      |           |          |         | plain    |              | 
 bin               | bigint                      |           |          |         | plain    |              | 
 bout              | bigint                      |           |          |         | plain    |              | 
Indexes:
    "_haproxy_agg_instance_host_host_time_idx" btree (host, "time" DESC)
    "_haproxy_agg_instance_host_instance_time_idx" btree (instance, "time" DESC)
    "_haproxy_agg_instance_host_time_idx" btree ("time" DESC)
Triggers:
    ts_insert_blocker BEFORE INSERT ON _haproxy_agg_instance_host FOR EACH ROW EXECUTE FUNCTION _timescaledb_internal.insert_blocker()
Child tables: _timescaledb_internal._dist_hyper_3158_7066_chunk,
              _timescaledb_internal._dist_hyper_3158_7067_chunk,
              _timescaledb_internal._dist_hyper_3158_7070_chunk,
              _timescaledb_internal._dist_hyper_3158_7071_chunk,
              _timescaledb_internal._dist_hyper_3158_7092_chunk,
              _timescaledb_internal._dist_hyper_3158_7093_chunk,
              _timescaledb_internal._dist_hyper_3158_7096_chunk,
              _timescaledb_internal._dist_hyper_3158_7097_chunk,
              _timescaledb_internal._dist_hyper_3158_7100_chunk,
              _timescaledb_internal._dist_hyper_3158_7101_chunk,
              _timescaledb_internal._dist_hyper_3158_7104_chunk,
              _timescaledb_internal._dist_hyper_3158_7105_chunk,
              _timescaledb_internal._dist_hyper_3158_7126_chunk,
              _timescaledb_internal._dist_hyper_3158_7127_chunk,
              _timescaledb_internal._dist_hyper_3158_7130_chunk,
              _timescaledb_internal._dist_hyper_3158_7131_chunk,
              _timescaledb_internal._dist_hyper_3158_7134_chunk,
              _timescaledb_internal._dist_hyper_3158_7135_chunk,
              _timescaledb_internal._dist_hyper_3158_7138_chunk,
              _timescaledb_internal._dist_hyper_3158_7139_chunk,
              _timescaledb_internal._dist_hyper_3158_7160_chunk,
              _timescaledb_internal._dist_hyper_3158_7161_chunk

TimescaleDB version 2.4.1 on PostgreSQL 13.

nikkhils (Contributor) commented

@phemmer and @benchub, #3629 was committed quite some time ago. Can you please let us know whether that has fixed your problems with DISTINCT? We would like to close this issue if so.

nikkhils (Contributor) commented

Actually, I will close this. Please open a new issue if the problem still persists. Thanks.
