LIMIT 1 clause causes timescaledb to request a very reasonable 16ZB of ram, which I don't have #3498
You can try remove In my ticket #3483 without
Thank you for the bug report @benchub. In hex, the number is 0xFFFFFFFF6ED18F60, which looks like a negative int64 that is for some reason being passed down to the allocation. It is hard to pinpoint without a stack trace. Is it possible for you to attach a debugger, put a breakpoint on the line that raises the error, and check the stack trace?
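The arithmetic behind that observation can be checked with a short Python snippet (not part of the original thread; the constant is the allocation size from the first error message below):

```python
import struct

# Alloc request size from the error message, read as an unsigned 64-bit value.
n = 18446744071273811808

# Reinterpret the same 64 bits as a signed int64 (two's complement).
signed, = struct.unpack("<q", struct.pack("<Q", n))

print(hex(n))   # 0xffffffff6ed18f60
print(signed)   # -2435739808
```

So the "16ZB" request is a small negative number that wrapped around when treated as an unsigned size.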
No @zcattacz, removing the
This release contains bug fixes since the 2.4.0 release. We deem it high priority to upgrade since it is needed to support PostgreSQL 12.8 and 13.4.

**Bugfixes**
* timescale#3430 Fix havingqual processing for continuous aggregates
* timescale#3468 Disable tests by default if tools are not found
* timescale#3468 Fix crash while tracking alter table commands
* timescale#3494 Improve error message when adding data nodes
* timescale#3498 Fix continuous agg bgw job failure for PG 12.8 and 13.4

**Thanks**
* @brianbenns for reporting a segfault with continuous aggregates
@mkindahl That sounds like something I could probably do, but I have no idea which line to break at, or how to find out.
@benchub You can break at errmsg (a Postgres function defined in elog.c).
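A minimal debugger session for that suggestion might look like the following sketch (the PID is a hypothetical example, and it assumes gdb is installed and the postgres build has debug symbols):

```console
$ gdb -p 12345          # attach to the backend running the failing query
(gdb) break errmsg
(gdb) continue
# re-run the failing query in the attached session; when the breakpoint hits:
(gdb) backtrace
```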
Received a similar error:
When attempting to run query:
In my case the table
Attaching gdb to
Timescaledb version 2.4.1 on Postgres 13
Actually, I will close this. Please open a new issue if the problem still persists. Thanks.
**Relevant system information:**

**Describe the bug**
We've noticed that on one of our timescale dbs with ~30GB of data, adding a LIMIT clause to a query will (sometimes!) cause it to die:
```
=> explain SELECT DISTINCT time FROM cloudwatch_data WHERE resource_name = 'c320' ORDER BY time DESC LIMIT 1;
                                        QUERY PLAN
────────────────────────────────────────────────────────────────────────────────────────
 Limit  (cost=0.56..13.40 rows=1 width=8)
   ->  Unique  (cost=0.56..2569.07 rows=200 width=8)
         ->  Custom Scan (ChunkAppend) on cloudwatch_data  (cost=0.56..2565.07 rows=1600 width=8)
               Order: cloudwatch_data."time" DESC
               ->  Custom Scan (SkipScan) on _hyper_1_9_chunk  (cost=0.56..320.29 rows=200 width=8)
                     ->  Index Only Scan using _hyper_1_9_chunk_cloudwatch_data_resource_name_time_idx2_1 on _hyper_1_9_chunk  (cost=0.56..26235.73 rows=25191 width=8)
                           Index Cond: ((resource_name = 'c320'::text) AND ("time" < NULL::timestamp with time zone))
               ->  Custom Scan (SkipScan) on _hyper_1_8_chunk  (cost=0.56..320.32 rows=200 width=8)
                     ->  Index Only Scan using _hyper_1_8_chunk_cloudwatch_data_resource_name_time_idx2_1 on _hyper_1_8_chunk  (cost=0.56..29602.63 rows=28421 width=8)
                           Index Cond: ((resource_name = 'c320'::text) AND ("time" < NULL::timestamp with time zone))
               ->  Custom Scan (SkipScan) on _hyper_1_6_chunk  (cost=0.56..320.33 rows=200 width=8)
                     ->  Index Only Scan using _hyper_1_6_chunk_cloudwatch_data_resource_name_time_idx2_1 on _hyper_1_6_chunk  (cost=0.56..28272.04 rows=27142 width=8)
                           Index Cond: ((resource_name = 'c320'::text) AND ("time" < NULL::timestamp with time zone))
               ->  Custom Scan (SkipScan) on _hyper_1_5_chunk  (cost=0.56..320.41 rows=200 width=8)
                     ->  Index Only Scan using _hyper_1_5_chunk_cloudwatch_data_resource_name_time_idx2_1 on _hyper_1_5_chunk  (cost=0.56..28067.25 rows=26935 width=8)
                           Index Cond: ((resource_name = 'c320'::text) AND ("time" < NULL::timestamp with time zone))
               ->  Custom Scan (SkipScan) on _hyper_1_4_chunk  (cost=0.56..319.30 rows=200 width=8)
                     ->  Index Only Scan using _hyper_1_4_chunk_cloudwatch_data_resource_name_time_idx2_1 on _hyper_1_4_chunk  (cost=0.56..13111.97 rows=12620 width=8)
                           Index Cond: ((resource_name = 'c320'::text) AND ("time" < NULL::timestamp with time zone))
               ->  Custom Scan (SkipScan) on _hyper_1_3_chunk  (cost=0.56..320.25 rows=200 width=8)
                     ->  Index Only Scan using _hyper_1_3_chunk_cloudwatch_data_resource_name_time_idx2_1 on _hyper_1_3_chunk  (cost=0.56..10634.97 rows=10189 width=8)
                           Index Cond: ((resource_name = 'c320'::text) AND ("time" < NULL::timestamp with time zone))
               ->  Custom Scan (SkipScan) on _hyper_1_2_chunk  (cost=0.56..322.95 rows=200 width=8)
                     ->  Index Only Scan using _hyper_1_2_chunk_cloudwatch_data_resource_name_time_idx2_1 on _hyper_1_2_chunk  (cost=0.56..33802.93 rows=32048 width=8)
                           Index Cond: ((resource_name = 'c320'::text) AND ("time" < NULL::timestamp with time zone))
               ->  Custom Scan (SkipScan) on _hyper_1_1_chunk  (cost=0.56..321.23 rows=200 width=8)
                     ->  Index Only Scan using _hyper_1_1_chunk_cloudwatch_data_resource_name_time_idx2_1 on _hyper_1_1_chunk  (cost=0.56..6250.07 rows=5946 width=8)
                           Index Cond: ((resource_name = 'c320'::text) AND ("time" < NULL::timestamp with time zone))
(28 rows)

=> SELECT DISTINCT time FROM cloudwatch_data WHERE resource_name = 'c320' ORDER BY time DESC LIMIT 1;
ERROR:  invalid memory alloc request size 18446744071273811808

=> explain SELECT DISTINCT time FROM cloudwatch_data WHERE resource_name = 'c320' ORDER BY time DESC LIMIT 1;
ERROR:  invalid memory alloc request size 18446744071272821040
```
For reasons I don't understand, a slightly different query returns a correct result, using a plan that includes a Limit step:
```
=> explain select max(time) from cloudwatch_data where resource_name = 'c320';
                                        QUERY PLAN
─────────────────────────────────────────────────────────────────────────────────────────
 Result  (cost=0.74..0.75 rows=1 width=8)
   InitPlan 1 (returns $0)
     ->  Limit  (cost=0.56..0.74 rows=1 width=8)
           ->  Custom Scan (ChunkAppend) on cloudwatch_data  (cost=0.56..30850.21 rows=29540 width=8)
                 Order: cloudwatch_data."time" DESC
                 ->  Index Only Scan using _hyper_1_9_chunk_cloudwatch_data_resource_name_time_idx on _hyper_1_9_chunk  (cost=0.56..30850.21 rows=29540 width=8)
                       Index Cond: ((resource_name = 'c320'::text) AND ("time" IS NOT NULL))
                 ->  Index Only Scan using _hyper_1_8_chunk_cloudwatch_data_resource_name_time_idx on _hyper_1_8_chunk  (cost=0.56..29654.20 rows=28406 width=8)
                       Index Cond: ((resource_name = 'c320'::text) AND ("time" IS NOT NULL))
                 ->  Index Only Scan using _hyper_1_6_chunk_cloudwatch_data_resource_name_time_idx on _hyper_1_6_chunk  (cost=0.56..28197.27 rows=26986 width=8)
                       Index Cond: ((resource_name = 'c320'::text) AND ("time" IS NOT NULL))
                 ->  Index Only Scan using _hyper_1_5_chunk_cloudwatch_data_resource_name_time_idx on _hyper_1_5_chunk  (cost=0.56..28289.11 rows=27077 width=8)
                       Index Cond: ((resource_name = 'c320'::text) AND ("time" IS NOT NULL))
                 ->  Index Only Scan using _hyper_1_4_chunk_cloudwatch_data_resource_name_time_idx on _hyper_1_4_chunk  (cost=0.56..13355.70 rows=12782 width=8)
                       Index Cond: ((resource_name = 'c320'::text) AND ("time" IS NOT NULL))
                 ->  Index Only Scan using _hyper_1_3_chunk_cloudwatch_data_resource_name_time_idx on _hyper_1_3_chunk  (cost=0.56..11179.41 rows=10625 width=8)
                       Index Cond: ((resource_name = 'c320'::text) AND ("time" IS NOT NULL))
                 ->  Index Only Scan using _hyper_1_2_chunk_cloudwatch_data_resource_name_time_idx on _hyper_1_2_chunk  (cost=0.56..33930.68 rows=32096 width=8)
                       Index Cond: ((resource_name = 'c320'::text) AND ("time" IS NOT NULL))
                 ->  Index Only Scan using _hyper_1_1_chunk_cloudwatch_data_resource_name_time_idx on _hyper_1_1_chunk  (cost=0.56..6258.18 rows=5940 width=8)
                       Index Cond: ((resource_name = 'c320'::text) AND ("time" IS NOT NULL))
(21 rows)

=> select max(time) from cloudwatch_data where resource_name = 'c320';
  max
────────
 [null]
(1 row)
```
**To Reproduce**
I'm not quite sure how to repro this in a test case - we have this same schema installed in several places, but it's only this instance, with the most data, that is failing. I'm happy to extract any debugging data you'd like.
**Expected behavior**
I would expect `ORDER BY time DESC LIMIT 1` to behave the same as `max(time)`.
**Actual behavior**
Timescaledb thinks it can make it happen, but when I try, it asks for a mere 16ZB of RAM. Asking to explain the plan requests the same amount.
**Screenshots**
See above.
**Additional context**
n/a