
Fix now() plantime constification with BETWEEN #6491

Merged

1 commit merged from fix_constify_between into timescale:main on Jan 24, 2024

Conversation

@konskov (Contributor) commented Jan 3, 2024

Previously, when a BETWEEN ... AND predicate was used together with
additional constraints in a WHERE clause, the BETWEEN expression was not
handled correctly because it was wrapped in a BoolExpr node, which
prevented plan-time exclusion.
This commit fixes the handling of this case to allow chunk exclusion
to take place at planning time.
It also makes sure we use our mock timestamp in all places in the tests.
Previously we were using a mix of current_timestamp_mock and now(),
which returned unexpected/incorrect results.
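
For illustration, a minimal sketch of the affected query shape (hypertable and column names here are hypothetical, not taken from the PR):

-- Before this fix, the BETWEEN predicate expanded into two comparisons
-- wrapped in a nested BoolExpr, so now() was not constified and no
-- chunks were excluded at planning time.
SELECT * FROM metrics
WHERE ts BETWEEN now() - interval '1 day' AND now()
AND device_id = 1;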

codecov bot commented Jan 3, 2024

Codecov Report

Attention: 6 lines in your changes are missing coverage. Please review.

Comparison is base (92e9009) 79.74% compared to head (755ae7a) 79.74%.

Files                   Patch %   Lines
src/planner/planner.c   87.50%    0 Missing and 3 partials ⚠️
src/time_utils.c        57.14%    2 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #6491      +/-   ##
==========================================
- Coverage   79.74%   79.74%   -0.01%     
==========================================
  Files         188      188              
  Lines       36739    36722      -17     
  Branches     9290     9278      -12     
==========================================
- Hits        29297    29283      -14     
- Misses       3110     3112       +2     
+ Partials     4332     4327       -5     


@konskov force-pushed the fix_constify_between branch 5 times, most recently from ee71889 to a07e4b2, on January 3, 2024 15:44
@konskov changed the title from "[JUST TESTING] Fix constify between" to "Fix now() plantime constification with BETWEEN" on Jan 3, 2024
@konskov marked this pull request as ready for review on January 3, 2024 15:52
github-actions bot commented Jan 3, 2024

@gayyappan, @erimatnor: please review this pull request.


SET timescaledb.enable_chunk_append TO true;
SET timescaledb.enable_constraint_aware_append TO true;

-- no startup exclusion should be happening for any of the queries below
Member

This sounds as if the chunks shouldn't be excluded at all; it might be clearer to say that they should be excluded at planning time.

Comment on lines 224 to 240
timestamp BETWEEN now() AND now() - interval '1 day'
AND rawtag_id = 1 ORDER BY "timestamp" ASC;
Member

So what are you getting here, something like

(BoolExprAnd
    (OpExpr (= rawtag_id 1))
    (BoolExprAnd
        (>= time now()) (<= time ...)))

Top-level AND should normally be flattened into a list, but ts_constify_now is called before that happens; is that why we have this problem?

Contributor Author

I haven't yet found where the flattening happens, but yes, that is how the expression looks (output of pprint(node) in ts_constify_now):

{BOOLEXPR 
   :boolop and 
   :args (
      {BOOLEXPR 
      :boolop and 
      :args (
         {OPEXPR 
         :opno 1325 
         :opfuncid 1156 
         :opresulttype 16 
         :opretset false 
         :opcollid 0 
         :inputcollid 0 
         :args (
            {VAR 
            :varno 1 
            :varattno 1 
            :vartype 1184 
            :vartypmod -1 
            :varcollid 0 
            :varlevelsup 0 
            :varnosyn 1 
            :varattnosyn 1 
            :location 26
            }
            {OPEXPR 
            :opno 1329 
            :opfuncid 1190 
            :opresulttype 1184 
            :opretset false 
            :opcollid 0 
            :inputcollid 0 
            :args (
               {FUNCEXPR 
               :funcid 1299 
               :funcresulttype 1184 
               :funcretset false 
               :funcvariadic false 
               :funcformat 0 
               :funccollid 0 
               :inputcollid 0 
               :args <> 
               :location 39
               }
               {CONST 
               :consttype 1186 
               :consttypmod -1 
               :constcollid 0 
               :constlen 16 
               :constbyval false 
               :constisnull false 
               :location 56 
               :constvalue 16 [ 0 14 39 7 0 0 0 0 0 0 0 0 0 0 0 0 ]
               }
            )
            :location 45
            }
         )
         :location 31
         }
         {OPEXPR 
         :opno 1323 
         :opfuncid 1155 
         :opresulttype 16 
         :opretset false 
         :opcollid 0 
         :inputcollid 0 
         :args (
            {VAR 
            :varno 1 
            :varattno 1 
            :vartype 1184 
            :vartypmod -1 
            :varcollid 0 
            :varlevelsup 0 
            :varnosyn 1 
            :varattnosyn 1 
            :location 26
            }
            {FUNCEXPR 
            :funcid 1299 
            :funcresulttype 1184 
            :funcretset false 
            :funcvariadic false 
            :funcformat 0 
            :funccollid 0 
            :inputcollid 0 
            :args <> 
            :location 68
            }
         )
         :location 31
         }
      )
      :location 31
      }
      {OPEXPR 
      :opno 96 
      :opfuncid 65 
      :opresulttype 16 
      :opretset false 
      :opcollid 0 
      :inputcollid 0 
      :args (
         {VAR 
         :varno 1 
         :varattno 2 
         :vartype 23 
         :vartypmod -1 
         :varcollid 0 
         :varlevelsup 0 
         :varnosyn 1 
         :varattnosyn 2 
         :location 78
         }
         {CONST 
         :consttype 23 
         :consttypmod -1 
         :constcollid 0 
         :constlen 4 
         :constbyval true 
         :constisnull false 
         :location 82 
         :constvalue 4 [ 1 0 0 0 0 0 0 0 ]
         }
      )
      :location 80
      }
   )
   :location 74
   }

Contributor Author

I believe the flattening happens in eval_const_expressions, which gets called after ts_constify_now.

{
be->args = list_concat(be->args, additions);
}
be->args = additions;
@akuzm (Member) commented Jan 3, 2024

Is this correct? We should keep the old conditions as well. Can you please try to construct a test: first prepare the statement with the mock now() in the middle of the table, so that not all chunks are excluded. Then move the mock now() into the future, and check that all the chunks are excluded at run time when you execute the prepared statement. They won't be excluded if you don't keep the original conditions.
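
A minimal sketch of such a test, assuming a hypothetical hypertable metrics partitioned on ts, and the debug-build timescaledb.current_timestamp_mock setting discussed later in this thread:

-- Sketch only: table, columns, and timestamps are assumptions.
SET timescaledb.current_timestamp_mock TO '2024-01-10 00:00:00+00'; -- middle of the data
PREPARE p AS SELECT * FROM metrics
  WHERE ts BETWEEN now() - interval '1 day' AND now() AND device_id = 1;
EXPLAIN (COSTS OFF) EXECUTE p; -- some chunks excluded, others scanned
SET timescaledb.current_timestamp_mock TO '2030-01-01 00:00:00+00'; -- now in the future
EXPLAIN (COSTS OFF) EXECUTE p; -- all chunks should now be excluded at run time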

Contributor Author

You are right, this test currently fails...

Contributor Author

However, it seems that this is also the current behavior with respect to prepared statements on the main branch, even for the simplest condition (WHERE time > now()). Looking at the constify_now test output, where we test prepared statements (https://github.com/timescale/timescaledb/blob/main/tsl/test/shared/expected/constify_now-15.out#L396-L433), I see that two rows are returned when the current timestamp is moved into the future, when only one row should actually be returned.

Contributor Author

The issue appears to be our mock timestamp. When testing with an example that actually uses now(), the results returned are the expected ones.
Here is the example I used:

create table test2 (time timestamptz, a int, b int);
select create_hypertable('test2', 'time', chunk_time_interval => interval '2 min');

insert into test2 (time, a, b)
select t, a, b from
generate_series(now() - interval '15 min', now() + interval '14 min', interval '30 sec') g(t)
cross join generate_series(1,3,1) a cross join generate_series(1,2,1) b;

prepare p2 as select * from test2 where time between now() - interval '2 min' and now() and a = 1;
-- execute p2 immediately, then after two minutes:
explain execute p2;
                                             QUERY PLAN
----------------------------------------------------------------------------------------------------
 Custom Scan (ChunkAppend) on test2  (cost=4.27..119.60 rows=8 width=16)
   Chunks excluded during startup: 6
   ->  Bitmap Heap Scan on _hyper_1_8_chunk  (cost=4.27..14.95 rows=1 width=16)
         Recheck Cond: (("time" >= (now() - '00:02:00'::interval)) AND ("time" <= now()))
         Filter: (a = 1)
         ->  Bitmap Index Scan on _hyper_1_8_chunk_test2_time_idx  (cost=0.00..4.27 rows=9 width=0)
               Index Cond: (("time" >= (now() - '00:02:00'::interval)) AND ("time" <= now()))
   ->  Bitmap Heap Scan on _hyper_1_9_chunk  (cost=4.27..14.95 rows=1 width=16)
         Recheck Cond: (("time" >= (now() - '00:02:00'::interval)) AND ("time" <= now()))
         Filter: (a = 1)
         ->  Bitmap Index Scan on _hyper_1_9_chunk_test2_time_idx  (cost=0.00..4.27 rows=9 width=0)
               Index Cond: (("time" >= (now() - '00:02:00'::interval)) AND ("time" <= now()))
(12 rows)

execute p2;
             time              | a | b
-------------------------------+---+---
 2024-01-05 13:03:52.465237+02 | 1 | 1
 2024-01-05 13:03:52.465237+02 | 1 | 2
 2024-01-05 13:04:22.465237+02 | 1 | 1
 2024-01-05 13:04:22.465237+02 | 1 | 2
 2024-01-05 13:04:52.465237+02 | 1 | 1
 2024-01-05 13:04:52.465237+02 | 1 | 2
 2024-01-05 13:05:22.465237+02 | 1 | 1
 2024-01-05 13:05:22.465237+02 | 1 | 2
(8 rows)

-- after a bit:
explain execute p2;
                                             QUERY PLAN
-----------------------------------------------------------------------------------------------------
 Custom Scan (ChunkAppend) on test2  (cost=4.27..119.60 rows=8 width=16)
   Chunks excluded during startup: 6
   ->  Bitmap Heap Scan on _hyper_1_9_chunk  (cost=4.27..14.95 rows=1 width=16)
         Recheck Cond: (("time" >= (now() - '00:02:00'::interval)) AND ("time" <= now()))
         Filter: (a = 1)
         ->  Bitmap Index Scan on _hyper_1_9_chunk_test2_time_idx  (cost=0.00..4.27 rows=9 width=0)
               Index Cond: (("time" >= (now() - '00:02:00'::interval)) AND ("time" <= now()))
   ->  Bitmap Heap Scan on _hyper_1_10_chunk  (cost=4.27..14.95 rows=1 width=16)
         Recheck Cond: (("time" >= (now() - '00:02:00'::interval)) AND ("time" <= now()))
         Filter: (a = 1)
         ->  Bitmap Index Scan on _hyper_1_10_chunk_test2_time_idx  (cost=0.00..4.27 rows=9 width=0)
               Index Cond: (("time" >= (now() - '00:02:00'::interval)) AND ("time" <= now()))
(12 rows)

execute p2;
             time              | a | b
-------------------------------+---+---
 2024-01-05 13:04:22.465237+02 | 1 | 1
 2024-01-05 13:04:22.465237+02 | 1 | 2
 2024-01-05 13:04:52.465237+02 | 1 | 1
 2024-01-05 13:04:52.465237+02 | 1 | 2
 2024-01-05 13:05:22.465237+02 | 1 | 1
 2024-01-05 13:05:22.465237+02 | 1 | 2
 2024-01-05 13:05:52.465237+02 | 1 | 1
 2024-01-05 13:05:52.465237+02 | 1 | 2
(8 rows)

Contributor Author

Testing with the actual now() instead of our mock timestamp, the changes in this PR seem to work as expected, but our current_timestamp_mock doesn't play well with prepared statements. This needs to be fixed. @akuzm, do you think I should do this as part of this PR?

Member

> @akuzm, do you think I should do this as part of this PR?

I think it would be good to fix it here because it is needed to properly test your changes.

@konskov force-pushed the fix_constify_between branch 13 times, most recently from cb6e424 to 184b796, on January 22, 2024 13:27
@konskov requested a review from akuzm on January 22, 2024 15:41
@@ -397,3 +397,6 @@ CREATE FUNCTION _timescaledb_functions.constraint_clone(constraint_oid OID,targe
DROP FUNCTION IF EXISTS _timescaledb_functions.chunks_in;
DROP FUNCTION IF EXISTS _timescaledb_internal.chunks_in;


CREATE FUNCTION _timescaledb_functions.ts_replace_now_mock()
Member

This should probably be "ts_now_mock()"? The current "ts_replace_now_mock()" sounds like the function that does the replacement; I was staring at it trying to understand why you need it in SQL 😃

Member

Also, we probably don't need it in the production schema, but I'm not sure where else to put it.

Contributor

We have a file, debug_build_utils.sql, that is used for utilities that should only be available in debug builds.

I'm not sure you need to have this function in the upgrade/downgrade scripts, since it is only going to be used for regression test runs in debug builds.
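
For context, a rough sketch of how such a mock would be exercised in a debug-build regression test (the GUC name follows the current_timestamp_mock mentioned earlier in this thread; treat the exact names as assumptions):

-- Debug builds only; a sketch, not the PR's actual test code.
SET timescaledb.current_timestamp_mock TO '2024-01-01 00:00:00+00';
-- now() in hypertable planning gets replaced by the mock function, making
-- plan-time chunk exclusion deterministic in tests.
SELECT _timescaledb_functions.ts_replace_now_mock(); -- presumably returns the mocked value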

@mkindahl (Contributor) left a comment

Very nice addition, but I suggest that you only add the mock function to debug builds and configure the tests accordingly.

@@ -770,3 +770,4 @@ AS 'BEGIN END' LANGUAGE PLPGSQL SET search_path TO pg_catalog,pg_temp;

DROP FUNCTION IF EXISTS _timescaledb_functions.get_orderby_defaults(regclass,text[]);
DROP FUNCTION IF EXISTS _timescaledb_functions.get_segmentby_defaults(regclass);
DROP FUNCTION IF EXISTS _timescaledb_functions.ts_replace_now_mock();
Contributor

Not sure you need this in the upgrade/downgrade scripts since it will only be used in regression tests for debug builds.

Contributor Author

We don't need this function in latest/reverse-dev; removed it.

Comment on lines 50 to 51
CREATE OR REPLACE FUNCTION _timescaledb_functions.ts_replace_now_mock()
RETURNS TIMESTAMPTZ AS '@MODULE_PATHNAME@', 'ts_replace_now_mock' LANGUAGE C STABLE STRICT PARALLEL SAFE;
Contributor

I suggest moving this to debug_build_utils.sql.

Contributor Author

Done.

Previously, when a BETWEEN ... AND predicate was used together with
additional constraints in a WHERE clause, the BETWEEN expression was not
handled correctly because it was wrapped in a BoolExpr node, which
prevented plan-time exclusion.
The flattening of such expressions happens in `eval_const_expressions`,
which is called after our constify_now code.
This commit fixes the handling of this case to allow chunk exclusion to
take place at planning time.
It also makes sure we use our mock timestamp in all places in the tests.
Previously we were using a mix of current_timestamp_mock and now(),
which returned unexpected/incorrect results.
@konskov merged commit 929ad41 into timescale:main on Jan 24, 2024
44 checks passed
@konskov added this to the TimescaleDB 2.14 milestone on Jan 29, 2024
antekresic added a commit that referenced this pull request Feb 7, 2024
This release contains performance improvements, an improved hypertable DDL API
and bug fixes since the 2.13.1 release. We recommend that you upgrade at the next
available opportunity.

In addition, it includes these noteworthy features:

* Ability to change compression settings on existing compressed hypertables at any time
* Reduced locking requirements during chunk recompression
* Limiting tuple decompression during DML operations (100k limit, configurable)
* Helper functions for determining compression settings

**Removal notice: Multi-node support**
TimescaleDB 2.13 is the last version that includes multi-node support. Multi-node
support is effectively removed in version 2.14. Learn more about it
[here](docs/MultiNodeDeprecation.md).

**Deprecation notice: recompress_chunk procedure**
TimescaleDB 2.14 is the last version that will include the recompress_chunk procedure. Its
functionality will be replaced by the compress_chunk function, which works on both uncompressed
and partially compressed chunks and should be used to fully compress all chunks.

**Features**
* #6325 Add plan-time chunk exclusion for real-time CAggs
* #6360 Remove support for creating Continuous Aggregates with old format
* #6386 Add functions for determining compression defaults
* #6410 Remove multinode public API
* #6440 Allow SQLValueFunction pushdown into compressed scan
* #6463 Support approximate hypertable size
* #6513 Make compression settings per chunk
* #6529 Remove reindex_relation from recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6545 Remove restrictions for changing compression settings
* #6566 Limit tuple decompression during DML operations
* #6579 Change compress_chunk and decompress_chunk to idempotent version by default

**Bugfixes**
* #6541 Inefficient join plans on compressed hypertables.
* #6491 Enable now() plantime constification with BETWEEN
* #6494 Fix create_hypertable referenced by fk succeeds
* #6498 Suboptimal query plans when using time_bucket with query parameters
* #6507 time_bucket_gapfill with timezones doesn't handle daylight savings
* #6509 Make extension state available through function
* #6512 Log extension state changes
* #6522 Disallow triggers on CAggs
* #6523 Reduce locking level on compressed chunk index during segmentwise recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6571 Fix pathtarget adjustment for MergeAppend paths in aggregation pushdown code
* #6575 Fix compressed chunk not found during upserts
* #6592 Fix recompression policy ignoring partially compressed chunks
* #6610 Ensure qsort comparison function is transitive

**Thanks**
* @coney21 and @GStechschulte for reporting the problem with inefficient join plans on compressed hypertables.
* @HollowMan6 for reporting triggers not working on materialized views of
CAggs
* @jbx1 for reporting suboptimal query plans when using time_bucket with query parameters
* @JerkoNikolic for reporting the issue with gapfill and DST
* @pdipesh02 for working on removing the old Continuous Aggregate format
* @raymalt and @martinhale for reporting very slow query plans on realtime CAggs queries
@antekresic mentioned this pull request on Feb 7, 2024
antekresic added a commit that referenced this pull request Feb 8, 2024
This release contains performance improvements and bug fixes since
the 2.13.1 release. We recommend that you upgrade at the next
available opportunity.

In addition, it includes these noteworthy features:

* Ability to change compression settings on existing compressed hypertables at any time.
New compression settings take effect on any new chunks that are compressed after the change.
* Reduced locking requirements during chunk recompression
* Limiting tuple decompression during DML operations to avoid decompressing a lot of tuples and causing storage issues (100k limit, configurable)
* Helper functions for determining compression settings

**For this release only**, you will need to restart the database before running `ALTER EXTENSION`

**Multi-node support removal announcement**
Following the deprecation announcement for Multi-node in TimescaleDB 2.13,
Multi-node is no longer supported starting with TimescaleDB 2.14.

TimescaleDB 2.13 is the last version that includes multi-node support. Learn more about it [here](docs/MultiNodeDeprecation.md).

If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB, read the
[migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/).

**Deprecation notice: recompress_chunk procedure**
TimescaleDB 2.14 is the last version that will include the recompress_chunk procedure. Its
functionality will be replaced by the compress_chunk function, which, starting on TimescaleDB 2.14,
works on both uncompressed and partially compressed chunks.
The compress_chunk function should be used going forward to fully compress all types of chunks or even recompress
old fully compressed chunks using new compression settings (through the newly introduced recompress optional parameter).

**Features**
* #6325 Add plan-time chunk exclusion for real-time CAggs
* #6360 Remove support for creating Continuous Aggregates with old format
* #6386 Add functions for determining compression defaults
* #6410 Remove multinode public API
* #6440 Allow SQLValueFunction pushdown into compressed scan
* #6463 Support approximate hypertable size
* #6513 Make compression settings per chunk
* #6529 Remove reindex_relation from recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6545 Remove restrictions for changing compression settings
* #6566 Limit tuple decompression during DML operations
* #6579 Change compress_chunk and decompress_chunk to idempotent version by default
* #6608 Add LWLock for OSM usage in loader
* #6609 Deprecate recompress_chunk
* #6609 Add optional recompress argument to compress_chunk

**Bugfixes**
* #6541 Inefficient join plans on compressed hypertables.
* #6491 Enable now() plantime constification with BETWEEN
* #6494 Fix create_hypertable referenced by fk succeeds
* #6498 Suboptimal query plans when using time_bucket with query parameters
* #6507 time_bucket_gapfill with timezones doesn't handle daylight savings
* #6509 Make extension state available through function
* #6512 Log extension state changes
* #6522 Disallow triggers on CAggs
* #6523 Reduce locking level on compressed chunk index during segmentwise recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6571 Fix pathtarget adjustment for MergeAppend paths in aggregation pushdown code
* #6575 Fix compressed chunk not found during upserts
* #6592 Fix recompression policy ignoring partially compressed chunks
* #6610 Ensure qsort comparison function is transitive

**Thanks**
* @coney21 and @GStechschulte for reporting the problem with inefficient join plans on compressed hypertables.
* @HollowMan6 for reporting triggers not working on materialized views of
CAggs
* @jbx1 for reporting suboptimal query plans when using time_bucket with query parameters
* @JerkoNikolic for reporting the issue with gapfill and DST
* @pdipesh02 for working on removing the old Continuous Aggregate format
* @raymalt and @martinhale for reporting very slow query plans on realtime CAggs queries