Change compression settings to be per chunk #6513

Merged: 1 commit merged into timescale:main on Jan 17, 2024

Conversation

svenklemm (Member)

This patch implements changes to the compressed hypertable to allow per-chunk configuration. To enable this, the compressed hypertable can no longer be part of an inheritance tree, since the schema of each compressed chunk is now determined by its compression settings. While this patch implements all the underlying infrastructure changes, the restrictions on changing compression settings remain intact and will be lifted in a follow-up patch.
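
The end state this infrastructure enables looks roughly as follows (a sketch with a hypothetical metrics hypertable; changing settings on an already-compressed hypertable is only unlocked by the follow-up patch):

-- Compress chunks under the current settings.
ALTER TABLE metrics SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id',
    timescaledb.compress_orderby = 'time DESC'
);
SELECT compress_chunk(c) FROM show_chunks('metrics') c;

-- With per-chunk settings, a later change applies only to chunks compressed
-- from now on; already-compressed chunks keep the settings (and schema) they
-- were compressed with.
ALTER TABLE metrics SET (timescaledb.compress_segmentby = 'device_id, tenant_id');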

@mkindahl, @nikkhils: please review this pull request. (Powered by pull-review)

@antekresic (Contributor) left a comment:

Took a high-level pass; the main thing to look into is why the plans changed so much.

The plans use indexes more than before, which is definitely unexpected. This might uncover an oversight in the current planner implementation, or it could be due to the fact that we are now using chunk indexes instead of the index on the compressed hypertable.

Worth looking into.
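
One way to investigate (a sketch; the metrics table and the predicate are hypothetical) is to diff plan shapes between main and this branch:

-- Look for Index Scans on compressed chunks where the old plans used the
-- index on the compressed hypertable (or a Seq Scan).
EXPLAIN (COSTS OFF)
SELECT * FROM metrics
WHERE device_id = 42 AND time > now() - INTERVAL '1 day';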

Resolved review threads: tsl/test/expected/compression_ddl.out; tsl/test/expected/compression_hypertable.out (outdated)
@svenklemm force-pushed the per_chunk_settings branch 2 times, most recently from 446b598 to fe97f2a on January 12, 2024 at 12:50

codecov bot commented Jan 12, 2024

Codecov Report

Attention: 31 lines in your changes are missing coverage. Please review.

Comparison: base (55ae29c) 79.73% vs. head (a4d3a82) 79.66%.

Files                                        Patch %   Lines
tsl/src/compression/compression_storage.c   90.00%    3 missing and 12 partials ⚠️
tsl/src/compression/create.c                 80.59%    3 missing and 10 partials ⚠️
src/process_utility.c                        85.00%    0 missing and 3 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #6513      +/-   ##
==========================================
- Coverage   79.73%   79.66%   -0.08%     
==========================================
  Files         187      188       +1     
  Lines       36720    36735      +15     
  Branches     9296     9293       -3     
==========================================
- Hits        29280    29266      -14     
- Misses       3103     3128      +25     
- Partials     4337     4341       +4     


@svenklemm force-pushed the per_chunk_settings branch 2 times, most recently from b9d9daf to f5ccbc1 on January 13, 2024 at 15:04
@antekresic self-requested a review on January 15, 2024 at 11:39
@antekresic (Contributor) left a comment:

LGTM, just a couple of minor nits about the migration messaging.

RAISE USING
ERRCODE = 'feature_not_supported',
MESSAGE = 'Cannot downgrade compressed hypertables with no compressed chunks',
DETAIL = 'The following hypertable is affected: '|| ht_uncomp::text;
Comment (Contributor):

Can we suggest removing compression from the hypertable since there is nothing compressed? That way you don't need to downgrade it.

@svenklemm (Member, Author) replied:

done
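
For reference, one way the revised message might read (the wording below is an assumption, not necessarily the merged text):

RAISE USING
    ERRCODE = 'feature_not_supported',
    MESSAGE = 'cannot downgrade compressed hypertables with no compressed chunks',
    DETAIL = 'The following hypertable is affected: ' || ht_uncomp::text,
    HINT = 'Since the hypertable has no compressed chunks, disable compression instead, e.g. ALTER TABLE ... SET (timescaledb.compress = false).';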

WHERE relid = hypertable OR relid = ANY(chunk_relids)
) dist_settings HAVING count(*) > 1
) THEN
RAISE EXCEPTION 'Cannot downgrade hypertables with distinct compression settings';
Comment (Contributor):

This might need more detail on how to get into a state where the downgrade is possible, i.e. recompressing chunks so that they all share a single compression setting.
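
One way to get there (a sketch; metrics is a hypothetical hypertable, and on 2.13-era releases this rewrites every chunk):

-- Decompress and recompress all chunks so they end up with the hypertable's
-- current compression settings.
SELECT decompress_chunk(c, if_compressed => true) FROM show_chunks('metrics') c;
SELECT compress_chunk(c, if_not_compressed => true) FROM show_chunks('metrics') c;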

@svenklemm force-pushed the per_chunk_settings branch 7 times, most recently from 5abb7c8 to c3dadea on January 15, 2024 at 21:24
@erimatnor (Contributor) left a comment:

Mostly nits, but my main concern is the update/downgrade script and how it will behave when people run upgrades on live data with lots of compressed chunks. It seems that locks could be a problem.

BEGIN
SET timescaledb.restoring TO ON;

-- Detach compressed chunks from their parent hypertables
Comment (Contributor):

I am a bit worried that modifying data chunks during upgrades like this is going to be problematic. I don't know exactly which lock NO INHERIT takes, but it is probably a heavy one (maybe even AccessExclusive).

This could lead to major problems for customers that have lots of compressed chunks, especially when it happens during automatic upgrades.

At the very least, I think we should clearly understand the implications and behavior of doing this in the update script.

@svenklemm (Member, Author) replied:

Since NO INHERIT does not rewrite the table, it should finish instantly, but it does indeed require an AccessExclusive lock. However, AccessExclusive locks are required by most non-patch releases anyway, so I think this only becomes a problem when we try to do expensive data migrations, like rewriting tables, which is not the case here.
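
For context, the detach is a metadata-only catalog change along these lines (the chunk and hypertable names are illustrative of the internal naming scheme):

-- No table rewrite, but per the discussion above this briefly holds an
-- AccessExclusive lock on the chunk.
ALTER TABLE _timescaledb_internal.compress_hyper_2_4_chunk
    NO INHERIT _timescaledb_internal._compressed_hypertable_2;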

Comment on lines +881 to +884
row_size += 8; /* sequence_num */
row_size += 4; /* count */
row_size += 16; /* min/max */
Comment (Contributor):

We should have a canonical place (a header) where we define the metadata columns, including their types and sizes. If we hard-code this information in every place it is used, it is easy to make a mistake.
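
A minimal sketch of such a header (hypothetical names; an X-macro keeps the column list and on-disk sizes in one place):

/* compression_metadata.h: hypothetical single source of truth for the
 * per-row metadata columns of compressed chunks (names are illustrative). */
#define COMPRESSION_METADATA_COLUMNS            \
	X(SEQUENCE_NUM, "_ts_meta_sequence_num", 8) \
	X(COUNT,        "_ts_meta_count",        4) \
	X(MIN_MAX,      "_ts_meta_min_max",     16)

/* One consumer: derive the metadata contribution to row_size from the same
 * list instead of hard-coding 8 + 4 + 16 at each call site. */
static inline int
compression_metadata_row_size(void)
{
	int size = 0;
#define X(tag, name, bytes) size += (bytes);
	COMPRESSION_METADATA_COLUMNS
#undef X
	return size;
}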

errmsg("compression cannot be used on table with row security")));

Relation rel = table_open(ht->main_table_relid, AccessShareLock);
TupleDesc tupdesc = rel->rd_att;
Comment (Contributor):

Suggested change:
-    TupleDesc tupdesc = rel->rd_att;
+    TupleDesc tupdesc = RelationGetDescr(rel);
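
(For context, RelationGetDescr is the standard accessor macro from PostgreSQL's utils/rel.h; it expands to the same field access but keeps call sites independent of the struct layout:)

#define RelationGetDescr(relation) ((relation)->rd_att)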

Resolved review threads: tsl/src/compression/create.c (×2)
static List *get_fk_constraints(Oid reloid);

int32
compression_hypertable_create(Hypertable *ht, Oid owner, Oid tablespace_oid)
Comment (Contributor):

Is this mostly code that has been moved?

Are there any changes here to be on the lookout for?

Perhaps it would be better to do the refactoring separately...

@svenklemm (Member, Author) replied:

Most of this code has been moved. Most of the work that used to be done on the hypertable is now done per chunk, and we no longer rely on inheritance to propagate columns and their attributes; instead, we manually iterate over the compressed chunks for these operations.
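
The resulting pattern looks roughly like this (a sketch; the input list and the per-chunk operation are hypothetical stand-ins, not actual TimescaleDB functions):

#include "postgres.h"
#include "access/table.h"
#include "nodes/pg_list.h"

/* Hypothetical sketch: apply a DDL-style operation to each compressed chunk
 * directly instead of relying on inheritance to propagate it from the
 * compressed hypertable. */
static void
apply_to_compressed_chunks(List *compressed_chunk_relids)
{
	ListCell *lc;

	foreach (lc, compressed_chunk_relids)
	{
		Oid chunk_relid = lfirst_oid(lc);
		Relation chunk = table_open(chunk_relid, AccessExclusiveLock);

		/* ... per-chunk work, e.g. add a column whose definition is
		 * derived from the chunk's compression settings ... */

		table_close(chunk, NoLock); /* hold the lock until commit */
	}
}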


/* skip system columns */
if (col_attr->attnum <= 0)
continue;
Comment (Member):

Can we have system columns in a physical tuple of a relation? I think not; they can only appear in scan tuples.

@svenklemm (Member, Author) replied:

Hmm, you might be right that those are already skipped, but pg_attribute does include the system columns, and we are dealing with Form_pg_attribute here.
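
For reference, the defensive idiom in question when walking a tuple descriptor (standard PostgreSQL pattern; the loop body is a placeholder):

#include "postgres.h"
#include "access/tupdesc.h"

static void
walk_user_columns(TupleDesc tupdesc)
{
	for (int i = 0; i < tupdesc->natts; i++)
	{
		Form_pg_attribute attr = TupleDescAttr(tupdesc, i);

		/* TupleDesc holds only user columns (attnum > 0), but dropped
		 * columns remain as placeholders and must be skipped. */
		if (attr->attnum <= 0 || attr->attisdropped)
			continue;

		/* ... process attr ... */
	}
}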

@svenklemm force-pushed the per_chunk_settings branch 3 times, most recently from 88c77ec to 0d3bd47 on January 17, 2024 at 11:30
@svenklemm force-pushed the per_chunk_settings branch 2 times, most recently from 3427463 to a4d3a82 on January 17, 2024 at 11:37
@svenklemm enabled auto-merge (rebase) on January 17, 2024 at 11:52
@svenklemm merged commit f57d584 into timescale:main on Jan 17, 2024
43 checks passed
konskov added a commit to konskov/timescaledb that referenced this pull request Jan 23, 2024
Commit timescale#6513 removed some restrictions on chunk operations, which
made it possible to add constraints directly to OSM chunks. This operation
should be blocked on OSM chunks, so the present commit ensures that adding
a constraint directly on an OSM chunk is blocked.
@konskov konskov mentioned this pull request Jan 23, 2024
antekresic added a commit that referenced this pull request Feb 7, 2024
@antekresic antekresic mentioned this pull request Feb 7, 2024
antekresic added a commit that referenced this pull request Feb 8, 2024
This release contains performance improvements and bug fixes since
the 2.13.1 release. We recommend that you upgrade at the next
available opportunity.

In addition, it includes these noteworthy features:

* Ability to change compression settings on existing compressed hypertables at any time.
New compression settings take effect on any new chunks that are compressed after the change.
* Reduced locking requirements during chunk recompression
* Limiting tuple decompression during DML operations to avoid decompressing a lot of tuples and causing storage issues (100k limit, configurable)
* Helper functions for determining compression settings

**For this release only**, you will need to restart the database before running `ALTER EXTENSION`.

**Multi-node support removal announcement**
Following the deprecation announcement for Multi-node in TimescaleDB 2.13,
Multi-node is no longer supported starting with TimescaleDB 2.14.

TimescaleDB 2.13 is the last version that includes multi-node support. Learn more about it [here](docs/MultiNodeDeprecation.md).

If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB, read the
[migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/).

**Deprecation notice: recompress_chunk procedure**
TimescaleDB 2.14 is the last version that will include the recompress_chunk procedure. Its
functionality will be replaced by the compress_chunk function, which, starting with TimescaleDB 2.14,
works on both uncompressed and partially compressed chunks.
The compress_chunk function should be used going forward to fully compress all types of chunks or even recompress
old fully compressed chunks using new compression settings (through the newly introduced recompress optional parameter).

**Features**
* #6325 Add plan-time chunk exclusion for real-time CAggs
* #6360 Remove support for creating Continuous Aggregates with old format
* #6386 Add functions for determining compression defaults
* #6410 Remove multinode public API
* #6440 Allow SQLValueFunction pushdown into compressed scan
* #6463 Support approximate hypertable size
* #6513 Make compression settings per chunk
* #6529 Remove reindex_relation from recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6545 Remove restrictions for changing compression settings
* #6566 Limit tuple decompression during DML operations
* #6579 Change compress_chunk and decompress_chunk to idempotent version by default
* #6608 Add LWLock for OSM usage in loader
* #6609 Deprecate recompress_chunk
* #6609 Add optional recompress argument to compress_chunk

**Bugfixes**
* #6541 Inefficient join plans on compressed hypertables
* #6491 Enable now() plantime constification with BETWEEN
* #6494 Fix create_hypertable referenced by fk succeeds
* #6498 Suboptimal query plans when using time_bucket with query parameters
* #6507 time_bucket_gapfill with timezones doesn't handle daylight savings
* #6509 Make extension state available through function
* #6512 Log extension state changes
* #6522 Disallow triggers on CAggs
* #6523 Reduce locking level on compressed chunk index during segmentwise recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6571 Fix pathtarget adjustment for MergeAppend paths in aggregation pushdown code
* #6575 Fix compressed chunk not found during upserts
* #6592 Fix recompression policy ignoring partially compressed chunks
* #6610 Ensure qsort comparison function is transitive

**Thanks**
* @coney21 and @GStechschulte for reporting the problem with inefficient join plans on compressed hypertables.
* @HollowMan6 for reporting triggers not working on materialized views of CAggs
* @jbx1 for reporting suboptimal query plans when using time_bucket with query parameters
* @JerkoNikolic for reporting the issue with gapfill and DST
* @pdipesh02 for working on removing the old Continuous Aggregate format
* @raymalt and @martinhale for reporting very slow query plans on realtime CAggs queries
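
To illustrate the workflow these notes describe (a sketch with a hypothetical metrics hypertable; recompress is the optional compress_chunk parameter introduced in #6609):

-- Change the compression settings, then rewrite old fully compressed chunks
-- under the new settings.
ALTER TABLE metrics SET (timescaledb.compress_segmentby = 'device_id, tenant_id');
SELECT compress_chunk(c, recompress => true) FROM show_chunks('metrics') c;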