
Allow pushdown of reference table joins #5212

Merged

Conversation

jnidzwetzki (Member)

This patch adds the functionality needed to perform distributed, parallel joins with reference tables on access nodes. The code allows the pushdown of a join if:

  • (1) The setting "ts_guc_enable_per_data_node_queries" is enabled
  • (2) The outer relation is a distributed hypertable
  • (3) The inner relation is marked as a reference table
  • (4) The join is a left join or an inner join
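
The four conditions above amount to a single eligibility check. As a rough sketch (the types, field names, and the GUC variable below are illustrative stand-ins, not the actual TimescaleDB definitions):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the planner-side relation metadata. */
typedef enum { JOIN_INNER, JOIN_LEFT, JOIN_RIGHT, JOIN_FULL } JoinType;

typedef struct RelInfo
{
	bool is_distributed_hypertable;
	bool is_reference_table;
} RelInfo;

/* Assumed default for the GUC; the real setting is read from PostgreSQL. */
static bool ts_guc_enable_per_data_node_queries = true;

static bool
join_pushdown_allowed(const RelInfo *outer, const RelInfo *inner, JoinType jointype)
{
	/* (1) per-data-node queries must be enabled */
	if (!ts_guc_enable_per_data_node_queries)
		return false;
	/* (2) the outer relation must be a distributed hypertable */
	if (!outer->is_distributed_hypertable)
		return false;
	/* (3) the inner relation must be a reference table */
	if (!inner->is_reference_table)
		return false;
	/* (4) only LEFT and INNER joins are supported */
	return jointype == JOIN_LEFT || jointype == JOIN_INNER;
}
```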

codecov bot commented Jan 24, 2023

Codecov Report

Merging #5212 (788d671) into main (f12a361) will increase coverage by 0.17%.
The diff coverage is 97.17%.

@@            Coverage Diff             @@
##             main    #5212      +/-   ##
==========================================
+ Coverage   90.71%   90.89%   +0.17%     
==========================================
  Files         225      225              
  Lines       52056    52321     +265     
==========================================
+ Hits        47225    47557     +332     
+ Misses       4831     4764      -67     
Impacted Files Coverage Δ
src/cross_module_fn.c 66.84% <0.00%> (-0.72%) ⬇️
tsl/src/init.c 96.00% <ø> (ø)
src/planner/planner.c 95.82% <90.90%> (-0.09%) ⬇️
tsl/src/fdw/data_node_scan_plan.c 98.04% <97.95%> (-0.06%) ⬇️
tsl/src/fdw/fdw.c 94.32% <100.00%> (+0.12%) ⬆️
tsl/src/fdw/relinfo.c 97.51% <100.00%> (+1.63%) ⬆️
tsl/src/fdw/shippable.c 95.65% <100.00%> (-2.08%) ⬇️
src/bgw/scheduler.c 84.39% <0.00%> (-4.10%) ⬇️
src/loader/bgw_launcher.c 89.51% <0.00%> (-2.55%) ⬇️
... and 9 more


@jnidzwetzki jnidzwetzki force-pushed the distributed_join_dict_squashed branch 7 times, most recently from a437f70 to 07098d1 Compare January 24, 2023 15:31
@jnidzwetzki jnidzwetzki force-pushed the distributed_join_dict_squashed branch 3 times, most recently from fea9905 to 4a68eb6 Compare January 27, 2023 13:32
@jnidzwetzki jnidzwetzki marked this pull request as ready for review January 27, 2023 14:04
* This code does not work for joins with lateral references, since those
* must have parameterized paths, which we don't generate yet.
*/
if (!bms_is_empty(joinrel->lateral_relids))
Contributor:

Can we add a test case with join lateral?

DataNodeScanPath *scanpath = palloc0(sizeof(DataNodeScanPath));

if (rel->lateral_relids && !bms_is_subset(rel->lateral_relids, required_outer))
required_outer = bms_union(required_outer, rel->lateral_relids);
Contributor:

Add test case?

* revisit this.
*/
if (!bms_is_empty(required_outer) || !bms_is_empty(rel->lateral_relids))
elog(ERROR, "parameterized foreign joins are not supported yet");
Contributor:

Add test case?

Member Author:

These parameters are replaced before the data_node_scan_plan is generated in our implementation; this check is from PG upstream. However, I added a test case for a prepared statement to test that the join pushdown also works in this case.

We have a similar check in place for regular scans. I wanted to check whether the existing tests cover this line, but unfortunately Codecov has an outage at the moment and I can't access the reports.

Member Author:

@erimatnor Codecov works now and I checked the coverage of the closely related line in the existing code. The corresponding line in the existing data_node_scan_path_create function is likewise copied from PG upstream and cannot be reached directly in our implementation, so it is hard to create a test case for the new, similar line.

https://app.codecov.io/gh/timescale/timescaledb/commit/0562c6d89aa757e7aa7f3acfaea3f3d386fe9f6c/blob/tsl/src/fdw/data_node_scan_plan.c#L1635

@jnidzwetzki jnidzwetzki force-pushed the distributed_join_dict_squashed branch 12 times, most recently from e5c16e0 to 0562c6d Compare February 3, 2023 16:11
#endif
join_partition->reltarget->exprs =
castNode(List,
adjust_appendrel_attrs(root, (Node *) joinrel->reltarget->exprs, 1, &appinfo));
Member:

Do we have to do this? The data node joinrel and the unpartitioned joinrel have the same relid.

Member Author:

Yes, this is needed. The join is performed partition-wise. Each of the partitions belongs to one data node and we are adjusting the expressions here using the appinfo of the data node.

Member:

Let's call it "data-node-wise" or something, because "partition" is ambiguous between chunks and data nodes. So which kind of adjustment exactly do we do? I see that adjust_appendrel_attrs can change the varno or convert the row type. For data node joinrels, the varno and row type are the same, because we just copy them from the base joinrel. Does this function do anything else?

Member Author:

The function is renamed as suggested.

With adjust_appendrel_attrs, the relid in the expressions is translated from the hypertable relid to the relid of the data node that is responsible for this partition. Otherwise, the deparser could not push down these expressions properly.

Member:

With adjust_appendrel_attrs, the relid in the expressions is translated from the hypertable relid to the relid of the data node that is responsible for this partition.

Yes, but aren't they exactly equal? As set here: https://github.com/timescale/timescaledb/pull/5212/files#diff-47f57773f9558fb5ef0345e680eb19c0d1c9378c1701011b7a930a96e8a0813cR1135

This is relid in the sense of varno and RelOptInfo, not the oid from RTE.

Member Author:

Yes, these values are equal. However, in this function we have to adjust the expressions for the join partition and make sure that the expressions (which are generated for the unpartitioned join) belong to this partition. We do something similar in adjust_data_node_rel_attrs for the data nodes.

Member:

OK, I still don't understand what exactly it adjusts, given that the varno and row type are exactly the same, but I'll have to trust you on this :)

Member Author:

Sorry for the imprecise answer. The varno is adjusted at this point.

For example, here you see the exprs before and after the adjustment. The varno of the expressions is translated from the input table position to the proper position of this specific DataNode entry in the simple_rel_array (the data node is at position 8 in this example). This is needed for the deparser to generate the SQL for the partition properly.

Original List

(
   {VAR 
   :varno 1 
   :varattno 2 
   :vartype 23 
   :vartypmod -1 
   :varcollid 0 
   :varlevelsup 0 
   :varnosyn 1 
   :varattnosyn 2 
   :location -1
   }
   [...]
)

Adjusted List

(
   {VAR 
   :varno 8 
   :varattno 2 
   :vartype 23 
   :vartypmod -1 
   :varcollid 0 
   :varlevelsup 0 
   :varnosyn 8 
   :varattnosyn 2 
   :location -1
   }
   [...]
)
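
The before/after lists above can be reproduced with a minimal sketch of the remapping; the Var struct here is a stripped-down stand-in for the PostgreSQL node, and the helper names are made up:

```c
#include <assert.h>

/* Stand-in for the PostgreSQL Var node, reduced to the fields that matter
 * for the adjustment shown above. */
typedef struct Var
{
	int varno;    /* range-table index this Var points at */
	int varattno; /* column number; unchanged by the adjustment */
} Var;

/* Remap Vars from the unpartitioned join input (varno 1 above) to the data
 * node's position in simple_rel_array (varno 8 in the example). */
static void
translate_varnos(Var *vars, int nvars, int from_varno, int to_varno)
{
	for (int i = 0; i < nvars; i++)
		if (vars[i].varno == from_varno)
			vars[i].varno = to_varno; /* e.g. 1 -> 8 */
}

/* Reproduces the original/adjusted lists from the comment above. */
static int
adjusted_varno_demo(void)
{
	Var exprs[1] = { { 1, 2 } }; /* original list: varno 1, varattno 2 */
	translate_varnos(exprs, 1, 1, 8);
	return exprs[0].varno;       /* adjusted list: varno 8 */
}
```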

Comment on lines 1261 to 1269
fpinfo =
fdw_relinfo_create(root, joinrel, InvalidOid, InvalidOid, TS_FDW_RELINFO_UNINITIALIZED);
Assert(fpinfo->type == TS_FDW_RELINFO_UNINITIALIZED);
Member:

What type is it going to have if we successfully push down the join? I think "uninitialized" might be confusing here; maybe it makes sense to initialize it with the normal type but set pushdown_safe = false. Maybe even without going through fdw_relinfo_create, since that function is very big and confusing.

Member Author:

The PostgreSQL logic is to add a dummy value to joinrel->fpinfo to indicate that we have already processed this joinrel (see this and this).

I introduced the TS_FDW_RELINFO_JOIN type to distinguish this fpinfo type from an actual TS_FDW_RELINFO_UNINITIALIZED fpinfo.
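
The marker pattern described in this thread can be sketched roughly as follows; all type, field, and function names here are illustrative stand-ins rather than the actual TimescaleDB definitions:

```c
#include <assert.h>
#include <stddef.h>

/* As soon as a joinrel has been considered, a dedicated fpinfo state is
 * attached to it so later planner callbacks can bail out early. */
typedef enum
{
	TS_FDW_RELINFO_UNINITIALIZED,
	TS_FDW_RELINFO_JOIN /* joinrel already processed by the pushdown code */
} TsFdwRelInfoType;

typedef struct TsFdwRelInfo
{
	TsFdwRelInfoType type;
} TsFdwRelInfo;

typedef struct JoinRel
{
	TsFdwRelInfo *fpinfo; /* NULL until the joinrel has been visited */
} JoinRel;

static int analyses_run = 0;
static TsFdwRelInfo join_marker = { TS_FDW_RELINFO_JOIN };

static void
get_foreign_join_paths(JoinRel *joinrel)
{
	if (joinrel->fpinfo != NULL)
		return; /* already visited, nothing to do */

	joinrel->fpinfo = &join_marker;
	analyses_run++; /* the expensive pushdown analysis would happen here */
}

/* Calling twice on the same joinrel runs the analysis only once. */
static int
visit_twice_demo(void)
{
	JoinRel rel = { NULL };
	get_foreign_join_paths(&rel);
	get_foreign_join_paths(&rel);
	return analyses_run;
}
```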

@jnidzwetzki jnidzwetzki force-pushed the distributed_join_dict_squashed branch 5 times, most recently from 60ec326 to d34c8a5 Compare February 15, 2023 12:24
@akuzm (Member) left a comment:

Thanks for all the fixes, I think it's almost ideal now :)

@erimatnor (Contributor) left a comment:

Approving, although I have some inline comments.

@@ -555,6 +555,7 @@ TSDLLEXPORT CrossModuleFunctions ts_cm_functions_default = {
.hypertable_distributed_set_replication_factor = error_no_default_fn_pg_community,
.update_compressed_chunk_relstats = update_compressed_chunk_relstats_default,
.health_check = error_no_default_fn_pg_community,
.mn_get_foreign_join_paths = NULL,
Contributor:

Following the pattern here, you might want to add a dummy function instead (I guess one that does nothing by default). Then you can avoid a NULL check when using the function.

Member Author:

@erimatnor Good point. I introduced a dummy function.
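
The suggested pattern, a do-nothing default instead of NULL in the cross-module function table, might look roughly like this; the names are illustrative stand-ins for the TimescaleDB symbols:

```c
#include <assert.h>

/* Minimal stand-in for the planner state passed through the callback. */
typedef struct PlannerInfoStub { int unused; } PlannerInfoStub;
typedef void (*get_foreign_join_paths_fn)(PlannerInfoStub *root);

static int tsl_paths_added = 0;

static void
mn_get_foreign_join_paths_default(PlannerInfoStub *root)
{
	(void) root; /* community build: join pushdown unavailable, do nothing */
}

static void
mn_get_foreign_join_paths_tsl(PlannerInfoStub *root)
{
	(void) root;
	tsl_paths_added++; /* TSL build: would add pushed-down join paths here */
}

typedef struct CrossModuleFns
{
	get_foreign_join_paths_fn mn_get_foreign_join_paths;
} CrossModuleFns;

/* Defaults to the no-op, so call sites never need a NULL check. */
static CrossModuleFns cm_functions = { mn_get_foreign_join_paths_default };

static int
plan_join_once(void)
{
	PlannerInfoStub root = { 0 };
	cm_functions.mn_get_foreign_join_paths(&root);
	return tsl_paths_added;
}
```

The call site stays branch-free in both builds; loading the TSL module only swaps the function pointer.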

break;

case JOIN_RIGHT:
#if PG14_GE
Contributor:

Can we make dead code explicit by wrapping it in #if ENABLE_DEAD_CODE, marking it with pg_unreachable(), or similar? That way the code is still there, code coverage won't complain, and it is obvious to the reader that the code isn't executed.

@jnidzwetzki jnidzwetzki force-pushed the distributed_join_dict_squashed branch 4 times, most recently from 5975570 to 2a4f0ec Compare February 23, 2023 10:33
jnidzwetzki (Member Author):
@erimatnor I like the idea with the #ifdef ENABLE_DEAD_CODE statements. I added them to the larger blocks of currently unused code.
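
A minimal sketch of the ENABLE_DEAD_CODE convention, with made-up join-type codes: the macro is never defined, so the fenced block stays visible to readers without showing up as uncovered lines.

```c
#include <assert.h>

#define MY_JOIN_INNER 0
#define MY_JOIN_LEFT 1
#define MY_JOIN_RIGHT 2

static int
join_pushdown_considered(int jointype)
{
	switch (jointype)
	{
		case MY_JOIN_INNER:
		case MY_JOIN_LEFT:
			return 1; /* pushdown is considered */
		case MY_JOIN_RIGHT:
#ifdef ENABLE_DEAD_CODE
			/* RIGHT JOIN support: kept for the future, never compiled in today */
			return 1;
#endif
			break;
	}
	return 0; /* not pushed down */
}
```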

@jnidzwetzki jnidzwetzki merged commit e0be9ea into timescale:main Feb 23, 2023
kgyrtkirk added a commit to kgyrtkirk/timescaledb that referenced this pull request May 12, 2023
This release includes these noteworthy features:
* compressed hypertable enhancements:
  * UPDATE/DELETE support
  * ON CONFLICT DO UPDATE
* Join support for hierarchical Continuous Aggregates
* performance improvements

**Features**
* timescale#5212 Allow pushdown of reference table joins
* timescale#5221 Improve Realtime Continuous Aggregate performance
* timescale#5252 Improve unique constraint support on compressed hypertables
* timescale#5339 Support UPDATE/DELETE on compressed hypertables
* timescale#5344 Enable JOINS for Hierarchical Continuous Aggregates
* timescale#5361 Add parallel support for partialize_agg()
* timescale#5417 Refactor and optimize distributed COPY
* timescale#5454 Add support for ON CONFLICT DO UPDATE for compressed hypertables
* timescale#5547 Skip Ordered Append when only 1 child node is present
* timescale#5510 Propagate vacuum/analyze to compressed chunks
* timescale#5584 Reduce decompression during constraint checking
* timescale#5530 Optimize compressed chunk resorting

**Bugfixes**
* timescale#5396 Fix SEGMENTBY columns predicates to be pushed down
* timescale#5427 Handle user-defined FDW options properly
* timescale#5442 Decompression may have lost DEFAULT values
* timescale#5459 Fix issue creating dimensional constraints
* timescale#5570 Improve interpolate error message on datatype mismatch
* timescale#5573 Fix unique constraint on compressed tables
* timescale#5615 Add permission checks to run_job()
* timescale#5614 Enable run_job() for telemetry job
* timescale#5578 Fix on-insert decompression after schema changes
* timescale#5613 Quote username identifier appropriately
* timescale#5525 Fix tablespace for compressed hypertable and corresponding toast
* timescale#5642 Fix ALTER TABLE SET with normal tables
* timescale#5666 Reduce memory usage for distributed analyze
* timescale#5668 Fix subtransaction resource owner

**Thanks**
* @kovetskiy and @DZDomi for reporting performance regression in Realtime Continuous Aggregates
* @ollz272 for reporting an issue with interpolate error messages
kgyrtkirk added a commit to kgyrtkirk/timescaledb that referenced this pull request May 17, 2023
@kgyrtkirk kgyrtkirk mentioned this pull request May 17, 2023
kgyrtkirk added a commit to kgyrtkirk/timescaledb that referenced this pull request May 19, 2023
kgyrtkirk added a commit that referenced this pull request May 19, 2023
kgyrtkirk added a commit to kgyrtkirk/timescaledb that referenced this pull request May 19, 2023
kgyrtkirk added a commit to kgyrtkirk/timescaledb that referenced this pull request May 19, 2023