Fix formatting.
Fix code formatting.

Set vartype=ANYENUMOID

The previous patch caused a regression, so this patch takes a different approach: check for an enum type and set vartype to ANYENUMOID.

Report an error for an invalid column OID on a segmentby column.

GROUP BY/ORDER BY on enum-type columns of a compressed table reported an error.

Updated CHANGELOG.md

Added a thanks note to the reporter.

Address review comments from San and Fabrizio.

Undo changes in tsl/src/fdw/scan_plan.c

Address review comment from Sven.

Instead of updating vartype, we first check whether the operator lookup succeeds with the explicit type. If not, we fall back to the more generic type (ANYENUMOID).

Also report an error if the type is not an enum after the fallback.

Fix formatting.

Removed unwanted file named fix
sb230132 committed Aug 17, 2022
1 parent 876a210 commit ba9950c
Showing 5 changed files with 192 additions and 76 deletions.
148 changes: 76 additions & 72 deletions CHANGELOG.md
@@ -343,22 +343,22 @@ The experimental features in this release are:
* APIs for chunk manipulation across data nodes in a distributed
hypertable setup. This includes the ability to add a data node and move
chunks to the new data node for cluster rebalancing.
* The `time_bucket_ng` function, a newer version of `time_bucket`. This
function supports years, months, days, hours, minutes, and seconds.

We’re committed to developing these experiments, giving the community
a chance to provide early feedback and influence the direction of
TimescaleDB’s development. We’ll travel faster with your input!

Please create your feedback as a GitHub issue (using the
experimental-schema label), describe what you found, and tell us the
steps or share the code snippet to recreate it.

This release also includes several bug fixes.

PostgreSQL 11 deprecation announcement
Timescale is working hard on our next exciting features. To make that
possible, we require functionality that is available in Postgres 12 and
above. Postgres 11 is not supported with TimescaleDB 2.4.

**Experimental Features**
@@ -420,7 +420,7 @@ This release adds major new features since the 2.2.1 release.
We deem it moderate priority for upgrading.

This release adds support for inserting data into compressed chunks
and improves performance when inserting data into distributed hypertables.
Distributed hypertables now also support triggers and compression policies.

The bug fixes in this release address issues related to the handling
@@ -476,7 +476,7 @@ of indexes, in compression, and in policies.
* #3151 Fix `fdw_relinfo_get` assertion failure on `DELETE`
* #3155 Inherit `CFLAGS` from PostgreSQL
* #3169 Fix incorrect type cast in compression policy
* #3183 Fix segfault in calculate_chunk_interval
* #3185 Fix wrong datatype for integer based retention policy

**Thanks**
@@ -491,27 +491,27 @@ of indexes, in compression, and in policies.
This release adds major new features since the 2.1.1 release.
We deem it moderate priority for upgrading.

This release adds the Skip Scan optimization, which significantly
improves the performance of queries with DISTINCT ON. This
optimization is not yet available for queries on distributed
hypertables.

This release also adds a function to create a distributed
restore point, which allows performing a consistent restore of a
multi-node cluster from a backup.

The bug fixes in this release address issues with size and stats
functions, high memory usage in distributed inserts, slow distributed
ORDER BY queries, indexes involving INCLUDE, and single chunk query
planning.

**PostgreSQL 11 deprecation announcement**

Timescale is working hard on our next exciting features. To make that
possible, we require functionality that is unfortunately absent on
PostgreSQL 11. For this reason, we will continue supporting PostgreSQL
11 until mid-June 2021. Closer to that time, we will announce the
specific version of TimescaleDB in which PostgreSQL 11 support will
not be included going forward.

**Major Features**
@@ -540,10 +540,10 @@ not be included going forward.
This maintenance release contains bugfixes since the 2.1.0 release. We
deem it high priority for upgrading.

The bug fixes in this release address issues with CREATE INDEX and
UPSERT for hypertables, custom jobs, and gapfill queries.

This release marks TimescaleDB as a trusted extension in PG13, so that
superuser privileges are not required anymore to install the extension.

**Minor features**
@@ -628,7 +628,7 @@ and when upgrading from previous versions.
**Bugfixes**
* #2502 Replace check function when updating
* #2558 Repair dimension slice table on update
* #2619 Fix segfault in decompress_chunk for chunks with dropped
columns
* #2664 Fix support for complex aggregate expression
* #2800 Lock dimension slices when creating new chunk
@@ -689,46 +689,46 @@ and when upgrading from previous versions.

## 2.0.0 (2020-12-18)

With this release, we are officially moving TimescaleDB 2.0 to GA,
concluding several release candidates.

TimescaleDB 2.0 adds the much-anticipated support for distributed
hypertables (multi-node TimescaleDB), as well as new features and
enhancements to core functionality to give users better clarity and
more control and flexibility over their data.

Multi-node architecture: In particular, with TimescaleDB 2.0, users
can now create distributed hypertables across multiple instances of
TimescaleDB, configured so that one instance serves as an access node
and multiple others as data nodes. All queries for a distributed
hypertable are issued to the access node, but inserted data and queries
are pushed down across data nodes for greater scale and performance.

Multi-node TimescaleDB can be self managed or, for easier operation,
launched within Timescale's fully-managed cloud services.

This release also adds:

* Support for user-defined actions, allowing users to define,
customize, and schedule automated tasks, which can be run by the
built-in jobs scheduling framework now exposed to users.
* Significant changes to continuous aggregates, which now separate the
view creation from the policy. Users can now refresh individual
regions of the continuous aggregate materialized view, or schedule
automated refreshing via policy.
* Redesigned informational views, including new (and more general)
views for information about hypertable's dimensions and chunks,
policies and user-defined actions, as well as support for multi-node
TimescaleDB.
* Moving all formerly enterprise features into our Community Edition,
and updating Timescale License, which now provides additional (more
permissive) rights to users and developers.

Some of the changes above (e.g., continuous aggregates, updated
informational views) do introduce breaking changes to APIs and are not
backwards compatible. While the update scripts in TimescaleDB 2.0 will
upgrade databases running TimescaleDB 1.x automatically, some of these
API and feature changes may require changes to clients and/or upstream
scripts that rely on the previous APIs. Before upgrading, we recommend
reviewing upgrade documentation at docs.timescale.com for more details.

@@ -751,7 +751,7 @@ TimescaleDB 2.0 moves the following major features to GA:

**Minor Features**

Since the last release candidate 4, there are several minor
improvements:
* #2746 Optimize locking for create chunk API
* #2705 Block tableoid access on distributed hypertable
@@ -761,7 +761,7 @@ improvements:
**Bugfixes**

Since the last release candidate 4, there are several bugfixes:
* #2719 Support disabling compression on distributed hypertables
* #2742 Fix compression status in chunks view for distributed chunks
* #2751 Fix crash and cancel when adding data node
* #2763 Fix check constraint on hypertable metadata table
@@ -772,38 +772,38 @@ Thanks to all contributors for the TimescaleDB 2.0 release:
* @airton-neto for reporting a bug in executing some queries with UNION
* @nshah14285 for reporting an issue with propagating privileges
* @kalman5 for reporting an issue with renaming constraints
* @LbaNeXte for reporting a bug in decompression for queries with
subqueries
* @semtexzv for reporting an issue with continuous aggregates on
int-based hypertables
* @mr-ns for reporting an issue with privileges for creating chunks
* @cloud-rocket for reporting an issue with setting an owner on
continuous aggregate
* @jocrau for reporting a bug during creating an index with transaction
per chunk
* @fvannee for reporting an issue with custom time types
* @ArtificialPB for reporting a bug in executing queries with
conditional ordering on compressed hypertable
* @dutchgecko for reporting an issue with continuous aggregate datatype
handling
* @lambdaq for suggesting to improve error message in continuous
aggregate creation
* @francesco11112 for reporting memory issue on COPY
* @Netskeh for reporting bug on time_bucket problem in continuous
aggregates
* @mr-ns for reporting the issue with CTEs on distributed hypertables
* @akamensky for reporting an issue with recursive cache invalidation
* @ryanbooz for reporting slow queries with real-time aggregation on
continuous aggregates
* @cevian for reporting an issue with disabling compression on
distributed hypertables

## 2.0.0-rc4 (2020-12-02)

This release candidate contains bugfixes since the previous release
candidate, as well as additional minor features. It improves
validation of configuration changes for background jobs, adds support
for gapfill on distributed tables, contains improvements to the memory
handling for large COPY, and contains improvements to compression for
distributed hypertables.

@@ -844,16 +844,16 @@ chunks.
**Bugfixes**
* #2560 Fix SCHEMA DROP CASCADE with continuous aggregates
* #2593 Set explicitly all lock parameters in alter_job
* #2604 Fix chunk creation on hypertables with foreign key constraints
* #2610 Support analyze of internal compression table
* #2612 Optimize internal cagg_watermark function
* #2613 Refresh correct partial during refresh on drop
* #2617 Fix validation of available extensions on data node
* #2619 Fix segfault in decompress_chunk for chunks with dropped columns
* #2620 Fix DROP CASCADE for continuous aggregate
* #2625 Fix subquery errors when using AsyncAppend
* #2626 Fix incorrect total_table_pages setting for compressed scan
* #2628 Stop recursion in cache invalidation

**Thanks**
* @mr-ns for reporting the issue with CTEs on distributed hypertables
@@ -948,17 +948,17 @@ _before_ upgrading.
**For beta releases**, upgrading from an earlier version of the
extension (including previous beta releases) is not supported.

This beta release includes breaking changes to APIs. The most
notable changes since the beta-5 release are the following, which will
be reflected in forthcoming documentation for the 2.0 release.

* Existing information views were reorganized. Retrieving information
about sizes and statistics was moved to functions. New views were added
to expose information, which was previously available only internally.
* New ability to create custom jobs was added.
* Continuous aggregate API was redesigned. Its policy creation is separated
from the view creation.
* compress_chunk_policy and drop_chunk_policy were renamed to compression_policy and
retention_policy.

## 1.7.4 (2020-09-07)
@@ -2337,6 +2337,7 @@ complete, depending on the size of your database**
**Thanks**
* @yadid for reporting a segfault (fixed in 50c8c4c)
* @ryan-shaw for reporting tuples not being correctly converted to a chunk's rowtype (fixed in 645b530)
* @yuezhihan for reporting GROUP BY error when setting compress_segmentby with an enum column

## 0.4.0 (2017-08-21)

@@ -2495,3 +2496,6 @@ the next release.
* [72f754a] use PostgreSQL's own `hash_any` function as default partfunc (thanks @robin900)
* [39f4c0f] Remove sample data instructions and point to docs site
* [9015314] Revised the `get_general_index_definition` function to handle cases where indexes have definitions other than just `CREATE INDEX` (thanks @bricklen)

**Bugfixes**
* #3481 GROUP BY error when setting compress_segmentby with an enum column
3 changes: 0 additions & 3 deletions tsl/src/fdw/scan_plan.c
@@ -228,9 +228,6 @@ evaluate_stable_function(Oid funcid, Oid result_type, int32 result_typmod, Oid r
else
has_nonconst_input = true;
}
if (has_null_input)
{
}
/*
* The simplification of strict functions with constant NULL inputs must
* have been already performed by eval_const_expressions().
16 changes: 16 additions & 0 deletions tsl/src/nodes/decompress_chunk/decompress_chunk.c
@@ -177,6 +177,8 @@ build_compressed_scan_pathkeys(SortInfo *sort_info, PlannerInfo *root, List *chu
ListCell *lc;
char *column_name;
Oid sortop;
Oid opfamily, opcintype;
int16 strategy;

for (lc = list_head(chunk_pathkeys);
lc != NULL && bms_num_members(segmentby_columns) < info->num_segmentby_columns;
@@ -210,6 +212,20 @@

sortop =
get_opfamily_member(pk->pk_opfamily, var->vartype, var->vartype, pk->pk_strategy);
if (!get_ordering_op_properties(sortop, &opfamily, &opcintype, &strategy))
{
if (type_is_enum(var->vartype))
{
sortop = get_opfamily_member(pk->pk_opfamily,
ANYENUMOID,
ANYENUMOID,
pk->pk_strategy);
}
else
{
elog(ERROR, "Invalid segmentby column");
}
}
pk = make_pathkey_from_compressed(root,
info->compressed_rel->relid,
(Expr *) var,
