
Fix SCHEMA DROP CASCADE with continuous aggregates #2560

Merged
merged 1 commit into timescale:master from cagg_drop_schema_fix on Nov 10, 2020

Conversation

pmwkaa
Contributor

@pmwkaa pmwkaa commented Oct 16, 2020

This change fixes the situation where the schema object is dropped
before the continuous aggregate (cagg), which leads to an error when
the cagg tries to resolve it.

Fixes: #2350

@pmwkaa pmwkaa requested a review from a team as a code owner October 16, 2020 17:36
@pmwkaa pmwkaa requested review from k-rus, WireBaron, svenklemm and a team and removed request for a team October 16, 2020 17:36

codecov bot commented Oct 16, 2020

Codecov Report

Merging #2560 (1b5d1c7) into master (0e57992) will increase coverage by 0.09%.
The diff coverage is 92.26%.


@@            Coverage Diff             @@
##           master    #2560      +/-   ##
==========================================
+ Coverage   90.03%   90.13%   +0.09%     
==========================================
  Files         212      212              
  Lines       34304    34286      -18     
==========================================
+ Hits        30885    30903      +18     
+ Misses       3419     3383      -36     
Impacted Files Coverage Δ
src/guc.c 97.22% <ø> (ø)
src/time_utils.c 97.50% <ø> (ø)
tsl/src/fdw/option.c 42.18% <ø> (+9.37%) ⬆️
tsl/src/fdw/scan_plan.c 92.89% <ø> (ø)
tsl/src/remote/data_fetcher.h 100.00% <ø> (ø)
tsl/src/remote/row_by_row_fetcher.c 71.42% <62.50%> (-4.77%) ⬇️
tsl/src/fdw/relinfo.c 96.47% <70.58%> (+0.66%) ⬆️
tsl/src/remote/cursor_fetcher.c 72.89% <83.33%> (+2.58%) ⬆️
tsl/src/remote/async.c 86.43% <85.00%> (-2.42%) ⬇️
src/bgw_policy/policy.c 96.42% <94.44%> (-3.58%) ⬇️
... and 22 more

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data

--
-- Issue: #2350
CREATE SCHEMA my_cool_schema;
SET search_path to my_cool_schema, public; -- timescaledb is created in the public schema
Member

please skip the search_path messing and specify the schema directly in the create statements:
CREATE TABLE my_cool_schema.telemetry_raw ...
SELECT create_hypertable('my_cool_schema.telemetry_raw' ...
CREATE MATERIALIZED VIEW my_cool_schema.telemetry_1s ...

Member

Or, if you think the search_path interaction is worth testing, have extra tests for it in addition to tests specifying the schema directly.

Contributor Author

Sure, I just thought it made sense to keep it since it is the same case as in the issue description.

@svenklemm svenklemm added this to the 2.0.0 milestone Oct 19, 2020
@pmwkaa pmwkaa requested a review from a team October 19, 2020 13:32
ca.relid = get_relname_relid(NameStr(data->user_view_name), nspid);
else
ca.relid = InvalidOid;
ca.data = *data;
@gayyappan gayyappan commented Oct 19, 2020

Shouldn't these 2 lines be conditional? I.e., this is valid only if you can find a view object.

Suggested change
ca.data = *data;
if( OidIsValid(...) )
{
ca.relid = ...
ca.data = *data;
drop_continuous_agg(...)
}

Contributor Author

Yes, you're right, I missed that. But it should not be used anyway, I believe, since the relation is marked as not found.

@erimatnor erimatnor left a comment

I think we should try to hook into DROP SCHEMA before it is executed by PostgreSQL so that we can drop dependent objects while they are still valid.

I think it is problematic to have functions that can create both fully valid objects (e.g., Hypertable) and "partial" objects (without all fields set) because it is hard to know which version of the object is produced for a particular case.

@@ -577,7 +577,15 @@ ts_continuous_agg_drop_hypertable_callback(int32 hypertable_id)

if (data->raw_hypertable_id == hypertable_id)
{
continuous_agg_init(&ca, data);
/* schema might not exists at this point */
Contributor

Suggested change
/* schema might not exists at this point */
/* schema might already be dropped at this point */

/* schema might not exists at this point */
Oid nspid = get_namespace_oid(NameStr(data->user_view_schema), true);

if (OidIsValid(nspid))
@erimatnor erimatnor commented Oct 20, 2020

This doesn't seem like the right approach to me. Unless I misunderstand what is going on here, it seems like the schema could already have been dropped at this point, and therefore we accept that we cannot know the relid and will create a cagg object without the proper relids set.

I think that a better way to do this is to make sure that we hook into DROP SCHEMA prior to the call being processed by PostgreSQL and perform the cleanup of our objects at that point when the relations still exist. I believe that is how we deal with many dependency drops in other cases.

@pmwkaa pmwkaa commented Oct 20, 2020

I understand your concern. I tried to make this fix as small as possible because we already have a lot of assumptions that this functionality can be called during CASCADE, which expects some of the objects to be deleted already: https://github.com/timescale/timescaledb/blob/master/src/continuous_agg.c#L459. Otherwise this would require more changes and a reworking of the drop path, basically as you suggested.

Contributor

Well, those assumptions aren't great either, TBH.

I am actually most concerned about this PR introducing an ambiguous state for the Hypertable object, in a change below. We've had issues related to such things before where some fields could be set or not set, and we've fixed most of them to always be consistent. This change would move us back towards a situation where objects can again be in different "states".

src/hypertable.c Outdated
if (OidIsValid(namespace_oid))
h->main_table_relid = get_relname_relid(NameStr(h->fd.table_name), namespace_oid);
else
h->main_table_relid = InvalidOid;
Contributor

I think allowing the creation of Hypertable objects without a valid relid is problematic, in particular since this function makes no distinction between the cases the object is created for (i.e., valid use or drop).

I think it is safer to only allow creation of a fully "valid" object or have it fail. Allowing the creation of objects with different state is often a source of subtle errors.

@pmwkaa pmwkaa force-pushed the cagg_drop_schema_fix branch 2 times, most recently from d19c765 to 52b1410 Compare October 27, 2020 15:10

pmwkaa commented Oct 27, 2020

drop_continuous_agg() has been refactored and now tries to work with catalog data only, assuming that dependent objects (including the associated schema) might not exist anymore. Also, it seems the drop_user_view argument was never used before; fixed that.

@pmwkaa pmwkaa requested review from gayyappan, erimatnor and a team October 27, 2020 15:22

k-rus commented Oct 29, 2020

@pmwkaa It seems that you have a linking issue on Windows. From Appveyor log:

  timescaledb-tests.vcxproj -> C:\projects\timescaledb\build\test\src\timescaledb-tests.dir\Debug\timescaledb-tests.lib
    Creating library C:/projects/timescaledb/build/src/Debug/timescaledb-2.1.0-dev.lib and object C:/projects/timescaledb/build/src/Debug/timescaledb-2.1.0-dev.exp
  continuous_agg.obj : error LNK2001: unresolved external symbol InvalidObjectAddress [C:\projects\timescaledb\build\src\timescaledb.vcxproj]
  C:\projects\timescaledb\build\src\Debug\timescaledb-2.1.0-dev.dll : fatal error LNK1120: 1 unresolved externals [C:\projects\timescaledb\build\src\timescaledb.vcxproj]

Comment on lines 455 to 460
ObjectAddress user_view = InvalidObjectAddress;
ObjectAddress partial_view = InvalidObjectAddress;
ObjectAddress direct_view = InvalidObjectAddress;
ObjectAddress rawht_trig = InvalidObjectAddress;
ObjectAddress raw_hypertable = InvalidObjectAddress;
ObjectAddress mat_hypertable = InvalidObjectAddress;
Contributor

I guess these lines introduce the linking error on Windows.

@pmwkaa pmwkaa force-pushed the cagg_drop_schema_fix branch 3 times, most recently from a53e73c to 4199b75 Compare October 29, 2020 11:54
@pmwkaa pmwkaa requested a review from k-rus October 29, 2020 12:12
@k-rus k-rus left a comment

I have a comment about the signatures of the helper functions. See the comments.

@@ -405,37 +405,64 @@ ts_continuous_agg_find_by_rv(const RangeVar *rv)
return ts_continuous_agg_find_by_relid(relid);
}

static void
get_rel_by_name(ObjectAddress *addr, const char *schema, const char *name, LOCKMODE mode)
Contributor

This signature is unintuitive and not easy to use. Usually the output variable is either last in the signature or is returned by the function, which is preferable. Can't you return it? I believe the C compiler makes sure that the struct is allocated in the caller.

Contributor Author

Also updated the argument types to avoid Name casting

}

static void
get_rel_by_hypertable_id(ObjectAddress *addr, int32 hypertable_id, LOCKMODE mode)
Contributor

Same comment: it is better to return the value, or at least make the output parameter the last argument.

*
* These objects are: the user view itself, the catalog entry in continuous-agg,
* the partial view, the materialization hypertable, trigger on the raw hypertable
* (hypertable specified in the user view) copy of the user view query
Contributor

Suggested change
* (hypertable specified in the user view) copy of the user view query
* (hypertable specified in the user view), copy of the user view query

/* AccessExclusiveLock is needed to drop this table. */
LockRelationOid(mat_hypertable->main_table_relid, AccessExclusiveLock);
/* NOTE: the lock order matters, see tsl/src/materialization.c.
* Perform all locking upfront
Contributor

Suggested change
* Perform all locking upfront
* Perform all locking upfront.

/* raw hypertable is locked above */
LockRelationOid(rawht_trigoid, AccessExclusiveLock);
/* The trigger will be dropped if the hypertable still exists and no other
* caggs attached */
Contributor

Suggested change
* caggs attached */
* caggs attached. */

src/continuous_agg.c (resolved)
@@ -405,37 +405,68 @@ ts_continuous_agg_find_by_rv(const RangeVar *rv)
return ts_continuous_agg_find_by_relid(relid);
}

static ObjectAddress
get_rel_by_name(Name schema, Name name, LOCKMODE mode)
@gayyappan gayyappan commented Nov 2, 2020

Rename as lock_rel_by_name as that is the primary function here?

@pmwkaa pmwkaa commented Nov 2, 2020

I don't think this is accurate, because the relation might not exist. In that case it is semantically closer to a get (or get-and-lock-if-exists) operation, and it returns the address too. Maybe get_and_lock_rel_by_name? I'm not a fan of long names, though.

Contributor

yes, get_and_lock_rel_by_name sounds good.

}

static ObjectAddress
get_rel_by_hypertable_id(int32 hypertable_id, LOCKMODE mode)
Contributor

Suggested change
get_rel_by_hypertable_id(int32 hypertable_id, LOCKMODE mode)
lock_rel_by_hypertable_id(int32 hypertable_id, LOCKMODE mode)

could you rename this as well?

LockRelationOid(user_view.objectId, AccessExclusiveLock);

raw_hypertable = ts_hypertable_get_by_id(agg->data.raw_hypertable_id);
/* The raw hypertable might be already dropped if this is a cascade from that drop */
Contributor

These comments about locking and situations related to dependent objects being dropped should be retained. Otherwise, it is not clear why it is OK not to throw an error if the object is not found.

Contributor Author

I've updated the function description to describe the situation. There were a couple of duplicate comments about the locking; I'm not sure how much value they have. I can put them back if you would like.

Contributor

I think it is still good to keep these comments about locking and objects that could have been potentially dropped (lines 459, 463, 464 and 500).

LockRelationOid(direct_view.objectId, AccessExclusiveLock);

/* END OF LOCKING. Perform actual deletions now. */
partial_view = get_rel_by_name(&cadata->partial_view_schema,
Contributor

Please retain the comment at line 500 (from the old code)


SELECT create_hypertable('test_schema.telemetry_raw', 'ts');

CREATE MATERIALIZED VIEW test_schema.telemetry_1s
Contributor

Could you add one more materialized view to the test case so that we have multiple caggs attached to the hypertable?

@@ -405,37 +405,68 @@ ts_continuous_agg_find_by_rv(const RangeVar *rv)
return ts_continuous_agg_find_by_relid(relid);
}

static ObjectAddress
get_rel_by_name(Name schema, Name name, LOCKMODE mode)
Contributor

Is it possible to make objects const?

Suggested change
get_rel_by_name(Name schema, Name name, LOCKMODE mode)
get_rel_by_name(const Name schema, const Name name, LOCKMODE mode)

*
* drop_user_view indicates whether to drop the user view.
* (should be false if called as part of the drop-user-view callback)
*/
static void
drop_continuous_agg(ContinuousAgg *agg, bool drop_user_view)
drop_continuous_agg(FormData_continuous_agg *cadata, bool drop_user_view)
Contributor

I think this function is a bit overloaded and also has a misleading name. Ideally we should better distinguish between dropping the metadata (which is not done in this function), the related internal objects that implement the cagg (tables, views, etc.), and the dependent objects that the drop "cascades" to (e.g., jobs). This function does a little bit of everything, which makes it harder to follow.

To fix this, the function might need further refactoring, which is not necessary for this PR. But I think a better function name might be:

Suggested change
drop_continuous_agg(FormData_continuous_agg *cadata, bool drop_user_view)
drop_continuous_agg_objects(FormData_continuous_agg *cadata, bool drop_user_view)

The comment should also be updated to reflect this (i.e., that only the objects, such as tables and views, are dropped, and not the metadata).

*
* drop_user_view indicates whether to drop the user view.
* (should be false if called as part of the drop-user-view callback)
*/
static void
drop_continuous_agg(ContinuousAgg *agg, bool drop_user_view)
drop_continuous_agg(FormData_continuous_agg *cadata, bool drop_user_view)
{
ScanIterator iterator =
Contributor

I suggest moving this initialization to where the iterator is used.

@pmwkaa pmwkaa commented Nov 3, 2020

I can do it, but I was worried that it might have something to do with the locking order.


if (OidIsValid(mat_hypertable.objectId))
{
performDeletion(&mat_hypertable, DROP_CASCADE, 0);
Contributor

Why does this need CASCADE, while others don't? It would be good to at least explain this.

@pmwkaa pmwkaa commented Nov 3, 2020

I don't have a good explanation for that, other than that it was there before. I can only imagine some use case depending on this object.

Contributor Author

CASCADE is required there; it will not work with RESTRICT. Likely the hypertable is expected to have dependent objects (chunks, etc.)?

Contributor

I don't think cascade is required normally when dropping a hypertable. This looks potentially dangerous, but I guess it is mitigated by this being an internal table.

* which implies that most of the dependent objects potentially could be
* dropped including associated schema.
*
* These objects are: the user view itself, the catalog entry in continuous-agg,
Contributor

I think this would be better described with a bullet-list of all objects.

* These objects are: the user view itself, the catalog entry in continuous-agg,
* the partial view, the materialization hypertable, trigger on the raw hypertable
* (hypertable specified in the user view), copy of the user view query
* (aka the direct view).
Contributor

Suggested change
* (aka the direct view).
* (AKA the direct view).


DROP SCHEMA test_schema CASCADE;

SELECT true FROM pg_catalog.pg_namespace WHERE nspname = 'test_schema';
Contributor

Can you show that the internal objects (e.g., the materialized hypertable and views) are also removed from the internal namespaces (or whatever namespaces they existed in)?

@pmwkaa pmwkaa linked an issue Nov 4, 2020 that may be closed by this pull request
@pmwkaa pmwkaa force-pushed the cagg_drop_schema_fix branch 2 times, most recently from 3da0ec7 to 88201c1 Compare November 9, 2020 13:17
@erimatnor erimatnor left a comment

One comment nit.

* trigger on the raw hypertable (hypertable specified in the user view )
* copy of the user view query (aka the direct view)
* NOTE: The order in which the objects are dropped should be EXACTLY the same as in materialize.c"
* This function is intedended to be run by event trigger during CASCADE,
Contributor

Suggested change
* This function is intedended to be run by event trigger during CASCADE,
* This function is intended to be run by event trigger during CASCADE,



@gayyappan gayyappan left a comment

Changes look good. Request to retain additional comments.

This change fixes the situation where the schema object is dropped
before the cagg, which leads to an error when the cagg tries to
resolve it.

Issue: timescale#2350
@pmwkaa pmwkaa merged commit bbcb2b2 into timescale:master Nov 10, 2020
@pmwkaa pmwkaa mentioned this pull request Nov 12, 2020
erimatnor added a commit to erimatnor/timescaledb that referenced this pull request Nov 12, 2020
This release candidate contains bugfixes since the previous release
candidate, as well as additional minor features including support for
"user-mapping" authentication between access/data nodes and an
experimental API for refreshing continuous aggregates on individual
chunks.

**Minor Features**
* timescale#2627 Add optional user mappings support
* timescale#2635 Add API to refresh continuous aggregate on chunk

**Bugfixes**
* timescale#2560 Fix SCHEMA DROP CASCADE with continuous aggregates
* timescale#2593 Set explicitly all lock parameters in alter_job
* timescale#2604 Fix chunk creation on hypertables with foreign key constraints
* timescale#2610 Support analyze of internal compression table
* timescale#2612 Optimize internal cagg_watermark function
* timescale#2613 Refresh correct partial during refresh on drop
* timescale#2617 Fix validation of available extensions on data node
* timescale#2619 Fix segfault in decompress_chunk for chunks with dropped columns
* timescale#2620 Fix DROP CASCADE for continuous aggregate
* timescale#2625 Fix subquery errors when using AsyncAppend
* timescale#2626 Fix incorrect total_table_pages setting for compressed scan
* timescale#2628 Stop recursion in cache invalidation
Successfully merging this pull request may close these issues.

Can't drop schema containing continuous aggregates
5 participants