Release 2.8.0 #4653

Merged: 1 commit, Aug 31, 2022
70 changes: 59 additions & 11 deletions CHANGELOG.md
@@ -4,32 +4,80 @@
`psql` with the `-X` flag to prevent any `.psqlrc` commands from
accidentally triggering the load of a previous DB version.**

## 2.8.0 (2022-08-30)

This release adds major new features since the 2.7.2 release.
We deem it moderate priority for upgrading.

This release includes these noteworthy features:

* time_bucket now supports bucketing by month, year, and timezone (see the example below)
* Improve performance of bulk SELECT and COPY for distributed hypertables
* 1-step CAgg policy management
Review comment (marked resolved by svenklemm): I would be more explicit about the new functions:

#4563 Experimental functions to facilitate Continuous Aggregates policies management: alter_policies, remove_policies, remove_all_policies, show_policies

* Migrate Continuous Aggregates to the new format
Review comment: Should we add the name of the new function introduced to migrate: cagg_migrate_to_finals_form?
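
To make the headline `time_bucket` changes concrete, here is a short sketch against a hypothetical `metrics(time timestamptz, value float)` table; the month-sized bucket comes from #4641 and the `timezone` argument from #4642:

```sql
-- Bucket by calendar month (#4641); month-sized buckets were previously
-- only available through the experimental time_bucket_ng.
SELECT time_bucket('1 month', time) AS bucket, avg(value)
FROM metrics
GROUP BY bucket
ORDER BY bucket;

-- Bucket by day, aligned to local midnight in a given timezone (#4642).
SELECT time_bucket('1 day', time, timezone => 'Europe/Berlin') AS bucket,
       avg(value)
FROM metrics
GROUP BY bucket
ORDER BY bucket;
```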


**Features**
* #4188 Use COPY protocol in row-by-row fetcher
* #4307 Mark partialize_agg as parallel safe
* #4380 Enable chunk exclusion for space dimensions in UPDATE/DELETE
* #4384 Add schedule_interval to policies
* #4390 Faster lookup of chunks by point
* #4393 Support intervals with day component when constifying now()
* #4397 Support intervals with month component when constifying now()
* #4405 Support ON CONFLICT ON CONSTRAINT for hypertables
* #4412 Add telemetry about replication
* #4415 Drop remote data when detaching data node
* #4416 Handle TRUNCATE TABLE on chunks
* #4425 Add parameter check_config to alter_job
* #4430 Create index on Continuous Aggregates
Review comment: Does it impact performance? We may want to highlight this.

* #4439 Allow ORDER BY on continuous aggregates
* #4443 Add stateful partition mappings
* #4484 Use non-blocking data node connections for COPY
* #4495 Support add_dimension() with existing data
* #4502 Add chunks to baserel cache on chunk exclusion
* #4545 Add hypertable distributed argument and defaults
* #4552 Migrate Continuous Aggregates to the new format
* #4556 Add runtime exclusion for hypertables
* #4561 Change get_git_commit to return full commit hash
* #4563 1-step CAgg policy management (see the sketch after this list)
* #4641 Allow bucketing by month, year, century in time_bucket and time_bucket_gapfill
* #4642 Add timezone support to time_bucket
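
A sketch of the 1-step policy management from #4563, assuming the functions named in the review comment above live in the `timescaledb_experimental` schema and operate on a hypothetical continuous aggregate `conditions_summary`:

```sql
-- List every policy attached to the continuous aggregate
-- (experimental API in this release).
SELECT timescaledb_experimental.show_policies('conditions_summary');

-- Remove all policies from the aggregate in a single call instead of
-- dropping each one individually.
SELECT timescaledb_experimental.remove_all_policies('conditions_summary');
```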

**Bugfixes**
* #4359 Create composite index on segmentby columns
* #4374 Remove constified now() constraints from plan
* #4416 Handle TRUNCATE TABLE on chunks
* #4478 Synchronize chunk cache sizes
* #4486 Adding boolean column with default value doesn't work on compressed table
* #4512 Fix unaligned pointer access
* #4519 Throw better error message on incompatible row fetcher settings
* #4549 Fix dump_meta_data for Windows
* #4553 Fix timescaledb_post_restore GUC handling
* #4573 Load TSL library on compressed_data_out call
* #4575 Fix use of `get_partition_hash` and `get_partition_for_key` inside an IMMUTABLE function
* #4577 Fix segfaults in compression code with corrupt data
* #4580 Handle default privileges on CAggs properly
* #4582 Fix assertion in GRANT .. ON ALL TABLES IN SCHEMA
* #4583 Fix partitioning functions
* #4589 Fix rename for distributed hypertable
* #4601 Reset compression sequence when group resets
* #4611 Fix a potential OOM when loading large data sets into a hypertable
* #4624 Fix heap buffer overflow
* #4627 Fix telemetry initialization
* #4631 Ensure TSL library is loaded on database upgrades
* #4646 Fix time_bucket_ng origin handling
* #4647 Fix the error "SubPlan found with no parent plan" that occurred when using joins in a RETURNING clause

**Thanks**
* @AlmiS for reporting error on `get_partition_hash` executed inside an IMMUTABLE function
* @Creatation for reporting an issue with renaming hypertables
* @janko for reporting an issue when adding bool column with default value to compressed hypertable
* @jayadevanm for reporting error of TRUNCATE TABLE on compressed chunk
* @michaelkitson for reporting permission errors using default privileges on Continuous Aggregates
* @mwahlhuetter for reporting error in joins in RETURNING clause
* @ninjaltd and @mrksngl for reporting a potential OOM when loading large data sets into a hypertable
* @PBudmark for reporting an issue with dump_meta_data.sql on Windows
* @ssmoss for reporting an issue with time_bucket_ng origin handling

## 2.7.2 (2022-07-26)

3 changes: 2 additions & 1 deletion sql/CMakeLists.txt
@@ -37,7 +37,8 @@ set(MOD_FILES
updates/2.6.0--2.6.1.sql
updates/2.6.1--2.7.0.sql
updates/2.7.0--2.7.1.sql
updates/2.7.1--2.7.2.sql
updates/2.7.2--2.8.0.sql)

# The downgrade file to generate a downgrade script for the current version, as
# specified in version.config
151 changes: 151 additions & 0 deletions sql/updates/2.7.2--2.8.0.sql
@@ -0,0 +1,151 @@
DROP FUNCTION IF EXISTS @extschema@.add_retention_policy(REGCLASS, "any", BOOL);
DROP FUNCTION IF EXISTS @extschema@.add_compression_policy(REGCLASS, "any", BOOL);
DROP FUNCTION IF EXISTS @extschema@.detach_data_node;
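-- Stateful partition mappings (#4443): each row records a dimension's
-- partition range and, for distributed hypertables, its assigned data nodes.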
CREATE TABLE _timescaledb_catalog.dimension_partition (
dimension_id integer NOT NULL REFERENCES _timescaledb_catalog.dimension (id) ON DELETE CASCADE,
range_start bigint NOT NULL,
data_nodes name[] NULL,
UNIQUE (dimension_id, range_start)
);
SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.dimension_partition', '');
GRANT SELECT ON _timescaledb_catalog.dimension_partition TO PUBLIC;
DROP FUNCTION IF EXISTS @extschema@.remove_continuous_aggregate_policy(REGCLASS, BOOL);

-- add a new column to chunk catalog table
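-- Done in three steps (add as nullable, backfill, then SET NOT NULL/DEFAULT)
-- so existing catalog rows are populated before the constraint is enforced.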
ALTER TABLE _timescaledb_catalog.chunk ADD COLUMN osm_chunk boolean;
UPDATE _timescaledb_catalog.chunk SET osm_chunk = FALSE;

ALTER TABLE _timescaledb_catalog.chunk
ALTER COLUMN osm_chunk SET NOT NULL;
ALTER TABLE _timescaledb_catalog.chunk
ALTER COLUMN osm_chunk SET DEFAULT FALSE;

CREATE INDEX chunk_osm_chunk_idx ON _timescaledb_catalog.chunk (osm_chunk, hypertable_id);

DROP FUNCTION IF EXISTS @extschema@.add_job(REGPROC, INTERVAL, JSONB, TIMESTAMPTZ, BOOL);
DROP FUNCTION IF EXISTS @extschema@.alter_job(INTEGER, INTERVAL, INTERVAL, INTEGER, INTERVAL, BOOL, JSONB, TIMESTAMPTZ, BOOL);

-- add fields for check function
ALTER TABLE _timescaledb_config.bgw_job
ADD COLUMN check_schema NAME,
ADD COLUMN check_name NAME;

-- no need to touch the telemetry jobs since they do not have a check
-- function.

-- add check function to reorder jobs
UPDATE
_timescaledb_config.bgw_job job
SET
check_schema = '_timescaledb_internal',
check_name = 'policy_reorder_check'
WHERE proc_schema = '_timescaledb_internal'
AND proc_name = 'policy_reorder';

-- add check function to compression jobs
UPDATE
_timescaledb_config.bgw_job job
SET
check_schema = '_timescaledb_internal',
check_name = 'policy_compression_check'
WHERE proc_schema = '_timescaledb_internal'
AND proc_name = 'policy_compression';

-- add check function to retention jobs
UPDATE
_timescaledb_config.bgw_job job
SET
check_schema = '_timescaledb_internal',
check_name = 'policy_retention_check'
WHERE proc_schema = '_timescaledb_internal'
AND proc_name = 'policy_retention';

-- add check function to continuous aggregate refresh jobs
UPDATE
_timescaledb_config.bgw_job job
SET
check_schema = '_timescaledb_internal',
check_name = 'policy_refresh_continuous_aggregate_check'
WHERE proc_schema = '_timescaledb_internal'
AND proc_name = 'policy_refresh_continuous_aggregate';

DROP VIEW IF EXISTS timescaledb_information.jobs;

-- cagg migration catalog relations

CREATE TABLE _timescaledb_catalog.continuous_agg_migrate_plan (
mat_hypertable_id integer NOT NULL,
start_ts TIMESTAMPTZ NOT NULL DEFAULT pg_catalog.now(),
end_ts TIMESTAMPTZ,
-- table constraints
CONSTRAINT continuous_agg_migrate_plan_pkey PRIMARY KEY (mat_hypertable_id),
CONSTRAINT continuous_agg_migrate_plan_mat_hypertable_id_fkey FOREIGN KEY (mat_hypertable_id) REFERENCES _timescaledb_catalog.continuous_agg (mat_hypertable_id)
);

SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.continuous_agg_migrate_plan', '');

CREATE TABLE _timescaledb_catalog.continuous_agg_migrate_plan_step (
mat_hypertable_id integer NOT NULL,
step_id serial NOT NULL,
status TEXT NOT NULL DEFAULT 'NOT STARTED', -- NOT STARTED, STARTED, FINISHED, CANCELED
start_ts TIMESTAMPTZ,
end_ts TIMESTAMPTZ,
type TEXT NOT NULL,
config JSONB,
-- table constraints
CONSTRAINT continuous_agg_migrate_plan_step_pkey PRIMARY KEY (mat_hypertable_id, step_id),
CONSTRAINT continuous_agg_migrate_plan_step_mat_hypertable_id_fkey FOREIGN KEY (mat_hypertable_id) REFERENCES _timescaledb_catalog.continuous_agg_migrate_plan (mat_hypertable_id) ON DELETE CASCADE,
CONSTRAINT continuous_agg_migrate_plan_step_check CHECK (start_ts <= end_ts),
CONSTRAINT continuous_agg_migrate_plan_step_check2 CHECK (type IN ('CREATE NEW CAGG', 'DISABLE POLICIES', 'COPY POLICIES', 'ENABLE POLICIES', 'SAVE WATERMARK', 'REFRESH NEW CAGG', 'COPY DATA'))
);

SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.continuous_agg_migrate_plan_step', '');

SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.continuous_agg_migrate_plan_step_step_id_seq', '');

-- in tables.sql the same is done with GRANT SELECT ON ALL TABLES IN SCHEMA
GRANT SELECT ON _timescaledb_catalog.continuous_agg_migrate_plan TO PUBLIC;
GRANT SELECT ON _timescaledb_catalog.continuous_agg_migrate_plan_step TO PUBLIC;
GRANT SELECT ON _timescaledb_catalog.continuous_agg_migrate_plan_step_step_id_seq TO PUBLIC;

-- add distributed argument
DROP FUNCTION IF EXISTS @extschema@.create_hypertable;
DROP FUNCTION IF EXISTS @extschema@.create_distributed_hypertable;

CREATE FUNCTION @extschema@.create_hypertable(
relation REGCLASS,
time_column_name NAME,
partitioning_column NAME = NULL,
number_partitions INTEGER = NULL,
associated_schema_name NAME = NULL,
associated_table_prefix NAME = NULL,
chunk_time_interval ANYELEMENT = NULL::bigint,
create_default_indexes BOOLEAN = TRUE,
if_not_exists BOOLEAN = FALSE,
partitioning_func REGPROC = NULL,
migrate_data BOOLEAN = FALSE,
chunk_target_size TEXT = NULL,
chunk_sizing_func REGPROC = '_timescaledb_internal.calculate_chunk_interval'::regproc,
time_partitioning_func REGPROC = NULL,
replication_factor INTEGER = NULL,
data_nodes NAME[] = NULL,
distributed BOOLEAN = NULL
) RETURNS TABLE(hypertable_id INT, schema_name NAME, table_name NAME, created BOOL) AS '@MODULE_PATHNAME@', 'ts_hypertable_create' LANGUAGE C VOLATILE;
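
-- Usage sketch (hypothetical table, not part of this migration): with the
-- new distributed argument, a plain create_hypertable call can create a
-- distributed hypertable directly:
--
--   SELECT create_hypertable('conditions', 'time', distributed => true);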

-- change replication_factor to NULL by default
CREATE FUNCTION @extschema@.create_distributed_hypertable(
relation REGCLASS,
time_column_name NAME,
partitioning_column NAME = NULL,
number_partitions INTEGER = NULL,
associated_schema_name NAME = NULL,
associated_table_prefix NAME = NULL,
chunk_time_interval ANYELEMENT = NULL::bigint,
create_default_indexes BOOLEAN = TRUE,
if_not_exists BOOLEAN = FALSE,
partitioning_func REGPROC = NULL,
migrate_data BOOLEAN = FALSE,
chunk_target_size TEXT = NULL,
chunk_sizing_func REGPROC = '_timescaledb_internal.calculate_chunk_interval'::regproc,
time_partitioning_func REGPROC = NULL,
replication_factor INTEGER = NULL,
data_nodes NAME[] = NULL
) RETURNS TABLE(hypertable_id INT, schema_name NAME, table_name NAME, created BOOL) AS '@MODULE_PATHNAME@', 'ts_hypertable_distributed_create' LANGUAGE C VOLATILE;
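
-- Usage sketch (hypothetical table, not part of this migration): with
-- replication_factor now defaulting to NULL, the simplest call relies on
-- the extension's default replication behavior:
--
--   SELECT create_distributed_hypertable('sensor_data', 'time', 'location');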