Release 2.8.0

svenklemm committed Aug 31, 2022
1 parent 1fa8373 commit 4d7af52
Showing 5 changed files with 213 additions and 164 deletions.
68 changes: 58 additions & 10 deletions CHANGELOG.md
@@ -4,30 +4,78 @@
`psql` with the `-X` flag to prevent any `.psqlrc` commands from
accidentally triggering the load of a previous DB version.**

## 2.8.0 (2022-08-30)

This release adds major new features since the 2.7.2 release.
We deem it moderate priority for upgrading.

This release includes these noteworthy features:

* time_bucket now supports bucketing by month, year and timezone (see the example below)
* Improve performance of bulk SELECT and COPY for distributed hypertables
* 1 step CAgg policy management
* Migrate Continuous Aggregates to the new format
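
A minimal sketch of the new bucketing options (the `conditions` table and its `value` column are hypothetical):

```sql
-- bucket by calendar month (#4641)
SELECT time_bucket('1 month', time) AS bucket, avg(value)
FROM conditions
GROUP BY bucket;

-- bucket by day in a specific timezone (#4642)
SELECT time_bucket('1 day', time, 'Europe/Berlin') AS bucket, avg(value)
FROM conditions
GROUP BY bucket;
```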

**Features**
* #4188 Use COPY protocol in row-by-row fetcher
* #4307 Mark partialize_agg as parallel safe
* #4380 Enable chunk exclusion for space dimensions in UPDATE/DELETE
* #4384 Add schedule_interval to policies
* #4390 Faster lookup of chunks by point
* #4393 Support intervals with day component when constifying now()
* #4397 Support intervals with month component when constifying now()
* #4405 Support ON CONFLICT ON CONSTRAINT for hypertables
* #4412 Add telemetry about replication
* #4415 Drop remote data when detaching data node
* #4416 Handle TRUNCATE TABLE on chunks
* #4425 Add parameter check_config to alter_job (see the example after this list)
* #4430 Create index on Continuous Aggregates
* #4439 Allow ORDER BY on continuous aggregates
* #4443 Add stateful partition mappings
* #4484 Use non-blocking data node connections for COPY
* #4495 Support add_dimension() with existing data
* #4502 Add chunks to baserel cache on chunk exclusion
* #4545 Add hypertable distributed argument and defaults
* #4552 Migrate Continuous Aggregates to the new format
* #4556 Add runtime exclusion for hypertables
* #4561 Change get_git_commit to return full commit hash
* #4563 1 step CAgg policy management
* #4641 Allow bucketing by month, year, century in time_bucket and time_bucket_gapfill
* #4642 Add timezone support to time_bucket
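
A sketch of the new `check_config` parameter of `alter_job` (#4425); the job id is hypothetical, and `policy_compression_check` is one of the check functions registered by this release's update script:

```sql
-- attach a config-validation function to an existing job
SELECT alter_job(1015, check_config => '_timescaledb_internal.policy_compression_check'::regproc);
```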

**Bugfixes**
* #4359 Create composite index on segmentby columns
* #4374 Remove constified now() constraints from plan
* #4416 Handle TRUNCATE TABLE on chunks
* #4478 Synchronize chunk cache sizes
* #4486 Adding boolean column with default value doesn't work on compressed table
* #4512 Fix unaligned pointer access
* #4519 Throw better error message on incompatible row fetcher settings
* #4549 Fix dump_meta_data for windows
* #4553 Fix timescaledb_post_restore GUC handling
* #4573 Load TSL library on compressed_data_out call
* #4575 Fix use of `get_partition_hash` and `get_partition_for_key` inside an IMMUTABLE function
* #4577 Fix segfaults in compression code with corrupt data
* #4580 Handle default privileges on CAggs properly
* #4582 Fix assertion in GRANT .. ON ALL TABLES IN SCHEMA
* #4583 Fix partitioning functions
* #4589 Fix rename for distributed hypertable
* #4601 Reset compression sequence when group resets
* #4611 Fix a potential OOM when loading large data sets into a hypertable
* #4624 Fix heap buffer overflow
* #4627 Fix telemetry initialization
* #4631 Ensure TSL library is loaded on database upgrades
* #4646 Fix time_bucket_ng origin handling

**Thanks**
* @AlmiS for reporting error on `get_partition_hash` executed inside an IMMUTABLE function
* @Creatation for reporting an issue with renaming hypertables
* @janko for reporting an issue when adding bool column with default value to compressed hypertable
* @jayadevanm for reporting error of TRUNCATE TABLE on compressed chunk
* @michaelkitson for reporting permission errors using default privileges on Continuous Aggregates
* @ninjaltd and @mrksngl for reporting a potential OOM when loading large data sets into a hypertable
* @PBudmark for reporting an issue with dump_meta_data.sql on Windows
* @ssmoss for reporting an issue with time_bucket_ng origin handling

## 2.7.2 (2022-07-26)

3 changes: 2 additions & 1 deletion sql/CMakeLists.txt
@@ -37,7 +37,8 @@ set(MOD_FILES
updates/2.6.0--2.6.1.sql
updates/2.6.1--2.7.0.sql
updates/2.7.0--2.7.1.sql
updates/2.7.1--2.7.2.sql
updates/2.7.2--2.8.0.sql)

# The downgrade file to generate a downgrade script for the current version, as
# specified in version.config
151 changes: 151 additions & 0 deletions sql/updates/2.7.2--2.8.0.sql
@@ -0,0 +1,151 @@
DROP FUNCTION IF EXISTS @extschema@.add_retention_policy(REGCLASS, "any", BOOL);
DROP FUNCTION IF EXISTS @extschema@.add_compression_policy(REGCLASS, "any", BOOL);
DROP FUNCTION IF EXISTS @extschema@.detach_data_node;
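
-- stateful partition mappings (#4443): for each space-dimension partition
-- (identified by dimension_id and range_start), record the data nodes holding it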
CREATE TABLE _timescaledb_catalog.dimension_partition (
dimension_id integer NOT NULL REFERENCES _timescaledb_catalog.dimension (id) ON DELETE CASCADE,
range_start bigint NOT NULL,
data_nodes name[] NULL,
UNIQUE (dimension_id, range_start)
);
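-- include the new catalog table in dumps of the extension (pg_dump skips
-- extension-owned tables unless flagged via pg_extension_config_dump)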
SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.dimension_partition', '');
GRANT SELECT ON _timescaledb_catalog.dimension_partition TO PUBLIC;
DROP FUNCTION IF EXISTS @extschema@.remove_continuous_aggregate_policy(REGCLASS, BOOL);

-- add a new column to chunk catalog table
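-- (added as nullable first, backfilled, then constrained below)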
ALTER TABLE _timescaledb_catalog.chunk ADD COLUMN osm_chunk boolean;
UPDATE _timescaledb_catalog.chunk SET osm_chunk = FALSE;

ALTER TABLE _timescaledb_catalog.chunk
ALTER COLUMN osm_chunk SET NOT NULL;
ALTER TABLE _timescaledb_catalog.chunk
ALTER COLUMN osm_chunk SET DEFAULT FALSE;

CREATE INDEX chunk_osm_chunk_idx ON _timescaledb_catalog.chunk (osm_chunk, hypertable_id);
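-- the index above supports lookups filtered on osm_chunk (and hypertable_id)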

DROP FUNCTION IF EXISTS @extschema@.add_job(REGPROC, INTERVAL, JSONB, TIMESTAMPTZ, BOOL);
DROP FUNCTION IF EXISTS @extschema@.alter_job(INTEGER, INTERVAL, INTERVAL, INTEGER, INTERVAL, BOOL, JSONB, TIMESTAMPTZ, BOOL);

-- add fields for check function
ALTER TABLE _timescaledb_config.bgw_job
ADD COLUMN check_schema NAME,
ADD COLUMN check_name NAME;
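
-- check_schema/check_name point at a function that validates a job's config;
-- it can also be set through the new check_config parameter of alter_job (#4425)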

-- no need to touch the telemetry jobs since they do not have a check
-- function.

-- add check function to reorder jobs
UPDATE
_timescaledb_config.bgw_job job
SET
check_schema = '_timescaledb_internal',
check_name = 'policy_reorder_check'
WHERE proc_schema = '_timescaledb_internal'
AND proc_name = 'policy_reorder';

-- add check function to compression jobs
UPDATE
_timescaledb_config.bgw_job job
SET
check_schema = '_timescaledb_internal',
check_name = 'policy_compression_check'
WHERE proc_schema = '_timescaledb_internal'
AND proc_name = 'policy_compression';

-- add check function to retention jobs
UPDATE
_timescaledb_config.bgw_job job
SET
check_schema = '_timescaledb_internal',
check_name = 'policy_retention_check'
WHERE proc_schema = '_timescaledb_internal'
AND proc_name = 'policy_retention';

-- add check function to continuous aggregate refresh jobs
UPDATE
_timescaledb_config.bgw_job job
SET
check_schema = '_timescaledb_internal',
check_name = 'policy_refresh_continuous_aggregate_check'
WHERE proc_schema = '_timescaledb_internal'
AND proc_name = 'policy_refresh_continuous_aggregate';

DROP VIEW IF EXISTS timescaledb_information.jobs;

-- cagg migration catalog relations

CREATE TABLE _timescaledb_catalog.continuous_agg_migrate_plan (
mat_hypertable_id integer NOT NULL,
start_ts TIMESTAMPTZ NOT NULL DEFAULT pg_catalog.now(),
end_ts TIMESTAMPTZ,
-- table constraints
CONSTRAINT continuous_agg_migrate_plan_pkey PRIMARY KEY (mat_hypertable_id),
CONSTRAINT continuous_agg_migrate_plan_mat_hypertable_id_fkey FOREIGN KEY (mat_hypertable_id) REFERENCES _timescaledb_catalog.continuous_agg (mat_hypertable_id)
);

SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.continuous_agg_migrate_plan', '');

CREATE TABLE _timescaledb_catalog.continuous_agg_migrate_plan_step (
mat_hypertable_id integer NOT NULL,
step_id serial NOT NULL,
status TEXT NOT NULL DEFAULT 'NOT STARTED', -- NOT STARTED, STARTED, FINISHED, CANCELED
start_ts TIMESTAMPTZ,
end_ts TIMESTAMPTZ,
type TEXT NOT NULL,
config JSONB,
-- table constraints
CONSTRAINT continuous_agg_migrate_plan_step_pkey PRIMARY KEY (mat_hypertable_id, step_id),
CONSTRAINT continuous_agg_migrate_plan_step_mat_hypertable_id_fkey FOREIGN KEY (mat_hypertable_id) REFERENCES _timescaledb_catalog.continuous_agg_migrate_plan (mat_hypertable_id) ON DELETE CASCADE,
CONSTRAINT continuous_agg_migrate_plan_step_check CHECK (start_ts <= end_ts),
CONSTRAINT continuous_agg_migrate_plan_step_check2 CHECK (type IN ('CREATE NEW CAGG', 'DISABLE POLICIES', 'COPY POLICIES', 'ENABLE POLICIES', 'SAVE WATERMARK', 'REFRESH NEW CAGG', 'COPY DATA'))
);

SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.continuous_agg_migrate_plan_step', '');

SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.continuous_agg_migrate_plan_step_step_id_seq', '');

-- in tables.sql the same is done with GRANT SELECT ON ALL TABLES IN SCHEMA
GRANT SELECT ON _timescaledb_catalog.continuous_agg_migrate_plan TO PUBLIC;
GRANT SELECT ON _timescaledb_catalog.continuous_agg_migrate_plan_step TO PUBLIC;
GRANT SELECT ON _timescaledb_catalog.continuous_agg_migrate_plan_step_step_id_seq TO PUBLIC;
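
-- these catalog tables track the state of cagg migrations (#4552); as a sketch,
-- a continuous aggregate would be migrated with (hypothetical cagg name):
--   CALL cagg_migrate('conditions_summary');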

-- add distributed argument
DROP FUNCTION IF EXISTS @extschema@.create_hypertable;
DROP FUNCTION IF EXISTS @extschema@.create_distributed_hypertable;

CREATE FUNCTION @extschema@.create_hypertable(
relation REGCLASS,
time_column_name NAME,
partitioning_column NAME = NULL,
number_partitions INTEGER = NULL,
associated_schema_name NAME = NULL,
associated_table_prefix NAME = NULL,
chunk_time_interval ANYELEMENT = NULL::bigint,
create_default_indexes BOOLEAN = TRUE,
if_not_exists BOOLEAN = FALSE,
partitioning_func REGPROC = NULL,
migrate_data BOOLEAN = FALSE,
chunk_target_size TEXT = NULL,
chunk_sizing_func REGPROC = '_timescaledb_internal.calculate_chunk_interval'::regproc,
time_partitioning_func REGPROC = NULL,
replication_factor INTEGER = NULL,
data_nodes NAME[] = NULL,
distributed BOOLEAN = NULL
) RETURNS TABLE(hypertable_id INT, schema_name NAME, table_name NAME, created BOOL) AS '@MODULE_PATHNAME@', 'ts_hypertable_create' LANGUAGE C VOLATILE;
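
-- the new distributed argument makes the local/distributed choice explicit, e.g.
-- (hypothetical table):
--   SELECT create_hypertable('conditions', 'time', distributed => TRUE);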

-- change replication_factor to NULL by default
CREATE FUNCTION @extschema@.create_distributed_hypertable(
relation REGCLASS,
time_column_name NAME,
partitioning_column NAME = NULL,
number_partitions INTEGER = NULL,
associated_schema_name NAME = NULL,
associated_table_prefix NAME = NULL,
chunk_time_interval ANYELEMENT = NULL::bigint,
create_default_indexes BOOLEAN = TRUE,
if_not_exists BOOLEAN = FALSE,
partitioning_func REGPROC = NULL,
migrate_data BOOLEAN = FALSE,
chunk_target_size TEXT = NULL,
chunk_sizing_func REGPROC = '_timescaledb_internal.calculate_chunk_interval'::regproc,
time_partitioning_func REGPROC = NULL,
replication_factor INTEGER = NULL,
data_nodes NAME[] = NULL
) RETURNS TABLE(hypertable_id INT, schema_name NAME, table_name NAME, created BOOL) AS '@MODULE_PATHNAME@', 'ts_hypertable_distributed_create' LANGUAGE C VOLATILE;
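
-- as before, but replication_factor now defaults to NULL, e.g. (hypothetical table):
--   SELECT create_distributed_hypertable('conditions', 'time', replication_factor => 2);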
