Release 1.6.0 #1631

Merged (2 commits, Jan 15, 2020)
4 changes: 2 additions & 2 deletions .travis.yml
@@ -123,7 +123,7 @@ jobs:
# Now build with OpenSSL
- docker exec -it pgbuild /bin/sh -c "cd /build/debug && cmake .. -DCMAKE_BUILD_TYPE=Debug -DUSE_OPENSSL=true -DENABLE_CODECOVERAGE=TRUE -DPG_SOURCE_DIR=/postgres ${OTHER_CMAKE_FLAGS:-} && make install && chown -R postgres:postgres /build/debug/"
# Now run all tests
- ${RETRY_PREFIX} docker exec -u postgres -it pgbuild /bin/sh -c "make -k -C /build/debug installcheck IGNORES='append-10 bgw_db_scheduler chunk_adaptive ordered_append-10 transparent_decompression-10' PG_REGRESS_OPTS='--temp-instance=/tmp/pgdata'"
- ${RETRY_PREFIX} docker exec -u postgres -it pgbuild /bin/sh -c "make -k -C /build/debug installcheck IGNORES='append-10 bgw_db_scheduler chunk_adaptive ordered_append-10 parallel-10 transparent_decompression-10' PG_REGRESS_OPTS='--temp-instance=/tmp/pgdata'"

- if: (type = cron) OR (branch = prerelease_test)
stage: test
@@ -141,7 +141,7 @@ jobs:
# Now build with OpenSSL
- docker exec -it pgbuild /bin/sh -c "cd /build/debug && cmake .. -DCMAKE_BUILD_TYPE=Debug -DUSE_OPENSSL=true -DENABLE_CODECOVERAGE=TRUE -DPG_SOURCE_DIR=/postgres ${OTHER_CMAKE_FLAGS:-} && make install && chown -R postgres:postgres /build/debug/"
# Now run all tests
- ${RETRY_PREFIX} docker exec -u postgres -it pgbuild /bin/sh -c "make -k -C /build/debug installcheck IGNORES='append-11 bgw_db_scheduler chunk_adaptive ordered_append-11 transparent_decompression-11' PG_REGRESS_OPTS='--temp-instance=/tmp/pgdata'"
- ${RETRY_PREFIX} docker exec -u postgres -it pgbuild /bin/sh -c "make -k -C /build/debug installcheck IGNORES='append-11 bgw_db_scheduler chunk_adaptive ordered_append-11 parallel-11 transparent_decompression-11' PG_REGRESS_OPTS='--temp-instance=/tmp/pgdata'"

# This runs tests on ARM32 emulation
- if: branch = arm_test
24 changes: 22 additions & 2 deletions CHANGELOG.md
@@ -4,7 +4,14 @@
`psql` with the `-X` flag to prevent any `.psqlrc` commands from
accidentally triggering the load of a previous DB version.**
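
For reference, a minimal upgrade sketch following that note (the session handling and target database are up to the operator and purely illustrative here):

```sql
-- Start the session with `psql -X` so no .psqlrc command can implicitly
-- load the previous extension version, then in the target database run:
ALTER EXTENSION timescaledb UPDATE TO '1.6.0';
```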

## 1.6.0 (unreleased)
## 1.6.0 (2020-01-14)

This release adds major new features and bugfixes since the 1.5.1 release.
We deem it moderate priority for upgrading.

The major new feature in this release allows users to keep the aggregated
data in a continuous aggregate while dropping the raw data with drop_chunks.
This lets users save storage by keeping only the aggregates.
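
A minimal sketch of the new behavior, assuming a hypertable named `conditions` with a continuous aggregate defined on it (the names and the interval are illustrative):

```sql
-- Drop raw chunks older than two months but keep the already-materialized
-- rows in the continuous aggregate by not cascading to materializations.
SELECT drop_chunks(interval '2 months', 'conditions',
                   cascade_to_materializations => false);
```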

The semantics of the refresh_lag parameter for continuous aggregates have
been changed to be relative to the current timestamp instead of the maximum
@@ -20,18 +27,31 @@
for data older than 1 month from the current timestamp at modification time may
not cause the continuous aggregate to be updated. This limits the amount of work
that a backfill can trigger. By default, all invalidations are processed.
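
A sketch of how the new option might be applied, assuming it is set the same way as the other continuous aggregate options, via `ALTER VIEW` (the view name and threshold are illustrative):

```sql
-- Ignore invalidations (e.g. from backfill) that touch data more than
-- 30 days older than the current timestamp at modification time.
ALTER VIEW conditions_summary_hourly
  SET (timescaledb.ignore_invalidation_older_than = '30 days');
```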

**Major Features**
* #1589 Allow drop_chunks while keeping continuous aggregates

**Minor Features**
* #1568 Add ignore_invalidation_older_than option to continuous aggs
* #1575 Reorder group-by clause for continuous aggregates
* #1592 Improve continuous agg user messages

**Bugfixes**
* #1565 Fix partial select query for continuous aggregate
* #1591 Fix locf treat_null_as_missing option
* #1594 Fix error in compression constraint check
* #1603 Add join info to compressed chunk
* #1606 Fix constify params during runtime exclusion
* #1607 Delete compression policy when drop hypertable
* #1608 Add jobs to timescaledb_information.policy_stats
* #1609 Fix bug with parent table in decompression
* #1624 Fix drop_chunks for ApacheOnly
* #1632 Check for NULL before dereferencing variable

**Thanks**
* @optijon for reporting an issue with locf treat_null_as_missing option
* @acarrera42 for reporting an issue with constify params during runtime exclusion
* @ChristopherZellermann for reporting an issue with the compression constraint check
* @SimonDelamare for reporting an issue with joining hypertables with compression

## 1.5.1 (2019-11-12)

1 change: 1 addition & 0 deletions sql/CMakeLists.txt
@@ -100,6 +100,7 @@ set(MOD_FILES
updates/1.4.1--1.4.2.sql
updates/1.4.2--1.5.0.sql
updates/1.5.0--1.5.1.sql
updates/1.5.1--1.6.0.sql
)

set(MODULE_PATHNAME "$libdir/timescaledb-${PROJECT_VERSION_MOD}")
91 changes: 91 additions & 0 deletions sql/updates/1.5.1--1.6.0.sql
@@ -0,0 +1,91 @@
DO
$BODY$
DECLARE
hypertable_name TEXT;
BEGIN
SELECT first_dim.schema_name || '.' || first_dim.table_name
FROM _timescaledb_catalog.continuous_agg ca
INNER JOIN LATERAL (
SELECT dim.*, h.*
FROM _timescaledb_catalog.hypertable h
INNER JOIN _timescaledb_catalog.dimension dim ON (dim.hypertable_id = h.id)
WHERE ca.raw_hypertable_id = h.id
ORDER by dim.id ASC
LIMIT 1
) first_dim ON true
WHERE first_dim.column_type IN (REGTYPE 'int2', REGTYPE 'int4', REGTYPE 'int8')
AND (first_dim.integer_now_func_schema IS NULL OR first_dim.integer_now_func IS NULL)
INTO hypertable_name;

IF hypertable_name is not null AND (current_setting('timescaledb.ignore_update_errors', true) is null OR current_setting('timescaledb.ignore_update_errors', true) != 'on') THEN
RAISE 'The continuous aggregate on hypertable "%" will break unless an integer_now func is set using set_integer_now_func().', hypertable_name;
END IF;
END
$BODY$;


ALTER TABLE _timescaledb_catalog.continuous_agg
ADD COLUMN ignore_invalidation_older_than BIGINT NOT NULL DEFAULT BIGINT '9223372036854775807';
UPDATE _timescaledb_catalog.continuous_agg SET ignore_invalidation_older_than = BIGINT '9223372036854775807';

CLUSTER _timescaledb_catalog.continuous_agg USING continuous_agg_pkey;
ALTER TABLE _timescaledb_catalog.continuous_agg SET WITHOUT CLUSTER;

CREATE INDEX IF NOT EXISTS continuous_agg_raw_hypertable_id_idx
ON _timescaledb_catalog.continuous_agg(raw_hypertable_id);


--Add modification_time column
CREATE TABLE _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log_tmp AS SELECT * FROM _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log;
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log;
DROP TABLE _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log;
CREATE TABLE IF NOT EXISTS _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log
(
hypertable_id INTEGER NOT NULL,
modification_time BIGINT NOT NULL, --time at which the raw table was modified
lowest_modified_value BIGINT NOT NULL,
greatest_modified_value BIGINT NOT NULL
);
SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.continuous_aggs_hypertable_invalidation_log', '');
--modification_time == INT_MIN to cause these invalidations to be processed
INSERT INTO _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log
SELECT hypertable_id, BIGINT '-9223372036854775808', lowest_modified_value, greatest_modified_value
FROM _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log_tmp;
DROP TABLE _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log_tmp;
CREATE INDEX continuous_aggs_hypertable_invalidation_log_idx
ON _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log (hypertable_id, lowest_modified_value ASC);
GRANT SELECT ON _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log TO PUBLIC;

--Add modification_time column
CREATE TABLE _timescaledb_catalog.continuous_aggs_materialization_invalidation_log_tmp AS SELECT * FROM _timescaledb_catalog.continuous_aggs_materialization_invalidation_log;
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_catalog.continuous_aggs_materialization_invalidation_log;
DROP TABLE _timescaledb_catalog.continuous_aggs_materialization_invalidation_log;
CREATE TABLE IF NOT EXISTS _timescaledb_catalog.continuous_aggs_materialization_invalidation_log
(
materialization_id INTEGER
REFERENCES _timescaledb_catalog.continuous_agg (mat_hypertable_id)
ON DELETE CASCADE,
modification_time BIGINT NOT NULL, --time at which the raw table was modified
lowest_modified_value BIGINT NOT NULL,
greatest_modified_value BIGINT NOT NULL
);
SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.continuous_aggs_materialization_invalidation_log', '');
--modification_time == INT_MIN to cause these invalidations to be processed
INSERT INTO _timescaledb_catalog.continuous_aggs_materialization_invalidation_log
SELECT materialization_id, BIGINT '-9223372036854775808', lowest_modified_value, greatest_modified_value
FROM _timescaledb_catalog.continuous_aggs_materialization_invalidation_log_tmp;
DROP TABLE _timescaledb_catalog.continuous_aggs_materialization_invalidation_log_tmp;
CREATE INDEX continuous_aggs_materialization_invalidation_log_idx
ON _timescaledb_catalog.continuous_aggs_materialization_invalidation_log (materialization_id, lowest_modified_value ASC);
GRANT SELECT ON _timescaledb_catalog.continuous_aggs_materialization_invalidation_log TO PUBLIC;

ALTER TABLE _timescaledb_config.bgw_policy_drop_chunks ALTER COLUMN cascade_to_materializations DROP NOT NULL;

UPDATE _timescaledb_config.bgw_policy_drop_chunks SET cascade_to_materializations = NULL WHERE cascade_to_materializations = false;

ALTER TABLE _timescaledb_catalog.chunk ADD COLUMN dropped BOOLEAN DEFAULT false;
UPDATE _timescaledb_catalog.chunk SET dropped = false;
ALTER TABLE _timescaledb_catalog.chunk ALTER COLUMN dropped SET NOT NULL;

CLUSTER _timescaledb_catalog.chunk USING chunk_pkey;
ALTER TABLE _timescaledb_catalog.chunk SET WITHOUT CLUSTER;
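
If the guard at the top of this update script raises because an integer-time hypertable backing a continuous aggregate has no integer-now function, the remedy it points to is `set_integer_now_func()`. A hedged sketch of that fix (the hypertable name, function name, and function body are illustrative, assuming the time column is in Unix epoch seconds):

```sql
-- Provide a "current time" in the same integer units as the hypertable's
-- time column, then register it for the hypertable.
CREATE OR REPLACE FUNCTION unix_now() RETURNS BIGINT
    LANGUAGE SQL STABLE AS $$ SELECT extract(epoch FROM now())::BIGINT $$;

SELECT set_integer_now_func('conditions', 'unix_now');
```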
91 changes: 0 additions & 91 deletions sql/updates/latest-dev.sql
@@ -1,91 +0,0 @@
DO
$BODY$
DECLARE
hypertable_name TEXT;
BEGIN
SELECT first_dim.schema_name || '.' || first_dim.table_name
FROM _timescaledb_catalog.continuous_agg ca
INNER JOIN LATERAL (
SELECT dim.*, h.*
FROM _timescaledb_catalog.hypertable h
INNER JOIN _timescaledb_catalog.dimension dim ON (dim.hypertable_id = h.id)
WHERE ca.raw_hypertable_id = h.id
ORDER by dim.id ASC
LIMIT 1
) first_dim ON true
WHERE first_dim.column_type IN (REGTYPE 'int2', REGTYPE 'int4', REGTYPE 'int8')
AND (first_dim.integer_now_func_schema IS NULL OR first_dim.integer_now_func IS NULL)
INTO hypertable_name;

IF hypertable_name is not null AND (current_setting('timescaledb.ignore_update_errors', true) is null OR current_setting('timescaledb.ignore_update_errors', true) != 'on') THEN
RAISE 'The continuous aggregate on hypertable "%" will break unless an integer_now func is set using set_integer_now_func().', hypertable_name;
END IF;
END
$BODY$;


ALTER TABLE _timescaledb_catalog.continuous_agg
ADD COLUMN ignore_invalidation_older_than BIGINT NOT NULL DEFAULT BIGINT '9223372036854775807';
UPDATE _timescaledb_catalog.continuous_agg SET ignore_invalidation_older_than = BIGINT '9223372036854775807';

CLUSTER _timescaledb_catalog.continuous_agg USING continuous_agg_pkey;
ALTER TABLE _timescaledb_catalog.continuous_agg SET WITHOUT CLUSTER;

CREATE INDEX IF NOT EXISTS continuous_agg_raw_hypertable_id_idx
ON _timescaledb_catalog.continuous_agg(raw_hypertable_id);


--Add modification_time column
CREATE TABLE _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log_tmp AS SELECT * FROM _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log;
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log;
DROP TABLE _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log;
CREATE TABLE IF NOT EXISTS _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log
(
hypertable_id INTEGER NOT NULL,
modification_time BIGINT NOT NULL, --time at which the raw table was modified
lowest_modified_value BIGINT NOT NULL,
greatest_modified_value BIGINT NOT NULL
);
SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.continuous_aggs_hypertable_invalidation_log', '');
--modification_time == INT_MIN to cause these invalidations to be processed
INSERT INTO _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log
SELECT hypertable_id, BIGINT '-9223372036854775808', lowest_modified_value, greatest_modified_value
FROM _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log_tmp;
DROP TABLE _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log_tmp;
CREATE INDEX continuous_aggs_hypertable_invalidation_log_idx
ON _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log (hypertable_id, lowest_modified_value ASC);
GRANT SELECT ON _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log TO PUBLIC;

--Add modification_time column
CREATE TABLE _timescaledb_catalog.continuous_aggs_materialization_invalidation_log_tmp AS SELECT * FROM _timescaledb_catalog.continuous_aggs_materialization_invalidation_log;
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_catalog.continuous_aggs_materialization_invalidation_log;
DROP TABLE _timescaledb_catalog.continuous_aggs_materialization_invalidation_log;
CREATE TABLE IF NOT EXISTS _timescaledb_catalog.continuous_aggs_materialization_invalidation_log
(
materialization_id INTEGER
REFERENCES _timescaledb_catalog.continuous_agg (mat_hypertable_id)
ON DELETE CASCADE,
modification_time BIGINT NOT NULL, --time at which the raw table was modified
lowest_modified_value BIGINT NOT NULL,
greatest_modified_value BIGINT NOT NULL
);
SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.continuous_aggs_materialization_invalidation_log', '');
--modification_time == INT_MIN to cause these invalidations to be processed
INSERT INTO _timescaledb_catalog.continuous_aggs_materialization_invalidation_log
SELECT materialization_id, BIGINT '-9223372036854775808', lowest_modified_value, greatest_modified_value
FROM _timescaledb_catalog.continuous_aggs_materialization_invalidation_log_tmp;
DROP TABLE _timescaledb_catalog.continuous_aggs_materialization_invalidation_log_tmp;
CREATE INDEX continuous_aggs_materialization_invalidation_log_idx
ON _timescaledb_catalog.continuous_aggs_materialization_invalidation_log (materialization_id, lowest_modified_value ASC);
GRANT SELECT ON _timescaledb_catalog.continuous_aggs_materialization_invalidation_log TO PUBLIC;

ALTER TABLE _timescaledb_config.bgw_policy_drop_chunks ALTER COLUMN cascade_to_materializations DROP NOT NULL;

UPDATE _timescaledb_config.bgw_policy_drop_chunks SET cascade_to_materializations = NULL WHERE cascade_to_materializations = false;

ALTER TABLE _timescaledb_catalog.chunk ADD COLUMN dropped BOOLEAN DEFAULT false;
UPDATE _timescaledb_catalog.chunk SET dropped = false;
ALTER TABLE _timescaledb_catalog.chunk ALTER COLUMN dropped SET NOT NULL;

CLUSTER _timescaledb_catalog.chunk USING chunk_pkey;
ALTER TABLE _timescaledb_catalog.chunk SET WITHOUT CLUSTER;
4 changes: 2 additions & 2 deletions version.config
@@ -1,2 +1,2 @@
version = 1.6.0-dev
update_from_version = 1.5.1
version = 1.7.0-dev
update_from_version = 1.6.0