Release 1.7.5 #2935

Merged
merged 1 commit into from
Feb 12, 2021
Merged
38 changes: 38 additions & 0 deletions CHANGELOG.md
@@ -16,6 +16,44 @@ accidentally triggering the load of a previous DB version.**
**Thanks**
* @zeeshanshabbir93 for reporting an issue with joins

## 1.7.5 (2021-02-12)

This maintenance release contains bugfixes since the 1.7.4 release.
Most of these fixes were backported from the 2.0.0 and 2.0.1 releases.
We deem it high priority for upgrading for users on TimescaleDB 1.7.4
or previous versions.

Suggested change (Contributor):
-We deem it high priority for upgrading for users on TimescaleDB 1.7.4
+We deem it high priority to upgrade for users on TimescaleDB 1.7.4

In particular, the fixes contained in this maintenance release address
issues in continuous aggregates, compression, JOINs with hypertables,
and when upgrading from previous versions.

**Bugfixes**
* #2502 Replace check function when updating
* #2558 Repair dimension slice table on update
* #2619 Fix segfault in decompress_chunk for chunks with dropped
columns
* #2664 Fix support for complex aggregate expression
* #2800 Lock dimension slices when creating new chunk
* #2860 Fix projection in ChunkAppend nodes
* #2865 Apply volatile function quals at decompresschunk
* #2851 Fix nested loop joins that involve compressed chunks
* #2868 Fix corruption in gapfill plan
* #2883 Fix join qual propagation for nested joins
* #2885 Fix compressed chunk check when disabling compression
* #2920 Fix repair in update scripts

**Thanks**
* @akamensky for reporting several issues including segfaults after
version update
* @alex88 for reporting an issue with joined hypertables
* @dhodyn for reporting an issue when joining compressed chunks
* @diego-hermida for reporting an issue with disabling compression
* @Netskeh for reporting a bug with time_bucket in continuous
  aggregates
* @WarriorOfWire for reporting the bug with gapfill queries not being
able to find pathkey item to sort
* @zeeshanshabbir93 for reporting an issue with joins

## 2.0.1 (2021-01-28)

This maintenance release contains bugfixes since the 2.0.0 release.
3 changes: 2 additions & 1 deletion sql/CMakeLists.txt
@@ -98,7 +98,8 @@ set(MOD_FILES
updates/1.7.1--1.7.2.sql
updates/1.7.2--1.7.3.sql
updates/1.7.3--1.7.4.sql
-updates/1.7.4--2.0.0-rc1.sql
+updates/1.7.4--1.7.5.sql
+updates/1.7.5--2.0.0-rc1.sql
Review question (Contributor): Do we know that the 1.7.4--2.0.0-rc1
script works without modification for 1.7.5?

updates/2.0.0-rc1--2.0.0-rc2.sql
updates/2.0.0-rc2--2.0.0-rc3.sql
updates/2.0.0-rc3--2.0.0-rc4.sql
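
For context, these MOD_FILES entries form the chain of update scripts
the extension walks between versions. A minimal sketch of the new
1.7.4 to 1.7.5 step from a user's perspective (a hypothetical session;
it assumes the 1.7.5 shared library and update scripts are installed):

```sql
-- Upgrade the extension one step along the chain added by this PR.
ALTER EXTENSION timescaledb UPDATE TO '1.7.5';

-- Verify which version is now installed.
SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';
```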
81 changes: 81 additions & 0 deletions sql/updates/1.7.4--1.7.5.sql
@@ -0,0 +1,81 @@
-- Recreate dimension slices that might be missing. If the
-- dimension slice table is broken and there are dimension slices
-- missing from the table, we will repair it by:
--
-- 1. Finding all chunk constraints that have missing dimension
-- slices and extract the constraint expression from the
-- associated constraint.
--
-- 2. Parse the constraint expression and extract the column name,
-- and upper and lower range values as text or, if it is a
-- partition constraint, pick the existing constraint (either
-- upper or lower end of range) and make the other end open.
--
-- 3. Use the column type to construct the range values (UNIX
-- microseconds) from these strings.
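--
-- As a hypothetical illustration (not part of the original script):
-- for a timestamptz column "ts", a chunk constraint expression might
-- look like
--
--   ((ts >= '2021-01-01 00:00:00+00'::timestamptz) AND
--    (ts < '2021-01-08 00:00:00+00'::timestamptz))
--
-- Step 2 would extract '2021-01-01 00:00:00+00' as the range start
-- and '2021-01-08 00:00:00+00' as the range end; step 3 would convert
-- them to 1609459200000000 and 1610064000000000 UNIX microseconds.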
INSERT INTO _timescaledb_catalog.dimension_slice
WITH
-- All dimension slices that are mentioned in the chunk_constraint
-- table but are missing from the dimension_slice table.
missing_slices AS (
SELECT hypertable_id,
chunk_id,
dimension_slice_id,
constraint_name,
attname AS column_name,
pg_get_expr(conbin, conrelid) AS constraint_expr
FROM _timescaledb_catalog.chunk_constraint cc
JOIN _timescaledb_catalog.chunk ch ON cc.chunk_id = ch.id
JOIN pg_constraint ON conname = constraint_name
JOIN pg_namespace ns ON connamespace = ns.oid AND ns.nspname = ch.schema_name
JOIN pg_attribute ON attnum = conkey[1] AND attrelid = conrelid
WHERE
dimension_slice_id NOT IN (SELECT id FROM _timescaledb_catalog.dimension_slice)
),

-- Unparsed range start and end for each dimension slice id that
-- is missing.
unparsed_missing_slices AS (
SELECT di.id AS dimension_id,
dimension_slice_id,
constraint_name,
column_type,
column_name,
(SELECT SUBSTRING(constraint_expr, $$>=\s*'?([\w\d\s:+-]+)'?$$)) AS range_start,
(SELECT SUBSTRING(constraint_expr, $$<\s*'?([\w\d\s:+-]+)'?$$)) AS range_end
FROM missing_slices JOIN _timescaledb_catalog.dimension di USING (hypertable_id, column_name)
)
SELECT DISTINCT
dimension_slice_id,
dimension_id,
CASE
WHEN column_type = 'timestamptz'::regtype THEN
EXTRACT(EPOCH FROM range_start::timestamptz)::bigint * 1000000
WHEN column_type = 'timestamp'::regtype THEN
EXTRACT(EPOCH FROM range_start::timestamp)::bigint * 1000000
WHEN column_type = 'date'::regtype THEN
EXTRACT(EPOCH FROM range_start::date)::bigint * 1000000
ELSE
CASE
WHEN range_start IS NULL
THEN (-9223372036854775808)::bigint
ELSE range_start::bigint
END
END AS range_start,
CASE
WHEN column_type = 'timestamptz'::regtype THEN
EXTRACT(EPOCH FROM range_end::timestamptz)::bigint * 1000000
WHEN column_type = 'timestamp'::regtype THEN
EXTRACT(EPOCH FROM range_end::timestamp)::bigint * 1000000
WHEN column_type = 'date'::regtype THEN
EXTRACT(EPOCH FROM range_end::date)::bigint * 1000000
ELSE
CASE WHEN range_end IS NULL
THEN 9223372036854775807::bigint
ELSE range_end::bigint
END
END AS range_end
FROM unparsed_missing_slices;
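
-- Illustrative sanity check (an assumption, not part of the original
-- script): after the repair, no chunk constraint should reference a
-- dimension slice id that is missing from the dimension_slice table.
--
--   SELECT count(*)
--     FROM _timescaledb_catalog.chunk_constraint
--    WHERE dimension_slice_id IS NOT NULL
--      AND dimension_slice_id NOT IN
--          (SELECT id FROM _timescaledb_catalog.dimension_slice);
--
-- Expected result after a successful repair: 0.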

-- set compressed_chunk_id to NULL for dropped chunks
UPDATE _timescaledb_catalog.chunk SET compressed_chunk_id = NULL WHERE dropped = true AND compressed_chunk_id IS NOT NULL;
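
-- Similarly, an illustrative check (not part of the original script)
-- that the UPDATE above took effect:
--
--   SELECT count(*) FROM _timescaledb_catalog.chunk
--    WHERE dropped AND compressed_chunk_id IS NOT NULL;  -- expect 0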
File renamed without changes.