2.0.2 cherry pick #2960

Merged: 19 commits, Feb 19, 2021
6 changes: 3 additions & 3 deletions .github/workflows/alpine-32bit-build-and-test.yaml
@@ -18,12 +18,12 @@ jobs:
strategy:
fail-fast: false
matrix:
pg: [ "11.10", "12.5" ]
pg: [ "11.11", "12.6" ]
build_type: [ Debug ]
include:
- pg: 11.10
- pg: 11.11
ignores: append-11 chunk_adaptive-11 continuous_aggs_bgw_drop_chunks remote_txn transparent_decompression-11 continuous_aggs_insert continuous_aggs_multi continuous_aggs_concurrent_refresh
- pg: 12.5
- pg: 12.6
ignores: append-12 chunk_adaptive-12 continuous_aggs_bgw_drop_chunks remote_txn transparent_decompression-12

steps:
4 changes: 2 additions & 2 deletions .github/workflows/coverity.yaml
@@ -13,7 +13,7 @@ jobs:
strategy:
fail-fast: false
matrix:
pg: ["11.10", "12.5"]
pg: ["11.11", "12.6"]
os: [ubuntu-18.04]
env:
PG_SRC_DIR: pgbuild
@@ -55,7 +55,7 @@ jobs:
- name: Build TimescaleDB
run: |
PATH="$GITHUB_WORKSPACE/coverity/bin:$PATH"
./bootstrap -DCMAKE_BUILD_TYPE=Release -DPG_SOURCE_DIR=~/$PG_SRC_DIR -DPG_PATH=~/$PG_INSTALL_DIR -DWARNINGS_AS_ERRORS=OFF
./bootstrap -DCMAKE_BUILD_TYPE=Release -DPG_SOURCE_DIR=~/$PG_SRC_DIR -DPG_PATH=~/$PG_INSTALL_DIR
cov-build --dir cov-int make -C build

- name: Upload report
2 changes: 1 addition & 1 deletion .github/workflows/cron-tests.yaml
@@ -46,7 +46,7 @@ jobs:
strategy:
fail-fast: false
env:
PG_VERSION: 12.5
PG_VERSION: 12.6

steps:
- name: Checkout TimescaleDB
6 changes: 3 additions & 3 deletions .github/workflows/update-test.yaml
@@ -13,12 +13,12 @@ jobs:
runs-on: 'ubuntu-18.04'
strategy:
matrix:
pg: ["11.10","12.5"]
pg: ["11.11","12.6"]
opt: ["", "-r"]
include:
- pg: 11.10
- pg: 11.11
pg_major: 11
- pg: 12.5
- pg: 12.6
pg_major: 12
- opt: "-r"
kind: "with repair test"
54 changes: 54 additions & 0 deletions CHANGELOG.md
@@ -4,6 +4,60 @@
`psql` with the `-X` flag to prevent any `.psqlrc` commands from
accidentally triggering the load of a previous DB version.**
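As a hedged illustration of that note, an upgrade session would be opened with the `-X` flag so `.psqlrc` is skipped; the database name below is a placeholder:

```sh
# Hypothetical upgrade invocation: -X prevents .psqlrc commands from loading
# the previous extension version before the update statement runs.
psql -X -d my_database -c "ALTER EXTENSION timescaledb UPDATE;"
```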

## Unreleased

**Minor features**
* #2736 Support adding columns to hypertables with compression enabled
* #2926 Optimize cagg refresh for small invalidations

**Bugfixes**
* #2883 Fix join qual propagation for nested joins
* #2908 Fix changing column type of clustered hypertables
* #2942 Validate continuous aggregate policy

Review comment from a contributor on lines +13 to +17: Shouldn't you list all issues that are cherry-picked? I'm missing issue #2831 and issue #2770 (or PR #2893)

**Thanks**
* @zeeshanshabbir93 for reporting an issue with joins
* @Antiarchitect for reporting the issue with slow refreshes of
continuous aggregates.

## 1.7.5 (2021-02-12)

This maintenance release contains bugfixes since the 1.7.4 release.
Most of these fixes were backported from the 2.0.0 and 2.0.1 releases.
We deem it high priority for users on TimescaleDB 1.7.4 or previous
versions to upgrade.

In particular, the fixes contained in this maintenance release address
issues in continuous aggregates, compression, JOINs with hypertables,
and upgrades from previous versions.

**Bugfixes**
* #2502 Replace check function when updating
* #2558 Repair dimension slice table on update
* #2619 Fix segfault in decompress_chunk for chunks with dropped
columns
* #2664 Fix support for complex aggregate expression
* #2800 Lock dimension slices when creating new chunk
* #2860 Fix projection in ChunkAppend nodes
* #2865 Apply volatile function quals at decompresschunk
* #2851 Fix nested loop joins that involve compressed chunks
* #2868 Fix corruption in gapfill plan
* #2883 Fix join qual propagation for nested joins
* #2885 Fix compressed chunk check when disabling compression
* #2920 Fix repair in update scripts

**Thanks**
* @akamensky for reporting several issues including segfaults after
version update
* @alex88 for reporting an issue with joined hypertables
* @dhodyn for reporting an issue when joining compressed chunks
* @diego-hermida for reporting an issue with disabling compression
* @Netskeh for reporting bug on time_bucket problem in continuous
aggregates
* @WarriorOfWire for reporting the bug with gapfill queries not being
able to find pathkey item to sort
* @zeeshanshabbir93 for reporting an issue with joins

## 2.0.1 (2021-01-28)

This maintenance release contains bugfixes since the 2.0.0 release.
26 changes: 23 additions & 3 deletions CMakeLists.txt
@@ -90,8 +90,15 @@ if (NOT CMAKE_C_COMPILER_ID IN_LIST SUPPORTED_COMPILERS)
message(FATAL_ERROR "Unsupported compiler ${CMAKE_C_COMPILER_ID}. Supported compilers are: ${SUPPORTED_COMPILERS}")
endif ()

# Option to treat warnings as errors when compiling (default on)
option(WARNINGS_AS_ERRORS "Make compiler warnings into errors (default ON)" ON)
# Option to treat warnings as errors when compiling (default on for
# debug builds, off for all other build types)
if (CMAKE_BUILD_TYPE STREQUAL Debug)
message(STATUS "CMAKE_BUILD_TYPE matches Debug")
option(WARNINGS_AS_ERRORS "Make compiler warnings into errors (default ON)" ON)
else()
message(STATUS "CMAKE_BUILD_TYPE does not match Debug")
option(WARNINGS_AS_ERRORS "Make compiler warnings into errors (default ON)" OFF)
endif()
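Either default can still be overridden explicitly at configure time. A minimal sketch, mirroring the `./bootstrap -D...` style used in the workflows above (the exact flag combinations are illustrative):

```sh
# Keep warnings fatal for a Release build (which now defaults to OFF)...
./bootstrap -DCMAKE_BUILD_TYPE=Release -DWARNINGS_AS_ERRORS=ON
# ...or relax them for a Debug build (which now defaults to ON).
./bootstrap -DCMAKE_BUILD_TYPE=Debug -DWARNINGS_AS_ERRORS=OFF
```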

if (WARNINGS_AS_ERRORS)
if (CMAKE_C_COMPILER_ID MATCHES "GNU|Clang|AppleClang")
@@ -116,7 +123,8 @@ if(CMAKE_C_COMPILER_ID MATCHES "GNU|AppleClang|Clang")
# These flags are supported on all compilers.
add_compile_options(
-Wempty-body -Wvla -Wall -Wmissing-prototypes -Wpointer-arith
-Werror=vla -Wendif-labels -fno-strict-aliasing -fno-omit-frame-pointer)
-Werror=vla -Wendif-labels
-fno-strict-aliasing -fno-omit-frame-pointer)

# These flags are just supported on some of the compilers, so we
# check them before adding them.
@@ -127,13 +135,25 @@ if(CMAKE_C_COMPILER_ID MATCHES "GNU|AppleClang|Clang")
message(STATUS "Compiler does not support -Wno-format-truncation")
endif()

check_c_compiler_flag(-Wstringop-truncation CC_STRINGOP_TRUNCATION)
if(CC_STRINGOP_TRUNCATION)
add_compile_options(-Wno-stringop-truncation)
else()
message(STATUS "Compiler does not support -Wno-stringop-truncation")
endif()

check_c_compiler_flag(-Wimplicit-fallthrough CC_SUPPORTS_IMPLICIT_FALLTHROUGH)
if(CC_SUPPORTS_IMPLICIT_FALLTHROUGH)
add_compile_options( -Wimplicit-fallthrough)
else()
message(STATUS "Compiler does not support -Wimplicit-fallthrough")
endif()

# strict overflow check produces false positives on gcc < 8
if (CMAKE_COMPILER_IS_GNUCC AND CMAKE_C_COMPILER_VERSION VERSION_LESS 8)
add_compile_options(-Wno-strict-overflow)
endif()

# On UNIX, the compiler needs to support -fvisibility=hidden to hide symbols by default
check_c_compiler_flag(-fvisibility=hidden CC_SUPPORTS_VISIBILITY_HIDDEN)

2 changes: 1 addition & 1 deletion appveyor.yml
@@ -191,7 +191,7 @@ test_script:

#right now we only run timescale regression tests, others will be set up later

docker exec -e IGNORES="chunk_utils loader" -e TEST_TABLESPACE1_PREFIX="C:\Users\$env:UserName\Documents\tablespace1\" -e TEST_TABLESPACE2_PREFIX="C:\Users\$env:UserName\Documents\tablespace2\" -e TEST_SPINWAIT_ITERS=10000 -e USER=postgres -it pgregress /bin/bash -c "cd /timescaledb/build && make regresschecklocal"
docker exec -e IGNORES="chunk_utils cluster loader" -e TEST_TABLESPACE1_PREFIX="C:\Users\$env:UserName\Documents\tablespace1\" -e TEST_TABLESPACE2_PREFIX="C:\Users\$env:UserName\Documents\tablespace2\" -e TEST_SPINWAIT_ITERS=10000 -e USER=postgres -it pgregress /bin/bash -c "cd /timescaledb/build && make regresschecklocal"

$TESTS1 = $?

30 changes: 22 additions & 8 deletions scripts/gh_matrix_builder.py
@@ -21,9 +21,9 @@
event_type = sys.argv[1]

PG11_EARLIEST = "11.0"
PG11_LATEST = "11.10"
PG11_LATEST = "11.11"
PG12_EARLIEST = "12.0"
PG12_LATEST = "12.5"
PG12_LATEST = "12.6"

m = {"include": [],}

@@ -33,12 +33,18 @@
def build_debug_config(overrides):
# llvm version and clang versions must match otherwise
# there will be build errors this is true even when compiling
# with gcc as clang is used to compile the llvm parts
# with gcc as clang is used to compile the llvm parts.
#
# Strictly speaking, WARNINGS_AS_ERRORS=ON is not needed here, but
# we add it as a precaution. The intention is to have at least one
# release and one debug build with WARNINGS_AS_ERRORS=ON so that we
# capture warnings generated due to changes in the code base or the
# compiler.
base_config = dict({
"name": "Debug",
"build_type": "Debug",
"pg_build_args": "--enable-debug --enable-cassert",
"tsdb_build_args": "-DCODECOVERAGE=ON",
"tsdb_build_args": "-DCODECOVERAGE=ON -DWARNINGS_AS_ERRORS=ON",
"installcheck_args": "IGNORES='bgw_db_scheduler'",
"coverage": True,
"llvm_config": "llvm-config-9",
Expand All @@ -50,13 +56,17 @@ def build_debug_config(overrides):
base_config.update(overrides)
return base_config

# We build this release configuration with WARNINGS_AS_ERRORS=ON to
# make sure that we can build with -Werror even for release
# builds. This will capture some cases where warnings are generated
# for release builds but not for debug builds.
def build_release_config(overrides):
base_config = build_debug_config({})
release_config = dict({
"name": "Release",
"build_type": "Release",
"pg_build_args": "",
"tsdb_build_args": "-DWARNINGS_AS_ERRORS=OFF",
"tsdb_build_args": "-DWARNINGS_AS_ERRORS=ON",
"coverage": False,
})
base_config.update(release_config)
@@ -68,7 +78,7 @@ def build_apache_config(overrides):
apache_config = dict({
"name": "ApacheOnly",
"build_type": "Release",
"tsdb_build_args": "-DAPACHE_ONLY=1 -DWARNINGS_AS_ERRORS=OFF",
"tsdb_build_args": "-DAPACHE_ONLY=1",
"pg_build_args": "",
"coverage": False,
})
Expand Down Expand Up @@ -111,12 +121,16 @@ def macos_config(overrides):
"llvm_config": "/usr/bin/llvm-config-8",
"clang": "clang-8",
"extra_packages": "llvm-8 llvm-8-dev llvm-8-tools",
"installcheck_args": "IGNORES='continuous_aggs_insert continuous_aggs_multi continuous_aggs_concurrent_refresh'"
"installcheck_args": "IGNORES='cluster continuous_aggs_insert continuous_aggs_multi continuous_aggs_concurrent_refresh'"
}
m["include"].append(build_debug_config(pg11_debug_earliest))

# add debug test for first supported PG12 version
m["include"].append(build_debug_config({"pg":PG12_EARLIEST}))
pg12_debug_earliest = {
"pg": PG12_EARLIEST,
"installcheck_args": "IGNORES='cluster'"
}
m["include"].append(build_debug_config(pg12_debug_earliest))

# add debug test for MacOS
m["include"].append(build_debug_config(macos_config({})))
6 changes: 3 additions & 3 deletions scripts/test_sanitizers.sh
@@ -46,8 +46,8 @@ cleanup() {
else
# docker logs timescaledb-san
# only print respective postmaster.log when regression.diffs exists
docker_exec timescaledb-san "cat /tsdb_build/timescaledb/build/test/regression.diffs && cat /tsdb_build/timescaledb/build/test/log/postmaster.log"
docker_exec timescaledb-san "cat /tsdb_build/timescaledb/build/tsl/test/regression.diffs && cat /tsdb_build/timescaledb/build/tsl/test/log/postmaster.log"
docker_exec timescaledb-san "if [ -f /tsdb_build/timescaledb/build/test/regression.diffs ]; then cat /tsdb_build/timescaledb/build/test/regression.diffs /tsdb_build/timescaledb/build/test/log/postmaster.log; fi"
docker_exec timescaledb-san "if [ -f /tsdb_build/timescaledb/build/tsl/test/regression.diffs ]; then cat /tsdb_build/timescaledb/build/tsl/test/regression.diffs /tsdb_build/timescaledb/build/tsl/test/log/postmaster.log; fi"
fi

echo "Exit status is $status"
@@ -101,5 +101,5 @@ echo "Testing"
# postmaster.c is not atomic but read/written across signal handlers
# and ServerLoop.
docker exec -i -u postgres -w /tsdb_build/timescaledb/build timescaledb-san /bin/bash <<EOF
make -k regresscheck regresscheck-t SKIPS='remote_txn' IGNORES='bgw_db_scheduler bgw_launcher continuous_aggs_ddl-11'
make -k regresscheck regresscheck-t SKIPS='remote_txn' IGNORES='bgw_db_scheduler bgw_launcher cluster continuous_aggs_ddl-11'
EOF
2 changes: 1 addition & 1 deletion scripts/test_updates_pg11.sh
@@ -32,7 +32,7 @@ if [ $EXIT_CODE -ne 0 ]; then
exit $EXIT_CODE
fi

TAGS="1.7.0-pg11 1.7.1-pg11 1.7.2-pg11 1.7.3-pg11 1.7.4-pg11 2.0.0-rc1-pg11 2.0.0-rc2-pg11 2.0.0-rc3-pg11 2.0.0-rc4-pg11 2.0.0-pg11"
TAGS="1.7.0-pg11 1.7.1-pg11 1.7.2-pg11 1.7.3-pg11 1.7.4-pg11 1.7.5-pg11 2.0.0-rc1-pg11 2.0.0-rc2-pg11 2.0.0-rc3-pg11 2.0.0-rc4-pg11 2.0.0-pg11 2.0.1-pg11"
TEST_VERSION="v6"

TAGS=$TAGS TEST_VERSION=$TEST_VERSION bash ${SCRIPT_DIR}/test_updates.sh "$@"
2 changes: 1 addition & 1 deletion scripts/test_updates_pg12.sh
@@ -6,7 +6,7 @@ set -o pipefail
SCRIPT_DIR=$(dirname $0)
echo $SCRIPT_DIR

TAGS="1.7.0-pg12 1.7.1-pg12 1.7.2-pg12 1.7.3-pg12 1.7.4-pg12 2.0.0-rc1-pg12 2.0.0-rc2-pg12 2.0.0-rc3-pg12 2.0.0-rc4-pg12 2.0.0-pg12"
TAGS="1.7.0-pg12 1.7.1-pg12 1.7.2-pg12 1.7.3-pg12 1.7.4-pg12 1.7.5-pg12 2.0.0-rc1-pg12 2.0.0-rc2-pg12 2.0.0-rc3-pg12 2.0.0-rc4-pg12 2.0.0-pg12 2.0.1-pg12"
TEST_VERSION="v6"

TAGS=$TAGS TEST_VERSION=$TEST_VERSION bash ${SCRIPT_DIR}/test_updates.sh "$@"
3 changes: 2 additions & 1 deletion sql/CMakeLists.txt
@@ -98,7 +98,8 @@ set(MOD_FILES
updates/1.7.1--1.7.2.sql
updates/1.7.2--1.7.3.sql
updates/1.7.3--1.7.4.sql
updates/1.7.4--2.0.0-rc1.sql
updates/1.7.4--1.7.5.sql
updates/1.7.5--2.0.0-rc1.sql
updates/2.0.0-rc1--2.0.0-rc2.sql
updates/2.0.0-rc2--2.0.0-rc3.sql
updates/2.0.0-rc3--2.0.0-rc4.sql
81 changes: 81 additions & 0 deletions sql/updates/1.7.4--1.7.5.sql
@@ -0,0 +1,81 @@
-- Recreate dimension slices that might be missing. If the
-- dimension slice table is broken and there are dimension slices
-- missing from the table, we will repair it by:
--
-- 1. Finding all chunk constraints that have missing dimension
-- slices and extract the constraint expression from the
-- associated constraint.
--
-- 2. Parse the constraint expression and extract the column name,
-- and upper and lower range values as text or, if it is a
-- partition constraint, pick the existing constraint (either
-- upper or lower end of range) and make the other end open.
--
-- 3. Use the column type to construct the range values (UNIX
-- microseconds) from these strings.
INSERT INTO _timescaledb_catalog.dimension_slice
WITH
-- All dimension slices that are mentioned in the chunk_constraint
-- table but are missing from the dimension_slice table.
missing_slices AS (
SELECT hypertable_id,
chunk_id,
dimension_slice_id,
constraint_name,
attname AS column_name,
pg_get_expr(conbin, conrelid) AS constraint_expr
FROM _timescaledb_catalog.chunk_constraint cc
JOIN _timescaledb_catalog.chunk ch ON cc.chunk_id = ch.id
JOIN pg_constraint ON conname = constraint_name
JOIN pg_namespace ns ON connamespace = ns.oid AND ns.nspname = ch.schema_name
JOIN pg_attribute ON attnum = conkey[1] AND attrelid = conrelid
WHERE
dimension_slice_id NOT IN (SELECT id FROM _timescaledb_catalog.dimension_slice)
),

-- Unparsed range start and end for each dimension slice id that
-- is missing.
unparsed_missing_slices AS (
SELECT di.id AS dimension_id,
dimension_slice_id,
constraint_name,
column_type,
column_name,
(SELECT SUBSTRING(constraint_expr, $$>=\s*'?([\w\d\s:+-]+)'?$$)) AS range_start,
(SELECT SUBSTRING(constraint_expr, $$<\s*'?([\w\d\s:+-]+)'?$$)) AS range_end
FROM missing_slices JOIN _timescaledb_catalog.dimension di USING (hypertable_id, column_name)
)
SELECT DISTINCT
dimension_slice_id,
dimension_id,
CASE
WHEN column_type = 'timestamptz'::regtype THEN
EXTRACT(EPOCH FROM range_start::timestamptz)::bigint * 1000000
WHEN column_type = 'timestamp'::regtype THEN
EXTRACT(EPOCH FROM range_start::timestamp)::bigint * 1000000
WHEN column_type = 'date'::regtype THEN
EXTRACT(EPOCH FROM range_start::date)::bigint * 1000000
ELSE
CASE
WHEN range_start IS NULL
THEN (-9223372036854775808)::bigint
ELSE range_start::bigint
END
END AS range_start,
CASE
WHEN column_type = 'timestamptz'::regtype THEN
EXTRACT(EPOCH FROM range_end::timestamptz)::bigint * 1000000
WHEN column_type = 'timestamp'::regtype THEN
EXTRACT(EPOCH FROM range_end::timestamp)::bigint * 1000000
WHEN column_type = 'date'::regtype THEN
EXTRACT(EPOCH FROM range_end::date)::bigint * 1000000
ELSE
CASE WHEN range_end IS NULL
THEN 9223372036854775807::bigint
ELSE range_end::bigint
END
END AS range_end
FROM unparsed_missing_slices;

-- set compressed_chunk_id to NULL for dropped chunks
UPDATE _timescaledb_catalog.chunk SET compressed_chunk_id = NULL WHERE dropped = true AND compressed_chunk_id IS NOT NULL;
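As a hedged sanity check after this script runs (the database name is a placeholder), the query below should return no rows once every chunk constraint references an existing dimension slice; it reuses the same condition as the missing_slices CTE above:

```sh
# Hypothetical post-update check: list chunk constraints whose dimension slice
# is still absent from the catalog; an empty result means the repair succeeded.
psql -X -d my_database -c "
  SELECT chunk_id, dimension_slice_id, constraint_name
    FROM _timescaledb_catalog.chunk_constraint
   WHERE dimension_slice_id IS NOT NULL
     AND dimension_slice_id NOT IN (SELECT id FROM _timescaledb_catalog.dimension_slice);"
```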
File renamed without changes.
4 changes: 4 additions & 0 deletions sql/updates/latest-dev.sql
@@ -0,0 +1,4 @@

-- set compressed_chunk_id to NULL for dropped chunks
UPDATE _timescaledb_catalog.chunk SET compressed_chunk_id = NULL WHERE dropped = true AND compressed_chunk_id IS NOT NULL;
