2.0.2 cherry pick #2960
Conversation
The check for existence of compressed chunks when disabling compression would not ignore dropped chunks, making it impossible to disable compression on hypertables with continuous aggregates that had dropped chunks. This patch ignores dropped chunks in this check and also sets compressed_chunk_id to NULL in the metadata for deleted chunks.
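A minimal sketch of the scenario this addresses, with hypothetical names: a compressed hypertable `conditions` that has a continuous aggregate and whose retention policy has already dropped some raw chunks (table name and interval are illustrative only):

```sql
-- Illustrative setup: compression plus a retention policy that has
-- already dropped chunks on the hypertable.
ALTER TABLE conditions SET (timescaledb.compress = true);
SELECT add_retention_policy('conditions', INTERVAL '30 days');

-- Before this fix, turning compression off could fail because dropped
-- chunks were still counted as having compressed data; the check now
-- ignores dropped chunks.
ALTER TABLE conditions SET (timescaledb.compress = false);
```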
The status of background workers is recorded in PostgreSQL using the function `pgstat_report_activity`, which allows the status of a background worker to be inspected through the `state` field in `pg_stat_activity`. This is used by the TimescaleDB `job_stats` view to show whether a job is running, scheduled, or paused. However, the activity was never recorded in the code, so the field was always NULL. Since this is the default for the `job_status` field of `timescaledb_information.job_stats`, it would always show as `Scheduled`. This commit fixes this by adding calls to `pgstat_report_activity` at suitable locations. Fixes timescale#2831
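For reference, a hedged illustration of how the symptom showed up at the SQL level (the view and columns come from the `timescaledb_information` schema; the jobs listed are whatever policies exist on the instance):

```sql
-- Before the fix, a job that was actually executing still reported
-- job_status = 'Scheduled' here, because the background worker never
-- reported its activity to PostgreSQL.
SELECT job_id, last_run_status, job_status
FROM timescaledb_information.job_stats;
```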
This patch adds an `enable_qual_propagation` GUC to control propagation of JOIN quals. Since there have been a few instances where JOIN qual propagation was too aggressive, this GUC can be used to disable qual propagation.
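As a usage sketch (assuming the GUC lives under the usual `timescaledb.` prefix), it can be toggled per session like any other setting:

```sql
-- Turn off JOIN qual propagation for the current session if it is being
-- applied too aggressively; 'on' restores the default behavior.
SET timescaledb.enable_qual_propagation = 'off';
```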
Codecov Report
Coverage diff for #2960 against the 2.0.x base:

| | 2.0.x | #2960 | +/- |
| --- | --- | --- | --- |
| Coverage | 90.14% | 90.26% | +0.12% |
| Files | 212 | 212 | |
| Lines | 34678 | 34784 | +106 |
| Hits | 31260 | 31398 | +138 |
| Misses | 3418 | 3386 | -32 |
Continue to review full report at Codecov.
Needs #2957 for the sanitizer test to pass.
Force-pushed from 8520f46 to eed6cf5.
LGTM
**Bugfixes**
* #2883 Fix join qual propagation for nested joins
* #2908 Fix changing column type of clustered hypertables
* #2942 Validate continuous aggregate policy
When changing the type of a column that is part of an index being clustered on, with either PostgreSQL `CLUSTER` or `reorder`, the alter type operation would fail with a segfault because it could not look up the index.
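A small sketch of the previously failing sequence (the hypertable, index, and column names are hypothetical):

```sql
-- Cluster a hypertable on one of its indexes, then change the type of a
-- column covered by that index; before this fix the ALTER could segfault.
CLUSTER conditions USING conditions_device_id_time_idx;
ALTER TABLE conditions ALTER COLUMN device_id TYPE bigint;
```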
When adding the bugfix for clustered hypertables, a typo was introduced that made the PG 12.0 test run with PG 11 settings.
Join propagation would propagate join quals for nested joins even when it was not safe to do so, leading to wrong query results.
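To illustrate what qual propagation does (the tables and predicate below are made up), a restriction on one side of an equality join can be copied to the other side; for nested joins this copy was sometimes applied when it was not safe:

```sql
-- The restriction on m1.time can be propagated to m2.time through the
-- equality join condition, allowing chunk exclusion on both hypertables.
-- With nested joins this propagation could previously change results.
SELECT *
FROM metrics m1
JOIN metrics_summary m2 ON m1.time = m2.time
WHERE m1.time > now() - INTERVAL '1 day';
```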
In contrast to the default for CMake projects, `Release` is used as the default build type in `CMakeLists.txt`, which causes the `bootstrap` script to do a release build with development flags, in particular `-Werror`. Since warnings are triggered in a release build, this causes the build to fail while a debug build works fine. This commit fixes this by removing the `-Werror` flag (by setting `WARNINGS_AS_ERRORS` to `OFF`) for anything that is not a debug build and also disables the warnings that (currently) trigger in a release build. The commit also changes some of the GitHub workflows to run without `WARNINGS_AS_ERRORS`, since the build should always work without this option regardless of build type (on release builds it should be disabled, on debug builds it should be enabled). It is still set to `ON` for the full release and debug builds to ensure that we do not generate any warnings, which will catch newly surfacing warnings. Fixes timescale#2770
Older versions of GCC do not have this flag, so check for it explicitly before adding it. Fix by Mats Kindahl.
This maintenance release contains bugfixes since the 1.7.4 release. Most of these fixes were backported from the 2.0.0 and 2.0.1 releases. We deem upgrading high priority for users on TimescaleDB 1.7.4 or previous versions. In particular, the fixes contained in this maintenance release address issues in continuous aggregates, compression, JOINs with hypertables, and when upgrading from previous versions.

**Bugfixes**
* timescale#2502 Replace check function when updating
* timescale#2558 Repair dimension slice table on update
* timescale#2619 Fix segfault in decompress_chunk for chunks with dropped columns
* timescale#2664 Fix support for complex aggregate expression
* timescale#2800 Lock dimension slices when creating new chunk
* timescale#2860 Fix projection in ChunkAppend nodes
* timescale#2865 Apply volatile function quals at decompresschunk
* timescale#2851 Fix nested loop joins that involve compressed chunks
* timescale#2868 Fix corruption in gapfill plan
* timescale#2883 Fix join qual propagation for nested joins
* timescale#2885 Fix compressed chunk check when disabling compression
* timescale#2920 Fix repair in update scripts

**Thanks**
* @akamensky for reporting several issues including segfaults after version update
* @alex88 for reporting an issue with joined hypertables
* @dhodyn for reporting an issue when joining compressed chunks
* @diego-hermida for reporting an issue with disabling compression
* @Netskeh for reporting a bug with time_bucket in continuous aggregates
* @WarriorOfWire for reporting the bug with gapfill queries not being able to find the pathkey item to sort
* @zeeshanshabbir93 for reporting an issue with joins
Run tests on PG 11.11 and 12.6.
When checking for `-Wno-stringop-truncation`, gcc reports success even though it does not support the flag. This patch changes the check to test for the actual flag, `-Wstringop-truncation`, instead of the flag that turns it off, which produces the correct result.
The strict-overflow check on gcc < 8 produces false positives, leading to build failures when compiling with `-Werror`.
The sanitizer test runs on PG 11.1, where the cluster test is expected to fail. This patch changes the sanitizer test to ignore the result of the cluster test.
When refreshing a continuous aggregate, we only materialize the buckets that are fully enclosed by the refresh window. Therefore, we should generate an error if the window is smaller than one bucket.
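For example (the continuous aggregate name and bucket width are illustrative), a refresh window narrower than the bucket width cannot fully enclose any bucket and is now rejected:

```sql
-- Assuming conditions_hourly buckets by time_bucket('1 hour', ...),
-- this 30-minute window covers no complete bucket and raises an error.
CALL refresh_continuous_aggregate('conditions_hourly',
     now() - INTERVAL '30 minutes', now());
```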
This change adds validation of the settings for a continuous aggregate policy when the policy is created. Previously it was possible to create policies that would either fail at runtime or never refresh anything due to bad configuration. In particular, the refresh window (start and end offsets for refreshing) must now be at least two buckets in size or an error is generated when the policy is created. The policy must cover at least two buckets to ensure that at least one bucket is refreshed when the policy runs, since it is unlikely that the policy runs at a time that is perfectly aligned with the beginning of a bucket. Note that it is still possible to create policies that might not refresh anything depending on the time when it runs. For instance, if the "current" time is close to the minimum allowed time value, the refresh window can lag enough to fall outside the valid time range (e.g., the end offset is big enough to push the window outside the valid time range). As time moves on, the window would eventually move into the valid time range, however. Fixes timescale#2929
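A hedged sketch of a policy that satisfies the new validation (the continuous aggregate name and intervals are illustrative): with a 1-hour bucket, the offsets below leave a 2-hour refresh window, i.e. at least two buckets:

```sql
-- start_offset minus end_offset must span at least two bucket widths,
-- otherwise creating the policy now fails with an error.
SELECT add_continuous_aggregate_policy('conditions_hourly',
    start_offset      => INTERVAL '3 hours',
    end_offset        => INTERVAL '1 hour',
    schedule_interval => INTERVAL '30 minutes');
```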
The refreshing of a continuous aggregate is slow when many small invalidations are generated by frequent single-row insert backfills. This change adds an optimization that merges small invalidations by first expanding invalidations to full bucket boundaries. There is really no reason to maintain invalidations that do not cover full buckets, since refresh windows are already aligned to buckets anyway. Fixes timescale#2867
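The workload this targets looks roughly like the following (table, column, and continuous aggregate names are illustrative): out-of-order single-row inserts, each of which invalidates only a single timestamp that previously had to be materialized separately:

```sql
-- Each backfilled row invalidates a single point in time; with this
-- change such invalidations are expanded to bucket boundaries and
-- merged before the refresh materializes them.
INSERT INTO conditions (time, device_id, temperature)
VALUES ('2021-01-01 10:15:00', 1, 21.5);

CALL refresh_continuous_aggregate('conditions_hourly',
     '2021-01-01', '2021-01-02');
```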
When there are many small (e.g., single timestamp) invalidations that cannot be merged despite expanding invalidations to full buckets (e.g., invalidations are spread across every second bucket in the worst case), it might no longer be beneficial to materialize every invalidation separately. Instead, this change adds a threshold for the number of invalidations used by the refresh (currently 10 by default) above which invalidations are merged into one range based on the lowest and greatest invalidated time value. The limit can be controlled by an anonymous session variable for debugging and tweaking purposes. It might be considered for promotion to an official GUC in the future. Fixes timescale#2867
Force-pushed from eed6cf5 to 94b0ca4.
Cherry-picked commits for the 2.0.2 release.