Add cherry picked commits for next 1.7 release #2035
Merged
Conversation
The extension_current_state function is called from cache_invalidate_callback, which might run in a background worker before the database has been initialized, leading to a "cannot read pg_class without having selected a database" error. This patch adds a check for this condition to prevent the error.
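A guard along these lines avoids the error. This is a minimal sketch assuming the callback is registered via CacheRegisterSyscacheCallback; apart from MyDatabaseId and OidIsValid, the body is hypothetical:

```c
#include "postgres.h"
#include "miscadmin.h"		/* MyDatabaseId */

static void
cache_invalidate_callback(Datum arg, int cacheid, uint32 hashvalue)
{
	/*
	 * In a background worker this callback can fire before the worker has
	 * connected to a database; reading pg_class at that point raises
	 * "cannot read pg_class without having selected a database".
	 */
	if (!OidIsValid(MyDatabaseId))
		return;

	/* ... safe to evaluate extension_current_state here ... */
}
```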
This patch changes chunk index creation to use the same functions for creating an index in a single transaction and in multiple transactions. The single-transaction code path used to adjust the original stmt in place for the chunk, which led to table references not being adjusted properly for the chunk.
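A sketch of the fixed approach; only copyObject, makeRangeVar, and DefineIndex are real PostgreSQL APIs, the rest of the flow is assumed:

```c
/* Adjust a copy of the statement for each chunk instead of mutating the
 * caller's IndexStmt in place. */
IndexStmt  *chunk_stmt = (IndexStmt *) copyObject(orig_stmt);

chunk_stmt->relation = makeRangeVar(chunk_schema, chunk_name, -1);
/* ... remap column and expression references to the chunk, then pass
 * chunk_stmt to DefineIndex(); orig_stmt stays pristine for the next
 * chunk and for both the single- and multi-transaction paths ... */
```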
This patch changes the order in which locks are taken during compression to avoid holding strong locks on referenced tables for long periods. Previously, constraints from the uncompressed chunk were copied to the compressed chunk before compressing the data. When the uncompressed chunk had foreign key constraints, this resulted in a ShareRowExclusiveLock being held on the referenced table for the remainder of the transaction, which includes the (potentially long) period while the data is compressed, preventing any INSERTs, UPDATEs, or DELETEs on the referenced table for that entire time. Copying constraints after completing the actual data compression poses no safety issues (any updates to referenced keys are still caught by the FK constraint on the uncompressed chunk) and lets the compression job minimize the time during which strong locks are held on referenced tables. Fixes timescale#1614.
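In outline, the reordering looks like the sketch below; both function names are hypothetical stand-ins for the actual compression code paths:

```c
/* Compress first; the heavy lifting happens without strong locks on
 * FK-referenced tables. */
compress_chunk_data(uncompressed_chunk, compressed_chunk);

/* Copy constraints last: the ShareRowExclusiveLock this takes on the
 * referenced table is now held only for the short tail of the
 * transaction, while FK updates during compression were still caught by
 * the constraint on the uncompressed chunk. */
copy_chunk_constraints(uncompressed_chunk, compressed_chunk);
```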
When using time_bucket_gapfill with the treat_null_as_missing option, we did not properly set the number of valid values when generating our own virtual tuples, leading to a "cannot extract attribute from empty tuple slot" error when the number of valid values got reset. This patch changes the gapfill code to always set the number of valid values when generating virtual tuples.
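The executor-level pattern for the fix, as a minimal sketch: slot, values, nulls, and natts are assumed to come from the surrounding gapfill state, while the two Exec* calls are the real executor APIs:

```c
/* Build a virtual tuple and mark every column as valid. */
ExecClearTuple(slot);
memcpy(slot->tts_values, values, sizeof(Datum) * natts);
memcpy(slot->tts_isnull, nulls, sizeof(bool) * natts);
ExecStoreVirtualTuple(slot);	/* sets tts_nvalid to the full column count */
```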
Updates CHANGELOG with changes from PR "Change compression locking order" (timescale#1932).
Inlineable functions used to be slow to plan because the query preprocessing function could not find the relations inside the functions, as they had not yet been inlined at that point. This commit adds a separate check in the get_relation_info_hook to optimize pruning of hypertables.
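For context, a get_relation_info_hook is installed using the standard PostgreSQL hook pattern; the callback body here is a hypothetical placeholder:

```c
#include "postgres.h"
#include "fmgr.h"
#include "optimizer/plancat.h"

static get_relation_info_hook_type prev_get_relation_info_hook = NULL;

static void
timescaledb_get_relation_info(PlannerInfo *root, Oid relid,
							  bool inhparent, RelOptInfo *rel)
{
	if (prev_get_relation_info_hook)
		prev_get_relation_info_hook(root, relid, inhparent, rel);

	/* Hypothetical: recognize hypertables here -- including ones that only
	 * become visible after function inlining -- and set up chunk pruning. */
}

void
_PG_init(void)
{
	prev_get_relation_info_hook = get_relation_info_hook;
	get_relation_info_hook = timescaledb_get_relation_info;
}
```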
If a hypertable is created with an index on it and a continuous aggregate is then defined on the hypertable, an internal dependency is created between the chunks of the hypertable and the chunks of the continuous aggregate. When dropping chunks with `cascade_to_materialization` set to `FALSE`, this generates an error since the delete is not cascaded to the internal dependencies. This commit fixes the problem by collecting the internal dependencies and using `performMultipleDeletions` rather than `performDeletion` to delete several objects as one operation. Fixes timescale#1889
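Deleting several objects as one operation uses PostgreSQL's dependency machinery from catalog/dependency.h; a sketch, with chunk_relid and the set of dependents assumed:

```c
#include "postgres.h"
#include "catalog/dependency.h"
#include "catalog/objectaddress.h"
#include "catalog/pg_class.h"

/* Collect the chunk and its internal dependents, then drop them together
 * so the internal dependencies cannot trip the single-object delete. */
ObjectAddresses *objects = new_object_addresses();
ObjectAddress addr;

ObjectAddressSet(addr, RelationRelationId, chunk_relid);
add_exact_object_address(&addr, objects);
/* ... add the dependent continuous-aggregate chunks the same way ... */

performMultipleDeletions(objects, DROP_RESTRICT, 0);
free_object_addresses(objects);
```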
If telemetry is disabled, this is only checked inside the `ts_telemetry_main` function, which executes after a background worker has already been scheduled to run it. This means that any errors occurring when starting the job will generate lines in the log. This commit moves the check to before a background worker is scheduled, allowing the telemetry job to trivially succeed without actually invoking the worker, so no errors are generated for the telemetry job if it is disabled. The privacy test is also updated to check that the telemetry job cannot leak information regardless of whether telemetry is on or off. Fixes timescale#1934 timescale#1788
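In outline, the fix moves the check up one level; every name in this sketch is a hypothetical stand-in for the actual scheduler code:

```c
/* Sketch: short-circuit before any background worker is scheduled. */
static bool
telemetry_job_execute(void *job)		/* hypothetical entry point */
{
	if (!telemetry_enabled())			/* hypothetical GUC accessor */
		return true;					/* trivially succeed: no worker, no log lines */

	return schedule_telemetry_worker(job);	/* hypothetical */
}
```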
If `bits_used` is not exactly 64, a shift will be attempted, even when `bits_used > 64`. According to the C standard, "[...] if the value of the right operand is negative or is greater than or equal to the number of bits in the promoted left operand, the behavior is undefined." Hence we change the code to return `PG_UINT64_MAX` when a shift by at least the number of bits in the type is requested, and perform the shift otherwise.
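A minimal sketch of the saturating check; the function name and the exact mask expression are assumptions, while PG_UINT64_MAX and UINT64CONST come from PostgreSQL's c.h:

```c
static uint64
low_bits_mask(int bits_used)
{
	/* Shifting a 64-bit value by 64 or more bits is undefined behavior
	 * (C99 6.5.7), so saturate instead of shifting. */
	if (bits_used >= 64)
		return PG_UINT64_MAX;

	return (UINT64CONST(1) << bits_used) - 1;
}
```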
The compression_ddl test had a permutation that depended on PGISOLATIONTIMEOUT to cancel the test, making it unreasonably long-running and flaky. This patch changes the test to set lock_timeout instead, so the blocked statement is canceled much earlier.
This change ensures that API functions and DDL operations that modify data respect the read-only transaction state set by the default_transaction_read_only option.
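PostgreSQL already exposes the check these operations need; a sketch of applying it at the top of a data-modifying API function (the function itself is hypothetical):

```c
#include "postgres.h"
#include "fmgr.h"
#include "tcop/utility.h"

Datum
ts_example_modifying_api(PG_FUNCTION_ARGS)	/* hypothetical */
{
	/* Errors out with "cannot execute ... in a read-only transaction"
	 * when default_transaction_read_only puts the session in read-only
	 * mode. */
	PreventCommandIfReadOnly("example_modifying_api()");

	/* ... perform the data-modifying work ... */
	PG_RETURN_VOID();
}
```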
Setting the `timescaledb.restoring` GUC explicitly to 'off' for the database meant that the setting was exported by `pg_dumpall` (and in some other cases), where it would then conflict with the setting set by the pre_restore function, overriding it and causing errors on restore. This changes the code to use `RESET` so that the GUC instead takes the system default and is not dumped separately as an override.
PostgreSQL 12 is preinstalled, while 11 is not. To unify the binary paths of PostgreSQL 11 and 12, this commit implements a workaround that forces installation of PostgreSQL 12, so that it ends up in the same path as PostgreSQL 11.
Codecov Report
```diff
@@            Coverage Diff             @@
##             1.7.x    #2035       +/-  ##
===========================================
- Coverage    89.61%   14.01%    -75.61%
===========================================
  Files          150      143         -7
  Lines        22799    21283      -1516
===========================================
- Hits         20432     2982     -17450
- Misses        2367    18301     +15934
```
This PR suggests commits that were cherry-picked from master. More commits will be added after they are implemented and merged to master.