
Add cherry picked commits for next 1.7 release #2035

Merged
merged 13 commits into from Jun 30, 2020

Conversation

k-rus
Contributor

@k-rus k-rus commented Jun 29, 2020

This PR contains commits cherry-picked from master. More commits will be added after they are implemented and merged to master.

svenklemm and others added 5 commits June 26, 2020 13:17
The extension_current_state is called in the cache_invalidate_callback,
which might be called in a background worker when the database has not
been initialized yet, leading to a "cannot read pg_class without having
selected a database" error.
This patch adds a check for this condition to prevent the error.
This patch changes chunk index creation to use the same functions for
creating an index in a single transaction and in multiple transactions.
The single-transaction index creation used to adjust the original
statement for the chunk, which led to problems with table references
not being adjusted properly for the chunk.
This patch changes the order in which locks are taken during
compression to avoid taking strong locks for long periods on referenced
tables.

Previously, constraints from the uncompressed chunk were copied to the
compressed chunk before compressing the data. When the uncompressed
chunk had foreign key constraints, this resulted in a
ShareRowExclusiveLock being held on the referenced table for the
remainder of the transaction, including the (potentially long) period
while the data was compressed, preventing any INSERTs/UPDATEs/DELETEs
on the referenced table until the compression transaction completed.

Copying constraints after completing the actual data compression does
not pose safety issues (as any updates to referenced keys are caught by
the FK constraint on the uncompressed chunk), and it enables the
compression job to minimize the time during which strong locks are held
on referenced tables.

Fixes timescale#1614.
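The effect of this reordering can be illustrated with a toy model. This is not TimescaleDB code; it is a self-contained sketch that counts for how many abstract "steps" the strong lock on the referenced table is held under each ordering, assuming compression dominates the job's runtime.

```c
#include <stdbool.h>

/* Toy model of the locking-order change described above (illustrative
 * only, not the actual compression code). We count the steps during
 * which the strong lock on the referenced table is held. */
typedef struct
{
	bool strong_lock_held;
	int steps_lock_held;
} LockState;

static void
step(LockState *s, int cost)
{
	if (s->strong_lock_held)
		s->steps_lock_held += cost;
}

static int
compress_chunk_old_order(void)
{
	LockState s = { 0 };

	s.strong_lock_held = true;	/* copying FK constraints takes the lock */
	step(&s, 1);				/* copy constraints */
	step(&s, 100);				/* long data compression, lock still held */
	return s.steps_lock_held;	/* lock held for the whole job */
}

static int
compress_chunk_new_order(void)
{
	LockState s = { 0 };

	step(&s, 100);				/* compress first, no strong lock taken */
	s.strong_lock_held = true;	/* only now copy constraints */
	step(&s, 1);
	return s.steps_lock_held;	/* lock held only briefly */
}
```

In this model the old ordering holds the lock for 101 steps, the new one for only 1, mirroring how the patch shrinks the window during which concurrent writes to the referenced table are blocked.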
When using time_bucket_gapfill with the treat_null_as_missing
option, we did not properly set the number of valid values when
generating our own virtual tuples, leading to "cannot extract
attribute from empty tuple slot" errors when the number of values got
reset. This patch changes the gapfill code to always set
the number of valid values when generating virtual tuples.
Updates CHANGELOG with changes from PR "Change compression locking order" (timescale#1932).
@k-rus k-rus requested a review from a team as a code owner June 29, 2020 12:54
@k-rus k-rus requested review from pmwkaa, mkindahl, WireBaron, erimatnor, gayyappan and svenklemm and removed request for a team June 29, 2020 12:54
fvannee and others added 7 commits June 29, 2020 16:19
Inlineable functions used to be slow to plan because the query
preprocessing function could not find the relations inside the
functions, as they had not been inlined yet at that point.
This commit adds a separate check in the get_relation_info_hook
to optimize pruning of hypertables.
If a hypertable is created with an index on it and a continuous
aggregate is then defined on the hypertable, this creates an
internal dependency between the chunks of the hypertable and the chunks
of the continuous aggregate.

When dropping chunks with `cascade_to_materialization` set to `FALSE`,
this generates an error, since the delete is not cascaded to the
internal dependencies.

This commit fixes the problem by collecting the internal dependencies
and using `performMultipleDelete` rather than `performDelete` to delete
several objects as one operation.

Fixes timescale#1889
If telemetry is disabled, the check happens inside the `ts_telemetry_main`
function, which only starts to execute after a background worker has
been scheduled to run it. This means that any errors occurring when
starting the job still generate lines in the log.

This commit moves the check to before a background worker is scheduled
for executing the job, allowing the telemetry job to trivially succeed
without actually invoking the worker, so no errors are generated for
the telemetry job if it is disabled.

The privacy test result also changes, so the test is updated to check
that the telemetry job cannot leak information regardless of whether
telemetry is on or off.

Fixes timescale#1934 timescale#1788
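The pattern of this fix can be sketched in a few lines. The names below are illustrative, not TimescaleDB's actual API: the enabled check runs before any worker is scheduled, so a disabled job succeeds trivially and never starts a worker that could fail and write to the log.

```c
#include <stdbool.h>

/* Hypothetical sketch of the check-before-scheduling pattern described
 * above; this is not the actual TimescaleDB telemetry code. */
static int workers_scheduled = 0;

static void
schedule_background_worker(void)
{
	workers_scheduled++;		/* stands in for launching ts_telemetry_main */
}

static bool
run_telemetry_job(bool telemetry_enabled)
{
	if (!telemetry_enabled)
		return true;			/* trivially succeed: no worker, no log noise */
	schedule_background_worker();
	return true;
}
```

With the check hoisted out of the worker, a disabled job returns success without ever scheduling anything.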
If `bits_used` is not exactly 64, a shift will be attempted, even when
`bits_used > 64`. According to the standard "[...] if the value of the
right operand is negative or is greater or equal to the number of bits
in the promoted left operand, the behavior is undefined."

Hence we change the code to return `PG_UINT64_MAX` when the requested
shift is at least the number of bits in the type, and to perform the
shift otherwise.
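A minimal standalone sketch of this guard (not the actual bit_array_impl.h code; `low_bits_mask` and the use of `UINT64_MAX` in place of `PG_UINT64_MAX` are assumptions for illustration):

```c
#include <stdint.h>

/* Illustrative fix for the undefined-behavior shift described above:
 * shifting a 64-bit value by 64 or more bits is UB in C, so when the
 * request covers the whole type we return UINT64_MAX directly. */
static uint64_t
low_bits_mask(unsigned int bits_used)
{
	if (bits_used >= 64)
		return UINT64_MAX;		/* shift would be UB; all bits set */
	return (UINT64_C(1) << bits_used) - 1;
}
```

Branching on `bits_used >= 64` (rather than `!= 64`) covers both the exact-width case and any larger request, which is the condition under which the standard leaves the shift undefined.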
The compression_ddl test had a permutation that depended on
PGISOLATIONTIMEOUT to cancel the test, leading to unreasonably long
runtimes and a flaky test. This patch changes the test to set
lock_timeout instead, cancelling the blocking much earlier.
This change ensures that API functions and DDL operations that modify
data respect the read-only transaction state set by the
default_transaction_read_only option.
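The guard described above can be sketched as follows. The names (`transaction_read_only`, `modifying_api_call`) are hypothetical stand-ins, not TimescaleDB's actual implementation: a data-modifying entry point first consults the session's read-only state and refuses to proceed when it is set.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical sketch of a read-only guard; illustrative only. */
static bool transaction_read_only = false;	/* stands in for the session GUC */

static bool
modifying_api_call(const char *name)
{
	if (transaction_read_only)
	{
		fprintf(stderr, "cannot execute %s in a read-only transaction\n", name);
		return false;			/* a real extension would raise an ERROR */
	}
	/* ... perform the data modification ... */
	return true;
}
```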
Setting the `timescaledb.restoring` GUC explicitly to 'off' for the
database meant that the setting got exported in `pg_dumpall` and some
other cases, where it would then conflict with the setting set by the
pre_restore function, overriding it and causing errors on restore. This
patch changes the code to `RESET` the GUC instead, so that it takes the
system default and is not dumped separately as an override.
PostgreSQL 12 is preinstalled, while 11 is not. To unify the different
paths of the PG 11 and PG 12 binaries, this commit implements a
workaround by forcing the installation of PostgreSQL 12 so that it is
in the same path as PostgreSQL 11.
@codecov

codecov bot commented Jun 29, 2020

Codecov Report

Merging #2035 into 1.7.x will decrease coverage by 75.60%.
The diff coverage is 12.24%.


@@             Coverage Diff             @@
##            1.7.x    #2035       +/-   ##
===========================================
- Coverage   89.61%   14.01%   -75.61%     
===========================================
  Files         150      143        -7     
  Lines       22799    21283     -1516     
===========================================
- Hits        20432     2982    -17450     
- Misses       2367    18301    +15934     
Flag      Coverage Δ
#cron     ?
#pr       14.01% <12.24%> (-75.59%) ⬇️

Impacted Files                          Coverage Δ
src/adts/bit_array_impl.h               0.00% <ø> (-99.05%) ⬇️
src/chunk.c                             0.92% <0.00%> (-92.42%) ⬇️
src/chunk_adaptive.c                    11.88% <0.00%> (-76.02%) ⬇️
src/chunk_index.c                       0.58% <0.00%> (-94.81%) ⬇️
src/chunk_index.h                       0.00% <ø> (-100.00%) ⬇️
src/dimension.c                         23.43% <0.00%> (-67.64%) ⬇️
src/planner.c                           28.14% <0.00%> (-64.90%) ⬇️
src/tablespace.c                        1.82% <0.00%> (-89.78%) ⬇️
src/telemetry/telemetry.c               79.65% <ø> (-2.33%) ⬇️
tsl/src/compression/compress_utils.c    0.00% <0.00%> (-91.79%) ⬇️
... and 148 more

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 2c1e998...f2d1b46.

@erimatnor erimatnor added this to the 1.7.2 milestone Jun 30, 2020
@k-rus k-rus merged commit 6c300a1 into timescale:1.7.x Jun 30, 2020
@k-rus k-rus deleted the 1.7.2-cherry-pick branch June 30, 2020 08:51

8 participants