
Dropping a hypertable does not drop its compression job #1570

Closed
feikesteenbergen opened this issue Dec 4, 2019 · 0 comments · Fixed by #1607
Relevant system information:

  • OS: Ubuntu 18.04
  • PostgreSQL version: 11.6
  • TimescaleDB version: 1.5.1
  • Installation method: apt (PostgreSQL), source (TimescaleDB)

Describe the bug
Dropping a hypertable does not drop its associated background jobs.

To Reproduce

create extension if not exists timescaledb;
create table buggy(inserted timestamptz);
select create_hypertable('buggy', 'inserted');

-- Add a compression policy
alter table buggy set (timescaledb.compress);
select add_compress_chunks_policy('buggy', interval '1 day');

-- Insert some data to have multiple chunks created
INSERT INTO buggy
SELECT
    now() - random() * interval '3 months'
FROM
    generate_series(1,1000);
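
As a quick sanity check that multiple chunks now exist (show_chunks() is part of the public API in this version):

-- Count the chunks backing the hypertable
SELECT count(*) AS chunk_count FROM show_chunks('buggy');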

-- Show the compression job
SELECT
    bpcc.*,
    schema_name,
    table_name
FROM
    _timescaledb_config.bgw_policy_compress_chunks AS bpcc
LEFT JOIN
    _timescaledb_catalog.hypertable on (hypertable_id=id)
;
 job_id | hypertable_id |  older_than  | schema_name | table_name 
--------+---------------+--------------+-------------+------------
   1000 |             1 | (t,"1 day",) | public      | buggy
(1 row)
DROP TABLE buggy CASCADE;

The compression job is still there, but the hypertable is gone:

SELECT
    bpcc.*,
    schema_name,
    table_name
FROM
    _timescaledb_config.bgw_policy_compress_chunks AS bpcc
LEFT JOIN
    _timescaledb_catalog.hypertable on (hypertable_id=id)
;
 job_id | hypertable_id |  older_than  | schema_name | table_name 
--------+---------------+--------------+-------------+------------
   1000 |             1 | (t,"1 day",) | (null)      | (null)
(1 row)

Trying to do anything with the now-broken job fails:

SELECT
    alter_job_schedule(1000, schedule_interval => '1 minute');
ERROR:  42P01: unable to get owner for relation with OID 0: invalid OID
LOCATION:  ts_rel_get_owner, hypertable.c:75
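
Until this is fixed, the only way out appears to be removing the dangling rows by hand. A minimal cleanup sketch, assuming the 1.5.x catalog layout shown above; the job_id (1000) comes from this reproduction and will differ per database, and editing catalog tables directly should be done with care:

BEGIN;
-- Remove the orphaned policy row for the dropped hypertable
DELETE FROM _timescaledb_config.bgw_policy_compress_chunks WHERE job_id = 1000;
-- Remove the underlying background job; the bgw_job_stat row is assumed
-- to be cleaned up via the catalog's cascading foreign keys (not verified here)
DELETE FROM _timescaledb_config.bgw_job WHERE id = 1000;
COMMIT;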

We see a similar error message in the PostgreSQL log files:

SELECT
    log_time,
    sql_state_code,
    message,
    location
FROM
    pglog
WHERE
    log_time > '2019-12-04 15:44:48.576+01'
AND
    user_name IS NULL
ORDER BY
    log_time ASC;
-[ RECORD 1 ]--+------------------------------------------------------------------------------------------------
log_time       | 2019-12-04 15:45:39.719+01
sql_state_code | 42P01
message        | unable to get owner for relation with OID 0: invalid OID
location       | ts_rel_get_owner, hypertable.c:75
-[ RECORD 2 ]--+------------------------------------------------------------------------------------------------
log_time       | 2019-12-04 15:45:39.72+01
sql_state_code | 00000
message        | background worker "TimescaleDB Background Worker Scheduler" (PID 30063) exited with exit code 1
location       | LogChildExit, postmaster.c:3564

And the job is no longer scheduled:

SELECT
    *,
    next_start - now() AS next_run
FROM
    _timescaledb_internal.bgw_job_stat
WHERE
    job_id = 1000;
-[ RECORD 1 ]----------+------------------------------
job_id                 | 1000
last_start             | 2019-12-04 15:44:39.706912+01
last_finish            | 2019-12-04 15:44:39.717425+01
next_start             | 2019-12-04 15:45:39.717425+01
last_successful_finish | 2019-12-04 15:44:39.717425+01
last_run_success       | t
total_runs             | 14
total_duration         | 00:00:00.179653
total_successes        | 14
total_failures         | 0
total_crashes          | 0
consecutive_failures   | 0
consecutive_crashes    | 0
next_run               | -00:09:38.748018

Expected behavior
Dropping a hypertable (with CASCADE) drops all dependent objects, including its compression policy and background job.

Actual behavior
A compression job (if defined) sticks around; the dangling row also violates the foreign key constraint
bgw_policy_compress_chunks_hypertable_id_fkey.
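
Orphaned rows can be surfaced with a variant of the catalog query used above, filtered to policies whose hypertable no longer exists:

SELECT
    bpcc.job_id,
    bpcc.hypertable_id
FROM
    _timescaledb_config.bgw_policy_compress_chunks AS bpcc
LEFT JOIN
    _timescaledb_catalog.hypertable AS h ON (h.id = bpcc.hypertable_id)
WHERE
    h.id IS NULL;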

This means that restoring a dump of such a database may also cause issues, for example:

deployer=# SELECT timescaledb_pre_restore();
 timescaledb_pre_restore
-------------------------
 t
(1 row)
deployer=# \! pg_restore -Fc -d deployer deployer.bak
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 3389; 0 173717 TABLE DATA bgw_policy_compress_chunks postgres
pg_restore: [archiver (db)] COPY failed for table "bgw_policy_compress_chunks": ERROR:  insert or update on table "bgw_policy_compress_chunks" violates foreign key constraint "bgw_policy_compress_chunks_hypertable_id_fkey"
DETAIL:  Key (hypertable_id)=(5) is not present in table "hypertable".
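
Until the fix, deleting the orphaned policy rows (as in the cleanup sketch above) before taking the dump should allow pg_restore to complete without hitting the foreign key violation.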
bboule added the bug label Dec 4, 2019
bboule added this to the 1.6.0 milestone Dec 4, 2019
cevian added a commit that referenced this issue Jan 2, 2020
Previously we could have a dangling policy and job referring
to a now-dropped hypertable.

We also block changing the compression options if a policy exists.

Fixes #1570