Calling refresh_continuous_aggregate under a User-Defined-Action causes segmentation fault #3145
Labels
bgw
The background worker subsystem, including the scheduler
bug
continuous_aggregate
segfault
Segmentation fault
Comments
Stack trace:

@hardikm10 can you please verify against 2.4.2?

@NunoFilipeSantos it's not working yet; I checked now and it is still failing. Assigning myself and I will have a look.

Thank you @fabriziomello. :)
fabriziomello added a commit to fabriziomello/timescaledb that referenced this issue on Sep 24, 2021:

A segmentation fault was occurring when calling the procedure `refresh_continuous_aggregate` from a user-defined action (job). Fixed it by adding `SPI_connect_ext`/`SPI_finish` around the execution, because there are underlying SPI calls that were leading us to an invalid SPI state (nonexistent `_SPI_current` global). Fixes timescale#3145
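The fix pattern described in the commit message can be sketched as follows. This is an illustrative fragment, not the actual TimescaleDB source: the function body is elided, and only the SPI session wrapping is shown.

```c
#include "postgres.h"
#include "fmgr.h"
#include "executor/spi.h"

/* Sketch only: wrap the procedure's execution in an SPI session so that
 * nested SPI calls made by underlying code see a valid _SPI_current,
 * instead of dereferencing a nonexistent SPI state and segfaulting. */
Datum
continuous_agg_refresh_sketch(PG_FUNCTION_ARGS)
{
    /* SPI_OPT_NONATOMIC allows transaction control inside the session,
     * which matters when the procedure is invoked via CALL from a job. */
    if (SPI_connect_ext(SPI_OPT_NONATOMIC) != SPI_OK_CONNECT)
        elog(ERROR, "SPI_connect_ext failed");

    /* ... perform the refresh; underlying code may issue SPI calls ... */

    if (SPI_finish() != SPI_OK_FINISH)
        elog(ERROR, "SPI_finish failed");

    PG_RETURN_VOID();
}
```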
fabriziomello added seven further commits to fabriziomello/timescaledb that referenced this issue, each with the same commit message: three on Sep 24, one on Oct 1, and three on Oct 4, 2021.
Relevant system information:
OS: Docker / timescale/timescaledb:2.2.0-pg11
Describe the bug
Executing `CALL refresh_continuous_aggregate()` under a user-defined action causes a segmentation fault.
To Reproduce
Steps to reproduce the behavior:
```sql
drop table if exists sensor_data;

create table sensor_data(
    time        timestamptz not null,
    sensor_id   integer not null,
    cpu         double precision null,
    temperature double precision null
);

select from create_hypertable('sensor_data', 'time');

insert into sensor_data
select time + (interval '1 minute' * random()) as time,
       sensor_id,
       random() as cpu,
       random() * 100 as temperature
from generate_series(now() - interval '1 months', now() - interval '1 week', interval '10 minute') as g1(time),
     generate_series(1, 100, 1) as g2(sensor_id)
order by time;

create materialized view sensor_summary_hourly
with (timescaledb.continuous) as
select time_bucket('1 hour', time) as one_hour,
       sensor_id,
       avg(cpu) as avg_cpu
from sensor_data
group by one_hour, sensor_id
with no data;

CREATE OR REPLACE PROCEDURE refresh_via_job(job_id int, config jsonb)
LANGUAGE PLPGSQL AS
$$
BEGIN
    CALL refresh_continuous_aggregate('sensor_summary_hourly', null, null);
END
$$;

SELECT add_job('refresh_via_job', '2 day', config => '{}');

CALL run_job(1000); -- use the job id returned by add_job; the crash happens here
```
Expected behavior
The procedure should execute under the user-defined action and the continuous aggregate should be refreshed.
Actual behavior
The server crashes with a segfault when the job runs.
Additional context
The problem I am trying to solve is to write a user-defined action which refreshes a continuous aggregate on a large hypertable in smaller batches, programmatically, since it couldn't be written into .
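The batched approach described above could be sketched as follows. This is a hypothetical example: the procedure name, the `sensor_summary_hourly` view, and the window sizes are assumptions, not part of the issue, and it still requires the fix for this issue since it calls `refresh_continuous_aggregate` from a user-defined action.

```sql
-- Hypothetical sketch of a batched-refresh user-defined action.
CREATE OR REPLACE PROCEDURE refresh_in_batches(job_id int, config jsonb)
LANGUAGE plpgsql AS
$$
DECLARE
    batch_start timestamptz := now() - interval '1 month';  -- assumed backfill window
    batch_end   timestamptz;
BEGIN
    WHILE batch_start < now() LOOP
        batch_end := batch_start + interval '1 week';  -- assumed batch size
        CALL refresh_continuous_aggregate('sensor_summary_hourly',
                                          batch_start, batch_end);
        COMMIT;  -- finish each batch in its own transaction
        batch_start := batch_end;
    END LOOP;
END
$$;

SELECT add_job('refresh_in_batches', '2 day', config => '{}');
```

Refreshing in one-week windows keeps each refresh transaction small, which is the point of doing this programmatically on a large hypertable.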