Don't leak Chunks in classify_relation #4029

Merged · 1 commit into timescale:master on Jan 31, 2022
Conversation

@akuzm (Member) commented Jan 27, 2022

This function is called often, at least 4 times per chunk, so these lookups add
up. Freeing these chunks allows us to save memory. Ideally, we should
fix the function not to look up the chunk anew each time.

Also make a couple of other tweaks that reduce memory usage for planning.

Fixes #2486

For a simple query `EXPLAIN DELETE FROM t WHERE value = 1.1`, where `t` is a distributed hypertable with 106 partitions on two data nodes, the memory usage at query completion is reduced from 6.7 MB baseline to 5.5 MB with this patch.
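As a rough illustration of the fix, here is a minimal, self-contained C sketch of the pattern the description above refers to; `Chunk`, `chunk_get_by_relid`, and the trimmed-down `classify_relation` are simplified stand-ins for the TimescaleDB internals, not the actual code. In PostgreSQL the per-call copies are palloc'd in the planner's memory context rather than malloc'd, so they are reclaimed when the context is reset, but until then they inflate peak planning memory.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for TimescaleDB's Chunk catalog struct. */
typedef struct Chunk
{
	char relname[64];
} Chunk;

/*
 * Stand-in for a catalog lookup: every call builds a fresh copy of the
 * chunk metadata, much like re-scanning the catalog does.
 */
static Chunk *
chunk_get_by_relid(unsigned relid)
{
	Chunk *chunk = malloc(sizeof(Chunk));

	if (chunk == NULL)
		abort();
	snprintf(chunk->relname, sizeof(chunk->relname), "_hyper_1_%u_chunk", relid);
	return chunk;
}

/*
 * Before the fix, the per-call copy was simply dropped; with at least
 * 4 calls per chunk during planning, the copies added up.
 */
static int
classify_relation(unsigned relid)
{
	Chunk *chunk = chunk_get_by_relid(relid);
	int is_chunk = (chunk != NULL);

	free(chunk); /* the fix: release the copy instead of leaking it */
	return is_chunk;
}

int
main(void)
{
	int classified = 0;

	/* Simulate planning over 106 chunks with 4 lookups each. */
	for (unsigned relid = 1; relid <= 106; relid++)
		for (int i = 0; i < 4; i++)
			classified += classify_relation(relid);
	printf("classified %d times, no copies leaked\n", classified);
	return 0;
}
```

On the real code path, PostgreSQL's `MemoryContextStats()` dumps per-context memory usage to the server log, which is one way to arrive at numbers like the 6.7 MB vs. 5.5 MB comparison above.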

@akuzm akuzm requested a review from a team as a code owner January 27, 2022 14:04
@akuzm akuzm requested review from berkley, jerryxwu and konskov and removed request for a team January 27, 2022 14:04
@codecov codecov bot commented Jan 27, 2022

Codecov Report

Merging #4029 (494f461) into master (ae02934) will decrease coverage by 0.01%.
The diff coverage is 100.00%.


```diff
@@            Coverage Diff             @@
##           master    #4029      +/-   ##
==========================================
- Coverage   90.70%   90.68%   -0.02%     
==========================================
  Files         214      214              
  Lines       38726    38732       +6     
==========================================
- Hits        35126    35124       -2     
- Misses       3600     3608       +8     
```

| Impacted Files | Coverage Δ |
|---|---|
| src/chunk.h | 100.00% <ø> (ø) |
| src/chunk.c | 95.55% <100.00%> (+0.02%) ⬆️ |
| src/planner.c | 94.95% <100.00%> (+0.01%) ⬆️ |
| src/ts_catalog/chunk_data_node.c | 98.98% <100.00%> (ø) |
| tsl/src/fdw/modify_plan.c | 90.16% <100.00%> (+0.16%) ⬆️ |
| tsl/src/fdw/relinfo.c | 96.22% <100.00%> (-0.10%) ⬇️ |
| src/loader/bgw_launcher.c | 89.50% <0.00%> (-2.47%) ⬇️ |
| src/bgw/scheduler.c | 85.67% <0.00%> (-0.88%) ⬇️ |
| tsl/src/reorder.c | 84.79% <0.00%> (-0.30%) ⬇️ |
| tsl/src/bgw_policy/job.c | 88.01% <0.00%> (-0.05%) ⬇️ |

... and 1 more

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update ae02934...494f461. Read the comment docs.

@akuzm (Member, Author) commented Jan 27, 2022

Flaky test:

```diff
     bgw_custom                   ... FAILED   305534 ms
diff -u /home/runner/work/timescaledb/timescaledb/tsl/test/expected/bgw_custom.out /home/runner/work/timescaledb/timescaledb/build/tsl/test/results/bgw_custom.out
--- /home/runner/work/timescaledb/timescaledb/tsl/test/expected/bgw_custom.out	2022-01-27 15:18:08.083515219 +0000
+++ /home/runner/work/timescaledb/timescaledb/build/tsl/test/results/bgw_custom.out	2022-01-27 15:26:09.889365828 +0000
@@ -371,6 +371,7 @@
 
 -- Delete previous jobs
 SELECT delete_job(:job_id_1);
+NOTICE:  cancelling the background worker for job 1000 (pid 11127)
  delete_job 
 ------------
```

@akuzm akuzm self-assigned this Jan 28, 2022
@afiskon afiskon requested review from afiskon and removed request for berkley January 28, 2022 12:19
@akuzm (Member, Author) commented Jan 28, 2022

```
+ERROR: could not close pipe to external command: Broken pipe
```

Not sure what it is... there seem to be no core dumps.

@akuzm (Member, Author) commented Jan 31, 2022

```
+ERROR: could not close pipe to external command: Broken pipe
```

Not sure what it is... there seem to be no core dumps.

Ran the tests under ASan just in case; no failures.

@akuzm (Member, Author) commented Jan 31, 2022

Flaky test cagg_bgw_dist_ht:

```diff
     cagg_bgw_dist_ht             ... FAILED   105330 ms
diff -u /home/runner/work/timescaledb/timescaledb/tsl/test/expected/cagg_bgw_dist_ht.out /home/runner/work/timescaledb/timescaledb/build/tsl/test/results/cagg_bgw_dist_ht.out
--- /home/runner/work/timescaledb/timescaledb/tsl/test/expected/cagg_bgw_dist_ht.out	2022-01-31 11:04:05.569033779 +0000
+++ /home/runner/work/timescaledb/timescaledb/build/tsl/test/results/cagg_bgw_dist_ht.out	2022-01-31 11:08:39.252015982 +0000
@@ -184,12 +184,11 @@
 (1 row)
 
 SELECT * FROM sorted_bgw_log;
- msg_no | mock_time |              application_name              |                                           msg                                           
---------+-----------+--------------------------------------------+-----------------------------------------------------------------------------------------
-      0 |         0 | DB Scheduler                               | [TESTING] Registered new background worker
-      1 |         0 | DB Scheduler                               | [TESTING] Wait until 25000, started at 0
-      0 |         0 | Refresh Continuous Aggregate Policy [1000] | refreshing continuous aggregate "test_continuous_agg_view" in window [ -2147483648, 6 ]
-(3 rows)
+ msg_no | mock_time | application_name |                                                msg                                                
+--------+-----------+------------------+---------------------------------------------------------------------------------------------------
+      0 |         0 | DB Scheduler     | failed to launch job 1000 "Refresh Continuous Aggregate Policy [1000]": out of background workers
+      1 |         0 | DB Scheduler     | [TESTING] Wait until 25000, started at 0
+(2 rows)
```

@akuzm akuzm merged commit 22f9cf6 into timescale:master Jan 31, 2022
Development

Successfully merging this pull request may close these issues:

Delete query consuming all the memory available (#2486)