
Too many dynamic shared memory segments error at select #345

Closed
pashkinelfe opened this issue Feb 13, 2024 · 4 comments
Labels
bug Something isn't working

Comments

@pashkinelfe (Collaborator)

Observed at commit 9c34d00.

Reproduction from #333: insert tuples into 10000 partitions (takes about 2000 seconds, possibly less). In a parallel backend, run:

select count(*) from data;
ERROR:  too many dynamic shared memory segments
select count(*) from data_1;
 count
-------
  6954
select count(*) from data_2;
 count
-------
  6846
pashkinelfe added the bug label on Feb 13, 2024
@pashkinelfe (Collaborator, Author)

pashkinelfe commented Feb 20, 2024

While executing the select, the backend was doing a parallel scan of the partitions.
Parallel worker stacks sampled for 60 s at 200 samples/s:
[flamegraph: count-parallel]

@pashkinelfe (Collaborator, Author)

The error is reproduced with max_connections = 300; with max_connections = 3000 there is no error.
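The dependence on max_connections is consistent with how PostgreSQL sizes its dynamic shared memory control segment: in src/backend/storage/ipc/dsm.c the number of DSM slots is a fixed base plus a per-backend allowance (PG_DYNSHMEM_FIXED_SLOTS = 64 and PG_DYNSHMEM_SLOTS_PER_BACKEND = 5 in recent versions). A rough back-of-the-envelope sketch, approximating MaxBackends by max_connections alone (the real value also counts autovacuum, background workers, and WAL senders), suggests why 300 connections leave too few slots for something that needs on the order of one segment per partition, while 3000 do not:

```python
# Back-of-the-envelope sketch of PostgreSQL's DSM slot limit.
# Constants are taken from src/backend/storage/ipc/dsm.c in recent versions;
# MaxBackends is approximated by max_connections alone, which slightly
# understates the real value (it also includes worker and auxiliary processes).
PG_DYNSHMEM_FIXED_SLOTS = 64
PG_DYNSHMEM_SLOTS_PER_BACKEND = 5

def dsm_slot_limit(max_connections: int) -> int:
    """Approximate number of dynamic shared memory segments available."""
    max_backends = max_connections  # approximation, see comment above
    return PG_DYNSHMEM_FIXED_SLOTS + PG_DYNSHMEM_SLOTS_PER_BACKEND * max_backends

# Hypothetical assumption for illustration: the failing query needs roughly
# one DSM segment per partition scanned.
partitions = 10_000

for max_conn in (300, 3000):
    limit = dsm_slot_limit(max_conn)
    verdict = "enough" if limit >= partitions else "too few"
    print(f"max_connections={max_conn}: ~{limit} slots -> {verdict} "
          f"for ~{partitions} partitions")
```

Under these assumptions, max_connections = 300 yields ~1564 slots (too few for ~10000 partitions) while 3000 yields ~15064, matching the observed behavior; this is a sketch of the limit's shape, not a claim about exactly how many segments the query allocates.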

@pashkinelfe (Collaborator, Author)

pashkinelfe commented Feb 23, 2024

Plan (fails at max_connections = 300, succeeds at max_connections = 3000):

 postgres=# explain select count(*) from data;
                                          QUERY PLAN
----------------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=166875.22..166875.23 rows=1 width=8)
   ->  Gather  (cost=166875.00..166875.21 rows=2 width=8)
         Workers Planned: 2
         ->  Partial Aggregate  (cost=165875.00..165875.01 rows=1 width=8)
               ->  Parallel Append  (cost=0.00..158250.00 rows=3050000 width=0)
                     ->  Parallel Seq Scan on data_1  (cost=0.00..14.30 rows=430 width=0)
                     ->  Parallel Seq Scan on data_2  (cost=0.00..14.30 rows=430 width=0)
                     ->  Parallel Seq Scan on data_3  (cost=0.00..14.30 rows=430 width=0)
                     ->  Parallel Seq Scan on data_4  (cost=0.00..14.30 rows=430 width=0)
                     ->  Parallel Seq Scan on data_5  (cost=0.00..14.30 rows=430 width=0)
                     ->  Parallel Seq Scan on data_6  (cost=0.00..14.30 rows=430 width=0)
                     ->  Parallel Seq Scan on data_7  (cost=0.00..14.30 rows=430 width=0)
                     ->  Parallel Seq Scan on data_8  (cost=0.00..14.30 rows=430 width=0)
                     ->  Parallel Seq Scan on data_9  (cost=0.00..14.30 rows=430 width=0)
                     ->  Parallel Seq Scan on data_10  (cost=0.00..14.30 rows=430 width=0)
<10000 table scans continued>

Plan with

@pashkinelfe (Collaborator, Author)

Rechecked at fix 32d4e91 with max_parallel_workers_per_gather = 2 (the default) and 15.
The error no longer appears.
